This is the mail archive of the cluster-cvs@sourceware.org mailing list for the cluster project.


Cluster Project branch, RHEL4, updated. gfs-kernel_2_6_9_76-42-g69a3d82


This is an automated email from the git hooks/post-receive script. It was
generated because a ref change was pushed to the repository containing
the project "Cluster Project".

http://sources.redhat.com/git/gitweb.cgi?p=cluster.git;a=commitdiff;h=69a3d82a661d0c67dadfc0c02a6839c94ab0cdfb

The branch, RHEL4, has been updated
       via  69a3d82a661d0c67dadfc0c02a6839c94ab0cdfb (commit)
       via  24945860f4de6118437688aa3e4b7bb3e3edbbf0 (commit)
       via  070de9cc3db1a66d427a2aaaef0dbf71d4d404b9 (commit)
       via  179da28103b843845cad5eaec61a48855b5cf807 (commit)
       via  7c3f3ad887d460c2bd92a7a398048be58f1b4a07 (commit)
       via  d35287124adf15a981703c3e0785fa7c644223c1 (commit)
       via  fed6e7804f9b092b09a1d243abd03eb87747e566 (commit)
       via  71a51fb8ed4c91c5963665940afca5fc44f42559 (commit)
       via  652fe516bfa1efd5ceff9e1551e8dabc573e294d (commit)
       via  0d3a7a758217efbfec3a662269856a30d68fdd63 (commit)
       via  00a395dd63da4728c30b61a04a71224a4f05b82d (commit)
      from  5f03c06d964894819c182b5112af75e5ee44a256 (commit)

Those revisions listed above that are new to this repository have
not appeared in any other notification email, so we list those
revisions in full below.

- Log -----------------------------------------------------------------
commit 69a3d82a661d0c67dadfc0c02a6839c94ab0cdfb
Author: Lon Hohberger <lhh@redhat.com>
Date:   Tue Apr 15 11:02:33 2008 -0400

    [rgmanager] Fix several bugzillas (see below)
    
    - 245381 - restart counter/thresholds before switching to
               relocation during service recovery
    - 247772 - one service following another (event scripting
               ability).  This isn't fixed in the usual way, as
               this feature has limited use.  Instead, we allow
               an alternative method of operation for rgmanager
               in which administrators can define event triggers
               for whatever they would like. (Called "RIND")
    - 247945 - rgmanager restarting services when noncritical
               parameters are changed
    - 247980 - strong / weak service dependencies.  These are
               trivial.  Weak dependencies are only allowed with
               the event scripting as noted for 247772
    - 250101 - RG event API (internals change).  This was
               required for RIND to operate as well as 247980
    - 439948 - clushutdown reference in clusvcadm man page
    - 440006 - rgmanager stuck on stop in some cases
    - 440645 - quotaoff causes hangs in some circumstances
    - 441577 - Symlinks in mount point path cause erroneous failures

commit 24945860f4de6118437688aa3e4b7bb3e3edbbf0
Author: Lon Hohberger <lhh@redhat.com>
Date:   Tue Apr 8 00:32:04 2008 -0400

    [rgmanager] Fix minor crash bug

commit 070de9cc3db1a66d427a2aaaef0dbf71d4d404b9
Author: Lon Hohberger <lhh@redhat.com>
Date:   Fri Apr 4 13:54:28 2008 -0400

    [rgmanager] Fixes for RIND

commit 179da28103b843845cad5eaec61a48855b5cf807
Author: Lon Hohberger <lhh@redhat.com>
Date:   Fri Apr 4 11:20:09 2008 -0400

    [rgmanager] More RIND merges.

commit 7c3f3ad887d460c2bd92a7a398048be58f1b4a07
Author: Lon Hohberger <lhh@redhat.com>
Date:   Tue Apr 1 16:25:08 2008 -0400

    [rgmanager] Fix clusvcadm build

commit d35287124adf15a981703c3e0785fa7c644223c1
Author: Lon Hohberger <lhh@redhat.com>
Date:   Tue Apr 1 16:23:17 2008 -0400

    [rgmanager] Fix build

commit fed6e7804f9b092b09a1d243abd03eb87747e566
Author: Lon Hohberger <lhh@redhat.com>
Date:   Tue Apr 1 16:22:15 2008 -0400

    [rgmanager] Remove extraneous copy of event_script.txt

commit 71a51fb8ed4c91c5963665940afca5fc44f42559
Author: Lon Hohberger <lhh@redhat.com>
Date:   Tue Apr 1 16:17:55 2008 -0400

    [rgmanager] Misc fixes. Update auto testing

commit 652fe516bfa1efd5ceff9e1551e8dabc573e294d
Author: Lon Hohberger <lhh@redhat.com>
Date:   Tue Apr 1 12:54:15 2008 -0400

    [rgmanager] Make RIND build on RHEL4 branch

commit 0d3a7a758217efbfec3a662269856a30d68fdd63
Author: Lon Hohberger <lhh@redhat.com>
Date:   Mon Mar 10 09:58:08 2008 -0400

    Merge, part 2

commit 00a395dd63da4728c30b61a04a71224a4f05b82d
Author: Lon Hohberger <lhh@redhat.com>
Date:   Mon Mar 10 09:56:40 2008 -0400

    Commit phase 1 of update from rhel5 branch

-----------------------------------------------------------------------

Summary of changes:
 rgmanager/event-script.txt                         |  311 +++++
 rgmanager/include/clulog.h                         |    2 +-
 rgmanager/include/event.h                          |  147 +++
 rgmanager/include/res-ocf.h                        |   20 +
 rgmanager/include/resgroup.h                       |   60 +-
 rgmanager/include/reslist.h                        |   64 +-
 rgmanager/include/restart_counter.h                |   32 +
 rgmanager/include/rg_queue.h                       |   24 +-
 rgmanager/include/sets.h                           |   39 +
 rgmanager/man/clusvcadm.8                          |   13 +-
 rgmanager/src/clulib/Makefile                      |    3 +-
 rgmanager/src/clulib/rg_strings.c                  |  208 +++-
 rgmanager/src/clulib/sets.c                        |  370 ++++++
 rgmanager/src/clulib/signals.c                     |   18 +
 rgmanager/src/clulib/tmgr.c                        |  128 ++
 rgmanager/src/clulib/vft.c                         |   87 +-
 rgmanager/src/daemons/Makefile                     |   11 +-
 rgmanager/src/daemons/event_config.c               |  540 ++++++++
 rgmanager/src/daemons/fo_domain.c                  |  117 ++-
 rgmanager/src/daemons/groups.c                     |  575 ++++++++--
 rgmanager/src/daemons/main.c                       |  167 ++--
 rgmanager/src/daemons/members.c                    |   41 +
 rgmanager/src/daemons/reslist.c                    |   94 ++-
 rgmanager/src/daemons/resrules.c                   |  143 ++-
 rgmanager/src/daemons/restart_counter.c            |  205 ++++
 rgmanager/src/daemons/restree.c                    |  542 ++++-----
 rgmanager/src/daemons/rg_event.c                   |  500 ++++++++
 rgmanager/src/daemons/rg_forward.c                 |  110 ++
 rgmanager/src/daemons/rg_state.c                   |  427 +++++---
 rgmanager/src/daemons/rg_thread.c                  |    5 +
 rgmanager/src/daemons/service_op.c                 |  207 ++++
 rgmanager/src/daemons/slang_event.c                | 1286 ++++++++++++++++++++
 rgmanager/src/daemons/test.c                       |  101 ++-
 .../daemons/tests/delta-test001-test002.expected   |   34 +
 .../daemons/tests/delta-test002-test003.expected   |   48 +-
 .../daemons/tests/delta-test003-test004.expected   |   52 +-
 .../daemons/tests/delta-test004-test005.expected   |   53 +-
 .../daemons/tests/delta-test005-test006.expected   |   56 +-
 .../daemons/tests/delta-test006-test007.expected   |   56 +-
 .../daemons/tests/delta-test007-test008.expected   |   68 +-
 .../daemons/tests/delta-test008-test009.expected   |   89 +-
 .../daemons/tests/delta-test009-test010.expected   |   98 +-
 .../daemons/tests/delta-test010-test011.expected   |  135 ++-
 .../daemons/tests/delta-test011-test012.expected   |  157 ++-
 .../daemons/tests/delta-test012-test013.expected   |  160 ++-
 .../daemons/tests/delta-test013-test014.expected   |  224 +++--
 .../daemons/tests/delta-test014-test015.expected   |  264 +++--
 .../daemons/tests/delta-test015-test016.expected   |  256 +++--
 .../daemons/tests/delta-test016-test017.expected   |  309 ++++--
 .../daemons/tests/delta-test017-test018.expected   |  558 +++++++++
 rgmanager/src/daemons/tests/runtests.sh            |    6 +-
 rgmanager/src/daemons/tests/test001.expected       |   21 +
 rgmanager/src/daemons/tests/test002.expected       |   21 +
 rgmanager/src/daemons/tests/test003.expected       |   33 +-
 rgmanager/src/daemons/tests/test004.expected       |   33 +-
 rgmanager/src/daemons/tests/test005.expected       |   36 +-
 rgmanager/src/daemons/tests/test006.expected       |   36 +-
 rgmanager/src/daemons/tests/test007.expected       |   36 +-
 rgmanager/src/daemons/tests/test008.expected       |   52 +-
 rgmanager/src/daemons/tests/test009.expected       |   53 +-
 rgmanager/src/daemons/tests/test010.expected       |   63 +-
 rgmanager/src/daemons/tests/test011.expected       |   88 +-
 rgmanager/src/daemons/tests/test012.expected       |   89 +-
 rgmanager/src/daemons/tests/test013.expected       |   89 +-
 rgmanager/src/daemons/tests/test014.expected       |  147 ++-
 rgmanager/src/daemons/tests/test015.expected       |  147 ++-
 rgmanager/src/daemons/tests/test016.expected       |  147 ++-
 rgmanager/src/daemons/tests/test017.expected       |  176 ++-
 rgmanager/src/daemons/tests/test018.expected       |  291 +++++
 rgmanager/src/resources/Makefile                   |    9 +-
 rgmanager/src/resources/clusterfs.sh               |   61 +-
 rgmanager/src/resources/default_event_script.sl    |  314 +++++
 rgmanager/src/resources/fs.sh                      |   53 +-
 rgmanager/src/resources/netfs.sh                   |    2 +-
 rgmanager/src/resources/ocf-shellfuncs             |    4 +
 rgmanager/src/resources/script.sh                  |    2 +-
 rgmanager/src/resources/service.sh                 |  104 ++-
 rgmanager/src/resources/svclib_nfslock             |   28 +
 .../src/resources/utils/named-parse-config.pl      |   26 +
 rgmanager/src/resources/utils/ra-skelet.sh         |    2 +-
 rgmanager/src/utils/clusvcadm.c                    |   45 +-
 81 files changed, 9563 insertions(+), 1865 deletions(-)
 create mode 100644 rgmanager/event-script.txt
 create mode 100644 rgmanager/include/event.h
 create mode 100644 rgmanager/include/restart_counter.h
 create mode 100644 rgmanager/include/sets.h
 create mode 100644 rgmanager/src/clulib/sets.c
 create mode 100644 rgmanager/src/clulib/tmgr.c
 create mode 100644 rgmanager/src/daemons/event_config.c
 create mode 100644 rgmanager/src/daemons/restart_counter.c
 create mode 100644 rgmanager/src/daemons/rg_event.c
 create mode 100644 rgmanager/src/daemons/service_op.c
 create mode 100644 rgmanager/src/daemons/slang_event.c
 create mode 100644 rgmanager/src/daemons/tests/delta-test017-test018.expected
 create mode 100644 rgmanager/src/daemons/tests/test018.expected
 create mode 100644 rgmanager/src/resources/default_event_script.sl
 create mode 100644 rgmanager/src/resources/utils/named-parse-config.pl

diff --git a/rgmanager/event-script.txt b/rgmanager/event-script.txt
new file mode 100644
index 0000000..00a8b4c
--- /dev/null
+++ b/rgmanager/event-script.txt
@@ -0,0 +1,311 @@
+TODO:
+* Return correct error codes to clusvcadm (currently it always returns
+  "Unknown")
+* Write glue for 'migrate' operations and migrate-enabled services
+
+Basic configuration specification:
+
+  <rm>
+    <events>
+      <event class="node"/>        <!-- all node events -->
+      <event class="node"
+             node="bar"/>     <!-- events concerning 'bar' -->
+      <event class="node"
+             node="foo"
+             node_state="up"/>     <!-- 'up' events for 'foo' -->
+      <event class="node"
+             node_id="3"
+             node_state="down"/>   <!-- 'down' events for node ID 3 -->
+
+          (note, all service ops and such deal with node ID, not
+           with node names)
+
+      <event class="service"/>     <!-- all service events-->
+      <event class="service"
+             service_name="A"/>    <!-- events concerning 'A' -->
+      <event class="service"
+             service_name="B"
+	     service_state="started"/> <!-- when 'B' is started... -->
+      <event class="service"
+             service_name="B"
+	     service_state="started"
+	     service_owner="3"/> <!-- when 'B' is started on node 3... -->
+
+      <event class="service"
+             priority="1"
+	     service_state="started"
+	     service_owner="3"/> <!-- when 'B' is started on node 3, do this
+				      before the other event handlers ... -->
+
+
+    </events>
+    ...
+  </rm>
+
+General globals available from all scripts:
+
+   node_self - local node ID
+   event_type - event class, either:
+       EVENT_NONE - unspecified / unknown
+       EVENT_NODE - node transition
+       EVENT_SERVICE - service transition
+       EVENT_USER - a user-generated request
+       EVENT_CONFIG - [NOT CONFIGURABLE]
+
+Node event globals (i.e. when event_type == EVENT_NODE):
+  
+   node_id - node ID which is transitioning
+   node_name - name of node which is transitioning
+   node_state - new node state (NODE_ONLINE or NODE_OFFLINE, or if you prefer,
+                1 or 0, respectively)
+   node_clean - 0 if the node has not been fenced, 1 if the node has been
+                fenced
+
+Service event globals (i.e. when event_type == EVENT_SERVICE):
+
+   service_name - Name of service which transitioned
+   service_state - new state of service
+   service_owner - new owner of service (or <0 if service is no longer
+		   running)
+   service_last_owner - Last owner of the service, if known.  Generally
+                   used when service_state = "recovering", in order to
+                   apply restart/relocate/disable policy.
+
+User event globals (i.e. when event_type == EVENT_USER):
+
+   service_name - service to perform request upon
+   user_request - request to perform (USER_ENABLE, USER_DISABLE,
+                   USER_STOP, USER_RELOCATE, [TODO] USER_MIGRATE)
+   user_target - target node ID if applicable
+
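+Taken together: a handler typically dispatches on event_type before
+reading the class-specific globals above.  A minimal sketch (the log
+text is illustrative):
+
+	if (event_type == EVENT_SERVICE) {
+		notice("Service ", service_name, " is now ", service_state);
+	} else if (event_type == EVENT_NODE) {
+		notice("Node ", node_id, " is now in state ", node_state);
+	}
+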
+
+Scripting functions - Informational:
+
+  node_list = nodes_online();
+
+	Returns a list of all online nodes.
+
+  service_list = service_list();
+
+	Returns a list of all configured services.
+
+  (restarts, last_owner, owner, state) = service_status(service_name);
+
+	Returns the state, owner, last_owner, and restarts.  Note that
+	all return values are optional, but are right-justified per S-Lang
+	specification.  This means if you only want the 'state', you can use:
+	
+	(state) = service_status(service_name);
+
+	However, if you need the restart count, you must provide all four 
+	return values as above.
+
+  (nofailback, restricted, ordered, node_list) =
+		service_domain_info(service_name);
+
+	Returns the failover domain specification, if it exists, for the
+	specified service name.  The node list returned is an ordered list
+	according to priority levels.  In the case of unordered domains, 
+	the ordering of the returned list is pseudo-random.
+
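+	For example, to log a service's domain members in priority order
+	(a sketch; the service name is illustrative):
+
+	variable nofailback, restricted, ordered, nodes;
+	(nofailback, restricted, ordered, nodes) =
+		service_domain_info("service:main_queue");
+	info("Domain members: ", nodes);
+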
+Scripting functions - Operational:
+
+  err = service_start(service_name, node_list, [avoid_list]);
+
+	Start a non-running (but runnable, i.e. not failed)
+	service on the first node in node_list.  Failing that, start it on
+	the second node in node_list and so forth.  One may also specify
+	an avoid list, but it's better to just use the subtract() function
+	below.  If the start is successful, the node ID running the service
+	is returned.  If the start is unsuccessful, a value < 0 is returned.
+
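+	A minimal sketch (the service name is illustrative):
+
+	variable ret;
+	ret = service_start("service:main_queue", nodes_online());
+	if (ret < 0)
+		warning("Could not start main_queue on any online node");
+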
+  err = service_stop(service_name, [0 = stop, 1 = disable]);
+
+	Stop a running service.  The second parameter is optional, and if
+	non-zero is specified, the service will enter the disabled state.
+
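+	For example, to stop a service and leave it disabled:
+
+	()=service_stop("service:replication_server", 1);
+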
+  ... stuff that's not done but needs to be:
+
+  err = service_relocate(service_name, node_list);
+
+	Move a running service to the specified node_list in order of
+	preference.  In the case of VMs, this is actually a migrate-or-
+	relocate operation.
+
+Utility functions - Node list manipulation
+
+  node_list = union(left_node_list, right_node_list);
+
+	Calculates the union of the two node lists, removing duplicates
+	and preserving ordering according to left_node_list.  Any added
+	values from right_node_list will appear in their order, but
+	after left_node_list in the returned list.
+
+  node_list = intersection(left_node_list, right_node_list);
+
+	Calculates the intersection (items in both lists) between the two
+	node lists, removing duplicates and preserving ordering according
+	to left_node_list.  Any added values from right_node_list will
+	appear in their order, but after left_node_list in the returned list.
+
+  node_list = delta(left_node_list, right_node_list);
+
+	Calculates the delta (items not in both lists) between the two
+	node lists, removing duplicates and preserving ordering according
+	to left_node_list.  Any added values from right_node_list will
+	appear in their order, but after left_node_list in the returned list.
+
+  node_list = subtract(left_node_list, right_node_list);
+
+	Removes any duplicates as well as items specified in right_node_list
+	from left_node_list.  Example:
+
+	all_nodes = nodes_online();
+	allowed_nodes = subtract(all_nodes, node_to_avoid);
+
+  node_list = shuffle(node_list_old);
+
+	Rearranges the contents of node_list_old randomly and returns a
+	new node list.
+
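+	For example, to build a randomly ordered candidate list that
+	avoids the local node (a sketch combining the functions above):
+
+	variable candidates;
+	candidates = shuffle(subtract(nodes_online(), node_self));
+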
+Utility functions - Logging:
+
+  debug(item1, item2, ...);	LOG_DEBUG level
+  info(...);			LOG_INFO level
+  notice(...);			LOG_NOTICE level
+  warning(...);			LOG_WARNING level
+  err(...);			LOG_ERR level
+  crit(...);			LOG_CRIT level
+  alert(...);			LOG_ALERT level
+  emerg(...);			LOG_EMERG level
+
+	items - These can be strings, integer lists, or integers.  Logging
+		string lists is not supported.
+
+	level - the level is consistent with syslog(8)
+
+  stop_processing();
+
+	Calling this function will prevent further event scripts from being
+	executed on a particular event.  Call this function if, for example,
+	you do not wish for the default event handler to process the event.
+
+	Note: This does NOT terminate the caller script; that is, the
+	script being executed will run to completion.
+
+Event scripts are written in a language called S-Lang; documentation
+about the language is available at http://www.s-lang.org
+
+Example script (creating a follows-but-avoid-after-start behavior):
+%
+% If the main queue server and replication queue server are on the same
+% node, relocate the replication server somewhere else if possible.
+%
+define my_sap_event_trigger()
+{
+	variable state, owner_rep, owner_main;
+	variable nodes, allowed;
+
+	%
+	% If this was a service event, don't execute the default event
+	% script trigger after this script completes.
+	%
+	if (event_type == EVENT_SERVICE) {
+		stop_processing();
+	}
+
+	(owner_main, state) = service_status("service:main_queue");
+	(owner_rep, state) = service_status("service:replication_server");
+
+	if ((event_type == EVENT_NODE) and (owner_main == node_id) and
+	    (node_state == NODE_OFFLINE) and (owner_rep >= 0)) {
+		%
+		% uh oh, the owner of the main server died.  Restart it
+		% on the node running the replication server
+		%
+		notice("Starting Main Queue Server on node ", owner_rep);
+		()=service_start("service:main_queue", owner_rep);
+		return;
+	}
+
+	%
+	% S-Lang doesn't short-circuit prior to 2.1.0
+	%
+	if ((owner_main >= 0) and
+	    ((owner_main == owner_rep) or (owner_rep < 0))) {
+
+		%
+		% Get all online nodes
+		%
+		nodes = nodes_online();
+
+		%
+		% Drop out the owner of the main server
+		%
+		allowed = subtract(nodes, owner_main);
+		if ((owner_rep >= 0) and (length(allowed) == 0)) {
+			%
+			% Only one node is online and the rep server is
+			% already running.  Don't do anything else.
+			%
+			return;
+		}
+
+		if ((length(allowed) == 0) and (owner_rep < 0)) {
+			%
+			% Only node online is the owner ... go ahead
+			% and start it, even though it doesn't increase
+			% availability to do so.
+			%
+			allowed = owner_main;
+		}
+
+		%
+		% Move the replication server off the node that is
+		% running the main server if a node's available.
+		%
+		if (owner_rep >= 0) {
+			()=service_stop("service:replication_server");
+		}
+		()=service_start("service:replication_server", allowed);
+	}
+
+	return;
+}
+
+my_sap_event_trigger();
+
+
+Relevant <rm> section from cluster.conf:
+
+        <rm central_processing="1">
+                <events>
+                        <event name="main-start" class="service"
+				service="service:main_queue"
+				service_state="started"
+				file="/tmp/sap.sl"/>
+                        <event name="rep-start" class="service"
+				service="service:replication_server"
+				service_state="started"
+				file="/tmp/sap.sl"/>
+                        <event name="node-up" node_state="up"
+				class="node"
+				file="/tmp/sap.sl"/>
+
+                </events>
+                <failoverdomains>
+                        <failoverdomain name="all" ordered="1" restricted="1">
+                                <failoverdomainnode name="molly" priority="2"/>
+                                <failoverdomainnode name="frederick" priority="1"/>
+                        </failoverdomain>
+                </failoverdomains>
+                <resources/>
+                <service name="main_queue"/>
+                <service name="replication_server" autostart="0"/>
+		<!-- replication server is started when main-server start
+		     event completes -->
+        </rm>
+
+
diff --git a/rgmanager/include/clulog.h b/rgmanager/include/clulog.h
index 856d83c..e6b7ff8 100644
--- a/rgmanager/include/clulog.h
+++ b/rgmanager/include/clulog.h
@@ -38,7 +38,7 @@ extern "C" {
 #include <syslog.h>
 #include <sys/types.h>
 
-#define LOGLEVEL_DFLT         LOG_INFO
+#define LOGLEVEL_DFLT         LOG_NOTICE
 #define MAX_LOGMSG_LEN        512
 
 /*
diff --git a/rgmanager/include/event.h b/rgmanager/include/event.h
new file mode 100644
index 0000000..bd8bc0a
--- /dev/null
+++ b/rgmanager/include/event.h
@@ -0,0 +1,147 @@
+/*
+  Copyright Red Hat, Inc. 2007
+
+  This program is free software; you can redistribute it and/or modify it
+  under the terms of the GNU General Public License version 2 as published
+  by the Free Software Foundation.
+
+  This program is distributed in the hope that it will be useful, but
+  WITHOUT ANY WARRANTY; without even the implied warranty of
+  MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
+  General Public License for more details.
+
+  You should have received a copy of the GNU General Public License
+  along with this program; see the file COPYING.  If not, write to the
+  Free Software Foundation, Inc.,  675 Mass Ave, Cambridge, 
+  MA 02139, USA.
+*/
+#ifndef _EVENT_H
+#define _EVENT_H
+
+/* 128 is a bit big, but it should be okay */
+typedef struct __rge_q {
+	char rg_name[128];
+	uint32_t rg_state;
+	uint32_t pad1;
+	int rg_owner;
+	int rg_last_owner;
+} group_event_t;
+
+typedef struct __ne_q {
+	uint64_t ne_nodeid;
+	int ne_local;
+	int ne_state;
+	int ne_clean;
+	int pad1;
+} node_event_t;
+
+typedef struct __cfg_q {
+	int cfg_version;
+	int cfg_oldversion;
+} config_event_t;
+
+typedef struct __user_q {
+	char u_name[128];
+	uint64_t u_target;		/* Node ID */
+	int pad1;
+	int u_fd;
+	int u_request;
+	int u_arg1;
+	int u_arg2;
+} user_event_t;
+
+typedef enum {
+	EVENT_NONE=0,
+	EVENT_CONFIG,
+	EVENT_NODE,
+	EVENT_RG,
+	EVENT_USER
+} event_type_t;
+
+/* Data that's distributed which indicates which
+   node is the event master */
+typedef struct __rgm {
+	uint64_t m_nodeid;
+	uint64_t m_master_time;
+	uint32_t m_magic;
+	uint8_t  m_reserved[108];
+} event_master_t;
+
+#define swab_event_master_t(ptr) \
+{\
+	swab64((ptr)->m_nodeid);\
+	swab64((ptr)->m_master_time);\
+	swab32((ptr)->m_magic);\
+}
+
+/* Just a magic # to help us ensure we've got good
+   data from VF */
+#define EVENT_MASTER_MAGIC 0xfabab0de
+
+/* Event structure - internal to the event subsystem; use
+   the queueing functions below which allocate this struct
+   and pass it to the event handler */
+typedef struct _event {
+	/* Not used dynamically - part of config info */
+	list_head();
+	char *ev_name;
+	char *ev_script;
+	char *ev_script_file;
+	int ev_prio; 
+	int ev_pad;
+	/* --- end config part */
+	int ev_type;		/* config & generated by rgmanager*/
+	int ev_transaction;
+	union {
+		group_event_t group;
+		node_event_t node;
+		config_event_t config;
+		user_event_t user;
+	} ev;
+} event_t;
+
+#define EVENT_PRIO_COUNT 100
+
+typedef struct _event_table {
+	int max_prio;
+	int pad;
+	event_t *entries[0];
+} event_table_t;
+
+
+int construct_events(int ccsfd, event_table_t **);
+void deconstruct_events(event_table_t **);
+void print_events(event_table_t *);
+
+/* Does the event match a configured event? */
+int event_match(event_t *pattern, event_t *actual);
+
+/* Event queueing functions. */
+void node_event_q(int local, uint64_t nodeID, int state, int clean);
+void rg_event_q(char *name, uint32_t state, uint64_t owner, uint64_t last);
+void user_event_q(char *svc, int request, int arg1, int arg2,
+		  uint64_t target, int fd);
+void config_event_q(int old_version, int new_version);
+
+/* Call this to see if there's a master. */
+int event_master_info_cached(event_master_t *);
+
+/* Call this to get the node ID of the current 
+   master *or* become the master if none exists */
+uint64_t event_master(void);
+
+/* Setup */
+int central_events_enabled(void);
+void set_central_events(int flag);
+int slang_process_event(event_table_t *event_table, event_t *ev);
+
+/* For distributed events. */
+void set_transition_throttling(int nsecs);
+
+/* Simplified service start. */
+int service_op_start(char *svcName, uint64_t *target_list, int target_list_len,
+		     uint64_t *new_owner);
+int service_op_stop(char *svcName, int do_disable, int event_type);
+
+
+#endif
diff --git a/rgmanager/include/res-ocf.h b/rgmanager/include/res-ocf.h
index 4459b67..b55bf44 100644
--- a/rgmanager/include/res-ocf.h
+++ b/rgmanager/include/res-ocf.h
@@ -31,6 +31,7 @@
 #define OCF_RESOURCE_INSTANCE_STR "OCF_RESOURCE_INSTANCE"
 #define OCF_CHECK_LEVEL_STR "OCF_CHECK_LEVEL"
 #define OCF_RESOURCE_TYPE_STR "OCF_RESOURCE_TYPE"
+#define OCF_REFCNT_STR "OCF_RESKEY_RGMANAGER_meta_refcnt"
 
 /*
    LSB return codes 
@@ -45,4 +46,23 @@
 #define OCF_RA_NOT_RUNNING	7
 #define OCF_RA_MAX		7
 
+/*
+      Resource operations - not ocf-specified
+ */
+#define RS_START	(0)
+#define RS_STOP		(1)
+#define RS_STATUS	(2)
+#define RS_RESINFO	(3)
+#define RS_RESTART	(4)
+#define RS_RELOAD	(5)
+#define RS_CONDRESTART  (6)
+#define	RS_RECOVER	(7)
+#define RS_CONDSTART	(8)	/** Start if flagged with RF_NEEDSTART */
+#define RS_CONDSTOP	(9)	/** STOP if flagged with RF_NEEDSTOP */
+#define RS_MONITOR	(10)
+#define RS_META_DATA	(11)
+#define RS_VALIDATE	(12)
+#define RS_MIGRATE	(13)
+#define RS_RECONFIG	(14)
+
 #endif
diff --git a/rgmanager/include/resgroup.h b/rgmanager/include/resgroup.h
index 56f7f3c..c52dcc0 100644
--- a/rgmanager/include/resgroup.h
+++ b/rgmanager/include/resgroup.h
@@ -54,6 +54,11 @@ typedef struct {
 #define RG_SERVICE_GROUP "usrm::manager"
 
 #define RG_ACTION_REQUEST	/* Message header */ 0x138582
+/* Argument to RG_ACTION_REQUEST */
+#define RG_ACTION_MASTER        0xfe0db143
+#define RG_ACTION_USER          0x3f173bfd
+/* */
+#define RG_EVENT                0x138583
 
 #define RG_SUCCESS	  0
 #define RG_FAIL		  1
@@ -67,7 +72,7 @@ typedef struct {
 #define RG_EXITING	  9 
 #define RG_INIT		  10
 #define RG_ENABLE	  11
-#define RG_STATUS_INQUIRY 12
+#define RG_STATUS_NODE	  12
 #define RG_RELOCATE	  13
 #define RG_CONDSTOP	  14
 #define RG_CONDSTART	  15
@@ -77,11 +82,12 @@ typedef struct {
 #define RG_LOCK		  19
 #define RG_UNLOCK	  20
 #define RG_QUERY_LOCK	  21
+/* #define RG_MIGRATE        22 */
+/* Compat: FREEZE = 23, UNFREEZE = 24 */
+/* #define RG_STATUS_INQUIRY 25 */
 #define RG_NONE		  999
 
-extern const char *rg_req_strings[];
-
-#define rg_req_str(req) (rg_req_strings[req])
+const char *rg_req_str(int req);
 
 int handle_relocate_req(char *svcName, int request, uint64_t preferred_target,
 			uint64_t *new_owner);
@@ -101,23 +107,28 @@ int handle_start_remote_req(char *svcName, int req);
 #define RG_STATE_ERROR			117	/** Recoverable error */
 #define RG_STATE_RECOVER		118	/** Pending recovery */
 #define RG_STATE_DISABLED		119	/** Resource not allowed to run */
+/* #define RG_STATE_MIGRATE		120	*/
 
 #define DEFAULT_CHECK_INTERVAL		10
 
-extern const char *rg_state_strings[];
+const char *rg_state_str(int val);
+int rg_state_str_to_id(const char *val);
+const char *agent_op_str(int val);
 
-#define rg_state_str(state) (rg_state_strings[state - RG_STATE_BASE])
+int eval_groups(int local, uint64_t nodeid, int nodeStatus);
 
 int rg_status(const char *resgroupname);
 int group_op(char *rgname, int op);
 void rg_init(void);
 
-/* FOOM */
 int svc_start(char *svcName, int req);
 int svc_stop(char *svcName, int error);
 int svc_status(char *svcName);
 int svc_disable(char *svcName);
 int svc_fail(char *svcName);
+int check_restart(char *svcName);
+int add_restart(char *svcName);
+
 int rt_enqueue_request(const char *resgroupname, int request, int response_fd,
        		       int max, uint64_t target, int arg0, int arg1);
 
@@ -135,6 +146,7 @@ int set_rg_state(char *name, rg_state_t *svcblk);
 int get_rg_state(char *servicename, rg_state_t *svcblk);
 uint64_t best_target_node(cluster_member_list_t *allowed, uint64_t owner,
 			  char *rg_name, int lock);
+char *c_name(char *svcName);
 
 #ifdef DEBUG
 int _rg_lock_dbg(char *, void **, char *, int);
@@ -155,6 +167,24 @@ int rg_unlock(char *name, void *p);
 void member_list_update(cluster_member_list_t *new_ml);
 cluster_member_list_t *member_list(void);
 uint64_t my_id(void);
+int member_online(uint64_t nid);
+void member_set_state(uint64_t nid, int state);
+
+
+/* Return codes */
+#define RG_EDEPEND	-17		/* Dependency rule would be violated */
+#define RG_EEXCL        -16             /* Service not runnable due to
+					   the fact that it is tagged 
+					   exclusive and there are no
+					   empty nodes. */
+#define RG_EDOMAIN      -15             /* Service not runnable given the
+					   set of nodes and its failover
+					   domain */
+#define RG_ESCRIPT      -14             /* S/Lang script failed */
+#define RG_EFENCE       -13             /* Fencing operation pending */
+#define RG_ENODE        -12             /* Node is dead/nonexistent */
+#define RG_EINVAL       -11              /* Invalid operation for resource */
+#define RG_EQUORUM      -10              /* Operation requires quorum */
 
 #define RG_ERELO	-9 /* Operation cannot complete here */
 #define RG_ENODEDEATH	-8 /* Processing node died */
@@ -166,14 +196,12 @@ uint64_t my_id(void);
 #define RG_EABORT	-2 /* Request cancelled */
 #define RG_EFAIL	-1 /* Generic error */
 #define RG_ESUCCESS	0
+#define RG_YES		1
+#define RG_NO		2
+
 
+const char *rg_strerror(int val);
 
-#define FORWARD -3
-#define ABORT -2
-#define FAIL -1
-#define SUCCESS 0
-#define YES 1
-#define NO 2
 
 /*
  * Fail-over domain states
@@ -190,6 +218,12 @@ uint64_t my_id(void);
 #define FOD_RESTRICTED		(1<<1)
 #define FOD_NOFAILBACK		(1<<2)
 
+/*
+   Status tree flags
+ */
+#define SFL_FAILURE		(1<<0)
+#define SFL_RECOVERABLE		(1<<1)
+
 //#define DEBUG
 #ifdef DEBUG
 
diff --git a/rgmanager/include/reslist.h b/rgmanager/include/reslist.h
index 7b98f23..23129ff 100644
--- a/rgmanager/include/reslist.h
+++ b/rgmanager/include/reslist.h
@@ -25,31 +25,34 @@
 #include <libxml/xpath.h>
 
 
+#define RA_PRIMARY	(1<<0)	/** Primary key */
+#define RA_UNIQUE	(1<<1)	/** Unique for given type */
+#define RA_REQUIRED	(1<<2)	/** Required (an error if not present) */
+#define RA_INHERIT	(1<<3)	/** Inherit a parent resource's attr */
+#define RA_RECONFIG	(1<<4)	/** Allow inline reconfiguration */
+
 #define RF_INLINE	(1<<0)
 #define RF_DEFINED	(1<<1)
 #define RF_NEEDSTART	(1<<2)	/** Used when adding/changing resources */
 #define RF_NEEDSTOP	(1<<3)  /** Used when deleting/changing resources */
 #define RF_COMMON	(1<<4)	/** " */
+#define RF_INDEPENDENT	(1<<5)  /** Define this for a resource if it is
+				  otherwise an independent subtree */
+#define RF_RECONFIG	(1<<6)
+
+#define RF_INIT		(1<<7)	/** Resource rule: Initialize this resource
+				  class on startup */
+#define RF_DESTROY	(1<<8)	/** Resource rule flag: Destroy this
+				  resource class if you delete it from
+				  the configuration */
+#define RF_ROOT		(1<<9)
+
+
 
 #define RES_STOPPED	(0)
 #define RES_STARTED	(1)
 #define RES_FAILED	(2)
 
-/*
-   Resource operations
- */
-#define RS_START	(0)
-#define RS_STOP		(1)
-#define RS_STATUS	(2)
-#define RS_RESINFO	(3)
-#define RS_RESTART	(4)
-#define RS_RELOAD	(5)
-#define RS_CONDRESTART  (6)
-#define	RS_RECOVER	(7)
-#define RS_CONDSTART	(8)	/** Start if flagged with RF_NEEDSTART */
-#define RS_CONDSTOP	(9)	/** STOP if flagged with RF_NEEDSTOP */
-
-
 #ifndef SHAREDIR
 #define SHAREDIR		"/usr/share/rgmanager"
 #endif
@@ -65,33 +68,20 @@
 #include <res-ocf.h>
 
 
-typedef enum {
-/*
-#define RA_PRIMARY	(1<<0)
-#define RA_UNIQUE	(1<<1)
-#define RA_REQUIRED	(1<<2)
-#define RA_INHERIT	(1<<3)
- */
-	RA_PRIMARY = (1<<0),
-	RA_UNIQUE  = (1<<1),
-	RA_REQUIRED= (1<<2),
-	RA_INHERIT = (1<<3),
-	RA_SPEC    = (1<<4)
-} ra_flag_t;
-
 typedef struct _resource_attribute {
 	char	*ra_name;
 	char	*ra_value;
-	ra_flag_t ra_flags;
+	int	ra_flags;
+	int	_pad_;
 } resource_attr_t;
 
 
 typedef struct _resource_child {
+	char	*rc_name;
 	int	rc_startlevel;
 	int	rc_stoplevel;
 	int	rc_forbid;
 	int	rc_flags;
-	char	*rc_name;
 } resource_child_t;
 
 
@@ -110,7 +100,7 @@ typedef struct _resource_rule {
 	char *	rr_type;
 	char *	rr_agent;
 	char *	rr_version;	/** agent XML spec version; OCF-ism */
-	int	rr_root;
+	int	rr_flags;
 	int	rr_maxrefs;
 	resource_attr_t *	rr_attrs;
 	resource_child_t *	rr_childtypes;
@@ -137,6 +127,7 @@ typedef struct _rg_node {
 	struct _rg_node	*rn_child, *rn_parent;
 	resource_t	*rn_resource;
 	resource_act_t	*rn_actions;
+	restart_counter_t rn_restart_counter;
 	int	rn_state; /* State of this instance of rn_resource */
 	int	rn_flags;
 	int	rn_last_status;
@@ -149,7 +140,7 @@ typedef struct _fod_node {
 	list_head();
 	char	*fdn_name;
 	int	fdn_prio;
-	int	_pad_; /* align */
+	uint64_t fdn_nodeid; /* on rhel4 this will be 64-bit int */
 } fod_node_t;
 
 typedef struct _fod {
@@ -169,7 +160,11 @@ int res_stop(resource_node_t **tree, resource_t *res, void *ret);
 int res_status(resource_node_t **tree, resource_t *res, void *ret);
 int res_condstart(resource_node_t **tree, resource_t *res, void *ret);
 int res_condstop(resource_node_t **tree, resource_t *res, void *ret);
+int res_exec(resource_node_t *node, int op, const char *arg, int depth);
 /*int res_resinfo(resource_node_t **tree, resource_t *res, void *ret);*/
+int expand_time(char *val);
+int store_action(resource_act_t **actsp, char *name, int depth, int timeout, int interval);
+
 
 /*
    Calculate differences
@@ -208,6 +203,8 @@ void deconstruct_domains(fod_t **domains);
 void print_domains(fod_t **domains);
 int node_should_start(uint64_t nodeid, cluster_member_list_t *membership,
 		      char *rg_name, fod_t **domains);
+int node_domain_set(fod_t *domain, uint64_t **ret, int *retlen);
+int node_domain_set_safe(char *domainname, uint64_t **ret, int *retlen, int *flags);
 
 
 /*
@@ -216,6 +213,7 @@ int node_should_start(uint64_t nodeid, cluster_member_list_t *membership,
 resource_t *find_resource_by_ref(resource_t **reslist, char *type, char *ref);
 resource_t *find_root_by_ref(resource_t **reslist, char *ref);
 resource_rule_t *find_rule_by_type(resource_rule_t **rulelist, char *type);
+void res_build_name(char *, size_t, resource_t *);
 
 /*
    Internal functions; shouldn't be needed.
diff --git a/rgmanager/include/restart_counter.h b/rgmanager/include/restart_counter.h
new file mode 100644
index 0000000..2f158ad
--- /dev/null
+++ b/rgmanager/include/restart_counter.h
@@ -0,0 +1,32 @@
+/*
+  Copyright Red Hat, Inc. 2007
+
+  This program is free software; you can redistribute it and/or modify it
+  under the terms of the GNU General Public License version 2 as published
+  by the Free Software Foundation.
+
+  This program is distributed in the hope that it will be useful, but
+  WITHOUT ANY WARRANTY; without even the implied warranty of
+  MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
+  General Public License for more details.
+
+  You should have received a copy of the GNU General Public License
+  along with this program; see the file COPYING.  If not, write to the
+  Free Software Foundation, Inc.,  675 Mass Ave, Cambridge, 
+  MA 02139, USA.
+*/
+/* Time-based restart counters for rgmanager */
+
+#ifndef _RESTART_COUNTER_H
+#define _RESTART_COUNTER_H
+
+typedef void *restart_counter_t;
+
+int restart_add(restart_counter_t arg);
+int restart_clear(restart_counter_t arg);
+int restart_count(restart_counter_t arg);
+int restart_treshold_exceeded(restart_counter_t arg);
+restart_counter_t restart_init(time_t expire_timeout, int max_restarts);
+int restart_cleanup(restart_counter_t arg);
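+
+/* Sketch of the intended call pattern (the timeout/count values are
+   illustrative):
+
+     restart_counter_t rc = restart_init(300, 3); // 3 restarts per 300s
+     restart_add(rc);                             // record one restart
+     if (restart_treshold_exceeded(rc))
+             ;               // e.g., relocate instead of restarting
+     restart_cleanup(rc);
+*/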
+
+#endif
diff --git a/rgmanager/include/rg_queue.h b/rgmanager/include/rg_queue.h
index ac26ce8..d40793e 100644
--- a/rgmanager/include/rg_queue.h
+++ b/rgmanager/include/rg_queue.h
@@ -1,9 +1,27 @@
+/*
+  Copyright Red Hat, Inc. 2004-2007
+
+  This program is free software; you can redistribute it and/or modify it
+  under the terms of the GNU General Public License version 2 as published
+  by the Free Software Foundation.
+
+  This program is distributed in the hope that it will be useful, but
+  WITHOUT ANY WARRANTY; without even the implied warranty of
+  MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
+  General Public License for more details.
+
+  You should have received a copy of the GNU General Public License
+  along with this program; see the file COPYING.  If not, write to the
+  Free Software Foundation, Inc.,  675 Mass Ave, Cambridge, 
+  MA 02139, USA.
+*/
 #ifndef _RG_QUEUE_H
 #define _RG_QUEUE_H
 #include <list.h>
 #include <stdint.h>
 #include <sys/time.h>
 #include <unistd.h>
+#include <magmamsg.h>
 
 
 /** 
@@ -15,12 +33,12 @@ typedef struct _request {
 	uint32_t	rr_request;		/** Request */
 	uint32_t	rr_errorcode;		/** Error condition */
 	uint32_t	rr_orig_request;	/** Original request */
-	uint32_t	rr_resp_fd;		/** FD to send response */
 	uint64_t	rr_target;		/** Target node */
 	uint32_t	rr_arg0;		/** Integer argument */
 	uint32_t	rr_arg1;		/** Integer argument */
+	uint32_t	rr_arg2;		/** Integer argument */
 	uint32_t	rr_line;		/** Line no */
-	uint32_t	_pad_;			/** pad */
+	uint32_t	rr_resp_fd;		/** FD to send response */
 	char 		*rr_file;		/** Who made req */
 	time_t		rr_when;		/** time to execute */
 } request_t;
@@ -41,5 +59,7 @@ int rq_queue_empty(request_t **q);
 void rq_free(request_t *foo);
 
 void forward_request(request_t *req);
+void forward_message(int fd, void *msg, uint64_t nodeid);
+
 
 #endif
diff --git a/rgmanager/include/sets.h b/rgmanager/include/sets.h
new file mode 100644
index 0000000..8cc271b
--- /dev/null
+++ b/rgmanager/include/sets.h
@@ -0,0 +1,39 @@
+/*
+  Copyright Red Hat, Inc. 2007
+
+  This program is free software; you can redistribute it and/or modify it
+  under the terms of the GNU General Public License version 2 as published
+  by the Free Software Foundation.
+
+  This program is distributed in the hope that it will be useful, but
+  WITHOUT ANY WARRANTY; without even the implied warranty of
+  MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
+  General Public License for more details.
+
+  You should have received a copy of the GNU General Public License
+  along with this program; see the file COPYING.  If not, write to the
+  Free Software Foundation, Inc.,  675 Mass Ave, Cambridge, 
+  MA 02139, USA.
+*/
+/**
+ @file sets.h - Header file for sets.c
+ @author Lon Hohberger <lhh at redhat.com>
+ */
+#ifndef _SETS_H
+#define _SETS_H
+
+#include <stdint.h>
+typedef uint64_t set_type_t;
+
+int s_add(set_type_t *, int *, set_type_t);
+int s_union(set_type_t *, int, set_type_t *,
+	    int, set_type_t **, int *);
+
+int s_intersection(set_type_t *, int, set_type_t *,
+		   int, set_type_t **, int *);
+int s_delta(set_type_t *, int, set_type_t *,
+	    int, set_type_t **, int *);
+int s_subtract(set_type_t *, int, set_type_t *, int, set_type_t **, int *);
+int s_shuffle(set_type_t *, int);
+
+#endif
diff --git a/rgmanager/man/clusvcadm.8 b/rgmanager/man/clusvcadm.8
index dcc5691..20ae823 100644
--- a/rgmanager/man/clusvcadm.8
+++ b/rgmanager/man/clusvcadm.8
@@ -49,12 +49,8 @@ service
 Lock the local resource group manager.  This should only be used if the 
 administrator intends to perform a global, cluster-wide shutdown.  This
 prevents starting resource groups on the local node, ensuring 
-services will not fail over during the shutdown of the cluster.  Generally,
-administrators should use the
-.B
-clushutdown(8)
-command to accomplish this.  Once the cluster quorum is dissolved, this
-state is reset.
+services will not fail over during the shutdown of the cluster.
+Once the cluster quorum is dissolved, this state is reset.
 .IP "\-m <member>"
 When used in conjunction with either the
 .B
@@ -88,11 +84,10 @@ service
 until a member transition or until it is enabled again.
 .IP \-u
 Unlock the cluster's service managers.  This allows services to transition
-again.  It will be necessary to re-enable all services in the stopped state
-if this is run after \fB clushutdown(8)\fR.
+again. 
 
 .IP \-v
 Display version information and exit.
 
 .SH "SEE ALSO"
-clustat(8), clushutdown(8)
+clustat(8)
diff --git a/rgmanager/src/clulib/Makefile b/rgmanager/src/clulib/Makefile
index 343c2e9..5531e82 100644
--- a/rgmanager/src/clulib/Makefile
+++ b/rgmanager/src/clulib/Makefile
@@ -30,7 +30,8 @@ install: all
 uninstall:
 
 libclulib.a: clulog.o daemon_init.o signals.o msgsimple.o \
-		vft.o gettid.o rg_strings.o wrap_lock.o
+		vft.o gettid.o rg_strings.o wrap_lock.o \
+		sets.o
 	${AR} cru $@ $^
 	ranlib $@
 
diff --git a/rgmanager/src/clulib/rg_strings.c b/rgmanager/src/clulib/rg_strings.c
index 4728789..5d561b7 100644
--- a/rgmanager/src/clulib/rg_strings.c
+++ b/rgmanager/src/clulib/rg_strings.c
@@ -1,35 +1,179 @@
-const char *rg_state_strings[] = {
-	"stopped",
-	"starting",
-	"started",
-	"stopping",
-	"failed",
-	"uninitialized",
-	"checking",
-	"recoverable",
-	"recovering",
-	"disabled",
-	""
+/*
+  Copyright Red Hat, Inc. 2004-2006
+
+  This program is free software; you can redistribute it and/or modify it
+  under the terms of the GNU General Public License as published by the
+  Free Software Foundation; either version 2, or (at your option) any
+  later version.
+
+  This program is distributed in the hope that it will be useful, but
+  WITHOUT ANY WARRANTY; without even the implied warranty of
+  MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
+  General Public License for more details.
+
+  You should have received a copy of the GNU General Public License
+  along with this program; see the file COPYING.  If not, write to the
+  Free Software Foundation, Inc.,  675 Mass Ave, Cambridge, 
+  MA 02139, USA.
+*/
+#include <res-ocf.h>
+#include <resgroup.h>
+
+struct string_val {
+	int val;
+	char *str;
+};
+
+
+const struct string_val rg_error_strings[] = {
+	{ RG_EEXCL,	"Service not runnable: cannot run exclusive" },
+	{ RG_EDOMAIN,   "Service not runnable: restricted failover domain offline" },
+	{ RG_ESCRIPT,   "S/Lang Script Error" },
+	{ RG_EFENCE,    "Fencing operation pending; try again later" },
+	{ RG_ENODE,     "Target node dead / nonexistent" },
+	{ RG_ERUN,      "Service is already running" },
+	{ RG_EQUORUM,	"Operation requires quorum" },
+	{ RG_EINVAL,	"Invalid operation for resource" },
+	{ RG_EDEPEND,	"Operation violates dependency rule" },
+	{ RG_EAGAIN,	"Temporary failure; try again" },
+	{ RG_EDEADLCK,	"Operation would cause a deadlock" },
+	{ RG_ENOSERVICE,"Service does not exist" },
+	{ RG_EFORWARD,	"Service not mastered locally" },
+	{ RG_EABORT,	"Aborted; service failed" },
+	{ RG_EFAIL,	"Failure" },
+	{ RG_ESUCCESS,	"Success" },
+	{ RG_YES,	"Yes" },
+	{ RG_NO, 	"No" },
+	{ 0,		NULL }
+};
+
+
+const struct string_val rg_req_strings[] = {
+	{RG_SUCCESS, "success" },
+	{RG_FAIL, "fail"},
+	{RG_START, "start"},
+	{RG_STOP, "stop"},
+	{RG_STATUS, "status"},
+	{RG_DISABLE, "disable"},
+	{RG_STOP_RECOVER, "stop (recovery)"},
+	{RG_START_RECOVER, "start (recovery)"},
+	{RG_RESTART, "restart"},
+	{RG_EXITING, "exiting"},
+	{RG_INIT, "initialize"},
+	{RG_ENABLE, "enable"},
+	{RG_STATUS_NODE, "status inquiry"},
+	{RG_RELOCATE, "relocate"},
+	{RG_CONDSTOP, "conditional stop"},
+	{RG_CONDSTART, "conditional start"},
+	{RG_START_REMOTE,"remote start"},
+	{RG_STOP_USER, "user stop"},
+	{RG_STOP_EXITING, "stop (shutdown)"},
+	{RG_LOCK, "locking"},
+	{RG_UNLOCK, "unlocking"},
+	{RG_QUERY_LOCK, "lock status inquiry"},
+	//{RG_MIGRATE, "migrate"},
+	//{RG_STATUS_INQUIRY, "out of band service status inquiry"},
+	{RG_NONE, "none"},
+	{0, NULL}
+};
+
+
+const struct string_val rg_state_strings[] = {
+	{RG_STATE_STOPPED, "stopped"},
+	{RG_STATE_STARTING, "starting"},
+	{RG_STATE_STARTED, "started"},
+	{RG_STATE_STOPPING, "stopping"},
+	{RG_STATE_FAILED, "failed"},
+	{RG_STATE_UNINITIALIZED, "uninitialized"},
+	{RG_STATE_CHECK, "checking"},
+	{RG_STATE_ERROR, "recoverable"},
+	{RG_STATE_RECOVER, "recovering"},
+	{RG_STATE_DISABLED, "disabled"},
+	//{RG_STATE_MIGRATE, "migrating"},
+	{0, NULL}
 };
 
-const char *rg_req_strings[] = {
-	"success",
-	"fail",
-	"start",
-	"stop",
-	"status",
-	"disable",
-	"stop (recovery)",
-	"start (recovery)",
-	"restart",
-	"exiting",
-	"initialize",
-	"enable",
-	"status inquiry",
-	"relocate",
-	"conditional stop",
-	"conditional start",
-	"remote start",
-	"user stop",
-	""
+
+const struct string_val agent_ops[] = {
+	{RS_START, "start"},
+	{RS_STOP, "stop"},
+	{RS_STATUS, "status"},
+	{RS_RESINFO, "resinfo"},
+	{RS_RESTART, "restart"},
+	{RS_RELOAD, "reload"},
+	{RS_CONDRESTART, "condrestart"},		/* Unused */
+	{RS_RECOVER, "recover"},		
+	{RS_CONDSTART, "condstart"},
+	{RS_CONDSTOP, "condstop"},
+	{RS_MONITOR, "monitor"},
+	{RS_META_DATA, "meta-data"},		/* printenv */
+	{RS_VALIDATE, "validate-all"},
+	//{RS_MIGRATE, "migrate"},
+	{RS_RECONFIG, "reconfig"},
+	{0 , NULL}
 };
+
+
+static inline const char *
+rg_search_table(const struct string_val *table, int val)
+{
+	int x;
+
+	for (x = 0; table[x].str != NULL; x++) {
+		if (table[x].val == val) {
+			return table[x].str;
+		}
+	}
+
+	return "Unknown";
+}
+
+
+static inline int
+rg_search_table_by_str(const struct string_val *table, const char *val)
+{
+	int x;
+
+	for (x = 0; table[x].str != NULL; x++) {
+		if (!strcasecmp(table[x].str, val))
+			return table[x].val;
+	}
+
+	return -1;
+}
+
+
+
+const char *
+rg_strerror(int val)
+{
+	return rg_search_table(rg_error_strings, val);
+}
+	
+const char *
+rg_state_str(int val)
+{
+	return rg_search_table(rg_state_strings, val);
+}
+
+
+int
+rg_state_str_to_id(const char *val)
+{
+	return rg_search_table_by_str(rg_state_strings, val);
+}
+
+
+
+const char *
+rg_req_str(int val)
+{
+	return rg_search_table(rg_req_strings, val);
+}
+
+
+const char *
+agent_op_str(int val)
+{
+	return rg_search_table(agent_ops, val);
+}
diff --git a/rgmanager/src/clulib/sets.c b/rgmanager/src/clulib/sets.c
new file mode 100644
index 0000000..f734064
--- /dev/null
+++ b/rgmanager/src/clulib/sets.c
@@ -0,0 +1,370 @@
+/*
+  Copyright Red Hat, Inc. 2007
+
+  This program is free software; you can redistribute it and/or modify it
+  under the terms of the GNU General Public License version 2 as published
+  by the Free Software Foundation.
+
+  This program is distributed in the hope that it will be useful, but
+  WITHOUT ANY WARRANTY; without even the implied warranty of
+  MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
+  General Public License for more details.
+
+  You should have received a copy of the GNU General Public License
+  along with this program; see the file COPYING.  If not, write to the
+  Free Software Foundation, Inc.,  675 Mass Ave, Cambridge, 
+  MA 02139, USA.
+*/
+/**
+ @file sets.c - Order-preserving set functions (union / intersection / delta)
+                (designed for integer types; a la int, uint64_t, etc...)
+ @author Lon Hohberger <lhh at redhat.com>
+ */
+#include <stdio.h>
+#include <malloc.h>
+#include <string.h>
+#include <stdlib.h>
+#include <sets.h>
+#include <sys/time.h>
+
+
+/**
+ Add a value to a set.  This function disregards an add if the value is already
+ in the set.  Note that the maximum length of set s must be preallocated; this
+ function doesn't do error or bounds checking. 
+
+ @param s		Set to modify
+ @param curlen		Current length (modified if added)
+ @param val		Value to add
+ @return		0 if not added, 1 if added
+ */
+int
+s_add(set_type_t *s, int *curlen, set_type_t val)
+{
+	int idx=0;
+
+	for (; idx < *curlen; idx++)
+		if (s[idx] == val)
+			return 0;
+	s[*curlen] = val;
+	++(*curlen);
+	return 1;
+}
+
+
+/**
+ Union-set function.  Allocates and returns a new set which is the union of
+ the two given sets 'left' and 'right'.  Also returns the new set length.
+
+ @param left		Left set - order is preserved on this set; that is,
+			this is the set where the caller cares about ordering.
+ @param ll		Length of left set.
+ @param right		Right set - order is not preserved on this set during
+			the union operation
+ @param rl		Length of right set
+ @param ret		Return set.  Should * not * be preallocated.
+ @param retl		Return set length.  Should be ready to accept 1 integer
+			upon calling this function
+ @return 		0 on success, -1 on error
+ */
+int
+s_union(set_type_t *left, int ll, set_type_t *right, int rl,
+	set_type_t **ret, int *retl)
+{
+	int l, r, cnt = 0, total;
+
+	total = ll + rl; /* Union will never exceed both sets combined */
+
+	*ret = malloc(sizeof(set_type_t)*total);
+	if (!*ret) {
+		return -1;
+	}
+	memset((void *)(*ret), 0, sizeof(set_type_t)*total);
+
+	cnt = 0;
+
+	/* Add all the ones on the left */
+	for (l = 0; l < ll; l++)
+		s_add(*ret, &cnt, left[l]);
+
+	/* Add the ones on the right */
+	for (r = 0; r < rl; r++)
+		s_add(*ret, &cnt, right[r]);
+
+	*retl = cnt;
+
+	return 0;
+}
+
+
+/**
+ Intersection-set function.  Allocates and returns a new set which is the 
+ intersection of the two given sets 'left' and 'right'.  Also returns the new
+ set length.
+
+ @param left		Left set - order is preserved on this set; that is,
+			this is the set where the caller cares about ordering.
+ @param ll		Length of left set.
+ @param right		Right set - order is not preserved on this set during
+			the intersection operation
+ @param rl		Length of right set
+ @param ret		Return set.  Should * not * be preallocated.
+ @param retl		Return set length.  Should be ready to accept 1 integer
+			upon calling this function
+ @return 		0 on success, -1 on error
+ */
+int
+s_intersection(set_type_t *left, int ll, set_type_t *right, int rl,
+	       set_type_t **ret, int *retl)
+{
+	int l, r, cnt = 0, total;
+
+	total = ll; /* Intersection will never exceed one of the two set
+		       sizes */
+
+	*ret = malloc(sizeof(set_type_t)*total);
+	if (!*ret) {
+		return -1;
+	}
+	memset((void *)(*ret), 0, sizeof(set_type_t)*total);
+
+	cnt = 0;
+	/* Find duplicates */
+	for (l = 0; l < ll; l++) {
+		for (r = 0; r < rl; r++) {
+			if (left[l] != right[r])
+				continue;
+			if (s_add(*ret, &cnt, right[r]))
+				break;
+		}
+	}
+
+	*retl = cnt;
+	return 0;
+}
+
+
+/**
+ Delta-set function.  Allocates and returns a new set which is the delta (i.e.
+ numbers not in both sets) of the two given sets 'left' and 'right'.  Also
+ returns the new set length.
+
+ @param left		Left set - order is preserved on this set; that is,
+			this is the set where the caller cares about ordering.
+ @param ll		Length of left set.
+ @param right		Right set - order is not preserved on this set during
+			the delta operation
+ @param rl		Length of right set
+ @param ret		Return set.  Should * not * be preallocated.
+ @param retl		Return set length.  Should be ready to accept 1 integer
+			upon calling this function
+ @return 		0 on success, -1 on error
+ */
+int
+s_delta(set_type_t *left, int ll, set_type_t *right, int rl,
+	set_type_t **ret, int *retl)
+{
+	int l, r, cnt = 0, total, found;
+
+	total = ll + rl; /* Delta will never exceed both sets combined */
+
+	*ret = malloc(sizeof(set_type_t)*total);
+	if (!*ret) {
+		return -1;
+	}
+	memset((void *)(*ret), 0, sizeof(set_type_t)*total);
+
+	cnt = 0;
+
+	/* not efficient, but it works */
+	/* Add all the ones on the left */
+	for (l = 0; l < ll; l++) {
+		found = 0;
+		for (r = 0; r < rl; r++) {
+			if (right[r] == left[l]) {
+				found = 1;
+				break;
+			}
+		}
+		
+		if (found)
+			continue;
+		s_add(*ret, &cnt, left[l]);
+	}
+
+
+	/* Add all the ones on the right*/
+	for (r = 0; r < rl; r++) {
+		found = 0;
+		for (l = 0; l < ll; l++) {
+			if (right[r] == left[l]) {
+				found = 1;
+				break;
+			}
+		}
+		
+		if (found)
+			continue;
+		s_add(*ret, &cnt, right[r]);
+	}
+
+	*retl = cnt;
+
+	return 0;
+}
+
+
+/**
+ Subtract-set function.  Allocates and returns a new set which is the
+ subtraction of the right set from the left set.
+ Also returns the new set length.
+
+ @param left		Left set - order is preserved on this set; that is,
+			this is the set where the caller cares about ordering.
+ @param ll		Length of left set.
+ @param right		Right set - order is not preserved on this set during
+			the subtract operation
+ @param rl		Length of right set
+ @param ret		Return set.  Should * not * be preallocated.
+ @param retl		Return set length.  Should be ready to accept 1 integer
+			upon calling this function
+ @return 		0 on success, -1 on error
+ */
+int
+s_subtract(set_type_t *left, int ll, set_type_t *right, int rl,
+	   set_type_t **ret, int *retl)
+{
+	int l, r, cnt = 0, total, found;
+
+	total = ll; /* Result will never exceed left set length */
+
+	*ret = malloc(sizeof(set_type_t)*total);
+	if (!*ret) {
+		return -1;
+	}
+	memset((void *)(*ret), 0, sizeof(set_type_t)*total);
+
+	cnt = 0;
+
+	/* not efficient, but it works */
+	for (l = 0; l < ll; l++) {
+		found = 0;
+		for (r = 0; r < rl; r++) {
+			if (right[r] == left[l]) {
+				found = 1;
+				break;
+			}
+		}
+		
+		if (found)
+			continue;
+		s_add(*ret, &cnt, left[l]);
+	}
+
+	*retl = cnt;
+
+	return 0;
+}
+
+
+/**
+ Shuffle-set function.  Weakly randomizes ordering of a set in-place.
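+/* Sketch of intended use (the service name and target IDs are
+   illustrative; assumes the usual rgmanager convention of 0 on
+   success):
+
+     uint64_t targets[] = { 1, 2 }, new_owner;
+     if (service_op_start("service:foo", targets, 2, &new_owner) == 0)
+             ;       // started; new_owner holds the owning node ID
+*/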
+
+ @param set		Set to randomize
+ @param sl		Length of set
+ @return		0
+ */
+int
+s_shuffle(set_type_t *set, int sl)
+{
+	int x, newidx;
+	unsigned r_state = 0;
+	set_type_t t;
+	struct timeval tv;
+
+	gettimeofday(&tv, NULL);
+	r_state = (int)(tv.tv_usec);
+
+	for (x = 0; x < sl; x++) {
+		newidx = (rand_r(&r_state) % sl);
+		if (newidx == x)
+			continue;
+		t = set[x];
+		set[x] = set[newidx];
+		set[newidx] = t;
+	}
+
+	return 0;
+}
+
+
+#ifdef STANDALONE
+/* Testbed */
+/*
+  gcc -o sets sets.c -DSTANDALONE -ggdb -I../../include \
+       -Wall -Werror -Wstrict-prototypes -Wextra
+ */
+int
+main(int __attribute__ ((unused)) argc, char __attribute__ ((unused)) **argv)
+{
+	set_type_t a[] = { 1, 2, 3, 3, 3, 2, 2, 3 };
+	set_type_t b[] = { 2, 3, 4 };
+	set_type_t *i;
+	int ilen = 0, x;
+
+	s_union(a, 8, b, 3, &i, &ilen);
+
+	/* Should return length of 4 - { 1 2 3 4 } */
+	printf("set_union [%d] = ", ilen);
+	for ( x = 0; x < ilen; x++) {
+		printf("%d ", (int)i[x]);
+	}
+	printf("\n");
+
+	s_shuffle(i, ilen);
+	printf("shuffled [%d] = ", ilen);
+	for ( x = 0; x < ilen; x++) {
+		printf("%d ", (int)i[x]);
+	}
+	printf("\n");
+
+
+	free(i);
+
+	/* Should return length of 2 - { 2 3 } */
+	s_intersection(a, 8, b, 3, &i, &ilen);
+
+	printf("set_intersection [%d] = ", ilen);
+	for ( x = 0; x < ilen; x++) {
+		printf("%d ", (int)i[x]);
+	}
+	printf("\n");
+
+	free(i);
+
+	/* Should return length of 2 - { 1 4 } */
+	s_delta(a, 8, b, 3, &i, &ilen);
+
+	printf("set_delta [%d] = ", ilen);
+	for ( x = 0; x < ilen; x++) {
+		printf("%d ", (int)i[x]);
+	}
+	printf("\n");
+
+	free(i);
+
+	/* Should return length of 1 - { 1 } */
+	s_subtract(a, 8, b, 3, &i, &ilen);
+
+	printf("set_subtract [%d] = ", ilen);
+	for ( x = 0; x < ilen; x++) {
+		printf("%d ", (int)i[x]);
+	}
+	printf("\n");
+
+	free(i);
+
+
+	return 0;
+}
+#endif
diff --git a/rgmanager/src/clulib/signals.c b/rgmanager/src/clulib/signals.c
index 1d49ee5..fa9f4a6 100644
--- a/rgmanager/src/clulib/signals.c
+++ b/rgmanager/src/clulib/signals.c
@@ -1,3 +1,21 @@
+/*
+  Copyright Red Hat, Inc. 2003-2006
+
+  This program is free software; you can redistribute it and/or modify it
+  under the terms of the GNU General Public License as published by the
+  Free Software Foundation; either version 2, or (at your option) any
+  later version.
+
+  This program is distributed in the hope that it will be useful, but
+  WITHOUT ANY WARRANTY; without even the implied warranty of
+  MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
+  General Public License for more details.
+
+  You should have received a copy of the GNU General Public License
+  along with this program; see the file COPYING.  If not, write to the
+  Free Software Foundation, Inc.,  675 Mass Ave, Cambridge, 
+  MA 02139, USA.
+*/
 #include <signal.h>
 #include <stdlib.h>
 #include <string.h>
diff --git a/rgmanager/src/clulib/tmgr.c b/rgmanager/src/clulib/tmgr.c
new file mode 100644
index 0000000..2565f26
--- /dev/null
+++ b/rgmanager/src/clulib/tmgr.c
@@ -0,0 +1,128 @@
+/*
+  Copyright Red Hat, Inc. 2007
+  Copyright Crosswalk 2006-2007
+
+  This program is free software; you can redistribute it and/or modify it
+  under the terms of the GNU General Public License as published by the
+  Free Software Foundation; either version 2, or (at your option) any
+  later version.
+
+  This program is distributed in the hope that it will be useful, but
+  WITHOUT ANY WARRANTY; without even the implied warranty of
+  MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
+  General Public License for more details.
+
+  You should have received a copy of the GNU General Public License
+  along with this program; see the file COPYING.  If not, write to the
+  Free Software Foundation, Inc.,  675 Mass Ave, Cambridge, 
+  MA 02139, USA.
+*/
+#ifdef WRAP_THREADS
+#include <stdio.h>
+#include <sys/types.h>
+#include <gettid.h>
+#include <pthread.h>
+#include <string.h>
+#include <errno.h>
+#include <malloc.h>
+#include <signal.h>
+#include <list.h>
+#include <execinfo.h>
+
+typedef struct _thr {
+	list_head();
+	void *(*fn)(void *arg);
+	char **name;
+	pthread_t th;
+} mthread_t;
+
+static mthread_t *_tlist = NULL;
+static int _tcount = 0;
+static pthread_rwlock_t _tlock = PTHREAD_RWLOCK_INITIALIZER;
+
+void
+dump_thread_states(FILE *fp)
+{
+	int x;
+	mthread_t *curr;
+	fprintf(fp, "Thread Information\n");
+	pthread_rwlock_rdlock(&_tlock);
+	list_for(&_tlist, curr, x) {
+		fprintf(fp, "  Thread #%d   id: %d   function: %s\n",
+			x, (unsigned)curr->th, curr->name[0]);
+	}
+	pthread_rwlock_unlock(&_tlock);
+	fprintf(fp, "\n\n");
+}
+
+
+int __real_pthread_create(pthread_t *, const pthread_attr_t *,
+			  void *(*)(void*), void *);
+int
+__wrap_pthread_create(pthread_t *th, const pthread_attr_t *attr,
+	 	      void *(*start_routine)(void*),
+	 	      void *arg)
+{
+	void *fn = start_routine;
+	mthread_t *new;
+	int ret;
+
+	new = malloc(sizeof (*new));
+
+	ret = __real_pthread_create(th, attr, start_routine, arg);
+	if (ret) {
+		if (new)
+			free(new);
+		return ret;
+	}
+
+	if (new) {
+		new->th = *th;
+		new->fn = start_routine;
+		new->name = backtrace_symbols(&fn, 1);
+		pthread_rwlock_wrlock(&_tlock);
+		list_insert(&_tlist, new);
+		++_tcount;
+		pthread_rwlock_unlock(&_tlock);
+	}
+
+	return ret;
+}
+
+
+void __real_pthread_exit(void *);
+void
+__wrap_pthread_exit(void *exitval)
+{
+	mthread_t *old;
+	int ret = 0, found = 0;
+	pthread_t me = pthread_self();
+
+	pthread_rwlock_rdlock(&_tlock);
+	list_for(&_tlist, old, ret) {
+		if (old->th == me) {
+			found = 1;
+			break;
+		}
+	}
+	if (!found)
+		old = NULL;
+	pthread_rwlock_unlock(&_tlock);
+
+	if (!old)
+		__real_pthread_exit(exitval);
+
+	pthread_rwlock_wrlock(&_tlock);
+	list_remove(&_tlist, old);
+	--_tcount;
+	pthread_rwlock_unlock(&_tlock);
+
+	if (old->name)
+		free(old->name);
+	free(old);
+	__real_pthread_exit(exitval);
+}
+#endif
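
Note: the __wrap_/__real_ pairs in tmgr.c rely on GNU ld's symbol
wrapping: when the final link is given --wrap=pthread_create, every
undefined reference to pthread_create resolves to
__wrap_pthread_create, while __real_pthread_create resolves to the
original libpthread symbol.  A minimal sketch of the link invocation
(the actual WRAP_THREADS build flags are not part of this diff):

    gcc -o clurgmgrd ... -DWRAP_THREADS \
        -Wl,--wrap=pthread_create -Wl,--wrap=pthread_exit -lpthread
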
diff --git a/rgmanager/src/clulib/vft.c b/rgmanager/src/clulib/vft.c
index 859fc3e..29e51bc 100644
--- a/rgmanager/src/clulib/vft.c
+++ b/rgmanager/src/clulib/vft.c
@@ -187,6 +187,9 @@ static void
 close_all(int *fds)
 {
 	int x;
+
+	if (!fds)
+		return;
 	for (x = 0; fds[x] != -1; x++) {
 		msg_close(fds[x]);
 	}
@@ -1246,10 +1249,10 @@ vf_write(cluster_member_list_t *membership, uint32_t flags, char *keyid,
 	 void *data, uint32_t datalen)
 {
 	uint64_t nodeid;
-	int *peer_fds;
+	int *peer_fds = NULL;
 	int count;
-	key_node_t *key_node;
-	vf_msg_t *join_view;
+	key_node_t *key_node = NULL;
+	vf_msg_t *join_view = NULL;
 	int remain = 0, x, y, rv = 1;
 	uint32_t totallen;
 	struct timeval start, end, dif;
@@ -1263,19 +1266,11 @@ vf_write(cluster_member_list_t *membership, uint32_t flags, char *keyid,
 		return -1;
 
 	pthread_mutex_lock(&vf_mutex);
-	/* Obtain cluster lock on it. */
-	snprintf(lock_name, sizeof(lock_name), "usrm::vf");
-	l = clu_lock(lock_name, CLK_EX, &lockp);
-	if (l < 0) {
-		pthread_mutex_unlock(&vf_mutex);
-		return l;
-	}
 
 	/* set to -1 */
 	count = sizeof(int) * (membership->cml_count + 1);
 	peer_fds = malloc(count);
 	if(!peer_fds) {
-		clu_unlock(lock_name, lockp);
 		pthread_mutex_unlock(&vf_mutex);
 		return -1;
 	}
@@ -1285,6 +1280,14 @@ vf_write(cluster_member_list_t *membership, uint32_t flags, char *keyid,
 	getuptime(&start);
 
 retry_top:
+	/* Obtain cluster lock on it. */
+	snprintf(lock_name, sizeof(lock_name), "usrm::vf");
+	l = clu_lock(lock_name, CLK_EX, &lockp);
+	if (l < 0) {
+		pthread_mutex_unlock(&vf_mutex);
+		return l;
+	}
+
 	/*
 	 * Connect to everyone, except ourself.  We separate this from the
 	 * initial send cycle because the connect cycle can cause timeouts
@@ -1316,29 +1319,40 @@ retry_top:
 			printf("VF: Connect to %d failed: %s\n", (int)nodeid,
 			       strerror(errno));
 #endif
-			if (flags & VFF_RETRY)
-				goto retry_top;
 			if (flags & VFF_IGN_CONN_ERRORS)
 				continue;
-			close_all(peer_fds);
-			free(peer_fds);
 
-			clu_unlock(lock_name, lockp);
-			pthread_mutex_unlock(&vf_mutex);
-			return -1;
+			if (flags & VFF_RETRY) {
+				clu_unlock(lock_name, lockp);
+				lockp = NULL;
+				usleep(20000); /* XXX */
+				goto retry_top;
+			}
+
+			/* No retry and no connection ignoring */
+			rv = -1;
+			goto out_cleanup;
 		}
 		++y;
 	}
 
+	if (y == 0) {
+		/* No hosts online / all refused connection (including
+		   ourself) - guess we're good */
+		rv = 1;
+		goto out_cleanup;
+	}
+
 	pthread_mutex_lock(&key_list_mutex);
 	key_node = kn_find_key(keyid);
 	if (!key_node) {
 
 		if ((vf_key_init_nt(keyid, 10, NULL, NULL) < 0)) {
 			pthread_mutex_unlock(&key_list_mutex);
-			clu_unlock(lock_name, lockp);
-			pthread_mutex_unlock(&vf_mutex);
-			return -1;
+			/* key_list_mutex is held only in this block, so
+			   it is not released in the cleanup section */
+			rv = -1;
+			goto out_cleanup;
 		}
 		key_node = kn_find_key(keyid);
 		assert(key_node);
@@ -1350,9 +1364,8 @@ retry_top:
 	pthread_mutex_unlock(&key_list_mutex);
 
 	if (!join_view) {
-		clu_unlock(lock_name, lockp);
-		pthread_mutex_unlock(&vf_mutex);
-		return -1;
+		rv = -1;
+		goto out_cleanup;
 	}
 
 #ifdef DEBUG
@@ -1386,11 +1399,8 @@ retry_top:
 	 */
 	if (ret_status == -1) {
 		vf_send_abort(peer_fds);
-		close_all(peer_fds);
-		free(join_view);
-		clu_unlock(lock_name, lockp);
-		pthread_mutex_unlock(&vf_mutex);
-		return -1;
+		rv = -1;
+		goto out_cleanup;
 	}
 
 #ifdef DEBUG
@@ -1408,21 +1418,28 @@ retry_top:
 #endif
 	}
 
+
+out_cleanup:
 	/*
 	 * Clean up
 	 */
-	close_all(peer_fds);
+	if (peer_fds) {
+		close_all(peer_fds);
+		free(peer_fds);
+	}
 
 	/*
 	 * unanimous returns 1 for true; 0 for false, so negate it and
 	 * return our value...
 	 */
-	free(join_view);
-	free(peer_fds);
-	clu_unlock(lock_name, lockp);
+	if (join_view)
+		free(join_view);
+	if (lockp)
+		clu_unlock(lock_name, lockp);
+
 	pthread_mutex_unlock(&vf_mutex);
 
-	if (rv) {
+	if (rv >= 0) {
 		getuptime(&end);
 
 		dif.tv_usec = end.tv_usec - start.tv_usec;
@@ -1439,7 +1456,7 @@ retry_top:
 #endif
 	}
 
-	return (rv?0:-1);
+	return ((rv>=0)?0:-1);
 }
 
 
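Note: the vf_write() rework above funnels every error path through a
single out_cleanup label and moves the "usrm::vf" cluster lock inside
the retry loop, so a VFF_RETRY pass drops and re-takes the lock rather
than spinning while holding it.  A minimal sketch of the cleanup idiom
(not the actual function):

    int rv = 0;
    int *fds = NULL;
    void *lockp = NULL;

    fds = malloc(count);
    if (!fds) {
            rv = -1;
            goto out_cleanup;
    }
    /* ... each subsequent failure sets rv and jumps below ... */

out_cleanup:
    if (fds)
            free(fds);
    if (lockp)
            clu_unlock(lock_name, lockp);
    return rv;
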
diff --git a/rgmanager/src/daemons/Makefile b/rgmanager/src/daemons/Makefile
index f4413b7..55b37d5 100644
--- a/rgmanager/src/daemons/Makefile
+++ b/rgmanager/src/daemons/Makefile
@@ -17,12 +17,13 @@ include ${top_srcdir}/make/defines.mk
 INCLUDE += -I $(top_srcdir)/include
 
 CFLAGS+= -g -I${incdir} -I/usr/include/libxml2 -L${libdir}
+CFLAGS+= -I/usr/include/slang
 
 CFLAGS+= -g -Wstrict-prototypes -Wshadow -fPIC -D_GNU_SOURCE
 
 CFLAGS+= -L ../clulib 
 
-LDFLAGS+= -lclulib -lxml2 -lmagmamsg -lmagma -lpthread -ldl
+LDFLAGS+= -lclulib -lxml2 -lmagmamsg -lmagma -lpthread -ldl -lslang
 TARGETS=clurgmgrd clurmtabd rg_test
 
 all: ${TARGETS}
@@ -40,8 +41,9 @@ uninstall:
 
 clurgmgrd: rg_thread.o rg_locks.o main.o groups.o rg_state.o \
 		rg_queue.o members.o rg_forward.o reslist.o \
-		resrules.o restree.o fo_domain.o nodeevent.o \
-		watchdog.o
+		resrules.o restree.o fo_domain.o \
+		watchdog.o restart_counter.o event_config.o \
+		slang_event.o rg_event.o service_op.o
 	$(CC) -o $@ $^ $(INCLUDE) $(CFLAGS) $(LDFLAGS) -lccs
 
 #
@@ -59,7 +61,8 @@ clurgmgrd: rg_thread.o rg_locks.o main.o groups.o rg_state.o \
 # packages should run 'make check' as part of the build process.
 #
 rg_test: rg_locks-noccs.o test-noccs.o reslist-noccs.o \
-		resrules-noccs.o restree-noccs.o fo_domain-noccs.o
+		resrules-noccs.o restree-noccs.o fo_domain-noccs.o \
+		event_config-noccs.o restart_counter.o
 	$(CC) -o $@ $^ $(INCLUDE) $(CFLAGS) -llalloc $(LDFLAGS)
 
 clurmtabd: clurmtabd.o clurmtabd_lib.o
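
Note: the new -I/usr/include/slang and -lslang flags pull in the
S-Lang interpreter that slang_event.o links against, so building this
branch now requires the S-Lang development headers (slang-devel on
RHEL) to be installed.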
diff --git a/rgmanager/src/daemons/event_config.c b/rgmanager/src/daemons/event_config.c
new file mode 100644
index 0000000..c679866
--- /dev/null
+++ b/rgmanager/src/daemons/event_config.c
@@ -0,0 +1,540 @@
+/**
+  Copyright Red Hat, Inc. 2002-2007
+
+  This program is free software; you can redistribute it and/or modify it
+  under the terms of the GNU General Public License version 2 as published
+  by the Free Software Foundation.
+
+  This program is distributed in the hope that it will be useful, but
+  WITHOUT ANY WARRANTY; without even the implied warranty of
+  MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
+  General Public License for more details.
+
+  You should have received a copy of the GNU General Public License
+  along with this program; see the file COPYING.  If not, write to the
+  Free Software Foundation, Inc.,  675 Mass Ave, Cambridge,
+  MA 02139, USA.
+*/
+/** @file
+ * CCS event parsing, based on failover domain parsing
+ */
+#include <string.h>
+#include <list.h>
+#include <clulog.h>
+#include <resgroup.h>
+#include <restart_counter.h>
+#include <reslist.h>
+#include <ccs.h>
+#include <pthread.h>
+#include <stdlib.h>
+#include <stdio.h>
+#include <errno.h>
+#include <ctype.h>
+#include <event.h>
+
+#define CONFIG_NODE_ID_TO_NAME \
+   "/cluster/clusternodes/clusternode[@nodeid=\"%d\"]/@name"
+#define CONFIG_NODE_NAME_TO_ID \
+   "/cluster/clusternodes/clusternode[@name=\"%s\"]/@nodeid"
+
+void deconstruct_events(event_table_t **);
+void print_event(event_t *ev);
+
+//#define DEBUG
+
+#ifdef DEBUG
+#define ENTER() clulog(LOG_DEBUG, "ENTER: %s\n", __FUNCTION__)
+#define RETURN(val) {\
+	clulog(LOG_DEBUG, "RETURN: %s line=%d value=%d\n", __FUNCTION__, \
+	       __LINE__, (val));\
+	return(val);\
+}
+#else
+#define ENTER()
+#define RETURN(val) return(val)
+#endif
+
+#ifdef NO_CCS
+#define ccs_get(fd, query, ret) conf_get(query, ret)
+#endif
+
+/*
+   <events>
+     <event name="helpful_name_here" class="node"
+            node="nodeid|nodename" nodestate="up|down">
+	    slang_script_stuff();
+	    start_service();
+     </event>
+   </events>
+ */
+int
+event_match(event_t *pattern, event_t *actual)
+{
+	if (pattern->ev_type != EVENT_NONE &&
+	    actual->ev_type != pattern->ev_type)
+		return 0;
+
+	/* If there's no event class specified, the rest is
+	   irrelevant */
+	if (pattern->ev_type == EVENT_NONE)
+		return 1;
+
+	switch(pattern->ev_type) {
+	case EVENT_NODE:
+		if (pattern->ev.node.ne_nodeid >= 0 &&
+		    actual->ev.node.ne_nodeid !=
+				pattern->ev.node.ne_nodeid) {
+			return 0;
+		}
+		if (pattern->ev.node.ne_local >= 0 && 
+		    actual->ev.node.ne_local !=
+				pattern->ev.node.ne_local) {
+			return 0;
+		}
+		if (pattern->ev.node.ne_state >= 0 && 
+		    actual->ev.node.ne_state !=
+				pattern->ev.node.ne_state) {
+			return 0;
+		}
+		if (pattern->ev.node.ne_clean >= 0 && 
+		    actual->ev.node.ne_clean !=
+				pattern->ev.node.ne_clean) {
+			return 0;
+		}
+		return 1; /* All specified params match */
+	case EVENT_RG:
+		if (pattern->ev.group.rg_name[0] &&
+		    strcasecmp(actual->ev.group.rg_name, 
+			       pattern->ev.group.rg_name)) {
+			return 0;
+		}
+		if (pattern->ev.group.rg_state != (uint32_t)-1 && 
+		    actual->ev.group.rg_state !=
+				pattern->ev.group.rg_state) {
+			return 0;
+		}
+		if (pattern->ev.group.rg_owner >= 0 && 
+		    actual->ev.group.rg_owner !=
+				pattern->ev.group.rg_owner) {
+			return 0;
+		}
+		return 1;
+	case EVENT_CONFIG:
+		if (pattern->ev.config.cfg_version >= 0 && 
+		    actual->ev.config.cfg_version !=
+				pattern->ev.config.cfg_version) {
+			return 0;
+		}
+		if (pattern->ev.config.cfg_oldversion >= 0 && 
+		    actual->ev.config.cfg_oldversion !=
+				pattern->ev.config.cfg_oldversion) {
+			return 0;
+		}
+		return 1;
+	case EVENT_USER:
+		if (pattern->ev.user.u_name[0] &&
+		    strcasecmp(actual->ev.user.u_name, 
+			       pattern->ev.user.u_name)) {
+			return 0;
+		}
+		if (pattern->ev.user.u_request != 0 && 
+		    actual->ev.user.u_request !=
+				pattern->ev.user.u_request) {
+			return 0;
+		}
+		if (pattern->ev.user.u_target != 0 && 
+		    actual->ev.user.u_target !=
+				pattern->ev.user.u_target) {
+			return 0;
+		}
+		return 1;
+	default:
+		break;
+	}
+			
+	return 0;
+}
+
+
+char *
+ccs_node_id_to_name(int ccsfd, int nodeid)
+{
+	char xpath[256], *ret = 0;
+
+	snprintf(xpath, sizeof(xpath), CONFIG_NODE_ID_TO_NAME,
+		 nodeid);
+	if (ccs_get(ccsfd, xpath, &ret) == 0)
+		return ret;
+	return NULL;
+}
+
+
+int
+ccs_node_name_to_id(int ccsfd, char *name)
+{
+	char xpath[256], *ret = 0;
+	int rv = 0;
+
+	snprintf(xpath, sizeof(xpath), CONFIG_NODE_NAME_TO_ID,
+		 name);
+	if (ccs_get(ccsfd, xpath, &ret) == 0) {
+		rv = atoi(ret);
+		free(ret);
+		return rv;
+	}
+	return 0;
+}
+
+
+static void
+deconstruct_event(event_t *ev)
+{
+	if (ev->ev_script)
+		free(ev->ev_script);
+	if (ev->ev_script_file)
+		free(ev->ev_script_file);
+	if (ev->ev_name)
+		free(ev->ev_name);
+	free(ev);
+}
+
+
+static int
+get_node_event(int ccsfd, char *base, event_t *ev)
+{
+	char xpath[256], *ret = NULL;
+
+	/* Clear out the possibilities */
+	ev->ev.node.ne_nodeid = -1;
+	ev->ev.node.ne_local = -1;
+	ev->ev.node.ne_state = -1;
+	ev->ev.node.ne_clean = -1;
+
+	snprintf(xpath, sizeof(xpath), "%s/@node_id", base);
+	if (ccs_get(ccsfd, xpath, &ret) == 0) {
+		ev->ev.node.ne_nodeid = atoi(ret);
+		free(ret);
+		if (ev->ev.node.ne_nodeid <= 0)
+			return -1;
+	} else {
+		/* See if there's a node name */
+		snprintf(xpath, sizeof(xpath), "%s/@node", base);
+		if (ccs_get(ccsfd, xpath, &ret) == 0) {
+			ev->ev.node.ne_nodeid =
+				ccs_node_name_to_id(ccsfd, ret);
+			free(ret);
+			if (ev->ev.node.ne_nodeid <= 0)
+				return -1;
+		}
+	}
+
+	snprintf(xpath, sizeof(xpath), "%s/@node_state", base);
+	if (ccs_get(ccsfd, xpath, &ret) == 0) {
+		if (!strcasecmp(ret, "up")) {
+			ev->ev.node.ne_state = 1;
+		} else if (!strcasecmp(ret, "down")) {
+			ev->ev.node.ne_state = 0;
+		} else {
+			ev->ev.node.ne_state = !!atoi(ret);
+		}
+		free(ret);
+	}
+
+	snprintf(xpath, sizeof(xpath), "%s/@node_clean", base);
+	if (ccs_get(ccsfd, xpath, &ret) == 0) {
+		ev->ev.node.ne_clean = !!atoi(ret);
+		free(ret);
+	}
+
+	snprintf(xpath, sizeof(xpath), "%s/@node_local", base);
+	if (ccs_get(ccsfd, xpath, &ret) == 0) {
+		ev->ev.node.ne_local = !!atoi(ret);
+		free(ret);
+	}
+
+	return 0;
+}
+
+
+static int
+get_rg_event(int ccsfd, char *base, event_t *ev)
+{
+	char xpath[256], *ret = NULL;
+
+	/* Clear out the possibilities */
+	ev->ev.group.rg_name[0] = 0;
+	ev->ev.group.rg_state = (uint32_t)-1;
+	ev->ev.group.rg_owner = -1;
+
+	snprintf(xpath, sizeof(xpath), "%s/@service", base);
+	if (ccs_get(ccsfd, xpath, &ret) == 0) {
+		strncpy(ev->ev.group.rg_name, ret,
+			sizeof(ev->ev.group.rg_name));
+		free(ret);
+		if (!strlen(ev->ev.group.rg_name)) {
+			return -1;
+		}
+	}
+
+	snprintf(xpath, sizeof(xpath), "%s/@service_state", base);
+	if (ccs_get(ccsfd, xpath, &ret) == 0) {
+		if (!isdigit(ret[0])) {
+			ev->ev.group.rg_state =
+			       	rg_state_str_to_id(ret);
+		} else {
+			ev->ev.group.rg_state = atoi(ret);
+		}	
+		free(ret);
+	}
+
+	snprintf(xpath, sizeof(xpath), "%s/@service_owner", base);
+	if (ccs_get(ccsfd, xpath, &ret) == 0) {
+		if (!isdigit(ret[0])) {
+			ev->ev.group.rg_owner =
+			       	ccs_node_name_to_id(ccsfd, ret);
+		} else {
+			ev->ev.group.rg_owner = atoi(ret);
+		}	
+		free(ret);
+	}
+
+	return 0;
+}
+
+
+static int
+get_config_event(int __attribute__((unused)) ccsfd,
+		 char __attribute__((unused)) *base,
+		 event_t __attribute__((unused)) *ev)
+{
+	errno = ENOSYS;
+	return -1;
+}
+
+
+static event_t *
+get_event(int ccsfd, char *base, int idx, int *_done)
+{
+	event_t *ev;
+	char xpath[256];
+	char *ret = NULL;
+
+	*_done = 0;
+	snprintf(xpath, sizeof(xpath), "%s/event[%d]/@name",
+		 base, idx);
+	if (ccs_get(ccsfd, xpath, &ret) != 0) {
+		*_done = 1;
+		return NULL;
+	}
+
+	ev = malloc(sizeof(*ev));
+	if (!ev) {
+		free(ret);
+		return NULL;
+	}
+	memset(ev, 0, sizeof(*ev));
+	ev->ev_name = ret;
+
+	/* Get the script file / inline from config */
+	ret = NULL;
+	snprintf(xpath, sizeof(xpath), "%s/event[%d]/@file",
+		 base, idx);
+	if (ccs_get(ccsfd, xpath, &ret) == 0) {
+		ev->ev_script_file = ret;
+	} else {
+		snprintf(xpath, sizeof(xpath), "%s/event[%d]",
+		         base, idx);
+		if (ccs_get(ccsfd, xpath, &ret) == 0) {
+			ev->ev_script = ret;
+		} else {
+			goto out_fail;
+		}
+	}
+
+	/* Get the priority ordering (must be nonzero) */
+	ev->ev_prio = 99;
+	ret = NULL;
+	snprintf(xpath, sizeof(xpath), "%s/event[%d]/@priority",
+		 base, idx);
+	if (ccs_get(ccsfd, xpath, &ret) == 0) {
+		ev->ev_prio = atoi(ret);
+		if (ev->ev_prio <= 0 || ev->ev_prio > EVENT_PRIO_COUNT) {
+			clulog(LOG_ERR,
+			       "event %s: priority %s invalid\n",
+			       ev->ev_name, ret);
+			goto out_fail;
+		}
+		free(ret);
+	}
+
+	/* Get the event class */
+	snprintf(xpath, sizeof(xpath), "%s/event[%d]/@class",
+		 base, idx);
+	ret = NULL;
+	if (ccs_get(ccsfd, xpath, &ret) == 0) {
+		snprintf(xpath, sizeof(xpath), "%s/event[%d]",
+		 	 base, idx);
+		if (!strcasecmp(ret, "node")) {
+			ev->ev_type = EVENT_NODE;
+			if (get_node_event(ccsfd, xpath, ev) < 0)
+				goto out_fail;
+		} else if (!strcasecmp(ret, "service") ||
+			   !strcasecmp(ret, "resource") ||
+			   !strcasecmp(ret, "rg") ) {
+			ev->ev_type = EVENT_RG;
+			if (get_rg_event(ccsfd, xpath, ev) < 0)
+				goto out_fail;
+		} else if (!strcasecmp(ret, "config") ||
+			   !strcasecmp(ret, "reconfig")) {
+			ev->ev_type = EVENT_CONFIG;
+			if (get_config_event(ccsfd, xpath, ev) < 0)
+				goto out_fail;
+		} else {
+			clulog(LOG_ERR,
+			       "event %s: class %s unrecognized\n",
+			       ev->ev_name, ret);
+			goto out_fail;
+		}
+
+		free(ret);
+		ret = NULL;
+	}
+
+	return ev;
+out_fail:
+	if (ret)
+		free(ret);
+	deconstruct_event(ev);
+	return NULL;
+}
+
+
+static event_t *
+get_default_event(void)
+{
+	event_t *ev;
+	char xpath[1024];
+
+	ev = malloc(sizeof(*ev));
+	if (!ev)
+		return NULL;
+	memset(ev, 0, sizeof(*ev));
+	ev->ev_name = strdup("Default");
+
+	/* Get the script file / inline from config */
+	snprintf(xpath, sizeof(xpath), "%s/default_event_script.sl",
+		 RESOURCE_ROOTDIR);
+
+	ev->ev_prio = 100;
+	ev->ev_type = EVENT_NONE;
+	ev->ev_script_file = strdup(xpath);
+	if (!ev->ev_script_file || ! ev->ev_name) {
+		deconstruct_event(ev);
+		return NULL;
+	}
+
+	return ev;
+}
+
+
+/**
+ * similar API to failover domain
+ */
+int
+construct_events(int ccsfd, event_table_t **events)
+{
+	char xpath[256];
+	event_t *ev;
+	int x = 1, done = 0;
+
+	/* Allocate the event list table */
+	*events = malloc(sizeof(event_table_t) +
+			 sizeof(event_t) * (EVENT_PRIO_COUNT+1));
+	if (!*events)
+		return -1;
+	memset(*events, 0, sizeof(event_table_t) +
+	       		   sizeof(event_t) * (EVENT_PRIO_COUNT+1));
+	(*events)->max_prio = EVENT_PRIO_COUNT;
+
+	snprintf(xpath, sizeof(xpath),
+		 RESOURCE_TREE_ROOT "/events");
+
+	do {
+		ev = get_event(ccsfd, xpath, x++, &done);
+		if (ev)
+			list_insert(&((*events)->entries[ev->ev_prio]), ev);
+	} while (!done);
+
+	ev = get_default_event();
+	if (ev)
+		list_insert(&((*events)->entries[ev->ev_prio]), ev);
+	
+	return 0;
+}
+
+
+void
+print_event(event_t *ev)
+{
+	printf("  Name: %s\n", ev->ev_name);
+
+	switch(ev->ev_type) {
+	case EVENT_NODE:
+		printf("    Node %d State %d\n", (int)ev->ev.node.ne_nodeid,
+		       ev->ev.node.ne_state);
+		break;
+	case EVENT_RG:
+		printf("    RG %s State %s\n", ev->ev.group.rg_name,
+		       rg_state_str(ev->ev.group.rg_state));
+		break;
+	case EVENT_CONFIG:
+		printf("    Config change - unsupported\n");
+		break;
+	default:
+		printf("    (Any event)\n");
+		break;
+	}
+	
+	if (ev->ev_script) {
+		printf("    Inline script.\n");
+	} else {
+		printf("    File: %s\n", ev->ev_script_file);
+	}
+}
+
+
+void
+print_events(event_table_t *events)
+{
+	int x, y;
+	event_t *ev;
+
+	for (x = 0; x <= events->max_prio; x++) {
+		if (!events->entries[x])
+			continue;
+		printf("Event Priority Level %d:\n", x);
+		list_for(&(events->entries[x]), ev, y) {
+			print_event(ev);
+		}
+	}
+}
+
+
+void
+deconstruct_events(event_table_t **eventsp)
+{
+	int x;
+	event_table_t *events = *eventsp;
+	event_t *ev = NULL;
+
+	if (!events)
+		return;
+
+	for (x = 0; x <= events->max_prio; x++) {
+		while ((ev = (events->entries[x]))) {
+			list_remove(&(events->entries[x]), ev);
+			deconstruct_event(ev);
+		}
+	}
+
+	free(events);
+	*eventsp = NULL;
+}
+
+
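
Note: putting together the attributes parsed by get_event(),
get_node_event() and get_rg_event() above, an event configuration
fragment would look roughly like this (a hand-written sketch, not
taken from a real cluster.conf):

    <events>
      <event name="node2-up" class="node" priority="1"
             node_id="2" node_state="up">
             slang_script_stuff();
      </event>
      <event name="follow-svcA" class="service" priority="2"
             service="svcA" service_state="started"
             file="/path/to/script.sl"/>
    </events>

Each event needs either inline script text or a file= attribute, its
priority must fall in 1..EVENT_PRIO_COUNT, and the built-in Default
event (priority 100, no event class) matches any event.
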
diff --git a/rgmanager/src/daemons/fo_domain.c b/rgmanager/src/daemons/fo_domain.c
index 5dedbd5..9019a10 100644
--- a/rgmanager/src/daemons/fo_domain.c
+++ b/rgmanager/src/daemons/fo_domain.c
@@ -27,11 +27,14 @@
 #include <list.h>
 #include <clulog.h>
 #include <resgroup.h>
+#include <restart_counter.h>
 #include <reslist.h>
 #include <ccs.h>
 #include <pthread.h>
 #include <stdlib.h>
 #include <stdio.h>
+#include <magma.h>
+#include <sets.h>
 
 
 //#define DEBUG
@@ -61,7 +64,7 @@
      </failoverdomain>
    </failoverdomains>
  */
-int group_property(char *, char *, char *, size_t);
+int group_property_unlocked(char *, char *, char *, size_t);
 
 fod_node_t *
 get_node(int ccsfd, char *base, int idx, fod_t *domain)
@@ -94,6 +97,23 @@ get_node(int ccsfd, char *base, int idx, fod_t *domain)
 	fodn->fdn_name = ret;
 	fodn->fdn_prio = 0;
 
+	snprintf(xpath, sizeof(xpath),
+		 "/cluster/clusternodes/clusternode[@name=\"%s\"]/@nodeid",
+		 ret);
+	if (ccs_get(ccsfd, xpath, &ret) != 0) {
+		clulog(LOG_WARNING, "Node %s has no nodeid attribute\n",
+		       fodn->fdn_name);
+		fodn->fdn_nodeid = -1;
+	} else {
+		/* 64-bit-ism on rhel4? */
+		fodn->fdn_nodeid = atoi(ret);
+	}
+
+	/* Don't even bother getting priority if we're not ordered (it's set
+	   to 0 above). */
+	if (!(domain->fd_flags & FOD_ORDERED))
+		return fodn;
+
 	snprintf(xpath, sizeof(xpath), "%s/failoverdomainnode[%d]/@priority",
 		 base, idx);
 	if (ccs_get(ccsfd, xpath, &ret) != 0)
@@ -226,6 +246,11 @@ print_domains(fod_t **domains)
 {
 	fod_t *fod;
 	fod_node_t *fodn = NULL;
+	/*
+	int x;
+	int *node_set = NULL;
+	int node_set_len = 0;
+	 */
 
 	list_do(domains, fod) {
 		printf("Failover domain: %s\n", fod->fd_name);
@@ -243,9 +268,21 @@ print_domains(fod_t **domains)
 		}
 
 		list_do(&fod->fd_nodes, fodn) {
-			printf("  Node %s (priority %d)\n",
-			       fodn->fdn_name, fodn->fdn_prio);
+			printf("  Node %s (id %d, priority %d)\n",
+			       fodn->fdn_name, (int)fodn->fdn_nodeid,
+			       fodn->fdn_prio);
 		} while (!list_done(&fod->fd_nodes, fodn));
+
+		/*
+		node_domain_set(fod, &node_set, &node_set_len);
+		printf("  Failover Order = {");
+		for (x = 0; x < node_set_len; x++) {
+			printf(" %d ", node_set[x]);
+		}
+		free(node_set);
+		printf("}\n");
+		*/
+		
 	} while (!list_done(domains, fod));
 }
 
@@ -265,7 +302,7 @@ int
 node_in_domain(char *nodename, fod_t *domain,
 	       cluster_member_list_t *membership)
 {
-	int member_online = 0, member_match = 0, preferred = 100, myprio = -1;
+	int online = 0, member_match = 0, preferred = 100, myprio = -1;
 	fod_node_t *fodn;
 
 	list_do(&domain->fd_nodes, fodn) {
@@ -282,7 +319,7 @@ node_in_domain(char *nodename, fod_t *domain,
 		 * If we get here, we know:
 		 * A member of the domain is online somewhere
 		 */
-		member_online = 1;
+		online = 1;
 		if (!strcmp(nodename, fodn->fdn_name)) {
 			/*
 			 * If we get here, we know:
@@ -296,7 +333,7 @@ node_in_domain(char *nodename, fod_t *domain,
 			preferred = fodn->fdn_prio;
 	} while (!list_done(&domain->fd_nodes, fodn));
 
-	if (!member_online)
+	if (!online)
 		return 0;
 
 	if (!member_match)
@@ -311,6 +348,70 @@ node_in_domain(char *nodename, fod_t *domain,
 }
 
 
+int
+node_domain_set(fod_t *domain, uint64_t **ret, int *retlen)
+{
+	int x, i, j;
+	set_type_t *tmpset;
+	int ts_count;
+
+	fod_node_t *fodn;
+
+	/* Count domain length */
+	list_for(&domain->fd_nodes, fodn, x) { }
+	
+	*retlen = 0;
+	*ret = malloc(sizeof(uint64_t) * x);
+	if (!(*ret))
+		return -1;
+	tmpset = malloc(sizeof(uint64_t) * x);
+	if (!tmpset) {
+		free(*ret);
+		return -1;
+	}
+
+	if (domain->fd_flags & FOD_ORDERED) {
+		for (i = 1; i <= 100; i++) {
+			
+			ts_count = 0;
+			list_for(&domain->fd_nodes, fodn, x) {
+				if (fodn->fdn_prio == i) {
+					s_add(tmpset, &ts_count,
+					      fodn->fdn_nodeid);
+				}
+			}
+
+			if (!ts_count)
+				continue;
+
+			/* Shuffle stuff at this prio level */
+			if (ts_count > 1)
+				s_shuffle(tmpset, ts_count);
+			for (j = 0; j < ts_count; j++)
+				s_add(*ret, retlen, tmpset[j]);
+		}
+	}
+
+	/* Add unprioritized nodes */
+	ts_count = 0;
+	list_for(&domain->fd_nodes, fodn, x) {
+		if (!fodn->fdn_prio) {
+			s_add(tmpset, &ts_count,
+			      fodn->fdn_nodeid);
+		}
+	}
+
+	if (!ts_count) {
+		free(tmpset);
+		return 0;
+	}
+
+	/* Shuffle stuff at this prio level */
+	if (ts_count > 1)
+		s_shuffle(tmpset, ts_count);
+	for (j = 0; j < ts_count; j++)
+		s_add(*ret, retlen, tmpset[j]);
+
+	free(tmpset);
+	return 0;
+}
+
+
 /**
  * See if a given nodeid should start a specified service svcid.
  *
@@ -352,7 +453,7 @@ node_should_start(uint64_t nodeid, cluster_member_list_t *membership,
 	nodename = memb_id_to_name(membership, nodeid);
 
 #ifndef NO_CCS /* XXX Testing only */
-	if (group_property(rg_name, "domain",
+	if (group_property_unlocked(rg_name, "domain",
 			    domainname, sizeof(domainname))) {
 		/*
 		 * If no domain is present, then the node in question should
@@ -411,7 +512,7 @@ node_should_start(uint64_t nodeid, cluster_member_list_t *membership,
 			RETURN(FOD_BEST);
 		}
                 
-		if (get_rg_state(rg_name, &svc_state) == FAIL) {
+		if (get_rg_state(rg_name, &svc_state) == RG_EFAIL) {
                 	/*
 			 * Couldn't get the service state, thats odd
 			 */
diff --git a/rgmanager/src/daemons/groups.c b/rgmanager/src/daemons/groups.c
index ebd37dc..5df7ed7 100644
--- a/rgmanager/src/daemons/groups.c
+++ b/rgmanager/src/daemons/groups.c
@@ -1,5 +1,5 @@
 /*
-  Copyright Red Hat, Inc. 2002-2003
+  Copyright Red Hat, Inc. 2002-2006
   Copyright Mission Critical Linux, Inc. 2000
 
   This program is free software; you can redistribute it and/or modify it
@@ -19,21 +19,29 @@
 */
 //#define DEBUG
 #include <platform.h>
-#include <magma.h>
-#include <magmamsg.h>
 #include <resgroup.h>
+#include <restart_counter.h>
 #include <reslist.h>
 #include <vf.h>
 #include <magma.h>
 #include <ccs.h>
 #include <clulog.h>
+#include <magmamsg.h>
 #include <list.h>
 #include <reslist.h>
 #include <assert.h>
+#include <event.h>
+
+int get_rg_state_local(char *name, rg_state_t *svcblk);
+
+/* Overload the pad fields for these counters, since we never use them
+   internally and there is no spare space in the cman_node_t type.
+   */
 
 #define cm_svccount cm_pad[0] /* These are uint8_t-sized */
 #define cm_svcexcl  cm_pad[1]
 
+extern event_table_t *master_event_table;
 
 static int config_version = 0;
 static resource_t *_resources = NULL;
@@ -41,9 +49,17 @@ static resource_rule_t *_rules = NULL;
 static resource_node_t *_tree = NULL;
 static fod_t *_domains = NULL;
 
+#ifdef WRAP_LOCKS
+pthread_mutex_t config_mutex = PTHREAD_ERRORCHECK_MUTEX_INITIALIZER_NP;
+pthread_mutex_t status_mutex = PTHREAD_ERRORCHECK_MUTEX_INITIALIZER_NP;
+#else
 pthread_mutex_t config_mutex = PTHREAD_MUTEX_INITIALIZER;
-pthread_rwlock_t resource_lock = PTHREAD_RWLOCK_INITIALIZER;
 pthread_mutex_t status_mutex = PTHREAD_MUTEX_INITIALIZER;
+#endif
+pthread_rwlock_t resource_lock = PTHREAD_RWLOCK_INITIALIZER;
+
+void res_build_name(char *, size_t, resource_t *);
+int group_migratory(char *groupname, int lock);
 
 
 struct status_arg {
@@ -72,10 +88,37 @@ node_should_start_safe(uint64_t nodeid, cluster_member_list_t *membership,
 
 
 int
+node_domain_set_safe(char *domainname, uint64_t **ret, int *retlen, int *flags)
+{
+	fod_t *fod;
+	int rv = -1, found = 0, x = 0;
+
+	pthread_rwlock_rdlock(&resource_lock);
+
+	list_for(&_domains, fod, x) {
+		if (!strcasecmp(fod->fd_name, domainname)) {
+			found = 1;
+			break;
+		}
+	}
+
+	if (found) {
+		rv = node_domain_set(fod, ret, retlen);
+		*flags = fod->fd_flags;
+	}
+
+	pthread_rwlock_unlock(&resource_lock);
+
+	return rv;
+}
+
+
+int
 count_resource_groups(cluster_member_list_t *ml)
 {
 	resource_t *res;
-	char *rgname, *val;
+	resource_node_t *node;
+	char rgname[64], *val;
 	int x;
 	rg_state_t st;
 	void *lockp;
@@ -88,11 +131,11 @@ count_resource_groups(cluster_member_list_t *ml)
 
 	pthread_rwlock_rdlock(&resource_lock);
 
-	list_do(&_resources, res) {
-		if (res->r_rule->rr_root == 0)
-			continue;
+	list_do(&_tree, node) {
+
+		res = node->rn_resource;
 
-		rgname = res->r_attrs[0].ra_value;
+		res_build_name(rgname, sizeof(rgname), res);
 
 		if (rg_lock(rgname, &lockp) < 0) {
 			clulog(LOG_ERR, "#XX: Unable to obtain cluster "
@@ -103,7 +146,7 @@ count_resource_groups(cluster_member_list_t *ml)
 
 		if (get_rg_state(rgname, &st) < 0) {
 			clulog(LOG_ERR, "#34: Cannot get status "
-			       "for service %s\n", rgname);
+			       "for service %s\n", c_name(rgname));
 			rg_unlock(rgname, lockp);
 			continue;
 		}
@@ -126,10 +169,9 @@ count_resource_groups(cluster_member_list_t *ml)
 			++mp->cm_svcexcl;
 		}
 
-	} while (!list_done(&_resources, res));
+	} while (!list_done(&_tree, node));
 
 	pthread_rwlock_unlock(&resource_lock);
-
 	return 0;
 }
 
@@ -168,12 +210,35 @@ is_exclusive(char *svcName)
 }
 
 
-int get_rg_state_local(char *, rg_state_t *);
+resource_node_t *
+node_by_ref(resource_node_t **tree, char *name)
+{
+	resource_t *res;
+	resource_node_t *node, *ret = NULL;
+	char rgname[64];
+	int x;
+
+	list_for(tree, node, x) {
+
+		res = node->rn_resource;
+		res_build_name(rgname, sizeof(rgname), res);
+
+		if (!strcasecmp(name, rgname)) {
+			ret = node;
+			break;
+		}
+	}
+
+	return ret;
+}
+
+
 int
 count_resource_groups_local(cluster_member_t *mp)
 {
 	resource_t *res;
-	char *rgname, *val;
+	resource_node_t *node;
+	char rgname[64];
 	rg_state_t st;
 
 	mp->cm_svccount = 0;
@@ -181,20 +246,18 @@ count_resource_groups_local(cluster_member_t *mp)
 
 	pthread_rwlock_rdlock(&resource_lock);
 
-	list_do(&_resources, res) {
+	list_do(&_tree, node) {
 
-		if (res->r_rule->rr_root == 0)
-			continue;
+		res = node->rn_resource;
 
-		rgname = res->r_attrs[0].ra_value;
+		res_build_name(rgname, sizeof(rgname), res);
 
 		if (get_rg_state_local(rgname, &st) < 0) {
 			continue;
 		}
 
 		if (st.rs_state != RG_STATE_STARTED &&
-		     st.rs_state != RG_STATE_STARTING &&
-		     st.rs_state != RG_STATE_STOPPING)
+		     st.rs_state != RG_STATE_STARTING)
 			continue;
 
 		if (mp->cm_id != st.rs_owner)
@@ -202,13 +265,12 @@ count_resource_groups_local(cluster_member_t *mp)
 
 		++mp->cm_svccount;
 
-		if (is_exclusive_res(res)) 
+		if (is_exclusive_res(res))
 			++mp->cm_svcexcl;
 
-	} while (!list_done(&_resources, res));
+	} while (!list_done(&_tree, node));
 
 	pthread_rwlock_unlock(&resource_lock);
-
 	return 0;
 }
 
@@ -225,6 +287,7 @@ have_exclusive_resources(void)
 			pthread_rwlock_unlock(&resource_lock);
 			return 1;
 		}
+
 	} while (!list_done(&_resources, res));
 
 	pthread_rwlock_unlock(&resource_lock);
@@ -237,9 +300,8 @@ int
 check_exclusive_resources(cluster_member_list_t *membership, char *svcName)
 {
 	cluster_member_t *mp;
-	int exclusive, count; 
+	int exclusive, count, excl; 
 	resource_t *res;
-	char *val;
 
 	mp = memb_id_to_p(membership, my_id());
 	assert(mp);
@@ -250,14 +312,13 @@ check_exclusive_resources(cluster_member_list_t *membership, char *svcName)
 	res = find_root_by_ref(&_resources, svcName);
 	if (!res) {
 		pthread_rwlock_unlock(&resource_lock);
-		return FAIL;
+		return RG_ENOSERVICE;
 	}
-	val = res_attr_value(res, "exclusive");
+
+	excl = is_exclusive_res(res);
 	pthread_rwlock_unlock(&resource_lock);
-	if (exclusive || (count && val && 
-			(!strcmp(val, "yes") || (atoi(val)>0)))) {
-		return 1;
-	}
+	if (exclusive || (count && excl))
+		return RG_YES;
 
 	return 0;
 }
@@ -343,6 +404,42 @@ best_target_node(cluster_member_list_t *allowed, uint64_t owner,
 }
 
 
+int
+check_depend(resource_t *res)
+{
+	char *val;
+	rg_state_t rs;
+
+	val = res_attr_value(res, "depend");
+	if (!val)
+		/* No dependency */
+		return -1;
+
+	if (get_rg_state_local(val, &rs) == 0)
+		return (rs.rs_state == RG_STATE_STARTED);
+
+	return 1;
+}
+
+
+int
+check_depend_safe(char *rg_name)
+{
+	resource_t *res;
+	int ret;
+
+	pthread_rwlock_rdlock(&resource_lock);
+	res = find_root_by_ref(&_resources, rg_name);
+	if (!res) {
+		pthread_rwlock_unlock(&resource_lock);
+		return -1;
+	}
+
+	ret = check_depend(res);
+	pthread_rwlock_unlock(&resource_lock);
+
+	return ret;
+}
+
+
 /**
   Start or failback a resource group: if it's not running, start it.
   If it is running and we're a better member to run it, then ask for
@@ -355,7 +452,7 @@ consider_start(resource_node_t *node, char *svcName, rg_state_t *svcStatus,
 	char *val;
 	cluster_member_t *mp;
 	int autostart, exclusive;
-	void *lockp;
+	void *lockp = NULL;
 
 	mp = memb_id_to_p(membership, my_id());
 	assert(mp);
@@ -382,7 +479,7 @@ consider_start(resource_node_t *node, char *svcName, rg_state_t *svcStatus,
 			/*
 			clulog(LOG_DEBUG,
 			       "Skipping RG %s: Autostart disabled\n",
-			       svcName);
+			       logname);
 			 */
 			/*
 			   Mark non-autostart services as disabled to avoid
@@ -397,7 +494,8 @@ consider_start(resource_node_t *node, char *svcName, rg_state_t *svcStatus,
 
 			if (get_rg_state(svcName, svcStatus) != 0) {
 				clulog(LOG_ERR, "#34: Cannot get status "
-				       "for service %s\n", svcName);
+				       "for service %s\n",
+				       c_name(svcName));
 				rg_unlock(svcName, lockp);
 				return;
 			}
@@ -414,12 +512,20 @@ consider_start(resource_node_t *node, char *svcName, rg_state_t *svcStatus,
 		}
 	}
 
+	/* See if the service this one depends on is running.  If not,
+	   don't start this one. */
+	if (check_depend(node->rn_resource) == 0) {
+		clulog(LOG_DEBUG,
+		       "Skipping %s: Dependency missing\n", svcName);
+		return;
+	}
+
 	val = res_attr_value(node->rn_resource, "exclusive");
 	exclusive = val && ((!strcmp(val, "yes") || (atoi(val)>0)));
 
 	if (exclusive && mp->cm_svccount) {
 		clulog(LOG_DEBUG,
-		       "Skipping RG %s: Exclusive and I am running services\n",
+		       "Skipping %s: Exclusive and I am running services\n",
 		       svcName);
 		return;
 	}
@@ -430,7 +536,7 @@ consider_start(resource_node_t *node, char *svcName, rg_state_t *svcStatus,
 	 */
 	if (mp->cm_svcexcl) {
 		clulog(LOG_DEBUG,
-		       "Skipping RG %s: I am running an exclusive service\n",
+		       "Skipping %s: I am running an exclusive service\n",
 		       svcName);
 		return;
 	}
@@ -450,13 +556,14 @@ void
 consider_relocate(char *svcName, rg_state_t *svcStatus, uint64_t nodeid,
 		  cluster_member_list_t *membership)
 {
-	int a, b;
+	int a, b, req = RG_RELOCATE;
 
 	/*
 	   Service must be running locally in order to consider for
 	   a relocate
 	 */
-	if (svcStatus->rs_state != RG_STATE_STARTED ||
+	if ((svcStatus->rs_state != RG_STATE_STARTING &&
+	    svcStatus->rs_state != RG_STATE_STARTED) ||
 	    svcStatus->rs_owner != my_id())
 		return;
 
@@ -476,14 +583,68 @@ consider_relocate(char *svcName, rg_state_t *svcStatus, uint64_t nodeid,
 	if (a <= b)
 		return;
 
-	clulog(LOG_DEBUG, "Relocating group %s to better node %s\n",
+	clulog(LOG_NOTICE, "Relocating %s to better node %s\n",
 	       svcName,
 	       memb_id_to_name(membership, nodeid));
 
-	rt_enqueue_request(svcName, RG_RELOCATE, -1, 0, nodeid, 0, 0);
+	rt_enqueue_request(svcName, req, -1, 0, nodeid, 0, 0);
+}
+
+
+char **
+get_service_names(int *len)
+{
+	resource_node_t *node = NULL;
+	int nservices, ncopied = 0, x;
+	char **ret = NULL;
+	char rg_name[64];
+
+	pthread_rwlock_rdlock(&resource_lock);
+
+	nservices = 0;
+	list_do(&_tree, node) {
+		++nservices;
+	} while (!list_done(&_tree, node));
+	
+	ret = malloc(sizeof(char *) * (nservices + 1));
+	if (!ret)
+		goto out_fail;
+
+	memset(ret, 0, sizeof(char *) * (nservices + 1));
+	nservices = 0;
+	list_for(&_tree, node, nservices) {
+		res_build_name(rg_name, sizeof(rg_name),
+			       node->rn_resource);
+
+		if (!strlen(rg_name))
+			continue;
+
+		ret[ncopied] = strdup(rg_name);
+		if (ret[ncopied]) {
+			ncopied++;
+		} else {
+			goto out_fail;
+		}
+	}
+
+	if (len)
+		*len = ncopied;
+	pthread_rwlock_unlock(&resource_lock);
+	return ret;
+
+out_fail:
+	pthread_rwlock_unlock(&resource_lock);
+	for (x = 0; x < ncopied; x++)
+		free(ret[x]);
+	if (ret)
+		free(ret);
+	return NULL;
 }
 
 
 /**
  * Called to decide what services to start locally during a node_event.
  * Originally a part of node_event, it is now its own function to cut down
@@ -495,7 +656,7 @@ int
 eval_groups(int local, uint64_t nodeid, int nodeStatus)
 {
 	void *lockp = NULL;
-	char *svcName, *nodeName;
+	char svcName[96], *nodeName;
 	resource_node_t *node;
 	rg_state_t svcStatus;
 	cluster_member_list_t *membership;
@@ -516,10 +677,10 @@ eval_groups(int local, uint64_t nodeid, int nodeStatus)
 
 	list_do(&_tree, node) {
 
-		if (node->rn_resource->r_rule->rr_root == 0)
+		if ((node->rn_resource->r_rule->rr_flags & RF_ROOT)== 0)
 			continue;
 
-		svcName = node->rn_resource->r_attrs->ra_value;
+		res_build_name(svcName, sizeof(svcName), node->rn_resource);
 
 		/*
 		 * Lock the service information and get the current service
@@ -537,7 +698,7 @@ eval_groups(int local, uint64_t nodeid, int nodeStatus)
 		if (get_rg_state(svcName, &svcStatus) != 0) {
 			clulog(LOG_ERR,
 			       "#34: Cannot get status for service %s\n",
-			       svcName);
+			       c_name(svcName));
 			rg_unlock(svcName, lockp);
 			continue;
 		}
@@ -557,7 +718,7 @@ eval_groups(int local, uint64_t nodeid, int nodeStatus)
 			continue;
 		}
 
-		clulog(LOG_DEBUG, "Evaluating RG %s, state %s, owner "
+		clulog(LOG_DEBUG, "Evaluating %s, state %s, owner "
 		       "%s\n", svcName,
 		       rg_state_str(svcStatus.rs_state),
 		       nodeName);
@@ -600,6 +761,106 @@ eval_groups(int local, uint64_t nodeid, int nodeStatus)
 
 
 /**
+ * Called to decide what services to start locally after a service event.
+ * 
+ * @see			eval_groups
+ */
+int
+group_event(char *rg_name, uint32_t state, int owner)
+{
+	char svcName[64], *nodeName;
+	resource_node_t *node;
+	rg_state_t svcStatus;
+	cluster_member_list_t *membership;
+	int depend;
+
+	if (rg_locked()) {
+		clulog(LOG_DEBUG,
+		       "Resource groups locked; not evaluating\n");
+		return -EAGAIN;
+	}
+
+	membership = member_list();
+	if (!membership)
+		return -1;
+
+	pthread_rwlock_rdlock(&resource_lock);
+
+	/* Requires read lock */
+	count_resource_groups(membership);
+
+	list_do(&_tree, node) {
+
+		res_build_name(svcName, sizeof(svcName), node->rn_resource);
+
+		if (get_rg_state_local(svcName, &svcStatus) != 0)
+			continue;
+
+		if (svcStatus.rs_owner == 0)
+			nodeName = "none";
+		else
+			nodeName = memb_id_to_name(membership,
+						   svcStatus.rs_owner);
+
+		/* Disabled/failed/in recovery?  Do nothing */
+		if ((svcStatus.rs_state == RG_STATE_DISABLED) ||
+		    (svcStatus.rs_state == RG_STATE_FAILED) ||
+		    (svcStatus.rs_state == RG_STATE_RECOVER)) {
+			continue;
+		}
+
+		depend = check_depend(node->rn_resource);
+
+		/* Skip if we have no dependency */
+		if (depend == -1)
+			continue;
+
+		/*
+		   If we have:
+		   (a) a met dependency
+		   (b) we're in the STOPPED state, and
+		   (c) our new service event is a started service
+
+		   Then see if we should start this other service as well.
+		 */
+		if (depend == 1 &&
+		    svcStatus.rs_state == RG_STATE_STOPPED &&
+		    state == RG_STATE_STARTED) {
+
+			clulog(LOG_DEBUG, "Evaluating %s, state %s, owner "
+			       "%s\n", svcName,
+			       rg_state_str(svcStatus.rs_state),
+			       nodeName);
+			consider_start(node, svcName, &svcStatus, membership);
+			continue;
+		}
+
+		/*
+		   If we lost a dependency for this service and it's running
+		   locally, stop it.
+		 */
+		if (depend == 0 &&
+		    svcStatus.rs_state == RG_STATE_STARTED &&
+		    svcStatus.rs_owner == my_id()) {
+
+			clulog(LOG_WARNING,
+			       "Stopping %s: Dependency missing\n",
+			       svcName);
+			rt_enqueue_request(svcName, RG_STOP, -1, 0, my_id(),
+					   0, 0);
+		}
+
+	} while (!list_done(&_tree, node));
+
+	pthread_rwlock_unlock(&resource_lock);
+	cml_free(membership);
+
+	return 0;
+}
+
+
+
+/**
    Perform an operation on a resource group.  That is, walk down the
    tree for that resource group, performing the given operation on
    all children in the necessary order.
@@ -675,31 +936,50 @@ group_op(char *groupname, int op)
    @return		0 on success, -1 on failure.
  */
 int
-group_property(char *groupname, char *property, char *ret, size_t len)
+_group_property(char *groupname, char *property, char *ret,
+		size_t len, int lock)
 {
 	resource_t *res = NULL;
-	int x = 0;
+	int x = 0, rv=-1;
+
+	if (lock)
+		pthread_rwlock_rdlock(&resource_lock);
 
-	pthread_rwlock_rdlock(&resource_lock);
 	res = find_root_by_ref(&_resources, groupname);
-	if (!res) {
-		pthread_rwlock_unlock(&resource_lock);
-		return -1;
-	}
+	if (!res)
+		goto out;
 
 	for (; res->r_attrs[x].ra_name; x++) {
 		if (strcasecmp(res->r_attrs[x].ra_name, property))
 			continue;
 		strncpy(ret, res->r_attrs[x].ra_value, len);
-		pthread_rwlock_unlock(&resource_lock);
-		return 0;
+		rv = 0;
+		break;
 	}
-	pthread_rwlock_unlock(&resource_lock);
+out:
+	if (lock)
+		pthread_rwlock_unlock(&resource_lock);
+	return rv;
+}
 
-	return -1;
+
+int
+group_property(char *groupname, char *property, char *ret,
+	       size_t len)
+{
+	return _group_property(groupname, property, ret, len, 1);
 }
 
 
+int
+group_property_unlocked(char *groupname, char *property, char *ret,
+			size_t len)
+{
+	return _group_property(groupname, property, ret, len, 0);
+}
+
+
+
 /**
   Send the state of a resource group to a given file descriptor.
 
@@ -746,11 +1026,20 @@ status_check_thread(void *arg)
 {
 	int fd = ((struct status_arg *)arg)->fd;
 	int fast = ((struct status_arg *)arg)->fast;
+	char name[96];
 	resource_t *res;
 	generic_msg_hdr hdr;
 
 	free(arg);
 
+	if (central_events_enabled()) {
+		/* Never call get_rg_state() (distributed) if
+		   central events are enabled; otherwise we
+		   might overwrite the rg state with 'stopped'
+		   when it should be 'disabled' (e.g. autostart="0") */
+		fast = 1;
+	}
+
 	/* See if we have a slot... */
 	if (rg_inc_status() < 0) {
 		/* Too many outstanding status checks.  try again later. */
@@ -762,10 +1051,11 @@ status_check_thread(void *arg)
 	pthread_rwlock_rdlock(&resource_lock);
 
 	list_do(&_resources, res) {
-		if (res->r_rule->rr_root == 0)
+		if ((res->r_rule->rr_flags & RF_ROOT) == 0)
 			continue;
 
-		send_rg_state(fd, res->r_attrs[0].ra_value, fast);
+		res_build_name(name, sizeof(name), res);
+		send_rg_state(fd, name, fast);
 	} while (!list_done(&_resources, res));
 
 	pthread_rwlock_unlock(&resource_lock);
@@ -819,19 +1109,25 @@ send_rg_states(int fd, int fast)
 
 
 int
-svc_exists(char *svcname)
+svc_exists(char *name)
 {
 	resource_t *res;
 	int ret = 0;
+	char rgname[64];
 
 	pthread_rwlock_rdlock(&resource_lock);
 
 	list_do(&_resources, res) {
-		if (res->r_rule->rr_root == 0)
+		if ((res->r_rule->rr_flags & RF_ROOT) == 0)
 			continue;
 
-		if (strcmp(res->r_attrs[0].ra_value, 
-			   svcname) == 0) {
+		res_build_name(rgname, sizeof(rgname), res);
+
+		if (!strcmp(name, rgname)) {
+			ret = 1;
+			break;
+		}
+		if (!strcmp(name, c_name(rgname))) {
 			ret = 1;
 			break;
 		}
@@ -847,20 +1143,20 @@ void
 rg_doall(int request, int block, char *debugfmt)
 {
 	resource_node_t *curr;
-	char *name;
+	char name[96];
 	rg_state_t svcblk;
 
 	pthread_rwlock_rdlock(&resource_lock);
 	list_do(&_tree, curr) {
 
-		if (curr->rn_resource->r_rule->rr_root == 0)
+		if ((curr->rn_resource->r_rule->rr_flags & RF_ROOT) == 0)
 			continue;
 
 		/* Group name */
-		name = curr->rn_resource->r_attrs->ra_value;
+		res_build_name(name, sizeof(name), curr->rn_resource);
 
 		if (debugfmt)
-			clulog(LOG_DEBUG, debugfmt, name);
+			clulog(LOG_DEBUG, debugfmt, c_name(name));
 
 		/* Optimization: Don't bother even queueing the request
 		   during the exit case if we don't own it */
@@ -895,7 +1191,7 @@ void *
 q_status_checks(void *arg)
 {
 	resource_node_t *curr;
-	char *name;
+	char name[96];
 	rg_state_t svcblk;
 
 	/* Only one status thread at a time, please! */
@@ -905,11 +1201,11 @@ q_status_checks(void *arg)
 	pthread_rwlock_rdlock(&resource_lock);
 	list_do(&_tree, curr) {
 
-		if (curr->rn_resource->r_rule->rr_root == 0)
+		if ((curr->rn_resource->r_rule->rr_flags & RF_ROOT) == 0)
 			continue;
 
 		/* Group name */
-		name = curr->rn_resource->r_attrs->ra_value;
+		res_build_name(name, sizeof(name), curr->rn_resource);
 
 		/* Local check - no one will make us take a service */
 		if (get_rg_state_local(name, &svcblk) < 0) {
@@ -955,7 +1251,7 @@ void
 do_condstops(void)
 {
 	resource_node_t *curr;
-	char *name;
+	char name[96];
 	rg_state_t svcblk;
 	int need_kill;
 
@@ -964,11 +1260,11 @@ do_condstops(void)
 	pthread_rwlock_rdlock(&resource_lock);
 	list_do(&_tree, curr) {
 
-		if (curr->rn_resource->r_rule->rr_root == 0)
+		if ((curr->rn_resource->r_rule->rr_flags & RF_ROOT) == 0)
 			continue;
 
 		/* Group name */
-		name = curr->rn_resource->r_attrs->ra_value;
+		res_build_name(name, sizeof(name), curr->rn_resource);
 
 		/* If we're not running it, no need to CONDSTOP */
 		if (get_rg_state_local(name, &svcblk) < 0) {
@@ -1002,7 +1298,7 @@ void
 do_condstarts(void)
 {
 	resource_node_t *curr;
-	char *name, *val;
+	char name[96], *val;
 	rg_state_t svcblk;
 	int need_init, new_groups = 0, autostart;
 	void *lockp;
@@ -1013,11 +1309,11 @@ do_condstarts(void)
 	pthread_rwlock_rdlock(&resource_lock);
 	list_do(&_tree, curr) {
 
-		if (curr->rn_resource->r_rule->rr_root == 0)
+		if ((curr->rn_resource->r_rule->rr_flags & RF_ROOT) == 0)
 			continue;
 
 		/* Group name */
-		name = curr->rn_resource->r_attrs->ra_value;
+		res_build_name(name, sizeof(name), curr->rn_resource);
 
 		/* New RG.  We'll need to initialize it. */
 		need_init = 0;
@@ -1063,11 +1359,11 @@ do_condstarts(void)
 	pthread_rwlock_rdlock(&resource_lock);
 	list_do(&_tree, curr) {
 
-		if (curr->rn_resource->r_rule->rr_root == 0)
+		if ((curr->rn_resource->r_rule->rr_flags & RF_ROOT) == 0)
 			continue;
 
 		/* Group name */
-		name = curr->rn_resource->r_attrs->ra_value;
+		res_build_name(name, sizeof(name), curr->rn_resource);
 
 		/* New RG.  We'll need to initialize it. */
 		if (!(curr->rn_resource->r_flags & RF_NEEDSTART))
@@ -1140,6 +1436,13 @@ check_config_update(void)
 }
 
 
+void
+dump_config_version(FILE *fp)
+{
+	fprintf(fp, "Cluster configuration version %d\n\n", config_version);
+}
+
+
 /**
   Initialize resource groups.  This reads all the resource groups from 
   CCS, builds the tree, etc.  Ideally, we'll have a similar function 
@@ -1149,12 +1452,14 @@ check_config_update(void)
 int
 init_resource_groups(int reconfigure)
 {
-	int fd, x;
+	int fd, x, y, cnt;
 
+	event_table_t *evt = NULL;
 	resource_t *reslist = NULL, *res;
 	resource_rule_t *rulelist = NULL, *rule;
 	resource_node_t *tree = NULL;
 	fod_t *domains = NULL, *fod;
+	event_t *evp;
 	char *val;
 
 	if (reconfigure)
@@ -1178,10 +1483,11 @@ init_resource_groups(int reconfigure)
 		pthread_mutex_lock(&config_mutex);
 		config_version = atoi(val);
 		pthread_mutex_unlock(&config_mutex);
+		free(val);
 	}
-
+	
 	if (ccs_get(fd, "/cluster/rm/@statusmax", &val) == 0) {
-		if (strlen(val))
+		if (strlen(val)) 
 			rg_set_statusmax(atoi(val));
 		free(val);
 	}
@@ -1214,6 +1520,24 @@ init_resource_groups(int reconfigure)
 	x = 0;
 	list_do(&domains, fod) { ++x; } while (!list_done(&domains, fod));
 	clulog(LOG_DEBUG, "%d domains defined\n", x);
+	construct_events(fd, &evt);
+	cnt = 0;
+	if (evt) {
+		for (x=0; x <= evt->max_prio; x++) {
+			if (!evt->entries[x])
+				continue;
+			
+			y = 0;
+
+			list_do(&evt->entries[x], evp) {
+				++y;
+			} while (!list_done(&evt->entries[x], evp));
+
+			cnt += y;
+		}
+	}
+	clulog(LOG_DEBUG, "%d events defined\n", cnt);
+	
 
 	/* Reconfiguration done */
 	ccs_unlock(fd);
@@ -1242,6 +1566,9 @@ init_resource_groups(int reconfigure)
 	if (_domains)
 		deconstruct_domains(&_domains);
 	_domains = domains;
+	if (master_event_table)
+		deconstruct_events(&master_event_table);
+	master_event_table = evt;
 	pthread_rwlock_unlock(&resource_lock);
 
 	if (reconfigure) {
@@ -1282,6 +1609,94 @@ get_recovery_policy(char *rg_name, char *buf, size_t buflen)
 }
 
 
+int
+get_service_property(char *rg_name, char *prop, char *buf, size_t buflen)
+{
+	int ret = 0;
+	resource_t *res;
+	char *val;
+
+	memset(buf, 0, buflen);
+
+#if 0
+	if (!strcmp(prop, "domain")) {
+		/* not needed */
+		strncpy(buf, "", buflen);
+	} else if (!strcmp(prop, "autostart")) {
+		strncpy(buf, "1", buflen);
+	} else if (!strcmp(prop, "hardrecovery")) {
+		strncpy(buf, "0", buflen);
+	} else if (!strcmp(prop, "exclusive")) {
+		strncpy(buf, "0", buflen);
+	} else if (!strcmp(prop, "nfslock")) {
+		strncpy(buf, "0", buflen);
+	} else if (!strcmp(prop, "recovery")) {
+		strncpy(buf, "restart", buflen);
+	} else if (!strcmp(prop, "depend")) {
+		/* not needed */
+		strncpy(buf, "", buflen);
+	} else {
+		/* not found / no defaults */
+		ret = -1;
+	}
+#endif
+
+	pthread_rwlock_rdlock(&resource_lock);
+	res = find_root_by_ref(&_resources, rg_name);
+	if (res) {
+		val = res_attr_value(res, prop);
+		if (val) {
+			ret = 0;
+			strncpy(buf, val, buflen);
+		}
+	}
+	pthread_rwlock_unlock(&resource_lock);
+
+#if 0
+	if (ret == 0)
+		printf("%s(%s, %s) = %s\n", __FUNCTION__, rg_name, prop, buf);
+	else 
+		printf("%s(%s, %s) = NOT FOUND\n", __FUNCTION__, rg_name, prop);
+#endif
+
+	return ret;
+}
+
+
+int
+add_restart(char *rg_name)
+{
+	resource_node_t *node;
+	int ret = 1;
+
+	pthread_rwlock_rdlock(&resource_lock);
+	node = node_by_ref(&_tree, rg_name);
+	if (node) {
+		ret = restart_add(node->rn_restart_counter);
+	}
+	pthread_rwlock_unlock(&resource_lock);
+
+	return ret;
+}
+
+
+int
+check_restart(char *rg_name)
+{
+	resource_node_t *node;
+	int ret = 0;
+
+	pthread_rwlock_rdlock(&resource_lock);
+	node = node_by_ref(&_tree, rg_name);
+	if (node) {
+		ret = restart_threshold_exceeded(node->rn_restart_counter);
+	}
+	pthread_rwlock_unlock(&resource_lock);
+
+	return ret;
+}
+
+
 void
 kill_resource_groups(void)
 {
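
Note: add_restart() and check_restart() above are locked wrappers
around the restart counter attached to each resource node
(rn_restart_counter): the former bumps the counter via restart_add()
when a service is restarted, and the latter reports via
restart_threshold_exceeded() whether the service has already been
restarted too many times, letting callers switch from restarting in
place to relocating.  The counter itself is implemented in
restart_counter.o, added to the Makefile earlier in this diff.
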
diff --git a/rgmanager/src/daemons/main.c b/rgmanager/src/daemons/main.c
index 1f0db93..527c5d7 100644
--- a/rgmanager/src/daemons/main.c
+++ b/rgmanager/src/daemons/main.c
@@ -32,6 +32,7 @@
 #include <msgsimple.h>
 #include <vf.h>
 #include <rg_queue.h>
+#include <event.h>
 #include <malloc.h>
 
 #define L_SYS (1<<1)
@@ -39,7 +40,6 @@
 
 int configure_logging(int ccsfd);
 
-void node_event_q(int, uint64_t, int);
 int daemon_init(char *);
 int init_resource_groups(int);
 void kill_resource_groups(void);
@@ -52,8 +52,11 @@ int send_rg_states(int, int);
 int check_config_update(void);
 int svc_exists(char *);
 int watchdog_init(void);
+int32_t master_event_callback(char *key, uint64_t viewno, void *data, uint32_t datalen);
 
-int shutdown_pending = 0, running = 1, need_reconfigure = 0;
+
+int running = 1, need_reconfigure = 0;
+int shutdown_pending = 0;
 char debug = 0; /* XXX* */
 static int signalled = 0;
 
@@ -127,60 +130,6 @@ flag_reconfigure(int sig)
 
 
 /**
-  Called to handle the transition of a cluster member from up->down or
-  down->up.  This handles initializing services (in the local node-up case),
-  exiting due to loss of quorum (local node-down), and service fail-over
-  (remote node down).
- 
-  @param nodeID		ID of the member which has come up/gone down.
-  @param nodeStatus		New state of the member in question.
-  @see eval_groups
- */
-void
-node_event(int local, uint64_t nodeID, int nodeStatus)
-{
-	if (!running)
-		return;
-
-	if (local) {
-
-		/* Local Node Event */
-		if (nodeStatus == STATE_DOWN)
-			hard_exit();
-
-		if (!rg_initialized()) {
-			if (init_resource_groups(0) != 0) {
-				clulog(LOG_ERR,
-				       "#36: Cannot initialize services\n");
-				hard_exit();
-			}
-		}
-
-		if (shutdown_pending) {
-			clulog(LOG_NOTICE, "Processing delayed exit signal\n");
-			graceful_exit(SIGINT);
-		}
-		setup_signal(SIGINT, graceful_exit);
-		setup_signal(SIGTERM, graceful_exit);
-		setup_signal(SIGHUP, flag_reconfigure);
-
-		eval_groups(1, nodeID, STATE_UP);
-		return;
-	}
-
-	/*
-	 * Nothing to do for events from other nodes if we are not ready.
-	 */
-	if (!rg_initialized()) {
-		clulog(LOG_DEBUG, "Services not initialized.\n");
-		return;
-	}
-
-	eval_groups(0, nodeID, nodeStatus);
-}
-
-
-/**
   This updates our local membership view and handles whether or not we
   should exit, as well as determines node transitions (thus, calling
   node_event()).
@@ -216,7 +165,7 @@ membership_update(void)
 		/* Should not happen */
 		clulog(LOG_INFO, "State change: LOCAL OFFLINE\n");
 		cml_free(node_delta);
-		node_event(1, my_id(), STATE_DOWN);
+		node_event_q(1, my_id(), STATE_DOWN, 1);
 		/* NOTREACHED */
 	}
 
@@ -228,7 +177,7 @@ membership_update(void)
 		   locked.  This is just a performance thing */
 		if (!rg_locked()) {
 			node_event_q(0, node_delta->cml_members[x].cm_id,
-			     		STATE_DOWN);
+			     		STATE_DOWN, 1);
 		} else {
 			clulog(LOG_NOTICE, "Not taking action - services"
 			       " locked\n");
@@ -246,7 +195,7 @@ membership_update(void)
 	me = memb_online(node_delta, my_id());
 	if (me) {
 		clulog(LOG_INFO, "State change: Local UP\n");
-		node_event_q(1, my_id(), STATE_UP);
+		node_event_q(1, my_id(), STATE_UP, 1);
 	}
 
 	for (x=0; node_delta && x < node_delta->cml_count; x++) {
@@ -261,7 +210,7 @@ membership_update(void)
 		clulog(LOG_INFO, "State change: %s UP\n",
 		       node_delta->cml_members[x].cm_name);
 		node_event_q(0, node_delta->cml_members[x].cm_id,
-			     STATE_UP);
+			     STATE_UP, 1);
 	}
 
 	cml_free(node_delta);
@@ -340,8 +289,10 @@ int
 dispatch_msg(int fd, uint64_t nodeid)
 {
 	int ret;
+	uint64_t nid;
 	generic_msg_hdr	msg_hdr;
 	SmMessageSt	msg_sm;
+	rg_state_msg_t	msg_rsm;
 	fd_set rfds;
 	struct timeval tv = { 0, 500000 };
 
@@ -429,6 +380,30 @@ dispatch_msg(int fd, uint64_t nodeid)
 			break;
 		}
 
+		if (central_events_enabled() &&
+		    msg_sm.sm_hdr.gh_arg1 != RG_ACTION_MASTER) {
+			
+			/* Centralized processing or request is from
+			   clusvcadm */
+			nid = event_master();
+			if (nid != my_id()) {
+				/* Forward the message to the event master */
+				forward_message(fd, &msg_sm, nid);
+			} else {
+				/* for us: queue it */
+				/* return below is intentional; the
+				   event engine will close the fd for us */
+				user_event_q(msg_sm.sm_data.d_svcName,
+					     msg_sm.sm_data.d_action,
+					     msg_sm.sm_hdr.gh_arg1,
+					     msg_sm.sm_hdr.gh_arg2,
+					     msg_sm.sm_data.d_svcOwner,
+					     fd);
+			}
+
+			return 0;
+		}
+
 		/* Queue request */
 		rt_enqueue_request(msg_sm.sm_data.d_svcName,
 		  		   msg_sm.sm_data.d_action,
@@ -437,6 +412,28 @@ dispatch_msg(int fd, uint64_t nodeid)
 		  		   msg_sm.sm_hdr.gh_arg2);
 		break;
 
+	case RG_EVENT:
+		ret = msg_receive_timeout(fd, &msg_rsm, sizeof(msg_rsm), 1);
+
+		/* Service event.  Run a dependency check */
+		if (ret < (int)sizeof(rg_state_msg_t)) {
+			clulog(LOG_ERR,
+			       "#39: Error receiving entire request (%d/%d)\n",
+			       ret, (int)sizeof(rg_state_msg_t));
+			ret = -1;
+			break;
+		}
+
+		/* Decode rg_state_msg_t message */
+		swab_rg_state_msg_t(&msg_rsm);
+
+		/* Send to our rg event handler */
+		rg_event_q(msg_rsm.rsm_state.rs_name,
+			   msg_rsm.rsm_state.rs_state,
+			   msg_rsm.rsm_state.rs_owner,
+			   msg_rsm.rsm_state.rs_last_owner);
+		break;
+
 	case RG_EXITING:
 
 		clulog(LOG_NOTICE, "Member %d is now offline\n", (int)nodeid);
@@ -446,7 +443,8 @@ dispatch_msg(int fd, uint64_t nodeid)
 		swab_generic_msg_hdr(&msg_hdr);
 		msg_close(fd);
 
-		node_event(0, nodeid, STATE_DOWN);
+		member_set_state(nodeid, STATE_DOWN);
+		node_event_q(0, nodeid, STATE_DOWN, 1);
 		break;
 
 	default:
@@ -586,7 +584,7 @@ event_loop(int clusterfd)
 		return 0;
 
 	/* No new messages.  Drop in the status check requests.  */
-	if (n == 0) {
+	if (n == 0 && rg_quorate()) {
 		do_status_checks();
 		return 0;
 	}
@@ -603,13 +601,6 @@ flag_shutdown(int sig)
 
 
 void
-graceful_exit(int sig)
-{
-	running = 0;
-}
-
-
-void
 hard_exit(void)
 {
 	rg_lockall(L_SYS);
@@ -670,6 +661,14 @@ configure_logging(int ccsfd)
 		free(v);
 	}
 
+	if (ccs_get(ccsfd, "/cluster/rm/@central_processing", &v) == 0) {
+		set_central_events(atoi(v));
+		if (atoi(v))
+			clulog(LOG_NOTICE,
+			       "Centralized Event Processing enabled\n");
+		free(v);
+	}
+
 	if (internal)
 		ccs_disconnect(ccsfd);
 
@@ -719,6 +718,17 @@ set_nonblock(int fd)
 }
 
 
+void *
+shutdown_thread(void __attribute__ ((unused)) *arg)
+{
+	rg_lockall(L_SYS);
+	rg_doall(RG_STOP_EXITING, 1, NULL);
+	running = 0;
+
+	pthread_exit(NULL);
+}
+
+
 void
 wait_for_status(int pid, int fd, int timeout)
 {
@@ -761,6 +771,7 @@ main(int argc, char **argv)
 	int listen_fds[2], listeners, waittime = 0, waitpipe[2];
 	int waiter;
 	uint64_t myNodeID;
+	pthread_t th;
 
 	while ((rv = getopt(argc, argv, "fdt:")) != EOF) {
 		switch (rv) {
@@ -861,10 +872,8 @@ main(int argc, char **argv)
 
 	if (quorate) {
 		rg_set_quorate();
-	} else {
-		setup_signal(SIGINT, graceful_exit);
-		setup_signal(SIGTERM, graceful_exit);
 	}
+
 	clulog(LOG_DEBUG, "Cluster Status: %s\n",
 	       quorate?"Quorate":"Inquorate");
 
@@ -882,6 +891,7 @@ main(int argc, char **argv)
 	}
 
 	vf_key_init("rg_lockdown", 10, NULL, lock_commit_cb);
+        vf_key_init("Transition-Master", 10, NULL, master_event_callback);
 
 	if (clu_login(cluster_fd, RG_SERVICE_GROUP) == -1) {
 		if (errno != ENOSYS) {
@@ -909,9 +919,20 @@ main(int argc, char **argv)
 	 */
 	notify_status(0);
 
-	while (running)
+	while (running) {
 		event_loop(cluster_fd);
 
+                if (shutdown_pending == 1) {
+                        /* Kill local socket; local requests need to
+                           be ignored here */
+			for (rv = 0; rv < listeners; rv++)
+				msg_close(listen_fds[rv]);
+                        ++shutdown_pending;
+                        clulog(LOG_NOTICE, "Shutting down\n");
+                        pthread_create(&th, NULL, shutdown_thread, NULL);
+                }
+	}
+
 	clulog(LOG_NOTICE, "Shutting down\n");
 	cleanup(cluster_fd);
 	clulog(LOG_NOTICE, "Shutdown complete, exiting\n");
diff --git a/rgmanager/src/daemons/members.c b/rgmanager/src/daemons/members.c
index 839e2c7..910d174 100644
--- a/rgmanager/src/daemons/members.c
+++ b/rgmanager/src/daemons/members.c
@@ -41,6 +41,47 @@ member_list_update(cluster_member_list_t *new_ml)
 }
 
 
+void
+member_set_state(uint64_t nid, int state)
+{
+	int x;
+
+	pthread_rwlock_wrlock(&memblock);
+	if (membership) {
+		for (x = 0; x < membership->cml_count; x++) {
+			if (membership->cml_members[x].cm_id == nid) {
+				membership->cml_members[x].cm_state = state;
+				goto out;
+			}
+		}
+	}
+
+out:
+	pthread_rwlock_unlock(&memblock);
+}
+
+
+int
+member_online(uint64_t nid)
+{
+	int x, ret = 0;
+
+	pthread_rwlock_wrlock(&memblock);
+	if (membership) {
+		for (x = 0; x < membership->cml_count; x++) {
+			if (membership->cml_members[x].cm_id == nid) {
+				ret = membership->cml_members[x].cm_state;
+				goto out;
+			}
+		}
+	}
+
+out:
+	pthread_rwlock_unlock(&memblock);
+	return ret;
+}
+
+
 cluster_member_list_t *
 member_list(void)
 {
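
Note: both new helpers serialize on the rwlock guarding the cached
membership list.  member_online() returns the cached cm_state of the
node (0 if unknown), which is how forwarding_thread_v2() further down
detects a dead forwarding target:

    /* e.g. in the forwarder: give up if the target died */
    if (!member_online(target))
            response_code = RG_ENODE;
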
diff --git a/rgmanager/src/daemons/reslist.c b/rgmanager/src/daemons/reslist.c
index dbf685a..11c5122 100644
--- a/rgmanager/src/daemons/reslist.c
+++ b/rgmanager/src/daemons/reslist.c
@@ -19,7 +19,6 @@
 #include <libxml/parser.h>
 #include <libxml/xmlmemory.h>
 #include <libxml/xpath.h>
-#include <magma.h>
 #include <ccs.h>
 #include <stdlib.h>
 #include <stdio.h>
@@ -27,6 +26,7 @@
 #include <sys/types.h>
 #include <sys/stat.h>
 #include <list.h>
+#include <restart_counter.h>
 #include <reslist.h>
 #include <pthread.h>
 #ifndef NO_CCS
@@ -37,6 +37,13 @@
 char *attr_value(resource_node_t *node, char *attrname);
 char *rg_attr_value(resource_node_t *node, char *attrname);
 
+void
+res_build_name(char *buf, size_t buflen, resource_t *res)
+{
+	snprintf(buf, buflen, "%s:%s", res->r_rule->rr_type,
+		 res->r_attrs[0].ra_value);
+}
+
 /**
    Find and determine an attribute's value. 
 
@@ -170,18 +177,29 @@ primary_attr_value(resource_t *res)
 /**
    Compare two resources.
 
+  @param left	Left resource
+  @param right	Right resource	
+  @return	-1 if the resource types differ, 0 if identical,
+		1 if the resources differ, 2 if only reconfigurable
+		(RA_RECONFIG) attributes differ
+
  */
 int
 rescmp(resource_t *left, resource_t *right)
 {
-	int x, y = 0, found;
+	int x, y = 0, found = 0, ret = 0;
+
 
 	/* Completely different resource class... */
 	if (strcmp(left->r_rule->rr_type, right->r_rule->rr_type)) {
-		//printf("Er, wildly different resource type! ");
 		return -1;
 	}
 
+	/*
+	printf("Comparing %s:%s to %s:%s\n",
+	       left->r_rule->rr_type, left->r_attrs[0].ra_value,
+	       right->r_rule->rr_type, right->r_attrs[0].ra_value)
+	 */
+
 	for (x = 0; left->r_attrs && left->r_attrs[x].ra_name; x++) {
 
 		found = 0;
@@ -197,35 +215,52 @@ rescmp(resource_t *left, resource_t *right)
 			    left->r_attrs[x].ra_flags) {
 				/* Flags are different.  Change in
 				   resource agents? */
-				//printf("flags differ ");
+				/*
+				printf("* flags differ %08x vs %08x\n",
+				       left->r_attrs[x].ra_flags,
+				       right->r_attrs[y].ra_flags);
+				 */
 				return 1;
 			}
 
 			if (strcmp(right->r_attrs[y].ra_value,
 				   left->r_attrs[x].ra_value)) {
 				/* Different attribute value. */
-				//printf("different value for attr '%s'  ",
-				       //right->r_attrs[y].ra_name);
-				return 1;
+				/*
+				printf("* different value for attr '%s':"
+				       " '%s' vs '%s'",
+				       right->r_attrs[y].ra_name,
+				       left->r_attrs[x].ra_value,
+				       right->r_attrs[y].ra_value);
+				 */
+				if (left->r_attrs[x].ra_flags & RA_RECONFIG) {
+					/* printf(" [SAFE]\n"); */
+					ret = 2;
+			 	} else {
+					/* printf("\n"); */
+					return 1;
+				}
 			}
 		}
 
 		/* Attribute missing -> different attribute value. */
 		if (!found) {
-			//printf("Attribute %s deleted  ",
-			       //left->r_attrs[x].ra_name);
+			/*
+			printf("* Attribute '%s' deleted\n",
+			       left->r_attrs[x].ra_name);
+			 */
 			return 1;
 		}
 	}
 
 	/* Different attribute count */
 	if (x != y) {
-		//printf("Attribute count differ (attributes added!) ");
+		/* printf("* Attribute count differ (attributes added!) "); */
 		return 1;
 	}
 
 	/* All the same */
-	return 0;
+	return ret;
 }
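
The new return value 2 is what drives in-place reconfiguration:
resource_delta() (later in this patch) maps it to RF_NEEDSTART plus
RF_RECONFIG, and the condstart phase then runs the agent's "reconfig"
action instead of a full stop/start cycle.  In outline:

    switch (rescmp(lc, rc)) {
    case -1: /* different type: treat as a brand new resource */
    case  0: /* identical: nothing to do */
    case  1: /* real change: condstop + condstart */
    case  2: /* only RA_RECONFIG attrs changed: reconfig in place */
    }
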
 
 
@@ -280,11 +315,24 @@ resource_t *
 find_root_by_ref(resource_t **reslist, char *ref)
 {
 	resource_t *curr;
+	char ref_buf[128];
+	char *type;
+	char *name = ref;
 	int x;
 
+	snprintf(ref_buf, sizeof(ref_buf), "%s", ref);
+
+	type = ref_buf;
+	if ((name = strchr(ref_buf, ':'))) {
+		*name = 0;
+		name++;
+	} else {
+		/* Default type */
+		type = "service";
+		name = ref;
+	}
+
 	list_do(reslist, curr) {
-		if (curr->r_rule->rr_root == 0)
-			continue;
 
 		/*
 		   This should be one operation - the primary attr
@@ -292,15 +340,18 @@ find_root_by_ref(resource_t **reslist, char *ref)
 		 */
 		for (x = 0; curr->r_attrs && curr->r_attrs[x].ra_name;
 		     x++) {
+			if (strcmp(type, curr->r_rule->rr_type))
+				continue;
 			if (!(curr->r_attrs[x].ra_flags & RA_PRIMARY))
 				continue;
-			if (strcmp(ref, curr->r_attrs[x].ra_value))
+			if (strcmp(name, curr->r_attrs[x].ra_value))
 				continue;
 
 			return curr;
 		}
 	} while (!list_done(reslist, curr));
 
+
 	return NULL;
 }
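
find_root_by_ref() now accepts references of the form "type:name" (the
format produced by res_build_name() above) and defaults to type
"service" for a bare name, so existing "name"-only references keep
working.  A self-contained sketch of the splitting logic:

    #include <stdio.h>
    #include <string.h>

    int
    main(void)
    {
            char ref_buf[128], *type, *name;
            const char *ref = "fs:mydata";  /* or just "mydata" */

            snprintf(ref_buf, sizeof(ref_buf), "%s", ref);
            type = ref_buf;
            if ((name = strchr(ref_buf, ':'))) {
                    *name++ = 0;            /* split in place */
            } else {
                    type = "service";       /* default type */
                    name = ref_buf;
            }
            printf("type=%s name=%s\n", type, name);
            return 0;
    }
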
 
@@ -421,14 +472,14 @@ xpath_get_one(xmlDocPtr doc, xmlXPathContextPtr ctx, char *query)
 	if (((node->type == XML_ATTRIBUTE_NODE) && strstr(query, "@*")) ||
 	    ((node->type == XML_ELEMENT_NODE) && strstr(query, "child::*"))){
 		if (node->children && node->children->content)
-	  		size = strlen(node->children->content)+
-				      strlen(node->name)+2;
+	  		size = strlen((char *)node->children->content)+
+				      strlen((char *)node->name)+2;
 		else 
-			size = strlen(node->name)+2;
+			size = strlen((char *)node->name)+2;
 		nnv = 1;
 	} else {
 		if (node->children && node->children->content) {
-			size = strlen(node->children->content)+1;
+			size = strlen((char *)node->children->content)+1;
 		} else {
 			goto out;
 		}
@@ -514,7 +565,7 @@ print_resource(resource_t *res)
 	int x;
 
 	printf("Resource type: %s", res->r_rule->rr_type);
-	if (res->r_rule->rr_root)
+	if (res->r_rule->rr_flags & RF_ROOT)
 		printf(" [ROOT]");
 	if (res->r_flags & RF_INLINE)
 		printf(" [INLINE]");
@@ -524,6 +575,8 @@ print_resource(resource_t *res)
 		printf(" [NEEDSTOP]");
 	if (res->r_flags & RF_COMMON)
 		printf(" [COMMON]");
+	if (res->r_flags & RF_RECONFIG)
+		printf(" [RECONFIG]");
 	printf("\n");
 
 	if (res->r_rule->rr_maxrefs)
@@ -559,6 +612,8 @@ print_resource(resource_t *res)
 			printf(" unique");
 		if (res->r_attrs[x].ra_flags & RA_REQUIRED)
 			printf(" required");
+		if (res->r_attrs[x].ra_flags & RA_RECONFIG)
+			printf(" reconfig");
 		if (res->r_attrs[x].ra_flags & RA_INHERIT)
 			printf(" inherit(\"%s\")", res->r_attrs[x].ra_value);
 		printf(" ]\n");
@@ -817,6 +872,7 @@ load_resources(int ccsfd, resource_t **reslist, resource_rule_t **rulelist)
 				      "Error storing %s resource\n",
 				      newres->r_rule->rr_type);
 #endif
+
 			       destroy_resource(newres);
 		       }
 
diff --git a/rgmanager/src/daemons/resrules.c b/rgmanager/src/daemons/resrules.c
index 282eb5a..6102a6b 100644
--- a/rgmanager/src/daemons/resrules.c
+++ b/rgmanager/src/daemons/resrules.c
@@ -19,14 +19,16 @@
 #include <libxml/parser.h>
 #include <libxml/xmlmemory.h>
 #include <libxml/xpath.h>
-#include <magma.h>
 #include <ccs.h>
 #include <stdlib.h>
 #include <stdio.h>
+#include <string.h>
 #include <resgroup.h>
 #include <sys/types.h>
 #include <sys/stat.h>
 #include <list.h>
+#include <ctype.h>
+#include <restart_counter.h>
 #include <reslist.h>
 #include <pthread.h>
 #include <dirent.h>
@@ -60,7 +62,8 @@ store_rule(resource_rule_t **rulelist, resource_rule_t *newrule)
 #endif
 			return -1;
 		}
-		if (newrule->rr_root && curr->rr_root) {
+		if ((newrule->rr_flags & RF_ROOT) &&
+		    (curr->rr_flags & RF_ROOT)) {
 #ifdef NO_CCS
 			fprintf(stderr, "Error storing %s: root "
 				"resource type %s exists already\n",
@@ -175,32 +178,37 @@ _get_maxparents(xmlDocPtr doc, xmlXPathContextPtr ctx, char *base,
 
 
 /**
-   Get and store the version
+  Get and store a bit field.
 
-   @param doc		Pre-parsed XML document pointer.
-   @param ctx		Pre-allocated XML XPath context pointer.
-   @param base		XPath prefix to search
-   @param rr		Resource rule to store new information in.
+  @param doc		Pre-parsed XML document pointer.
+  @param ctx		Pre-allocated XML XPath context pointer.
+  @param base		XPath prefix to search
+  @param rr		Resource rule to store new information in.
  */
 void
-_get_version(xmlDocPtr doc, xmlXPathContextPtr ctx, char *base,
-	     resource_rule_t *rr)
+_get_rule_flag(xmlDocPtr doc, xmlXPathContextPtr ctx, char *base,
+	       resource_rule_t *rr, char *flag, int bit)
 {
 	char xpath[256];
 	char *ret = NULL;
 
-	snprintf(xpath, sizeof(xpath), "%s/@version", base);
+	snprintf(xpath, sizeof(xpath),
+		 "%s/attributes/@%s",
+		 base, flag);
 	ret = xpath_get_one(doc, ctx, xpath);
 	if (ret) {
-		rr->rr_version = ret;
+		if (atoi(ret)) {
+			rr->rr_flags |= bit;
+		} else {
+			rr->rr_flags &= ~bit;
+		}
 		free(ret);
 	}
-	rr->rr_version = NULL;
 }
 
 
 /**
-   Get and store the root attribute.
+   Get and store the version
 
    @param doc		Pre-parsed XML document pointer.
    @param ctx		Pre-allocated XML XPath context pointer.
@@ -208,18 +216,19 @@ _get_version(xmlDocPtr doc, xmlXPathContextPtr ctx, char *base,
    @param rr		Resource rule to store new information in.
  */
 void
-_get_root(xmlDocPtr doc, xmlXPathContextPtr ctx, char *base,
-	  resource_rule_t *rr)
+_get_version(xmlDocPtr doc, xmlXPathContextPtr ctx, char *base,
+	     resource_rule_t *rr)
 {
 	char xpath[256];
 	char *ret = NULL;
 
-	snprintf(xpath, sizeof(xpath), "%s/attributes/@root", base);
+	snprintf(xpath, sizeof(xpath), "%s/@version", base);
 	ret = xpath_get_one(doc, ctx, xpath);
 	if (ret) {
-		rr->rr_root = 1;
+		rr->rr_version = ret;
 		free(ret);
 	}
+	rr->rr_version = NULL;
 }
 
 
@@ -588,7 +597,7 @@ print_resource_rule(resource_rule_t *rr)
 	int x;
 
 	printf("Resource Rules for \"%s\"", rr->rr_type);
-	if (rr->rr_root)
+	if (rr->rr_flags & RF_ROOT)
 		printf(" [ROOT]");
 	printf("\n");
 
@@ -599,6 +608,17 @@ print_resource_rule(resource_rule_t *rr)
 		printf("Max instances: %d\n", rr->rr_maxrefs);
 	if (rr->rr_agent)
 		printf("Agent: %s\n", basename(rr->rr_agent));
+
+	printf("Flags: ");
+	if (rr->rr_flags) {
+		if (rr->rr_flags & RF_INIT)
+			printf("init_on_add ");
+		if (rr->rr_flags & RF_DESTROY)
+			printf("destroy_on_delete ");
+	} else {
+		printf("(none)");
+	}
+	printf("\n");
 	
 	printf("Attributes:\n");
 	if (!rr->rr_attrs) {
@@ -614,18 +634,25 @@ print_resource_rule(resource_rule_t *rr)
 			continue;
 		}
 
-		printf(" [");
-		if (rr->rr_attrs[x].ra_flags & RA_PRIMARY)
-			printf(" primary");
-		if (rr->rr_attrs[x].ra_flags & RA_UNIQUE)
-			printf(" unique");
-		if (rr->rr_attrs[x].ra_flags & RA_REQUIRED)
-			printf(" required");
-		if (rr->rr_attrs[x].ra_flags & RA_INHERIT)
-			printf(" inherit");
-		else if (rr->rr_attrs[x].ra_value)
-			printf(" default=\"%s\"", rr->rr_attrs[x].ra_value);
-		printf(" ]\n");
+		if (rr->rr_attrs[x].ra_flags) {
+			printf(" [");
+			if (rr->rr_attrs[x].ra_flags & RA_PRIMARY)
+				printf(" primary");
+			if (rr->rr_attrs[x].ra_flags & RA_UNIQUE)
+				printf(" unique");
+			if (rr->rr_attrs[x].ra_flags & RA_REQUIRED)
+				printf(" required");
+			if (rr->rr_attrs[x].ra_flags & RA_INHERIT)
+				printf(" inherit");
+			if (rr->rr_attrs[x].ra_flags & RA_RECONFIG)
+				printf(" reconfig");
+			printf(" ]");
+		}
+
+		if (rr->rr_attrs[x].ra_value)
+			printf(" default=\"%s\"\n", rr->rr_attrs[x].ra_value);
+		else
+			printf("\n");
 	}
 
 actions:
@@ -761,6 +788,18 @@ _get_rule_attrs(xmlDocPtr doc, xmlXPathContextPtr ctx, char *base,
 		}
 
 		/*
+		   See if this can be reconfigured on the fly without a 
+		   stop/start
+		 */
+		snprintf(xpath, sizeof(xpath), "%s/parameter[%d]/@reconfig",
+			 base, x);
+		if ((ret = xpath_get_one(doc,ctx,xpath))) {
+			if ((atoi(ret) != 0) || (ret[0] == 'y'))
+				flags |= RA_RECONFIG;
+			free(ret);
+		}
+
+		/*
 		   See if this is supposed to be inherited
 		 */
 		snprintf(xpath, sizeof(xpath), "%s/parameter[%d]/@inherit",
@@ -891,6 +930,10 @@ read_pipe(int fd, char **file, size_t *length)
 
 		n = read(fd, buf, sizeof(buf));
 		if (n < 0) {
+
+			if (errno == EINTR)
+				continue;
+
 			if (*file)
 				free(*file);
 			return -1;
@@ -899,7 +942,7 @@ read_pipe(int fd, char **file, size_t *length)
 		if (n == 0 && (!*length))
 			return 0;
 
-		if (n != sizeof(buf)) {
+		if (n == 0) {
 			done = 1;
 		}
 
@@ -1024,6 +1067,7 @@ load_resource_rulefile(char *filename, resource_rule_t **rules)
 			break;
 		memset(rr,0,sizeof(*rr));
 
+		rr->rr_flags = RF_INIT | RF_DESTROY;
 		rr->rr_type = type;
 		snprintf(base, sizeof(base), "/resource-agent[%d]", ruleid);
 
@@ -1035,12 +1079,14 @@ load_resource_rulefile(char *filename, resource_rule_t **rules)
 		snprintf(base, sizeof(base),
 			 "/resource-agent[%d]/special[@tag=\"rgmanager\"]",
 			 ruleid);
-		_get_root(doc, ctx, base, rr);
 		_get_maxparents(doc, ctx, base, rr);
+		_get_rule_flag(doc, ctx, base, rr, "root", RF_ROOT);
+		_get_rule_flag(doc, ctx, base, rr, "init_on_add", RF_INIT);
+		_get_rule_flag(doc, ctx, base, rr, "destroy_on_delete", RF_DESTROY);
 		rr->rr_agent = strdup(filename);
 
 		/*
-		   Second, add the allowable-children fields
+		   Second, add the children fields
 		 */
 		_get_childtypes(doc, ctx, base, rr);
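
Note: each rule now starts life with RF_INIT|RF_DESTROY set, and
_get_rule_flag() flips individual bits from optional attributes in the
agent's rgmanager metadata, so root/init_on_add/destroy_on_delete can
be toggled independently.  An illustrative (hypothetical) metadata
fragment matching the XPath "%s/attributes/@%s" used above:

    <special tag="rgmanager">
        <attributes root="0" init_on_add="1" destroy_on_delete="0"/>
    </special>
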
 
@@ -1095,8 +1141,9 @@ load_resource_rules(const char *rpath, resource_rule_t **rules)
 {
 	DIR *dir;
 	struct dirent *de;
-	char *fn;//, *dot;
+	char *fn, *dot;
 	char path[2048];
+	struct stat st_buf;
 
 	dir = opendir(rpath);
 	if (!dir)
@@ -1109,14 +1156,40 @@ load_resource_rules(const char *rpath, resource_rule_t **rules)
 		if (!fn)
 			continue;
 		
+		/* Ignore files with common backup extension */
 		if ((fn != NULL) && (strlen(fn) > 0) && 
 			(fn[strlen(fn)-1] == '~')) 
 			continue;
 
+		/* Ignore hidden files */
+		if (*fn == '.')
+			continue;
+
+		dot = strrchr(fn, '.');
+		if (dot) {
+			/* Ignore RPM installed save files, patches,
+			   diffs, etc. */
+			if (!strncasecmp(dot, ".rpm", 4)) {
+				fprintf(stderr, "Warning: "
+					"Ignoring %s/%s: Bad extension %s\n",
+					rpath, de->d_name, dot);
+				continue;
+			}
+		}
+
 		snprintf(path, sizeof(path), "%s/%s",
 			 rpath, de->d_name);
 
-		load_resource_rulefile(path, rules);
+		if (stat(path, &st_buf) < 0)
+			continue;
+
+		if (S_ISDIR(st_buf.st_mode))
+			continue;
+
+ 		if (st_buf.st_mode & (S_IXUSR|S_IXOTH|S_IXGRP)) {
+ 			//printf("Loading resource rule from %s\n", path);
+  			load_resource_rulefile(path, rules);
+ 		}
 	}
 	xmlCleanupParser();
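
Note: load_resource_rules() is now pickier about what counts as an
agent: hidden files, anything with an .rpm* extension (.rpmsave,
.rpmnew, ...), directories, and files without an execute bit are all
skipped, so editor backups and RPM leftovers in the agent directory no
longer produce spurious resource rules.
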
 
diff --git a/rgmanager/src/daemons/restart_counter.c b/rgmanager/src/daemons/restart_counter.c
new file mode 100644
index 0000000..c889c6d
--- /dev/null
+++ b/rgmanager/src/daemons/restart_counter.c
@@ -0,0 +1,205 @@
+/*
+  Copyright Red Hat, Inc. 2007
+
+  This program is free software; you can redistribute it and/or modify it
+  under the terms of the GNU General Public License version 2 as published
+  by the Free Software Foundation.
+
+  This program is distributed in the hope that it will be useful, but
+  WITHOUT ANY WARRANTY; without even the implied warranty of
+  MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
+  General Public License for more details.
+
+  You should have received a copy of the GNU General Public License
+  along with this program; see the file COPYING.  If not, write to the
+  Free Software Foundation, Inc.,  675 Mass Ave, Cambridge, 
+  MA 02139, USA.
+*/
+/* Time-based restart counters for rgmanager */
+
+#include <stdio.h>
+#include <list.h>
+#include <errno.h>
+#include <stdlib.h>
+#include <unistd.h>
+#include <sys/types.h>
+#include <time.h>
+#include <restart_counter.h>
+
+
+
+#define RESTART_INFO_MAGIC 0x184820ab
+
+typedef struct {
+	list_head();
+	time_t restart_time;
+} restart_item_t;
+
+typedef struct {
+	int magic;
+	time_t expire_timeout;
+	int max_restarts;
+	int restart_count;
+	restart_item_t *restart_nodes;
+} restart_info_t;
+
+
+#define VALIDATE(arg, ret) \
+do { \
+	if (!arg) {\
+		errno = EINVAL; \
+		return ret; \
+	} \
+	if (((restart_info_t *)arg)->magic != RESTART_INFO_MAGIC) {\
+		errno = EINVAL; \
+		return ret; \
+	} \
+} while(0)
+
+
+/* Remove expired restarts */
+static int
+restart_timer_purge(restart_counter_t arg, time_t now)
+{
+	restart_info_t *restarts = (restart_info_t *)arg;
+	restart_item_t *i;
+	int x, done = 0;
+
+	VALIDATE(arg, -1);
+
+	/* No timeout */
+	if (restarts->expire_timeout == 0)
+		return 0;
+
+	do {
+		done = 1;
+		list_for(&restarts->restart_nodes, i, x) {
+			if ((now - i->restart_time) >=
+			    restarts->expire_timeout) {
+				restarts->restart_count--;
+				list_remove(&restarts->restart_nodes, i);
+				free(i);
+				done = 0;
+				break;
+			}
+		}
+	} while(!done);
+
+	return 0;
+}
+
+
+int
+restart_count(restart_counter_t arg)
+{
+	restart_info_t *restarts = (restart_info_t *)arg;
+	time_t now;
+
+	VALIDATE(arg, -1);
+	now = time(NULL);
+	restart_timer_purge(arg, now);
+	return restarts->restart_count;
+}
+
+
+int
+restart_threshold_exceeded(restart_counter_t arg)
+{
+	restart_info_t *restarts = (restart_info_t *)arg;
+	time_t now;
+
+	VALIDATE(arg, -1);
+	now = time(NULL);
+	restart_timer_purge(arg, now);
+	if (restarts->restart_count >= restarts->max_restarts)
+		return 1;
+	return 0;
+}
+
+
+/* Add a restart entry to the list.  Returns 1 if restart
+   count is exceeded */
+int
+restart_add(restart_counter_t arg)
+{
+	restart_info_t *restarts = (restart_info_t *)arg;
+	restart_item_t *i;
+	time_t t;
+
+	if (!arg)
+		/* No max restarts / threshold = always
+		   ok to restart! */
+		return 0;
+
+	VALIDATE(arg, -1);
+
+	i = malloc(sizeof(*i));
+	if (!i) {
+		return -1;
+	}
+
+	t = time(NULL);
+	i->restart_time = t;
+
+	list_insert(&restarts->restart_nodes, i);
+	restarts->restart_count++;
+
+	/* Check and remove old entries */
+	restart_timer_purge(restarts, t);
+
+	if (restarts->restart_count >= restarts->max_restarts)
+		return 1;
+
+	return 0;
+}
+
+
+int
+restart_clear(restart_counter_t arg)
+{
+	restart_info_t *restarts = (restart_info_t *)arg;
+	restart_item_t *i;
+
+	VALIDATE(arg, -1);
+	while ((i = restarts->restart_nodes)) {
+		list_remove(&restarts->restart_nodes, i);
+		free(i);
+	}
+
+	restarts->restart_count = 0;
+
+	return 0;
+}
+
+
+restart_counter_t
+restart_init(time_t expire_timeout, int max_restarts)
+{
+	restart_info_t *info;
+
+	if (max_restarts < 0) {
+		errno = EINVAL;
+		return NULL;
+	}
+
+	info = malloc(sizeof(*info));
+	if (info == NULL)
+		return NULL;
+
+	info->magic = RESTART_INFO_MAGIC;
+	info->expire_timeout = expire_timeout;
+	info->max_restarts = max_restarts;
+	info->restart_count = 0;
+	info->restart_nodes = NULL;
+
+	return (void *)info;
+}
+
+
+int
+restart_cleanup(restart_counter_t arg)
+{
+	VALIDATE(arg, -1);
+	restart_clear(arg);
+	free(arg);
+	return 0;
+}
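
A counter is attached per top-level resource node by
assign_restart_policy() (see the restree.c changes below), driven by
the max_restarts and restart_expire_time attributes.  A usage sketch
against the API above:

    #include <restart_counter.h>

    /* allow 3 restarts within a sliding 300-second window */
    restart_counter_t rc = restart_init(300, 3);

    /* on each recovery attempt: */
    if (restart_add(rc)) {
            /* threshold exceeded: relocate instead of restarting */
    }

    restart_clear(rc);      /* e.g. after a clean (re)start */
    restart_cleanup(rc);    /* when the tree is destroyed */
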
diff --git a/rgmanager/src/daemons/restree.c b/rgmanager/src/daemons/restree.c
index 95b2450..3d7b0ab 100644
--- a/rgmanager/src/daemons/restree.c
+++ b/rgmanager/src/daemons/restree.c
@@ -1,5 +1,5 @@
 /*
-  Copyright Red Hat, Inc. 2004
+  Copyright Red Hat, Inc. 2004-2006
 
   This program is free software; you can redistribute it and/or modify it
   under the terms of the GNU General Public License as published by the
@@ -30,6 +30,7 @@
 #include <sys/types.h>
 #include <sys/stat.h>
 #include <list.h>
+#include <restart_counter.h>
 #include <reslist.h>
 #include <pthread.h>
 #include <clulog.h>
@@ -39,12 +40,15 @@
 void malloc_zap_mutex(void);
 #endif
 
-
 /* XXX from resrules.c */
 int store_childtype(resource_child_t **childp, char *name, int start,
 		    int stop, int forbid, int flags);
 int _res_op(resource_node_t **tree, resource_t *first, char *type,
 	    void * __attribute__((unused))ret, int op);
+static inline int
+_res_op_internal(resource_node_t **tree, resource_t *first,
+		 char *type, void *__attribute__((unused))ret, int realop,
+		 resource_node_t *node);
 void print_env(char **env);
 static inline int _res_op_internal(resource_node_t **tree, resource_t *first,
 		 char *type, void *__attribute__((unused))ret, int realop,
@@ -63,22 +67,6 @@ void * act_dup(resource_act_t *acts);
 time_t get_time(char *action, int depth, resource_node_t *node);
 
 
-const char *res_ops[] = {
-	"start",
-	"stop",
-	"status",
-	"resinfo",
-	"restart",
-	"reload",
-	"condrestart",		/* Unused */
-	"recover",		
-	"condstart",
-	"condstop",
-	"monitor",
-	"meta-data",		/* printenv */
-	"validate-all"
-};
-
 
 const char *ocf_errors[] = {
 	"success",				// 0
@@ -110,14 +98,13 @@ _no_op_mode(int arg)
 const char *
 ocf_strerror(int ret)
 {
-	if (ret < OCF_RA_MAX)
+	if (ret >= 0 && ret < OCF_RA_MAX)
 		return ocf_errors[ret];
 
 	return "unspecified";
 }
 
 
-
 /**
    Destroys an environment variable array.
 
@@ -143,7 +130,7 @@ kill_env(char **env)
    @see			build_env
  */
 static void
-add_ocf_stuff(resource_t *res, char **env, int depth)
+add_ocf_stuff(resource_t *res, char **env, int depth, int refcnt)
 {
 	char ver[10];
 	char *minor, *val;
@@ -227,6 +214,17 @@ add_ocf_stuff(resource_t *res, char **env, int depth)
 		return;
 	snprintf(val, n, "%s=%s", OCF_CHECK_LEVEL_STR, ver);
 	*env = val; env++;
+
+	/*
+	   Store the resource local refcnt (0 for now)
+	 */
+	snprintf(ver, sizeof(ver), "%d", refcnt);
+	n = strlen(OCF_REFCNT_STR) + strlen(ver) + 2;
+	val = malloc(n);
+	if (!val)
+		return;
+	snprintf(val, n, "%s=%s", OCF_REFCNT_STR, ver);
+	*env = val; env++;
 }
 
 
@@ -234,14 +232,13 @@ add_ocf_stuff(resource_t *res, char **env, int depth)
    Allocate and fill an environment variable array.
 
    @param node		Node in resource tree to use for parameters
-   @param op		Operation (start/stop/status/monitor/etc.)
    @param depth		Depth (status/monitor/etc.)
    @return		Newly allocated environment array or NULL if
    			one could not be formed.
    @see			kill_env res_exec add_ocf_stuff
  */
 static char **
-build_env(resource_node_t *node, int op, int depth)
+build_env(resource_node_t *node, int depth, int refcnt)
 {
 	resource_t *res = node->rn_resource;
 	char **env;
@@ -249,7 +246,7 @@ build_env(resource_node_t *node, int op, int depth)
 	int x, attrs, n;
 
 	for (attrs = 0; res->r_attrs && res->r_attrs[attrs].ra_name; attrs++);
-	attrs += 7; /*
+	attrs += 8; /*
 		   Leave space for:
 		   OCF_RA_VERSION_MAJOR
 		   OCF_RA_VERSION_MINOR
@@ -257,6 +254,7 @@ build_env(resource_node_t *node, int op, int depth)
 		   OCF_RESOURCE_INSTANCE
 		   OCF_RESOURCE_TYPE
 		   OCF_CHECK_LEVEL
+		   OCF_RESKEY_RGMANAGER_meta_refcnt
 		   (null terminator)
 		 */
 
@@ -296,7 +294,7 @@ build_env(resource_node_t *node, int op, int depth)
 		++attrs;
 	}
 
-	add_ocf_stuff(res, &env[attrs], depth);
+	add_ocf_stuff(res, &env[attrs], depth, refcnt);
 
 	return env;
 }
@@ -346,46 +344,35 @@ restore_signals(void)
    @see			build_env
  */
 int
-res_exec(resource_node_t *node, int op, int depth)
+res_exec(resource_node_t *node, int op, const char *arg, int depth)
 {
 	int childpid, pid;
 	int ret = 0;
 	char **env = NULL;
 	resource_t *res = node->rn_resource;
+	const char *op_str = agent_op_str(op);
 	char fullpath[2048];
 
-	if (!res->r_rule->rr_agent) {
-		clulog(LOG_DEBUG,
-		       "%s on %s \"%s\" no rr_agent\n",
-		       res_ops[op], res->r_rule->rr_type,
-		       res->r_attrs->ra_value);
+	if (!res->r_rule->rr_agent)
 		return 0;
-	}
 
 #ifdef DEBUG
-	env = build_env(node, op);
-	if (!env) {
-		clulog(LOG_DEBUG,
-		       "%s on %s \"%s\" build_env failed %d\n",
-		       res_ops[op], res->r_rule->rr_type,
-		       res->r_attrs->ra_value, errno);
+	env = build_env(node, depth, node->rn_resource->r_incarnations);
+	if (!env)
 		return -errno;
-	}
 #endif
 
 #ifdef NO_CCS
 	if (_no_op_mode_) {
-		printf("[%s] %s:%s\n", res_ops[op],
-			res->r_rule->rr_type, res->r_attrs->ra_value);
+		printf("[%s] %s:%s\n", op_str, res->r_rule->rr_type,
+		       res->r_attrs->ra_value);
 		return 0;
 	}
 #endif
 
 	childpid = fork();
-	if (childpid < 0) {
-		clulog(LOG_ERR, "%s: fork failed (%d)!\n", __func__, errno);
+	if (childpid < 0)
 		return -errno;
-	}
 
 	if (!childpid) {
 		/* Child */ 
@@ -394,21 +381,16 @@ res_exec(resource_node_t *node, int op, int depth)
 #endif
 #if 0
 		printf("Exec of script %s, action %s type %s\n",
-			res->r_rule->rr_agent, res_ops[op],
+			res->r_rule->rr_agent, agent_op_str(op),
 			res->r_rule->rr_type);
 #endif
 
 #ifndef DEBUG
-		env = build_env(node, op, depth);
+		env = build_env(node, depth, node->rn_resource->r_incarnations);
 #endif
 
-		if (!env) {
-			clulog(LOG_DEBUG,
-		       		"%s on %s \"%s\" build_env failed (ENOMEM)\n",
-		       		res_ops[op], res->r_rule->rr_type,
-		       		res->r_attrs->ra_value);
+		if (!env)
 			exit(-ENOMEM);
-		}
 
 		if (res->r_rule->rr_agent[0] != '/')
 			snprintf(fullpath, sizeof(fullpath), "%s/%s",
@@ -419,7 +401,10 @@ res_exec(resource_node_t *node, int op, int depth)
 
 		restore_signals();
 
-		execle(fullpath, fullpath, res_ops[op], NULL, env);
+		if (arg)
+			execle(fullpath, fullpath, op_str, arg, NULL, env);
+		else
+			execle(fullpath, fullpath, op_str, NULL, env);
 	}
 
 #ifdef DEBUG
@@ -436,16 +421,16 @@ res_exec(resource_node_t *node, int op, int depth)
 
 		ret = WEXITSTATUS(ret);
 
+#ifndef NO_CCS
+		if ((op == RS_STATUS &&
+		     node->rn_state == RES_STARTED && ret) ||
+		    (op != RS_STATUS && ret)) {
+#else
 		if (ret) {
+#endif
 			clulog(LOG_NOTICE,
-			       "%s on %s:%s returned %d (%s)\n",
-			       res_ops[op], res->r_rule->rr_type,
-			       res->r_attrs->ra_value, ret,
-			       ocf_strerror(ret));
-		} else {
-			clulog(LOG_DEBUG,
-			       "%s on %s:%s returned %d (%s)\n",
-			       res_ops[op], res->r_rule->rr_type,
+			       "%s on %s \"%s\" returned %d (%s)\n",
+			       op_str, res->r_rule->rr_type,
 			       res->r_attrs->ra_value, ret,
 			       ocf_strerror(ret));
 		}
@@ -456,15 +441,43 @@ res_exec(resource_node_t *node, int op, int depth)
 	if (!WIFSIGNALED(ret))
 		assert(0);
 
-	clulog(LOG_ERR,
-	       "%s on %s:%s caught signal %d\n",
-	       res_ops[op], res->r_rule->rr_type,
-	       res->r_attrs->ra_value, WTERMSIG(ret));
-
 	return -EFAULT;
 }
 
 
+static inline void
+assign_restart_policy(resource_t *curres, resource_node_t *parent,
+		      resource_node_t *node)
+{
+	char *val;
+	int max_restarts = 0;
+	time_t restart_expire_time = 0;
+
+	node->rn_restart_counter = NULL;
+
+	if (!curres || !node)
+		return;
+	if (parent) /* Only top-level nodes get a counter for now */
+		return;
+
+	val = res_attr_value(curres, "max_restarts");
+	if (!val)
+		return;
+	max_restarts = atoi(val);
+	if (max_restarts <= 0)
+		return;
+	val = res_attr_value(curres, "restart_expire_time");
+	if (val) {
+		restart_expire_time = (time_t)expand_time(val);
+		if (!restart_expire_time)
+			return;
+	}
+
+	node->rn_restart_counter = restart_init(restart_expire_time,
+						max_restarts);
+}
+
+
 static inline int
 do_load_resource(int ccsfd, char *base,
 	         resource_rule_t *rule,
@@ -545,7 +558,21 @@ do_load_resource(int ccsfd, char *base,
 	node->rn_parent = parent;
 	node->rn_resource = curres;
 	node->rn_state = RES_STOPPED;
+	node->rn_flags = 0;
 	node->rn_actions = (resource_act_t *)act_dup(curres->r_actions);
+	assign_restart_policy(curres, parent, node);
+
+	snprintf(tok, sizeof(tok), "%s/@__independent_subtree", base);
+#ifndef NO_CCS
+	if (ccs_get(ccsfd, tok, &ref) == 0) {
+#else
+	if (conf_get(tok, &ref) == 0) {
+#endif
+		if (atoi(ref) > 0 || strcasecmp(ref, "yes") == 0)
+			node->rn_flags |= RF_INDEPENDENT;
+		free(ref);
+	}
+
 	curres->r_refs++;
 
 	*newnode = node;
@@ -685,8 +712,10 @@ build_tree(int ccsfd, resource_node_t **tree,
 			}
 		}
 		/* No resource rule matching the child?  Press on... */
-		if (!flags)
+		if (!flags) {
+			free(ref);
 			continue;
+		}
 
 		flags = 0;
 		/* Don't descend on anything we should have already picked
@@ -706,12 +735,9 @@ build_tree(int ccsfd, resource_node_t **tree,
 			break;
 		}
 
-		if (flags == 2) {
-			free(ref);
-			continue;
-		}
-
 		free(ref);
+		if (flags == 2)
+			continue;
 
 		x = 1;
 		switch(do_load_resource(ccsfd, tok, childrule, tree,
@@ -758,21 +784,13 @@ build_resource_tree(int ccsfd, resource_node_t **tree,
 		    resource_rule_t **rulelist,
 		    resource_t **reslist)
 {
-	resource_rule_t *curr;
 	resource_node_t *root = NULL;
 	char tok[512];
 
 	snprintf(tok, sizeof(tok), "%s", RESOURCE_TREE_ROOT);
 
 	/* Find and build the list of root nodes */
-	list_do(rulelist, curr) {
-
-		if (!curr->rr_root)
-			continue;
-
-		build_tree(ccsfd, &root, NULL, NULL/*curr*/, rulelist, reslist, tok);
-
-	} while (!list_done(rulelist, curr));
+	build_tree(ccsfd, &root, NULL, NULL/*curr*/, rulelist, reslist, tok);
 
 	if (root)
 		*tree = root;
@@ -797,6 +815,11 @@ destroy_resource_tree(resource_node_t **tree)
 			destroy_resource_tree(&(*tree)->rn_child);
 
 		list_remove(tree, node);
+
+		if (node->rn_restart_counter) {
+			restart_cleanup(node->rn_restart_counter);
+		}
+
 		if(node->rn_actions){
 			free(node->rn_actions);
 		}
@@ -810,6 +833,7 @@ _print_resource_tree(resource_node_t **tree, int level)
 {
 	resource_node_t *node;
 	int x, y;
+	char *val;
 
 	list_do(tree, node) {
 		for (x = 0; x < level; x++)
@@ -824,19 +848,24 @@ _print_resource_tree(resource_node_t **tree, int level)
 				printf("NEEDSTART ");
 			if (node->rn_flags & RF_COMMON)
 				printf("COMMON ");
+			if (node->rn_flags & RF_INDEPENDENT)
+				printf("INDEPENDENT ");
 			printf("]");
 		}
 		printf(" {\n");
 
 		for (x = 0; node->rn_resource->r_attrs &&
 		     node->rn_resource->r_attrs[x].ra_value; x++) {
+			val = attr_value(node,
+				node->rn_resource->r_attrs[x].ra_name);
+			if (!val)
+				continue;
+
 			for (y = 0; y < level+1; y++)
 				printf("  ");
 			printf("%s = \"%s\";\n",
 			       node->rn_resource->r_attrs[x].ra_name,
-			       attr_value(node,
-					  node->rn_resource->r_attrs[x].ra_name)
-			      );
+			       val);
 		}
 
 		_print_resource_tree(&node->rn_child, level + 1);
@@ -879,16 +908,17 @@ _do_child_levels(resource_node_t **tree, resource_t *first, void *ret,
 
 #if 0
 			printf("%s children of %s type %s (level %d)\n",
-			       res_ops[op],
+			       agent_op_str(op),
 			       node->rn_resource->r_rule->rr_type,
 			       rule->rr_childtypes[x].rc_name, l);
 #endif
 
 			/* Do op on all children at our level */
-			rv += _res_op(&node->rn_child, first,
+			rv |= _res_op(&node->rn_child, first,
 			     	     rule->rr_childtypes[x].rc_name, 
 		     		     ret, op);
-			if (rv != 0 && op != RS_STOP)
+
+			if (rv & SFL_FAILURE && op != RS_STOP)
 				return rv;
 		}
 
@@ -900,46 +930,6 @@ _do_child_levels(resource_node_t **tree, resource_t *first, void *ret,
 }
 
 
-#if 0
-static inline int
-_do_child_default_level(resource_node_t **tree, resource_t *first,
-			void *ret, int op)
-{
-	resource_node_t *node = *tree;
-	resource_t *res = node->rn_resource;
-	resource_rule_t *rule = res->r_rule;
-	int x, rv = 0, lev;
-
-	for (x = 0; rule->rr_childtypes &&
-	     rule->rr_childtypes[x].rc_name; x++) {
-
-		if(op == RS_STOP)
-			lev = rule->rr_childtypes[x].rc_stoplevel;
-		else
-			lev = rule->rr_childtypes[x].rc_startlevel;
-
-		if (lev)
-			continue;
-
-		/*
-		printf("%s children of %s type %s (default level)\n",
-		       res_ops[op],
-		       node->rn_resource->r_rule->rr_type,
-		       rule->rr_childtypes[x].rc_name);
-		 */
-
-		rv = _res_op(&node->rn_child, first,
-			     rule->rr_childtypes[x].rc_name, 
-			     ret, op);
-		if (rv != 0)
-			return rv;
-	}
-
-	return 0;
-}
-#endif
-
-
 static inline int
 _xx_child_internal(resource_node_t *node, resource_t *first,
 		   resource_node_t *child, void *ret, int op)
@@ -973,13 +963,14 @@ _do_child_default_level(resource_node_t **tree, resource_t *first,
 
 	if (op == RS_START || op == RS_STATUS) {
 		list_for(&node->rn_child, child, y) {
-			rv = _xx_child_internal(node, first, child, ret, op);
-			if (rv)
+			rv |= _xx_child_internal(node, first, child, ret, op);
+
+			if (rv & SFL_FAILURE)
 				return rv;
 		}
 	} else {
 		list_for_rev(&node->rn_child, child, y) {
-			rv += _xx_child_internal(node, first, child, ret, op);
+			rv |= _xx_child_internal(node, first, child, ret, op);
 		}
 	}
 
@@ -1019,26 +1010,39 @@ _res_op_by_level(resource_node_t **tree, resource_t *first, void *ret,
 		return _res_op(&node->rn_child, first, NULL, ret, op);
 
 	if (op == RS_START || op == RS_STATUS) {
-		rv =  _do_child_levels(tree, first, ret, op);
-	       	if (rv != 0)
+		rv |= _do_child_levels(tree, first, ret, op);
+	       	if (rv & SFL_FAILURE)
 			return rv;
 
 		/* Start default level after specified ones */
-		rv =  _do_child_default_level(tree, first, ret, op);
+		rv |= _do_child_default_level(tree, first, ret, op);
 
 	} /* stop */ else {
 
-		rv =  _do_child_default_level(tree, first, ret, op);
-	       	if (rv != 0)
-			return rv;
-
-		rv =  _do_child_levels(tree, first, ret, op);
+		rv |= _do_child_default_level(tree, first, ret, op);
+		rv |= _do_child_levels(tree, first, ret, op);
 	}
 
 	return rv;
 }
 
 
+void
+mark_nodes(resource_node_t *node, int state, int flags)
+{
+	int x;
+	resource_node_t *child;
+
+	list_for(&node->rn_child, child, x) {
+		if (child->rn_child)
+			mark_nodes(child->rn_child, state, flags);
+	}
+
+	node->rn_state = state;
+	node->rn_flags |= flags;
+}
+
+
 /**
    Do a status on a resource node.  This takes into account the last time the
    status operation was run and selects the highest possible resource depth
@@ -1075,7 +1079,8 @@ do_status(resource_node_t *node)
 
 		/* Ok, it's a 'status' action. See if enough time has
 		   elapsed for a given type of status action */
-		if (delta < node->rn_actions[x].ra_interval)
+		if (delta < node->rn_actions[x].ra_interval ||
+		    !node->rn_actions[x].ra_interval)
 			continue;
 
 		if (idx == -1 ||
@@ -1090,29 +1095,22 @@ do_status(resource_node_t *node)
 		return 0;
 	}
 
+
 	node->rn_actions[idx].ra_last = now;
-	x = res_exec(node, RS_STATUS, node->rn_actions[idx].ra_depth);
+	x = res_exec(node, RS_STATUS, NULL, node->rn_actions[idx].ra_depth);
 
 	node->rn_last_status = x;
 	node->rn_last_depth = node->rn_actions[idx].ra_depth;
 	node->rn_checked = 1;
 
-	/* Clear check levels below ours. */
-	for (x=0; node->rn_actions[x].ra_name; x++) {
-		if (strcmp(node->rn_actions[x].ra_name, "status"))
-			continue;
-		if (node->rn_actions[x].ra_depth <= node->rn_last_depth)
-			node->rn_actions[x].ra_last = now;
-	}
-
-	if (node->rn_last_status == 0)
+	if (x == 0)
 		return 0;
 
 	if (!has_recover)
-		return node->rn_last_status;
+		return x;
 
 	/* Strange/failed status. Try to recover inline. */
-	if ((x = res_exec(node, RS_RECOVER, 0)) == 0)
+	if ((x = res_exec(node, RS_RECOVER, NULL, 0)) == 0)
 		return 0;
 
 	return x;
@@ -1162,8 +1160,9 @@ clear_checks(resource_node_t *node)
 {
 	time_t now;
 	int x = 0;
+	resource_t *res = node->rn_resource;
 
-	now = time(NULL);
+	now = res->r_started;
 
 	for (; node->rn_actions[x].ra_name; x++) {
 
@@ -1197,136 +1196,12 @@ clear_checks(resource_node_t *node)
 			in the subtree).
    @see			_res_op_by_level res_exec
  */
-#if 0
-int
-_res_op(resource_node_t **tree, resource_t *first,
-	char *type, void * __attribute__((unused))ret, int realop)
-{
-	int rv, me;
-	resource_node_t *node;
-	int op;
-
-	list_do(tree, node) {
-
-		/* Restore default operation. */
-		op = realop;
-
-		/* If we're starting by type, do that funky thing. */
-		if (type && strlen(type) &&
-		    strcmp(node->rn_resource->r_rule->rr_type, type))
-			continue;
-
-		/* If the resource is found, all nodes in the subtree must
-		   have the operation performed as well. */
-		me = !first || (node->rn_resource == first);
-
-		/*
-		printf("begin %s: %s %s [0x%x]\n", res_ops[op],
-		       node->rn_resource->r_rule->rr_type,
-		       primary_attr_value(node->rn_resource),
-		       node->rn_flags);
-		 */
-
-		if (me) {
-			/*
-			   If we've been marked as a node which
-			   needs to be started or stopped, clear
-			   that flag and start/stop this resource
-			   and all resource babies.
-
-			   Otherwise, don't do anything; look for
-			   children with RF_NEEDSTART and
-			   RF_NEEDSTOP flags.
-
-			   CONDSTART and CONDSTOP are no-ops if
-			   the appropriate flag is not set.
-			 */
-		       	if ((op == RS_CONDSTART) &&
-			    (node->rn_flags & RF_NEEDSTART)) {
-				/*
-				printf("Node %s:%s - CONDSTART\n",
-				       node->rn_resource->r_rule->rr_type,
-				       primary_attr_value(node->rn_resource));
-				 */
-				op = RS_START;
-			}
-
-			if ((op == RS_CONDSTOP) &&
-			    (node->rn_flags & RF_NEEDSTOP)) {
-				/*
-				printf("Node %s:%s - CONDSTOP\n",
-				       node->rn_resource->r_rule->rr_type,
-				       primary_attr_value(node->rn_resource));
-				 */
-				op = RS_STOP;
-			}
-		}
-
-		/* Start starts before children */
-		if (me && (op == RS_START)) {
-			node->rn_flags &= ~RF_NEEDSTART;
-
-			rv = res_exec(node, op, 0);
-			if (rv != 0) {
-				node->rn_state = RES_FAILED;
-				return rv;
-			}
-
-			set_time("start", 0, node);
-			clear_checks(node);
-
-			if (node->rn_state != RES_STARTED) {
-				++node->rn_resource->r_incarnations;
-				node->rn_state = RES_STARTED;
-			}
-		}
-
-		if (node->rn_child) {
-			rv = _res_op_by_level(&node, me?NULL:first, ret, op);
-			if (rv != 0)
-				return rv;
-		}
-
-		/* Stop/status/etc stops after children have stopped */
-		if (me && (op == RS_STOP)) {
-			node->rn_flags &= ~RF_NEEDSTOP;
-			rv = res_exec(node, op, 0);
-
-			if (rv != 0) {
-				node->rn_state = RES_FAILED;
-				return rv;
-			}
-
-			if (node->rn_state != RES_STOPPED) {
-				--node->rn_resource->r_incarnations;
-				node->rn_state = RES_STOPPED;
-			}
-
-		} else if (me && (op == RS_STATUS)) {
-
-			rv = do_status(node);
-			if (rv != 0)
-				return rv;
-		}
-
-		/*
-		printf("end %s: %s %s\n", res_ops[op],
-		       node->rn_resource->r_rule->rr_type,
-		       primary_attr_value(node->rn_resource));
-		 */
-	} while (!list_done(tree, node));
-
-	return 0;
-}
-#endif
-
-
 static inline int
 _res_op_internal(resource_node_t **tree, resource_t *first,
 		 char *type, void *__attribute__((unused))ret, int realop,
 		 resource_node_t *node)
 {
-	int rv, me, op;
+	int rv = 0, me, op;
 
 	/* Restore default operation. */
 	op = realop;
@@ -1378,12 +1253,18 @@ _res_op_internal(resource_node_t **tree, resource_t *first,
 
 	/* Start starts before children */
 	if (me && (op == RS_START)) {
-		node->rn_flags &= ~RF_NEEDSTART;
 
-		rv = res_exec(node, op, 0);
+		if (node->rn_flags & RF_RECONFIG &&
+		    realop == RS_CONDSTART) {
+			rv = res_exec(node, RS_RECONFIG, NULL, 0);
+			op = realop; /* reset to CONDSTART */
+		} else {
+			rv = res_exec(node, op, NULL, 0);
+		}
+		node->rn_flags &= ~(RF_NEEDSTART | RF_RECONFIG);
 		if (rv != 0) {
 			node->rn_state = RES_FAILED;
-			return rv;
+			return SFL_FAILURE;
 		}
 
 		set_time("start", 0, node);
@@ -1396,24 +1277,53 @@ _res_op_internal(resource_node_t **tree, resource_t *first,
 	} else if (me && (op == RS_STATUS)) {
 		/* Check status before children*/
 		rv = do_status(node);
-		if (rv != 0)
-			return rv;
-	}
+		if (rv != 0) {
+			/*
+			   If this node's status has failed, all of its
+			   dependent children are failed, whether or not
+			   this node is an independent subtree.
+			 */
+			mark_nodes(node, RES_FAILED,
+				   RF_NEEDSTART | RF_NEEDSTOP);
+
+			/* If we're an independent subtree, return a flag
+			   stating that this section is recoverable apart
+			   from siblings in the resource tree.  All child
+			   resources of this node must be restarted,
+			   but siblings of this node are not affected. */
+			if (node->rn_flags & RF_INDEPENDENT)
+				return SFL_RECOVERABLE;
+
+			return SFL_FAILURE;
+		}
 
-	if (node->rn_child) {
-		rv = _res_op_by_level(&node, me?NULL:first, ret, op);
-		if (rv != 0)
-			return rv;
 	}
 
+       if (node->rn_child) {
+                rv |= _res_op_by_level(&node, me?NULL:first, ret, op);
+
+               /* If one or more child resources are failed and at least one
+		  of them is not an independent subtree, then check whether
+		  we are an independent subtree.  If so, mark ourselves
+		  and all our children as failed and return a flag stating
+		  that this section is recoverable apart from siblings in
+		  the resource tree. */
+		if (op == RS_STATUS && (rv & SFL_FAILURE) &&
+		    (node->rn_flags & RF_INDEPENDENT)) {
+			mark_nodes(node, RES_FAILED,
+				   RF_NEEDSTART | RF_NEEDSTOP);
+			rv = SFL_RECOVERABLE;
+		}
+	}
+ 			
 	/* Stop should occur after children have stopped */
 	if (me && (op == RS_STOP)) {
 		node->rn_flags &= ~RF_NEEDSTOP;
-		rv = res_exec(node, op, 0);
+		rv |= res_exec(node, op, NULL, 0);
 
 		if (rv != 0) {
 			node->rn_state = RES_FAILED;
-			return rv;
+			return SFL_FAILURE;
 		}
 
 		if (node->rn_state != RES_STOPPED) {
@@ -1426,7 +1336,7 @@ _res_op_internal(resource_node_t **tree, resource_t *first,
 	       //node->rn_resource->r_rule->rr_type,
 	       //primary_attr_value(node->rn_resource));
 	
-	return 0;
+	return rv;
 }
 
 
@@ -1452,24 +1362,31 @@ _res_op(resource_node_t **tree, resource_t *first,
 	char *type, void * __attribute__((unused))ret, int realop)
 {
   	resource_node_t *node;
- 	int count = 0, rv;
+ 	int count = 0, rv = 0;
  	
  	if (realop == RS_STOP) {
  		list_for_rev(tree, node, count) {
- 			rv = _res_op_internal(tree, first, type, ret, realop,
- 					      node);
- 			if (rv != 0) 
- 				return rv;
+ 			rv |= _res_op_internal(tree, first, type, ret, realop,
+ 					       node);
  		}
  	} else {
  		list_for(tree, node, count) {
- 			rv = _res_op_internal(tree, first, type, ret, realop,
- 					      node);
- 			if (rv != 0) 
+ 			rv |= _res_op_internal(tree, first, type, ret, realop,
+ 					       node);
+
+			/* If we hit a problem during a 'status' op in an
+			   independent subtree, rv will have the
+			   SFL_RECOVERABLE bit set, but not SFL_FAILURE.
+			   If we ever hit SFL_FAILURE during a status
+			   operation, we're *DONE* - even if the subtree
+			   is flagged w/ indy-subtree */
+			  
+ 			if (rv & SFL_FAILURE) 
  				return rv;
  		}
  	}
-	return 0;
+
+	return rv;
 }
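
Note the switch from additive return codes to a bit mask: the tree
walkers now OR per-node results together, so one pass can report both a
hard failure (SFL_FAILURE) and an independently recoverable subtree
(SFL_RECOVERABLE) without losing either.  The caller's contract, in
outline (the flag values themselves live in the headers):

    rv |= _res_op_internal(...);
    if (rv & SFL_FAILURE)
            return rv;      /* whole service has failed */
    /* rv may still carry SFL_RECOVERABLE: restart only the
       independent subtree and leave its siblings running */
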
 
 /**
@@ -1564,6 +1481,7 @@ int
 resource_delta(resource_t **leftres, resource_t **rightres)
 {
 	resource_t *lc, *rc;
+	int ret;
 
 	list_do(leftres, lc) {
 		rc = find_resource_by_ref(rightres, lc->r_rule->rr_type,
@@ -1576,10 +1494,25 @@ resource_delta(resource_t **leftres, resource_t **rightres)
 		}
 
 		/* Ok, see if the resource is the same */
-		if (rescmp(lc, rc) == 0) {
+		ret = rescmp(lc, rc);
+		if (ret	== 0) {
+			rc->r_flags |= RF_COMMON;
+			continue;
+		}
+
+		if (ret == 2) {
+			/* return of 2 from rescmp means
+			   the two resources differ only 
+			   by reconfigurable bits */
+			/* Do nothing on condstop phase;
+			   do a "reconfig" instead of 
+			   "start" on conststart phase */
 			rc->r_flags |= RF_COMMON;
+			rc->r_flags |= RF_NEEDSTART;
+			rc->r_flags |= RF_RECONFIG;
 			continue;
 		}
+
 		rc->r_flags |= RF_COMMON;
 
 		/* Resource has changed.  Flag it. */
@@ -1641,12 +1574,17 @@ resource_tree_delta(resource_node_t **ltree, resource_node_t **rtree)
 			   or is new), then we don't really care about its
 			   children.
 			 */
+
 			if (rn->rn_resource->r_flags & RF_NEEDSTART) {
 				rn->rn_flags |= RF_NEEDSTART;
-				continue;
+				if ((rn->rn_resource->r_flags & RF_RECONFIG) == 0)
+					continue;
 			}
 
-			if (rc == 0) {
+			if (rc == 0 || rc == 2) {
+				if (rc == 2)
+					rn->rn_flags |= RF_NEEDSTART | RF_RECONFIG;
+
 				/* Ok, same resource.  Recurse. */
 				ln->rn_flags |= RF_COMMON;
 				rn->rn_flags |= RF_COMMON;
diff --git a/rgmanager/src/daemons/rg_event.c b/rgmanager/src/daemons/rg_event.c
new file mode 100644
index 0000000..48d0fca
--- /dev/null
+++ b/rgmanager/src/daemons/rg_event.c
@@ -0,0 +1,500 @@
+/*
+  Copyright Red Hat, Inc. 2006-2007
+
+  This program is free software; you can redistribute it and/or modify it
+  under the terms of the GNU General Public License version 2 as published
+  by the Free Software Foundation.
+
+  This program is distributed in the hope that it will be useful, but
+  WITHOUT ANY WARRANTY; without even the implied warranty of
+  MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
+  General Public License for more details.
+
+  You should have received a copy of the GNU General Public License
+  along with this program; see the file COPYING.  If not, write to the
+  Free Software Foundation, Inc.,  675 Mass Ave, Cambridge, 
+  MA 02139, USA.
+*/
+#include <resgroup.h>
+#include <rg_locks.h>
+#include <gettid.h>
+#include <assert.h>
+#include <ccs.h>
+#include <clulog.h>
+#include <event.h>
+#include <stdint.h>
+#include <platform.h>
+#include <magma.h>
+#include <magmamsg.h>
+#include <vf.h>
+
+
+/**
+ * resource group event queue.
+ */
+static event_t *event_queue = NULL;
+#ifdef WRAP_LOCKS
+static pthread_mutex_t event_queue_mutex = PTHREAD_ERRORCHECK_MUTEX_INITIALIZER_NP;
+static pthread_mutex_t mi_mutex = PTHREAD_ERRORCHECK_MUTEX_INITIALIZER_NP;
+#else
+static pthread_mutex_t event_queue_mutex = PTHREAD_MUTEX_INITIALIZER;
+static pthread_mutex_t mi_mutex = PTHREAD_MUTEX_INITIALIZER;
+#endif
+static pthread_t event_thread = 0;
+static int transition_throttling = 5;
+static int central_events = 0;
+
+extern int running;
+static int _master = 0;
+static void *_master_lock = NULL;
+static int _xid = 0;
+static event_master_t *mi = NULL;
+
+void hard_exit(void);
+int init_resource_groups(int);
+void flag_shutdown(int sig);
+void flag_reconfigure(int sig);
+
+event_table_t *master_event_table = NULL;
+
+
+void
+set_transition_throttling(int nsecs)
+{
+	if (nsecs < 0)
+		nsecs = 0;
+	transition_throttling = nsecs;
+}
+
+
+void
+set_central_events(int flag)
+{
+	central_events = flag;
+}
+
+
+int
+central_events_enabled(void)
+{
+	return central_events;
+}
+
+
+/**
+  Called to handle the transition of a cluster member from up->down or
+  down->up.  This handles initializing services (in the local node-up case),
+  exiting due to loss of quorum (local node-down), and service fail-over
+  (remote node down).  This is the distributed node event processor;
+  for the local-only node event processor, see slang_event.c
+ 
+  @param local		Nonzero if this is an event for the local node.
+  @param nodeID		ID of the member which has come up/gone down.
+  @param nodeStatus		New state of the member in question.
+  @param clean		Nonzero if the state transition was clean.
+  @see eval_groups
+ */
+void
+node_event(int local, uint64_t nodeID, int nodeStatus, int clean)
+{
+	if (!running)
+		return;
+
+	if (local) {
+
+		/* Local Node Event */
+		if (nodeStatus == 0) {
+			clulog(LOG_ERR, "Exiting uncleanly\n");
+			hard_exit();
+		}
+
+		if (!rg_initialized()) {
+			if (init_resource_groups(0) != 0) {
+				clulog(LOG_ERR,
+				       "#36: Cannot initialize services\n");
+				hard_exit();
+			}
+		}
+
+		if (!running) {
+			clulog(LOG_NOTICE, "Processing delayed exit signal\n");
+			return;
+		}
+		setup_signal(SIGINT, flag_shutdown);
+		setup_signal(SIGTERM, flag_shutdown);
+		setup_signal(SIGHUP, flag_reconfigure);
+
+		eval_groups(1, nodeID, 1);
+		return;
+	}
+
+	/*
+	 * Nothing to do for events from other nodes if we are not ready.
+	 */
+	if (!rg_initialized()) {
+		clulog(LOG_DEBUG, "Services not initialized.\n");
+		return;
+	}
+
+	eval_groups(0, nodeID, nodeStatus);
+}
+
+
+/**
+   Callback from view-formation when a commit occurs for the Transition-
+   Master key.
+ */
+int32_t
+master_event_callback(char *key, uint64_t viewno,
+		      void *data, uint32_t datalen)
+{
+	event_master_t *m;
+
+	m = data;
+	if (datalen != (uint32_t)sizeof(*m)) {
+		clulog(LOG_ERR, "%s: wrong size\n", __FUNCTION__);
+		return 1;
+	}
+
+	swab_event_master_t(m);
+	if (m->m_magic != EVENT_MASTER_MAGIC) {
+		clulog(LOG_ERR, "%s: wrong size\n", __FUNCTION__);
+		return 1;
+	}
+
+	if (m->m_nodeid == my_id())
+		clulog(LOG_DEBUG, "Master Commit: I am master\n");
+	else 
+		clulog(LOG_DEBUG, "Master Commit: %d is master\n", m->m_nodeid);
+
+	pthread_mutex_lock(&mi_mutex);
+	if (mi)
+		free(mi);
+	mi = m;
+	pthread_mutex_unlock(&mi_mutex);
+
+	return 0;
+}
+
+
+/**
+  Read the Transition-Master key from vf if it exists.  If it doesn't,
+  attempt to become the transition-master.
+ */
+static int
+find_master(void)
+{
+	event_master_t *masterinfo = NULL;
+	void *data;
+	uint32_t sz;
+	cluster_member_list_t *m = NULL;
+	uint64_t vn;
+	int master_id = -1;
+
+	m = member_list();
+	if (vf_read(m, "Transition-Master", &vn,
+		    (void **)(&data), &sz) < 0) {
+		clulog(LOG_ERR, "Unable to discover master"
+		       " status\n");
+		masterinfo = NULL;
+	} else {
+		masterinfo = (event_master_t *)data;
+	}
+	cml_free(m);
+
+	if (masterinfo && (sz >= sizeof(*masterinfo))) {
+		swab_event_master_t(masterinfo);
+		if (masterinfo->m_magic == EVENT_MASTER_MAGIC) {
+			clulog(LOG_DEBUG, "Master Locate: %d is master\n",
+			       masterinfo->m_nodeid);
+			pthread_mutex_lock(&mi_mutex);
+			if (mi)
+				free(mi);
+			mi = masterinfo;
+			pthread_mutex_unlock(&mi_mutex);
+			master_id = masterinfo->m_nodeid;
+		}
+	}
+
+	return master_id;
+}
+
+
+/**
+  Return a copy of the cached event_master_t structure to the
+  caller.
+ */
+int
+event_master_info_cached(event_master_t *mi_out)
+{
+	if (!central_events || !mi_out) {
+		errno = EINVAL;
+		return -1;
+	}
+
+	pthread_mutex_lock(&mi_mutex);
+	if (!mi) {
+		pthread_mutex_unlock(&mi_mutex);
+		errno = ENOENT;
+		return -1;
+	}
+
+	memcpy(mi_out, mi, sizeof(*mi));
+	pthread_mutex_unlock(&mi_mutex);
+	return 0;
+}
+
+
+/**
+  Return the node ID of the master.  If none exists, become
+  the master and return our own node ID.
+ */
+uint64_t
+event_master(void)
+{
+	cluster_member_list_t *m = NULL;
+	event_master_t masterinfo;
+	uint64_t master_id = NODE_ID_NONE;
+
+	/* We hold this forever. */
+	if (_master)
+		return my_id();
+
+	m = member_list();
+	pthread_mutex_lock(&mi_mutex);
+
+	if (mi) {
+		master_id = mi->m_nodeid;
+		pthread_mutex_unlock(&mi_mutex);
+		if (memb_online(m, master_id)) {
+			//clulog(LOG_DEBUG, "%d is master\n", mi->m_nodeid);
+			goto out;
+		}
+	}
+
+	pthread_mutex_unlock(&mi_mutex);
+
+	memset(&_master_lock, 0, sizeof(_master_lock));
+	if (clu_lock("Transition-Master", CLK_EX|CLK_NOWAIT,
+		     &_master_lock) < 0) {
+		/* not us, find out who is master */
+		master_id = find_master();
+		goto out;
+	}
+
+#if 0 /* XXX */
+	if (_master_lock.sb_status != 0) {
+		master_id = -1;
+		goto out;
+	}
+#endif
+
+	_master = 1;
+
+	memset(&masterinfo, 0, sizeof(masterinfo));
+	masterinfo.m_magic = EVENT_MASTER_MAGIC;
+	masterinfo.m_nodeid = my_id();
+	masterinfo.m_master_time = (uint64_t)time(NULL);
+	swab_event_master_t(&masterinfo);
+
+	if (vf_write(m, VFF_IGN_CONN_ERRORS | VFF_RETRY,
+		     "Transition-Master", &masterinfo,
+		     sizeof(masterinfo)) < 0) {
+		clulog(LOG_ERR, "Unable to advertise master"
+		       " status to all nodes\n");
+	}
+
+	master_id = my_id();
+out:
+	if(m)
+		cml_free(m);
+	return master_id;
+}
+
+
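
Master election is lock-then-advertise: the first node to take the
"Transition-Master" cluster lock (CLK_EX|CLK_NOWAIT) becomes master and
publishes an event_master_t through VF; everyone else learns the
master's identity from vf_read() or the commit callback above.  The
lock is held for the life of the daemon, so mastership only moves when
the master leaves the cluster.
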
+
+void group_event(char *name, uint32_t state, int owner);
+
+/**
+  Event handling function.  This only stays around as long as
+  events are on the queue.
+ */
+void *
+_event_thread_f(void *arg)
+{
+	event_t *ev;
+	int count = 0;
+
+	while (1) {
+		pthread_mutex_lock(&event_queue_mutex);
+		ev = event_queue;
+		if (ev)
+			list_remove(&event_queue, ev);
+		else
+			break; /* We're outta here */
+
+		++count;
+		/* Event thread usually doesn't hang around.  When it's
+	   	   spawned, sleep for this many seconds in order to let
+	   	   some events queue up */
+		if ((count==1) && transition_throttling && !central_events)
+			sleep(transition_throttling);
+
+		pthread_mutex_unlock(&event_queue_mutex);
+
+		if (ev->ev_type == EVENT_CONFIG) {
+			/*
+			clulog(LOG_NOTICE, "Config Event: %d -> %d\n",
+			       ev->ev.config.cfg_oldversion,
+			       ev->ev.config.cfg_version);
+			 */
+			init_resource_groups(1);
+			free(ev);
+			continue;
+		}
+
+		if (central_events) {
+			/* If the master node died or there isn't
+			   one yet, take the master lock. */
+			if (event_master() == my_id()) {
+				slang_process_event(master_event_table,
+						    ev);
+			} 
+			free(ev);
+			continue;
+			/* ALL OF THE CODE BELOW IS DISABLED
+			   when using central_events */
+		}
+
+		if (ev->ev_type == EVENT_RG) {
+			/*
+			clulog(LOG_NOTICE, "RG Event: %s %s %d\n",
+			       ev->ev.group.rg_name,
+			       rg_state_str(ev->ev.group.rg_state),
+			       ev->ev.group.rg_owner);
+			 */
+			group_event(ev->ev.group.rg_name,
+				    ev->ev.group.rg_state,
+				    ev->ev.group.rg_owner);
+		} else if (ev->ev_type == EVENT_NODE) {
+			/*
+			clulog(LOG_NOTICE, "Node Event: %s %d %s %s\n",
+			       ev->ev.node.ne_local?"Local":"Remote",
+			       ev->ev.node.ne_nodeid,
+			       ev->ev.node.ne_state?"UP":"DOWN",
+			       ev->ev.node.ne_clean?"Clean":"Dirty")
+			 */
+
+			node_event(ev->ev.node.ne_local,
+				   ev->ev.node.ne_nodeid,
+				   ev->ev.node.ne_state,
+				   ev->ev.node.ne_clean);
+		}
+
+		free(ev);
+	}
+
+	if (!central_events || _master) {
+		clulog(LOG_DEBUG, "%d events processed\n", count);
+	}
+	/* Mutex held */
+	event_thread = 0;
+	pthread_mutex_unlock(&event_queue_mutex);
+	pthread_exit(NULL);
+}
+
+
+static void
+insert_event(event_t *ev)
+{
+	pthread_attr_t attrs;
+	pthread_mutex_lock (&event_queue_mutex);
+	ev->ev_transaction = ++_xid;
+	list_insert(&event_queue, ev);
+	if (event_thread == 0) {
+        	pthread_attr_init(&attrs);
+        	pthread_attr_setinheritsched(&attrs, PTHREAD_INHERIT_SCHED);
+        	pthread_attr_setdetachstate(&attrs, PTHREAD_CREATE_DETACHED);
+		pthread_attr_setstacksize(&attrs, 262144);
+
+		pthread_create(&event_thread, &attrs, _event_thread_f, NULL);
+        	pthread_attr_destroy(&attrs);
+	}
+	pthread_mutex_unlock (&event_queue_mutex);
+}
+
+
+static event_t *
+new_event(void)
+{
+	event_t *ev;
+
+	while (1) {
+		ev = malloc(sizeof(*ev));
+		if (ev) {
+			break;
+		}
+		sleep(1);
+	}
+	memset(ev,0,sizeof(*ev));
+	ev->ev_type = EVENT_NONE;
+
+	return ev;
+}
+
+
+void
+rg_event_q(char *name, uint32_t state, uint64_t owner, uint64_t last)
+{
+	event_t *ev = new_event();
+
+	ev->ev_type = EVENT_RG;
+
+	strncpy(ev->ev.group.rg_name, name, sizeof(ev->ev.group.rg_name));
+	ev->ev.group.rg_state = state;
+	ev->ev.group.rg_owner = owner;
+	ev->ev.group.rg_last_owner = last;
+
+	insert_event(ev);
+}
+
+
+void
+node_event_q(int local, uint64_t nodeID, int state, int clean)
+{
+	event_t *ev = new_event();
+
+	ev->ev_type = EVENT_NODE;
+	ev->ev.node.ne_state = state;
+	ev->ev.node.ne_local = local;
+	ev->ev.node.ne_nodeid = nodeID;
+	ev->ev.node.ne_clean = clean;
+	insert_event(ev);
+}
+
+
+void
+config_event_q(int old_version, int new_version)
+{
+	event_t *ev = new_event();
+
+	ev->ev_type = EVENT_CONFIG;
+	ev->ev.config.cfg_version = new_version;
+	ev->ev.config.cfg_oldversion = old_version;
+	insert_event(ev);
+}
+
+void
+user_event_q(char *svc, int request,
+	     int arg1, int arg2, uint64_t target, int fd)
+{
+	event_t *ev = new_event();
+
+	ev->ev_type = EVENT_USER;
+	strncpy(ev->ev.user.u_name, svc, sizeof(ev->ev.user.u_name));
+	ev->ev.user.u_request = request;
+	ev->ev.user.u_arg1 = arg1;
+	ev->ev.user.u_arg2 = arg2;
+	ev->ev.user.u_target = target;
+	ev->ev.user.u_fd = fd;
+	insert_event(ev);
+}
+
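The queueing code above is a small producer/consumer idiom: rg_event_q(),
node_event_q(), config_event_q() and user_event_q() all funnel into
insert_event(), which queues the event under a mutex and lazily spawns one
detached worker (_event_thread_f) only when none is running; the worker exits
as soon as the queue drains.  A minimal self-contained sketch of the same
idiom, illustrative only (plain pthreads, none of the rgmanager types; all
names here are made up):

#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

struct item { struct item *next; int data; };

static pthread_mutex_t q_mutex = PTHREAD_MUTEX_INITIALIZER;
static struct item *queue = NULL;
static pthread_t worker = 0;		/* nonzero while a consumer runs */

static void *worker_fn(void *arg)
{
	struct item *it;

	while (1) {
		pthread_mutex_lock(&q_mutex);
		it = queue;
		if (!it)
			break;		/* exit with the mutex held, as above */
		queue = it->next;
		pthread_mutex_unlock(&q_mutex);

		printf("processing %d\n", it->data);
		free(it);
	}

	worker = 0;			/* mutex still held: no race with a
					   producer spawning a new worker */
	pthread_mutex_unlock(&q_mutex);
	return NULL;
}

static void enqueue(int data)
{
	pthread_attr_t attrs;
	struct item *it = malloc(sizeof(*it));

	if (!it)
		return;
	it->data = data;

	pthread_mutex_lock(&q_mutex);
	it->next = queue;
	queue = it;
	if (worker == 0) {		/* spawn a consumer only if idle */
		pthread_attr_init(&attrs);
		pthread_attr_setdetachstate(&attrs, PTHREAD_CREATE_DETACHED);
		pthread_create(&worker, &attrs, worker_fn, NULL);
		pthread_attr_destroy(&attrs);
	}
	pthread_mutex_unlock(&q_mutex);
}

int main(void)
{
	int x;

	for (x = 0; x < 5; x++)
		enqueue(x);
	sleep(1);			/* let the detached worker drain */
	return 0;
}

Clearing the "worker running" flag while the mutex is still held is what
makes the lazy spawn safe: a producer can never observe an idle flag while
the old worker might still pick up its item.
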
diff --git a/rgmanager/src/daemons/rg_forward.c b/rgmanager/src/daemons/rg_forward.c
index 081abb5..0d7df55 100644
--- a/rgmanager/src/daemons/rg_forward.c
+++ b/rgmanager/src/daemons/rg_forward.c
@@ -26,6 +26,14 @@
 #include <clulog.h>
 
 
+struct fw_message {
+	SmMessageSt msg;
+	uint64_t nodeid;
+	int fd;
+	int unused;
+};
+
+
 void
 build_message(SmMessageSt *msgp, int action, char *svcName, uint64_t target)
 {
@@ -145,3 +153,105 @@ forward_request(request_t *req)
         pthread_attr_destroy(&attrs);
 }
 
+
+void *
+forwarding_thread_v2(void *arg)
+{
+	int fd = -1, resp_fd = -1;
+	cluster_member_list_t *m = NULL;
+	SmMessageSt *msgp = NULL, msg;
+	int response_code = RG_EAGAIN, ret, target = -1;
+	int retries = 0;
+	struct fw_message *fwmsg = (struct fw_message *)arg;
+
+	msgp = &fwmsg->msg;
+	resp_fd = fwmsg->fd;
+	target = fwmsg->nodeid;
+
+	clulog(LOG_DEBUG, "FW: Forwarding SM request to %d\n",
+	       target);
+
+	if ((fd = msg_open(target, RG_PORT, RG_PURPOSE, 10)) < 0) {
+		clulog(LOG_DEBUG, "FW: Failed to open channel to %d: %s\n",
+		       target, strerror(errno));
+		goto out_fail;
+	}
+
+	/* swap + send */
+	swab_SmMessageSt(msgp);
+	if (msg_send(fd, msgp, sizeof(*msgp)) < (int)sizeof(*msgp)) {
+		clulog(LOG_DEBUG, "FW: Failed to send message to %d fd %d: %s\n",
+		       target, fd, strerror(errno));
+		goto out_fail;
+	}
+
+
+        /*
+	 * Ok, we're forwarding a message to another node.  Keep tabs on
+	 * the node to make sure it doesn't die.  Basically, wake up every
+	 * now and again to make sure it's still online.  If it isn't, send
+	 * a response back to the caller.
+	 */
+	do {
+		ret = msg_receive_timeout(fd, &msg, sizeof(msg), 10);
+		if (ret < (int)sizeof(msg)) {
+			if (ret < 0 && errno == ETIMEDOUT) {
+				if (!member_online(target)) {
+					response_code = RG_ENODE;
+					goto out_fail;
+				}
+				continue;
+			}
+
+			if (ret == 0)
+				continue;
+		}
+		break;
+	} while(++retries < 60); /* old 600 second rule */
+
+	if (ret < (int)sizeof(msg))
+		goto out_fail;	/* no usable reply; report RG_EAGAIN */
+
+	swab_SmMessageSt(&msg);
+
+	response_code = msg.sm_data.d_ret;
+	target = msg.sm_data.d_svcOwner;
+
+out_fail:
+	if (resp_fd >= 0) {
+		send_ret(resp_fd, msgp->sm_data.d_svcName, response_code,
+			 msgp->sm_data.d_action, target);
+		msg_close(resp_fd);
+	}
+
+	if (fd >= 0)
+		msg_close(fd);
+
+	/* msgp points into fwmsg, so free it only after the last use */
+	free(fwmsg);
+
+	pthread_exit(NULL);
+}
+
+
+void
+forward_message(int fd, void *msgp, uint64_t nodeid)
+{
+	pthread_t newthread;
+	pthread_attr_t attrs;
+	struct fw_message *fwmsg;
+
+	fwmsg = malloc(sizeof(struct fw_message));
+	if (!fwmsg) {
+		msg_close(fd);
+		return;
+	}
+
+	memcpy(&fwmsg->msg, msgp, sizeof(fwmsg->msg));
+	fwmsg->fd = fd;
+	fwmsg->nodeid = nodeid;
+
+        pthread_attr_init(&attrs);
+        pthread_attr_setinheritsched(&attrs, PTHREAD_INHERIT_SCHED);
+        pthread_attr_setdetachstate(&attrs, PTHREAD_CREATE_DETACHED);
+	pthread_attr_setstacksize(&attrs, 262144);
+
+	pthread_create(&newthread, &attrs, forwarding_thread_v2, fwmsg);
+        pthread_attr_destroy(&attrs);
+}
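
The new forwarding path hands each relayed request to its own detached
thread, which then has to wait for a reply without wedging forever if the
target node dies mid-request.  The shape of that wait loop in
forwarding_thread_v2() above, as a self-contained sketch (illustrative only:
poll(2) stands in for msg_receive_timeout(), and is_peer_alive() stands in
for member_online()):

#include <poll.h>
#include <stdio.h>
#include <sys/socket.h>
#include <unistd.h>

static int is_peer_alive(void)
{
	static int checks = 0;
	return ++checks < 3;	/* pretend the peer dies on check 3 */
}

static int wait_for_reply(int fd, void *buf, size_t len, int tries)
{
	struct pollfd pfd = { .fd = fd, .events = POLLIN };
	int n;

	while (tries-- > 0) {
		n = poll(&pfd, 1, 1000 /* ms */);
		if (n < 0)
			return -1;		/* real error */
		if (n == 0) {			/* timed out... */
			if (!is_peer_alive())
				return -1;	/* ...and the peer is gone */
			continue;		/* ...peer ok; keep waiting */
		}
		return read(fd, buf, len);	/* reply is ready */
	}
	return -1;				/* retries exhausted */
}

int main(void)
{
	int sv[2];
	char buf[16];

	if (socketpair(AF_UNIX, SOCK_STREAM, 0, sv) < 0)
		return 1;
	/* sv[1] never writes, so this exercises the timeout path */
	if (wait_for_reply(sv[0], buf, sizeof(buf), 10) < 0)
		puts("gave up: peer died or retries exhausted");
	close(sv[0]);
	close(sv[1]);
	return 0;
}
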
diff --git a/rgmanager/src/daemons/rg_state.c b/rgmanager/src/daemons/rg_state.c
index a81a9ae..4509e6a 100644
--- a/rgmanager/src/daemons/rg_state.c
+++ b/rgmanager/src/daemons/rg_state.c
@@ -31,6 +31,9 @@
 #include <rg_queue.h>
 #include <msgsimple.h>
 
+#define cm_svccount cm_pad[0] /* These are uint8_t-sized */
+#define cm_svcexcl  cm_pad[1]
+
 int node_should_start_safe(uint64_t, cluster_member_list_t *, char *);
 
 uint64_t next_node_id(cluster_member_list_t *membership, uint64_t me);
@@ -72,6 +75,65 @@ next_node_id(cluster_member_list_t *membership, uint64_t me)
 }
 
 
+char *
+c_name(char *svcName)
+{
+	char *ptr, *ret = svcName;
+
+	ptr = strchr(svcName,':');
+	if (!ptr)
+		return ret;
+	if ((int)(ptr - svcName) == 7 &&
+	    !memcmp(svcName, "service", 7)) /* strlen("service") */
+		ret = ptr + 1;
+
+	return ret;
+}
+
+
+void
+broadcast_event(char *svcName, uint32_t state, uint64_t owner, uint64_t last)
+{
+	rg_state_msg_t msgp;
+	cluster_member_list_t *membership = NULL;
+	int fd, x, target;
+
+	msgp.rsm_hdr.gh_magic = GENERIC_HDR_MAGIC;
+	msgp.rsm_hdr.gh_command = RG_EVENT;
+	msgp.rsm_hdr.gh_length = sizeof(msgp);
+
+	msgp.rsm_state.rs_state = state;
+	strncpy(msgp.rsm_state.rs_name, svcName,
+		sizeof(msgp.rsm_state.rs_name));
+	msgp.rsm_state.rs_owner = owner;
+	msgp.rsm_state.rs_last_owner = last;
+
+	swab_rg_state_msg_t(&msgp);
+
+	membership = member_list();
+	if (!membership) {
+		clulog(LOG_ERR, "Cannot send event: %s\n", strerror(errno));
+		return;
+	}
+
+	for (x = 0; x < membership->cml_count; x++) {
+		if (!membership->cml_members[x].cm_state)
+			continue;
+
+		target = membership->cml_members[x].cm_id;
+
+		fd = msg_open(target, RG_PORT, RG_PURPOSE, 2);
+		if (fd < 0) 
+			continue;
+
+		msg_send(fd, &msgp, sizeof(msgp));	
+		msg_close(fd);
+	}	
+
+	cml_free(membership);
+}
+
+
 int
 svc_report_failure(char *svcName)
 {
@@ -81,13 +143,13 @@ svc_report_failure(char *svcName)
 	cluster_member_list_t *membership;
 
 	if (rg_lock(svcName, &lockp) == -1) {
-		clulog(LOG_ERR, "#41: Couldn't obtain lock for RG %s: %s\n",
+		clulog(LOG_ERR, "#41: Couldn't obtain lock for %s: %s\n",
 		       svcName, strerror(errno));
 		return -1;
 	}
 
 	if (get_rg_state(svcName, &svcStatus) != 0) {
-		clulog(LOG_ERR, "#42: Couldn't obtain status for RG %s\n",
+		clulog(LOG_ERR, "#42: Couldn't obtain status for %s\n",
 		       svcName);
 		clu_unlock(svcName, lockp);
 		return -1;
@@ -98,11 +160,12 @@ svc_report_failure(char *svcName)
 	nodeName = memb_id_to_name(membership, svcStatus.rs_last_owner);
 	if (nodeName) {
 		clulog(LOG_ALERT, "#2: Service %s returned failure "
-		       "code.  Last Owner: %s\n", svcName, nodeName);
+		       "code.  Last Owner: %s\n",
+		       c_name(svcName), nodeName);
 	} else {
 		clulog(LOG_ALERT, "#3: Service %s returned failure "
 		       "code.  Last Owner: %d\n",
-		       svcName, (int)svcStatus.rs_last_owner);
+		       c_name(svcName), (int)svcStatus.rs_last_owner);
 	}
 
 	cml_free(membership);
@@ -223,11 +286,14 @@ send_response(int ret, uint64_t newowner, request_t *req)
 
 
 int
-set_rg_state(char *name, rg_state_t *svcblk)
+set_rg_state(char *rgname, rg_state_t *svcblk)
 {
 	cluster_member_list_t *membership;
 	char res[256];
 	int ret;
+	char *name;
+
+	name = c_name(rgname);
 
 	if (name)
 		strncpy(svcblk->rs_name, name, sizeof(svcblk->rs_name));
@@ -256,7 +322,7 @@ init_rg(char *name, rg_state_t *svcblk)
 
 
 int
-get_rg_state(char *name, rg_state_t *svcblk)
+get_rg_state(char *rgname, rg_state_t *svcblk)
 {
 	char res[256];
 	int ret;
@@ -264,6 +330,9 @@ get_rg_state(char *name, rg_state_t *svcblk)
 	uint32_t datalen = 0;
 	uint64_t viewno;
 	cluster_member_list_t *membership;
+	char *name;
+
+	name = c_name(rgname);
 
 	/* ... */
 	if (name)
@@ -282,7 +351,7 @@ get_rg_state(char *name, rg_state_t *svcblk)
 		if (ret != VFR_OK) {
 			cml_free(membership);
 			printf("Couldn't initialize rg %s!\n", name);
-			return FAIL;
+			return RG_EFAIL;
 		}
 
 		ret = vf_read(membership, res, &viewno, &data, &datalen);
@@ -291,7 +360,7 @@ get_rg_state(char *name, rg_state_t *svcblk)
 				free(data);
 			cml_free(membership);
 			printf("Couldn't reread rg %s! (%d)\n", name, ret);
-			return FAIL;
+			return RG_EFAIL;
 		}
 	}
 
@@ -301,7 +370,7 @@ get_rg_state(char *name, rg_state_t *svcblk)
 		if (data)
 			free(data);
 		cml_free(membership);
-		return FAIL;
+		return RG_EFAIL;
 	}
 
 	/* Copy out the data. */
@@ -315,15 +384,17 @@ get_rg_state(char *name, rg_state_t *svcblk)
 
 int vf_read_local(char *, uint64_t *, void *, uint32_t *);
 int
-get_rg_state_local(char *name, rg_state_t *svcblk)
+get_rg_state_local(char *rgname, rg_state_t *svcblk)
 {
 	char res[256];
 	int ret;
 	void *data = NULL;
+	char *name;
 	uint32_t datalen = 0;
 	uint64_t viewno;
 
 	/* ... */
+	name = c_name(rgname);
 	if (name)
 		strncpy(svcblk->rs_name, name, sizeof(svcblk->rs_name));
 
@@ -342,7 +413,7 @@ get_rg_state_local(char *name, rg_state_t *svcblk)
 		svcblk->rs_transition = 0;	
 		strncpy(svcblk->rs_name, name, sizeof(svcblk->rs_name));
 
-		return FAIL;
+		return RG_EFAIL;
 	}
 
 	/* Copy out the data. */
@@ -360,11 +431,11 @@ get_rg_state_local(char *name, rg_state_t *svcblk)
  * @param svcStatus	Current service status.
  * @param svcName	Service name
  * @param req		Specify request to perform
- * @return		0 = DO NOT stop service, return FAIL
+ * @return		0 = DO NOT stop service, return RG_EFAIL
  *			1 = STOP service - return whatever it returns.
- *			2 = DO NOT stop service, return 0 (success)
- *                      3 = DO NOT stop service, return RG_EFORWARD
- *			4 = DO NOT stop service, return RG_EAGAIN
+ *			2 = DO NOT stop service, return 0 (success)
+ *                      3 = DO NOT stop service, return RG_EFORWARD
+ *			4 = DO NOT stop service, return RG_EAGAIN
  */
 int
 svc_advise_stop(rg_state_t *svcStatus, char *svcName, int req)
@@ -436,7 +507,7 @@ svc_advise_stop(rg_state_t *svcStatus, char *svcName, int req)
 		}
 		clulog(LOG_DEBUG,
 		       "Not stopping %s: service is failed\n",
-		       svcName);
+		       c_name(svcName));
 		ret = 0;
 		break;
 
@@ -449,23 +520,23 @@ svc_advise_stop(rg_state_t *svcStatus, char *svcName, int req)
 		break;
 	
 	case RG_STATE_DISABLED:
-		ret = 2;
 	case RG_STATE_UNINITIALIZED:
 		if (req == RG_DISABLE) {
 			clulog(LOG_NOTICE,
 			       "Disabling disabled service %s\n",
-			       svcName);
+			       c_name(svcName));
 			ret = 1;
 			break;
 		}
 
+		ret = 2;
 		clulog(LOG_DEBUG, "Not stopping disabled service %s\n",
-		       svcName);
+		       c_name(svcName));
 		break;
 
 	default:
 		clulog(LOG_ERR,
-		       "#42: Cannot stop RG %s: Invalid State %d\n",
+		       "#42: Cannot stop %s: Invalid State %d\n",
 		       svcName, svcStatus->rs_state);
 		break;
 	}
@@ -483,11 +554,11 @@ svc_advise_stop(rg_state_t *svcStatus, char *svcName, int req)
  * @param svcName	Service name
  * @param flags		Specify whether or not it's legal to start a 
  *			disabled service, etc.
- * @return		0 = DO NOT start service, return FAIL
+ * @return		0 = DO NOT start service, return RG_EFAIL
  *			1 = START service - return whatever it returns.
- *			2 = DO NOT start service, return 0
- *			3 = DO NOT start service, return RG_EAGAIN
- *                      4 = DO NOT start servuce, return RG_ERUN
+ *			2 = DO NOT start service, return 0
+ *			3 = DO NOT start service, return RG_EAGAIN
+ *                      4 = DO NOT start service, return RG_ERUN
  */
 int
 svc_advise_start(rg_state_t *svcStatus, char *svcName, int req)
@@ -502,7 +573,7 @@ svc_advise_start(rg_state_t *svcStatus, char *svcName, int req)
 	case RG_STATE_FAILED:
 		clulog(LOG_ERR,
 		       "#43: Service %s has failed; can not start.\n",
-		       svcName);
+		       c_name(svcName));
 		break;
 		
 	case RG_STATE_STOPPING:
@@ -513,7 +584,7 @@ svc_advise_start(rg_state_t *svcStatus, char *svcName, int req)
 		    	/*
 			 * Service is already running locally
 			clulog(LOG_DEBUG,
-			       "RG %s is already running locally\n", svcName);
+			       "%s is already running locally\n", svcName);
 			 */
 			ret = 4;
 			break;
@@ -523,7 +594,7 @@ svc_advise_start(rg_state_t *svcStatus, char *svcName, int req)
 		    memb_online(membership, svcStatus->rs_owner)) {
 			/*
 			 * Service is running and the owner is online!
-			clulog(LOG_DEBUG, "RG %s is running on member %s.\n",
+			clulog(LOG_DEBUG, "%s is running on member %s.\n",
 			       svcName,
 			       memb_id_to_name(membership,svcStatus->rs_owner));
 			 */
@@ -541,7 +612,7 @@ svc_advise_start(rg_state_t *svcStatus, char *svcName, int req)
 
 			clulog(LOG_NOTICE,
 			       "Starting stopped service %s\n",
-			       svcName);
+			       c_name(svcName));
 			ret = 1;
 			break;
 		}
@@ -554,7 +625,7 @@ svc_advise_start(rg_state_t *svcStatus, char *svcName, int req)
 		}
 
 		/*
-		 * Service is running but owner is down -> FAILOVER
+		 * Service is running but owner is down -> FAILOVER
 		 */
 		fd = ccs_connect();
 		if (fd > 0) {
@@ -568,7 +639,7 @@ svc_advise_start(rg_state_t *svcStatus, char *svcName, int req)
 
 		clulog(LOG_NOTICE,
 		       "Taking over service %s from down member %s\n",
-		       svcName, nodename);
+		       c_name(svcName), nodename);
 		ret = 1;
 		break;
 
@@ -576,10 +647,10 @@ svc_advise_start(rg_state_t *svcStatus, char *svcName, int req)
 		/*
 		 * Starting failed service...
 		 */
-		if (req == RG_START_RECOVER) {
+		if (req == RG_START_RECOVER || central_events_enabled()) {
 			clulog(LOG_NOTICE,
 			       "Recovering failed service %s\n",
-			       svcName);
+			       c_name(svcName));
 			svcStatus->rs_state = RG_STATE_STOPPED;
 			/* Start! */
 			ret = 1;
@@ -589,7 +660,7 @@ svc_advise_start(rg_state_t *svcStatus, char *svcName, int req)
 		/* Don't start, but return success. */
 		clulog(LOG_DEBUG,
 		       "Not starting %s: recovery state\n",
-		       svcName);
+		       c_name(svcName));
 		ret = 2;
 		break;
 
@@ -602,13 +673,13 @@ svc_advise_start(rg_state_t *svcStatus, char *svcName, int req)
 		}
 
 		clulog(LOG_NOTICE, "Starting stopped service %s\n",
-		       svcName);
+		       c_name(svcName));
 		ret = 1;
 		break;
 	
 	case RG_STATE_DISABLED:
 	case RG_STATE_UNINITIALIZED:
-		if (req == RG_ENABLE) {
+		if (req == RG_ENABLE || req == RG_START_REMOTE) {
 			/* Don't actually enable if the RG is locked! */
 			if (rg_locked()) {
 				ret = 3;
@@ -617,7 +688,7 @@ svc_advise_start(rg_state_t *svcStatus, char *svcName, int req)
 
 			clulog(LOG_NOTICE,
 			       "Starting disabled service %s\n",
-			       svcName);
+			       c_name(svcName));
 			ret = 1;
 			break;
 		}
@@ -626,13 +697,13 @@ svc_advise_start(rg_state_t *svcStatus, char *svcName, int req)
 			break;
 		}
 
-		clulog(LOG_DEBUG, "Not starting disabled RG %s\n",
+		clulog(LOG_DEBUG, "Not starting disabled %s\n",
 		       svcName);
 		break;
 
 	default:
 		clulog(LOG_ERR,
-		       "#44: Cannot start RG %s: Invalid State %d\n",
+		       "#44: Cannot start %s: Invalid State %d\n",
 		       svcName, svcStatus->rs_state);
 		break;
 	}
@@ -671,7 +742,7 @@ svc_start(char *svcName, int req)
 	}
 
 	if (get_rg_state(svcName, &svcStatus) != 0) {
-		clulog(LOG_ERR, "#46: Failed getting status for RG %s\n",
+		clulog(LOG_ERR, "#46: Failed getting status for %s\n",
 		       svcName);
 		goto out_unlock;
 	}
@@ -693,7 +764,7 @@ svc_start(char *svcName, int req)
 
 	/* LOCK HELD */
 	switch (svc_advise_start(&svcStatus, svcName, req)) {
-	case 0: /* Don't start service, return FAIL */
+	case 0: /* Don't start service, return RG_EFAIL */
 		goto out_unlock;
 	case 2: /* Don't start service, return 0 */
 		ret = 0;
@@ -709,15 +780,19 @@ svc_start(char *svcName, int req)
 	}
 
 	/* LOCK HELD if we get here */
+	if (req == RG_START_RECOVER ||
+	    svcStatus.rs_state == RG_STATE_RECOVER) {
+		if (!central_events_enabled())
+			add_restart(svcName);
+		svcStatus.rs_restarts++;
+	} else {
+		svcStatus.rs_restarts = 0;
+	}
+
 	svcStatus.rs_owner = my_id();
 	svcStatus.rs_state = RG_STATE_STARTING;
 	svcStatus.rs_transition = (uint64_t)time(NULL);
 
-	if (req == RG_START_RECOVER)
-		svcStatus.rs_restarts++;
-	else
-		svcStatus.rs_restarts = 0;
-
 	if (set_rg_state(svcName, &svcStatus) != 0) {
 		clulog(LOG_ERR,
 		       "#47: Failed changing service status\n");
@@ -753,14 +828,18 @@ svc_start(char *svcName, int req)
 		goto out_unlock;
 	}
        
-	if (ret == 0)
+	if (ret == 0) {
 		clulog(LOG_NOTICE,
 		       "Service %s started\n",
-		       svcName);
-	else
+		       c_name(svcName));
+
+		broadcast_event(svcName, RG_STATE_STARTED, svcStatus.rs_owner,
+				svcStatus.rs_last_owner);
+	} else {
 		clulog(LOG_WARNING,
 		       "#68: Failed to start %s; return value: %d\n",
 		       svcName, ret);
+	}
 
 out_unlock:
 	rg_unlock(svcName, lockp);
@@ -775,7 +854,7 @@ out_nolock:
  * Check status of a cluster service 
  *
  * @param svcName	Service name to check.
- * @return		RG_EFORWARD, FAIL, 0
+ * @return		RG_EFORWARD, RG_EFAIL, 0
  */
 int
 svc_status(char *svcName)
@@ -786,24 +865,24 @@ svc_status(char *svcName)
 	if (rg_lock(svcName, &lockp) < 0) {
 		clulog(LOG_ERR, "#48: Unable to obtain cluster lock: %s\n",
 		       strerror(errno));
-		return FAIL;
+		return RG_EFAIL;
 	}
 
 	if (get_rg_state(svcName, &svcStatus) != 0) {
 		rg_unlock(svcName, lockp);
-		clulog(LOG_ERR, "#49: Failed getting status for RG %s\n",
+		clulog(LOG_ERR, "#49: Failed getting status for %s\n",
 		       svcName);
-		return FAIL;
+		return RG_EFAIL;
 	}
 	rg_unlock(svcName, lockp);
 
 	if (svcStatus.rs_owner != my_id())
 		/* Don't check status for anything not owned */
-		return SUCCESS;
+		return RG_ESUCCESS;
 
 	if (svcStatus.rs_state != RG_STATE_STARTED)
 		/* Not-running RGs should not be checked either. */
-		return SUCCESS;
+		return RG_ESUCCESS;
 
 	return group_op(svcName, RG_STATUS);
 }
@@ -831,28 +910,28 @@ _svc_stop(char *svcName, int req, int recover, uint32_t newstate)
 		return group_op(svcName, RG_STOP);
 	}
 
-	if (rg_lock(svcName, &lockp) == FAIL) {
+	if (rg_lock(svcName, &lockp) == RG_EFAIL) {
 		clulog(LOG_ERR, "#50: Unable to obtain cluster lock: %s\n",
 		       strerror(errno));
-		return FAIL;
+		return RG_EFAIL;
 	}
 
 	if (get_rg_state(svcName, &svcStatus) != 0) {
 		rg_unlock(svcName, lockp);
-		clulog(LOG_ERR, "#51: Failed getting status for RG %s\n",
+		clulog(LOG_ERR, "#51: Failed getting status for %s\n",
 		       svcName);
-		return FAIL;
+		return RG_EFAIL;
 	}
 
 	switch (svc_advise_stop(&svcStatus, svcName, req)) {
 	case 0:
 		rg_unlock(svcName, lockp);
-		clulog(LOG_DEBUG, "Unable to stop RG %s in %s state\n",
+		clulog(LOG_DEBUG, "Unable to stop %s in %s state\n",
 		       svcName, rg_state_str(svcStatus.rs_state));
-		return FAIL;
+		return RG_EFAIL;
 	case 2:
 		rg_unlock(svcName, lockp);
-		return SUCCESS;
+		return RG_ESUCCESS;
 	case 3:
 		rg_unlock(svcName, lockp);
 		return RG_EFORWARD;
@@ -865,7 +944,20 @@ _svc_stop(char *svcName, int req, int recover, uint32_t newstate)
 
 	old_state = svcStatus.rs_state;
 
-	clulog(LOG_NOTICE, "Stopping service %s\n", svcName);
+	if (old_state == RG_STATE_RECOVER) {
+		clulog(LOG_DEBUG, "%s is clean; skipping double-stop\n",
+		       svcName);
+		svcStatus.rs_state = newstate;
+
+		if (set_rg_state(svcName, &svcStatus) != 0) {
+			clulog(LOG_ERR, "#52: Failed changing RG status\n");
+			rg_unlock(svcName, lockp);
+			return RG_EFAIL;
+		}
+		rg_unlock(svcName, lockp);
+		return 0;
+	} 
+
+	clulog(LOG_NOTICE, "Stopping service %s\n", c_name(svcName));
 
 	if (recover)
 		svcStatus.rs_state = RG_STATE_ERROR;
@@ -878,7 +970,7 @@ _svc_stop(char *svcName, int req, int recover, uint32_t newstate)
 	if (set_rg_state(svcName, &svcStatus) != 0) {
 		rg_unlock(svcName, lockp);
 		clulog(LOG_ERR, "#52: Failed changing RG status\n");
-		return FAIL;
+		return RG_EFAIL;
 	}
 	rg_unlock(svcName, lockp);
 
@@ -890,6 +982,7 @@ _svc_stop(char *svcName, int req, int recover, uint32_t newstate)
 			       "but some resources may still be allocated!\n",
 			       svcName);
 		_svc_stop_finish(svcName, 0, newstate);
+		ret = 0;
 	} else {
 		_svc_stop_finish(svcName, ret, newstate);
 	}
@@ -904,17 +997,17 @@ _svc_stop_finish(char *svcName, int failed, uint32_t newstate)
 	rg_state_t svcStatus;
 	void *lockp;
 
-	if (rg_lock(svcName, &lockp) == FAIL) {
+	if (rg_lock(svcName, &lockp) == RG_EFAIL) {
 		clulog(LOG_ERR, "#53: Unable to obtain cluster lock: %s\n",
 		       strerror(errno));
-		return FAIL;
+		return RG_EFAIL;
 	}
 
 	if (get_rg_state(svcName, &svcStatus) != 0) {
 		rg_unlock(svcName, lockp);
-		clulog(LOG_ERR, "#54: Failed getting status for RG %s\n",
+		clulog(LOG_ERR, "#54: Failed getting status for %s\n",
 		       svcName);
-		return FAIL;
+		return RG_EFAIL;
 	}
 
 	if ((svcStatus.rs_state != RG_STATE_STOPPING) &&
@@ -927,7 +1020,7 @@ _svc_stop_finish(char *svcName, int failed, uint32_t newstate)
 	svcStatus.rs_owner = NODE_ID_NONE;
 
 	if (failed) {
-		clulog(LOG_CRIT, "#12: RG %s failed to stop; intervention "
+		clulog(LOG_CRIT, "#12: %s failed to stop; intervention "
 		       "required\n", svcName);
 		svcStatus.rs_state = RG_STATE_FAILED;
 	} else if (svcStatus.rs_state == RG_STATE_ERROR)
@@ -935,7 +1028,7 @@ _svc_stop_finish(char *svcName, int failed, uint32_t newstate)
 	else
 		svcStatus.rs_state = newstate;
 
-	clulog(LOG_NOTICE, "Service %s is %s\n", svcName,
+	clulog(LOG_NOTICE, "Service %s is %s\n", c_name(svcName),
 	       rg_state_str(svcStatus.rs_state));
 	//printf("rg state = %s\n", rg_state_str(svcStatus.rs_state));
 
@@ -943,10 +1036,12 @@ _svc_stop_finish(char *svcName, int failed, uint32_t newstate)
 	if (set_rg_state(svcName, &svcStatus) != 0) {
 		rg_unlock(svcName, lockp);
 		clulog(LOG_ERR, "#55: Failed changing RG status\n");
-		return FAIL;
+		return RG_EFAIL;
 	}
 	rg_unlock(svcName, lockp);
 
+	broadcast_event(svcName, svcStatus.rs_state, NODE_ID_NONE, svcStatus.rs_last_owner);
+
 	return 0;
 }
 
@@ -986,27 +1081,27 @@ svc_fail(char *svcName)
 	void *lockp = NULL;
 	rg_state_t svcStatus;
 
-	if (rg_lock(svcName, &lockp) == FAIL) {
+	if (rg_lock(svcName, &lockp) == RG_EFAIL) {
 		clulog(LOG_ERR, "#55: Unable to obtain cluster lock: %s\n",
 		       strerror(errno));
-		return FAIL;
+		return RG_EFAIL;
 	}
 
-	clulog(LOG_DEBUG, "Handling failure request for RG %s\n", svcName);
+	clulog(LOG_DEBUG, "Handling failure request for %s\n", svcName);
 
 	if (get_rg_state(svcName, &svcStatus) != 0) {
 		rg_unlock(svcName, lockp);
-		clulog(LOG_ERR, "#56: Failed getting status for RG %s\n",
+		clulog(LOG_ERR, "#56: Failed getting status for %s\n",
 		       svcName);
-		return FAIL;
+		return RG_EFAIL;
 	}
 
 	if ((svcStatus.rs_state == RG_STATE_STARTED) &&
 	    (svcStatus.rs_owner != my_id())) {
 		rg_unlock(svcName, lockp);
-		clulog(LOG_DEBUG, "Unable to disable RG %s in %s state\n",
+		clulog(LOG_DEBUG, "Unable to disable %s in %s state\n",
 		       svcName, rg_state_str(svcStatus.rs_state));
-		return FAIL;
+		return RG_EFAIL;
 	}
 
 	/*
@@ -1022,10 +1117,13 @@ svc_fail(char *svcName)
 	if (set_rg_state(svcName, &svcStatus) != 0) {
 		rg_unlock(svcName, lockp);
 		clulog(LOG_ERR, "#57: Failed changing RG status\n");
-		return FAIL;
+		return RG_EFAIL;
 	}
 	rg_unlock(svcName, lockp);
 
+	broadcast_event(svcName, RG_STATE_FAILED, NODE_ID_NONE,
+			svcStatus.rs_last_owner);
+
 	return 0;
 }
 
@@ -1033,8 +1131,8 @@ svc_fail(char *svcName)
 /*
  * Send a message to the target node to start the service.
  */
-static int
-relocate_service(char *svcName, int request, uint64_t target)
+int
+svc_start_remote(char *svcName, int request, uint64_t target)
 {
 	SmMessageSt msg_relo;
 	int fd_relo, msg_ret;
@@ -1043,6 +1141,7 @@ relocate_service(char *svcName, int request, uint64_t target)
 	/* Build the message header */
 	msg_relo.sm_hdr.gh_magic = GENERIC_HDR_MAGIC;
 	msg_relo.sm_hdr.gh_command = RG_ACTION_REQUEST;
+	msg_relo.sm_hdr.gh_arg1 = RG_ACTION_MASTER;
 	msg_relo.sm_hdr.gh_length = sizeof (SmMessageSt);
 	msg_relo.sm_data.d_action = request;
 	strncpy(msg_relo.sm_data.d_svcName, svcName,
@@ -1065,13 +1164,13 @@ relocate_service(char *svcName, int request, uint64_t target)
 	if (msg_send(fd_relo, &msg_relo, sizeof (SmMessageSt)) !=
 	    sizeof (SmMessageSt)) {
 		clulog(LOG_ERR,
-		       "#59: Error sending relocate request to member #%d\n",
+		       "#59: Error sending remote start request to member #%d\n",
 		       target);
 		msg_close(fd_relo);
 		return -1;
 	}
 
-	clulog(LOG_DEBUG, "Sent relocate request to %d\n", (int)target);
+	clulog(LOG_DEBUG, "Sent remote start request to %d\n", (int)target);
 
 	/* Check the response */
 	do {
@@ -1088,7 +1187,7 @@ relocate_service(char *svcName, int request, uint64_t target)
 			clulog(LOG_WARNING,
 			       "#XX: Cancelling relocation: Shutting down\n");
 			msg_close(fd_relo);
-			return NO;
+			return RG_NO;
 		}
 
 		/* Check for node transition in the middle of a relocate */
@@ -1101,7 +1200,7 @@ relocate_service(char *svcName, int request, uint64_t target)
 		       "#XX: Cancelling relocation: Target node down\n");
 		cml_free(ml);
 		msg_close(fd_relo);
-		return FAIL;
+		return RG_EFAIL;
 	} while (1);
 
 	if (msg_ret != sizeof (SmMessageSt)) {
@@ -1109,7 +1208,7 @@ relocate_service(char *svcName, int request, uint64_t target)
 		 * In this case, we don't restart the service, because the 
 		 * service state is actually unknown to us at this time.
 		 */
-		clulog(LOG_ERR, "#60: Mangled reply from member #%d during RG "
+		clulog(LOG_ERR, "#60: Mangled reply from member #%d during "
 		       "relocate\n", target);
 		msg_close(fd_relo);
 		return 0;	/* XXX really UNKNOWN */
@@ -1138,7 +1237,7 @@ relocate_service(char *svcName, int request, uint64_t target)
  *				management software, a destination node
  *				is sent as well.  This causes us to try
  *				starting the service on that node *first*,
- *				but does NOT GUARANTEE that the service
+ *				but does NOT GUARANTEE that the service
  *				will end up on that node.  It will end up
  *				on whatever node actually successfully
  *				starts it.
@@ -1148,24 +1247,51 @@ int
 handle_relocate_req(char *svcName, int request, uint64_t preferred_target,
 		    uint64_t *new_owner)
 {
-	cluster_member_list_t *allowed_nodes, *backup = NULL;
-	uint64_t target = preferred_target, me = my_id();
-	int ret, x, tried = 0;
+	cluster_member_list_t *allowed_nodes = NULL, *backup = NULL;
+	cluster_member_t *m;
+	int target = preferred_target, me = my_id();
+	int ret, x;
+	rg_state_t svcStatus;
 	
+	get_rg_state_local(svcName, &svcStatus);
+	if (svcStatus.rs_state == RG_STATE_DISABLED ||
+	    svcStatus.rs_state == RG_STATE_UNINITIALIZED)
+		return RG_EINVAL;
+
+	if (preferred_target > 0) {
+		/* TODO: simplify this and don't keep alloc/freeing 
+		   member lists */
+		allowed_nodes = member_list();
+		/* Avoid even bothering the other node if we can */
+		m = memb_id_to_p(allowed_nodes, preferred_target);
+		if (!m) {
+			cml_free(allowed_nodes);
+			return RG_EINVAL;
+		}
+
+		count_resource_groups_local(m);
+		if (m->cm_svcexcl ||
+	    	    (m->cm_svccount && is_exclusive(svcName))) {
+			cml_free(allowed_nodes);
+			return RG_EDEPEND;
+		}
+		cml_free(allowed_nodes);
+	}
+
 	/*
 	 * Stop the service - if we haven't already done so.
 	 */
 	if (request != RG_START_RECOVER) {
 		ret = _svc_stop(svcName, request, 0, RG_STATE_STOPPED);
-		if (ret == FAIL) {
+		if (ret == RG_EFAIL) {
 			svc_fail(svcName);
-			return FAIL;
+			return RG_EFAIL;
 		}
 		if (ret == RG_EFORWARD)
 			return RG_EFORWARD;
 	}
 
-	if (preferred_target != NODE_ID_NONE) {
+	if (preferred_target > 0) {
 
 		allowed_nodes = member_list();
 		/*
@@ -1181,7 +1307,7 @@ handle_relocate_req(char *svcName, int request, uint64_t preferred_target,
 		    	    allowed_nodes->cml_members[x].cm_id ==
 			    		preferred_target)
 				continue;
-			allowed_nodes->cml_members[x].cm_state = STATE_DOWN;
+			allowed_nodes->cml_members[x].cm_state = 0;
 		}
 
 		/*
@@ -1198,9 +1324,10 @@ handle_relocate_req(char *svcName, int request, uint64_t preferred_target,
 		 * I am the ONLY one capable of running this service,
 		 * PERIOD...
 		 */
-		if (target == me && me != preferred_target)
+		if (target == me && me != preferred_target) {
+			cml_free(backup);
 			goto exhausted;
-
+		}
 
 		if (target == me) {
 			/*
@@ -1216,8 +1343,7 @@ handle_relocate_req(char *svcName, int request, uint64_t preferred_target,
 		 	 * It's legal to start the service on the given
 		 	 * node.  Try to do so.
 		 	 */
-			++tried;
-			if (relocate_service(svcName, request, target) == 0) {
+			if (svc_start_remote(svcName, request, target) == 0) {
 				*new_owner = target;
 				/*
 				 * Great! We're done...
@@ -1238,7 +1364,7 @@ handle_relocate_req(char *svcName, int request, uint64_t preferred_target,
 		//count_resource_groups(allowed_nodes);
 	}
 
-	if (preferred_target != NODE_ID_NONE)
+	if (preferred_target > 0)
 		memb_mark_down(allowed_nodes, preferred_target);
 	memb_mark_down(allowed_nodes, me);
 
@@ -1247,36 +1373,41 @@ handle_relocate_req(char *svcName, int request, uint64_t preferred_target,
 		if (target == me)
 			goto exhausted;
 
-		++tried;
-
-		/* Each node gets one try */
-		memb_mark_down(allowed_nodes, target);
-		switch (relocate_service(svcName, request, target)) {
+		ret = svc_start_remote(svcName, request, target);
+		switch (ret) {
+		case RG_ERUN:
+			/* Someone stole the service while we were 
+			   trying to relo it */
+			get_rg_state_local(svcName, &svcStatus);
+			*new_owner = svcStatus.rs_owner;
+			cml_free(allowed_nodes);
+			return 0;
+		case RG_EDEPEND:
 		case RG_EFAIL:
+			memb_mark_down(allowed_nodes, target);
 			continue;
 		case RG_EABORT:
 			svc_report_failure(svcName);
 			cml_free(allowed_nodes);
-			return FAIL;
-		case NO:
+			return RG_EFAIL;
+		default:
+			/* deliberate fallthrough */
+			clulog(LOG_ERR,
+			       "#61: Invalid reply from member %d during"
+			       " relocate operation!\n", target);
+		case RG_NO:
 			/* state uncertain */
 			cml_free(allowed_nodes);
-			clulog(LOG_DEBUG, "State Uncertain: svc:%s "
-			       "nid:%08x%08x req:%d\n", svcName,
-			       (uint32_t)(target>>32)&0xffffffff,
-			       (uint32_t)(target&0xffffffff), request);
+			clulog(LOG_CRIT, "State Uncertain: svc:%s "
+			       "nid:%d req:%s ret:%d\n", svcName,
+			       target, rg_req_str(request), ret);
 			return 0;
 		case 0:
+			*new_owner = target;
 			clulog(LOG_NOTICE, "Service %s is now running "
 			       "on member %d\n", svcName, (int)target);
-		case RG_ERUN:
-			*new_owner = target;
 			cml_free(allowed_nodes);
 			return 0;
-		default:
-			clulog(LOG_ERR,
-			       "#61: Invalid reply from member %d during"
-			       " relocate operation!\n", target);
 		}
 	}
 	cml_free(allowed_nodes);
@@ -1285,8 +1416,10 @@ handle_relocate_req(char *svcName, int request, uint64_t preferred_target,
 	 * We got sent here from handle_start_req.
 	 * We're DONE.
 	 */
-	if (request == RG_START_RECOVER)
-		return FAIL;
+	if (request == RG_START_RECOVER) {
+		_svc_stop_finish(svcName, 0, RG_STATE_STOPPED);
+		return RG_EFAIL;
+	}
 
 	/*
 	 * All potential places for the service to start have been exhausted.
@@ -1294,13 +1427,12 @@ handle_relocate_req(char *svcName, int request, uint64_t preferred_target,
 	 */
 exhausted:
 	if (!rg_locked()) {
-		if (tried)
-			clulog(LOG_WARNING,
-			       "#70: Attempting to restart service %s locally.\n",
-			       svcName);
+		clulog(LOG_WARNING,
+		       "#70: Failed to relocate %s; restarting locally\n",
+		       svcName);
 		if (svc_start(svcName, RG_START_RECOVER) == 0) {
 			*new_owner = me;
-			return FAIL;
+			return RG_EFAIL;
 		}
 	}
 
@@ -1309,7 +1441,8 @@ exhausted:
 		svc_report_failure(svcName);
 	}
 
-	return FAIL;
+	return RG_EFAIL;
 }
 
 
@@ -1328,7 +1461,7 @@ handle_fd_start_req(char *svcName, int request, uint64_t *new_owner)
 		if (target == me) {
 			ret = handle_start_remote_req(svcName, request);
 		} else {
-			ret = relocate_service(svcName, request, target);
+			ret = svc_start_remote(svcName, request, target);
 		}
 
 		switch (ret) {
@@ -1340,11 +1473,11 @@ handle_fd_start_req(char *svcName, int request, uint64_t *new_owner)
 		case RG_EABORT:
 			svc_report_failure(svcName);
 			cml_free(allowed_nodes);
-			return FAIL;
-		case NO:
+			return RG_EFAIL;
+		case RG_NO:
 			/* state uncertain */
 			cml_free(allowed_nodes);
-			clulog(LOG_DEBUG, "State Uncertain: svc:%s "
+			clulog(LOG_DEBUG, "State Uncertain: %s "
 			       "nid:%08x%08x req:%d\n", svcName,
 			       (uint32_t)(target>>32)&0xffffffff,
 			       (uint32_t)(target&0xffffffff), request);
@@ -1352,7 +1485,8 @@ handle_fd_start_req(char *svcName, int request, uint64_t *new_owner)
 		case 0:
 			*new_owner = target;
 			clulog(LOG_NOTICE, "Service %s is now running "
-			       "on member %d\n", svcName, (int)target);
+			       "on member %d\n", c_name(svcName),
+			       (int)target);
 			cml_free(allowed_nodes);
 			return 0;
 		default:
@@ -1395,7 +1529,7 @@ handle_start_req(char *svcName, int req, uint64_t *new_owner)
 	    (node_should_start_safe(my_id(), membership, svcName) <
 	     tolerance)) {
 		cml_free(membership);
-		return FAIL;
+		return RG_EFAIL;
 	}
 	cml_free(membership);
 	
@@ -1416,22 +1550,22 @@ handle_start_req(char *svcName, int req, uint64_t *new_owner)
 		/* If service is already running, return that value */
 		return ret;
 
-	case SUCCESS:
+	case RG_ESUCCESS:
 		/* If we succeeded, then we're done.  */
 		*new_owner = my_id();
-	case NO: 
-		return SUCCESS;
+	case RG_NO: 
+		return RG_ESUCCESS;
 	}
 	
 	/* 
 	 * Keep the state open so the other nodes don't try to start
 	 * it.  This allows us to be the 'root' of a given service.
 	 */
-	clulog(LOG_DEBUG, "Stopping failed service %s\n", svcName);
+	clulog(LOG_DEBUG, "Stopping failed service %s\n", c_name(svcName));
 	if (svc_stop(svcName, RG_STOP_RECOVER) != 0) {
 		clulog(LOG_CRIT,
 		       "#13: Service %s failed to stop cleanly\n",
-		       svcName);
+		       c_name(svcName));
 		(void) svc_fail(svcName);
 
 		/*
@@ -1447,13 +1581,13 @@ handle_start_req(char *svcName, int req, uint64_t *new_owner)
 	 * we should relocate the service.
 	 */
 	clulog(LOG_WARNING, "#71: Relocating failed service %s\n",
-	       svcName);
+	       c_name(svcName));
 relocate:
 	ret = handle_relocate_req(svcName, RG_START_RECOVER, -1, new_owner);
 
 	/* If we leave the service stopped, instead of disabled, someone
 	   will try to start it after the next node transition */
-	if (ret == FAIL) {
+	if (ret == RG_EFAIL) {
 		if (svc_stop(svcName, RG_STOP) != 0) {
 			svc_fail(svcName);
 			svc_report_failure(svcName);
@@ -1498,7 +1632,7 @@ handle_start_remote_req(char *svcName, int req)
 	 */
 	if (node_should_start_safe(me, membership, svcName) < tolerance){
 		cml_free(membership);
-		return FAIL;
+		return RG_EFAIL;
 	}
 	cml_free(membership);
 
@@ -1508,7 +1642,8 @@ handle_start_remote_req(char *svcName, int req)
 		/* Don't relocate from here; it was a remote start */
 		/* Return fail so the other node can go ahead and 
 		   try the other nodes in the cluster */
-	case NO: 
+		return RG_ERELO;
+	case RG_NO: 
 		return RG_EFAIL;
 
 	case RG_EAGAIN:
@@ -1519,9 +1654,9 @@ handle_start_remote_req(char *svcName, int req)
 		/* If service is already running, return that value */
 		return x;
 
-	case SUCCESS:
+	case RG_ESUCCESS:
 		/* If we succeeded, then we're done.  */
-		return SUCCESS;
+		return RG_ESUCCESS;
 	}
 
 	if (svc_stop(svcName, RG_STOP_RECOVER) == 0)
@@ -1545,7 +1680,17 @@ handle_recover_req(char *svcName, uint64_t *new_owner)
 	if (!strcasecmp(policy, "disable")) {
 		return svc_disable(svcName);
 	} else if (!strcasecmp(policy, "relocate")) {
-		return handle_relocate_req(svcName, RG_START_RECOVER, -1,
+		return handle_relocate_req(svcName, RG_START_RECOVER,
+					   NODE_ID_NONE,
+					   new_owner);
+	}
+
+	/* Check restart counter/timer for this resource */
+	if (check_restart(svcName) > 0) {
+		clulog(LOG_NOTICE, "Restart threshold for %s exceeded; "
+		       "attempting to relocate\n", svcName);
+		return handle_relocate_req(svcName, RG_START_RECOVER, 
+					   NODE_ID_NONE,
 					   new_owner);
 	}
 
diff --git a/rgmanager/src/daemons/rg_thread.c b/rgmanager/src/daemons/rg_thread.c
index 0e7750d..e96982b 100644
--- a/rgmanager/src/daemons/rg_thread.c
+++ b/rgmanager/src/daemons/rg_thread.c
@@ -391,6 +391,11 @@ resgroup_thread_main(void *arg)
 
 			error = svc_stop(myname, RG_STOP_RECOVER);
 			if (error == 0) {
+				if (central_events_enabled()) {
+					ret = RG_SUCCESS;
+					break;
+				}
+
 				error = handle_recover_req(myname, &newowner);
 				if (error == 0)
 					ret = RG_SUCCESS;
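
The hunk above is the heart of the RIND split in rg_thread.c: when central
event processing is enabled, the node that stopped a failed service reports
success and stops, leaving the recovery policy to the event master.  As a
compilable sketch (illustrative only; central_events_enabled() and both
handlers here are stand-ins for the real calls):

#include <stdio.h>

static int central_events_enabled(void) { return 1; }	/* stand-in */

static int handle_recover_req(const char *svc)
{
	printf("recovering %s locally\n", svc);
	return 0;
}

static int on_stopped_for_recovery(const char *svc)
{
	if (central_events_enabled()) {
		/* The event master sees the service enter the recover
		   state and applies the recovery policy; the stopping
		   node has nothing more to do. */
		printf("%s: deferring recovery to the event master\n", svc);
		return 0;
	}
	return handle_recover_req(svc);		/* classic local path */
}

int main(void)
{
	return on_stopped_for_recovery("service:foo");
}
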
diff --git a/rgmanager/src/daemons/service_op.c b/rgmanager/src/daemons/service_op.c
new file mode 100644
index 0000000..6705a5f
--- /dev/null
+++ b/rgmanager/src/daemons/service_op.c
@@ -0,0 +1,207 @@
+/*
+  Copyright Red Hat, Inc. 2007
+
+  This program is free software; you can redistribute it and/or modify it
+  under the terms of the GNU General Public License version 2 as published
+  by the Free Software Foundation.
+
+  This program is distributed in the hope that it will be useful, but
+  WITHOUT ANY WARRANTY; without even the implied warranty of
+  MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
+  General Public License for more details.
+
+  You should have received a copy of the GNU General Public License
+  along with this program; see the file COPYING.  If not, write to the
+  Free Software Foundation, Inc.,  675 Mass Ave, Cambridge, 
+  MA 02139, USA.
+*/
+#include <assert.h>
+#include <platform.h>
+#include <magma.h>
+#include <magmamsg.h>
+#include <stdio.h>
+#include <string.h>
+#include <resgroup.h>
+#include <clulog.h>
+#include <rg_locks.h>
+#include <ccs.h>
+#include <rg_queue.h>
+#include <msgsimple.h>
+#include <res-ocf.h>
+#include <event.h>
+
+
+/*
+ * Send a message to the target node to start the service.
+ */
+int svc_start_remote(char *svcName, int request, uint64_t target);
+void svc_report_failure(char *);
+int get_service_state_internal(char *svcName, rg_state_t *svcStatus);
+
+
+/**
+ *
+ */
+int
+service_op_start(char *svcName,
+		 uint64_t *target_list,
+		 int target_list_len,
+		 uint64_t *new_owner)
+{
+	int target;
+	int ret, x;
+	int excl = 0, dep = 0, fail = 0;
+	rg_state_t svcStatus;
+	
+	if (get_service_state_internal(svcName, &svcStatus) < 0) {
+		return RG_EFAIL;
+	}
+
+	if (svcStatus.rs_state == RG_STATE_FAILED ||
+	    svcStatus.rs_state == RG_STATE_UNINITIALIZED)
+		return RG_EINVAL;
+
+	if (svcStatus.rs_state == RG_STATE_RECOVER)
+		add_restart(svcName);
+
+	for (x = 0; x < target_list_len; x++) {
+
+		target = target_list[x];
+		ret = svc_start_remote(svcName, RG_START_REMOTE,
+				       target);
+		switch (ret) {
+		case RG_ERUN:
+			/* Someone stole the service while we were 
+			   trying to start it */
+			get_rg_state_local(svcName, &svcStatus);
+			if (new_owner)
+				*new_owner = svcStatus.rs_owner;
+			return 0;
+		case RG_EEXCL:
+			++excl;
+			continue;
+		case RG_EDEPEND:
+		case RG_ERELO:
+			++dep;
+			continue;
+		case RG_EFAIL:
+			++fail;
+			continue;
+		case RG_EABORT:
+			svc_report_failure(svcName);
+			return RG_EFAIL;
+		default:
+			/* deliberate fallthrough */
+			clulog(LOG_ERR,
+			       "#61: Invalid reply from member %d during"
+			       " start operation!\n", target);
+		case RG_NO:
+			/* state uncertain */
+			clulog(LOG_CRIT, "State Uncertain: svc:%s "
+			       "nid:%d req:%s ret:%s\n", svcName,
+			       target, rg_req_str(RG_START_REMOTE), rg_strerror(ret));
+			return 0;
+		case 0:
+			if (new_owner)
+				*new_owner = target;
+			clulog(LOG_NOTICE, "Service %s is now running "
+			       "on member %d\n", svcName, (int)target);
+			return 0;
+		}
+	}
+
+	ret = RG_EFAIL;
+	if (excl == target_list_len) 
+		ret = RG_EEXCL;
+	else if (dep == target_list_len)
+		ret = RG_EDEPEND;
+
+	clulog(LOG_INFO, "Start failed; node reports: %d failures, "
+	       "%d exclusive, %d dependency errors\n", fail, excl, dep);
+	return ret;
+}
+
+
+int
+service_op_stop(char *svcName, int do_disable, int event_type)
+{
+	SmMessageSt msg;
+	int msg_ret;
+	int fd;
+	rg_state_t svcStatus;
+	uint64_t msgtarget = my_id();
+
+	/* Build the message header */
+	msg.sm_hdr.gh_magic = GENERIC_HDR_MAGIC;
+	msg.sm_hdr.gh_command = RG_ACTION_REQUEST;
+	msg.sm_hdr.gh_arg1 = RG_ACTION_MASTER; 
+	msg.sm_hdr.gh_length = sizeof (SmMessageSt);
+
+	msg.sm_data.d_action = ((!do_disable) ? RG_STOP:RG_DISABLE);
+
+	if (msg.sm_data.d_action == RG_STOP && event_type == EVENT_USER)
+		msg.sm_data.d_action = RG_STOP_USER;
+
+	strncpy(msg.sm_data.d_svcName, svcName,
+		sizeof(msg.sm_data.d_svcName));
+	msg.sm_data.d_ret = 0;
+	msg.sm_data.d_svcOwner = 0;
+
+	/* Open a connection to the local node - it will decide what to
+	   do in this case. XXX inefficient; should queue requests
+	   locally and immediately forward requests otherwise */
+
+	if (get_service_state_internal(svcName, &svcStatus) < 0)
+		return RG_EFAIL;
+	if (svcStatus.rs_owner != NODE_ID_NONE)
+		msgtarget = svcStatus.rs_owner;
+
+	if ((fd = msg_open(msgtarget, RG_PORT, RG_PURPOSE, 2)) < 0) {
+		clulog(LOG_ERR,
+		       "#58: Failed opening connection to member #%d\n",
+		       (int)msgtarget);
+		return -1;
+	}
+
+	/* Encode */
+	swab_SmMessageSt(&msg);
+
+	/* Send stop message to the other node */
+	if (msg_send(fd, &msg, sizeof (SmMessageSt)) < 
+	    (int)sizeof (SmMessageSt)) {
+		clulog(LOG_ERR, "Failed to send complete message\n");
+		msg_close(fd);
+		return -1;
+	}
+
+	/* Check the response */
+	do {
+		msg_ret = msg_receive_timeout(fd, &msg,
+				      sizeof (SmMessageSt), 10);
+		if ((msg_ret == -1 && errno != ETIMEDOUT) ||
+		    (msg_ret > 0)) {
+			break;
+		}
+	} while(1);
+
+	if (msg_ret != sizeof (SmMessageSt)) {
+		clulog(LOG_WARNING, "Strange response size: %d vs %d\n",
+		       msg_ret, (int)sizeof(SmMessageSt));
+		msg_close(fd);
+		return 0;	/* XXX really UNKNOWN */
+	}
+
+	/* Got a valid response from other node. */
+	msg_close(fd);
+
+	/* Decode */
+	swab_SmMessageSt(&msg);
+
+	return msg.sm_data.d_ret;
+}
+
+
+/*
+   TODO
+   service_op_migrate()
+ */
+
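service_op_stop() above shows the request/response discipline used on every
cluster message in this patch: build the header, swab the struct to wire
order before msg_send(), and swab it back after receiving, so mixed-endian
members agree on the contents.  A self-contained sketch of that convention
(htonl/ntohl stand in for the generated swab_SmMessageSt(); the struct and
values are made up):

#include <arpa/inet.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>

struct wire_msg {
	uint32_t magic;
	uint32_t action;
	char     name[32];	/* raw bytes; never swapped */
};

static void msg_to_wire(struct wire_msg *m)
{
	m->magic  = htonl(m->magic);	/* sender: host -> wire order */
	m->action = htonl(m->action);
}

static void msg_from_wire(struct wire_msg *m)
{
	m->magic  = ntohl(m->magic);	/* receiver: wire -> host order */
	m->action = ntohl(m->action);
}

int main(void)
{
	struct wire_msg m = { 0x11223344, 7, "service:foo" }, copy;

	msg_to_wire(&m);			/* before msg_send() */
	memcpy(&copy, &m, sizeof(copy));	/* "transmission" */
	msg_from_wire(&copy);			/* after msg_receive() */

	printf("magic=0x%x action=%u name=%s\n",
	       (unsigned)copy.magic, (unsigned)copy.action, copy.name);
	return 0;
}
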
diff --git a/rgmanager/src/daemons/slang_event.c b/rgmanager/src/daemons/slang_event.c
new file mode 100644
index 0000000..d3a522b
--- /dev/null
+++ b/rgmanager/src/daemons/slang_event.c
@@ -0,0 +1,1286 @@
+/*
+  Copyright Red Hat, Inc. 2007
+
+  This program is free software; you can redistribute it and/or modify it
+  under the terms of the GNU General Public License version 2 as published
+  by the Free Software Foundation.
+
+  This program is distributed in the hope that it will be useful, but
+  WITHOUT ANY WARRANTY; without even the implied warranty of
+  MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
+  General Public License for more details.
+
+  You should have received a copy of the GNU General Public License
+  along with this program; see the file COPYING.  If not, write to the
+  Free Software Foundation, Inc.,  675 Mass Ave, Cambridge, 
+  MA 02139, USA.
+*/
+/**
+  @file S/Lang event handling & intrinsic functions + vars
+ */
+#include <platform.h>
+#include <resgroup.h>
+#include <list.h>
+#include <restart_counter.h>
+#include <reslist.h>
+#include <clulog.h>
+#include <magma.h>
+#include <magmamsg.h>
+#include <assert.h>
+#include <event.h>
+
+#include <stdio.h>
+#include <string.h>
+#include <slang.h>
+#include <sys/syslog.h>
+#include <malloc.h>
+#include <clulog.h>
+#include <sets.h>
+#include <signal.h>
+
+static int __sl_initialized = 0;
+
+static char **_service_list = NULL;
+static int _service_list_len = 0;
+
+char **get_service_names(int *len); /* from groups.c */
+int get_service_property(char *rg_name, char *prop, char *buf, size_t buflen);
+void push_int_array(set_type_t *stuff, int len);
+
+
+/* ================================================================
+ * Node states 
+ * ================================================================ */
+static const int
+   _ns_online = 1,
+   _ns_offline = 0;
+
+/* ================================================================
+ * Event information 
+ * ================================================================ */
+static const int
+   _ev_none = EVENT_NONE,
+   _ev_node = EVENT_NODE,
+   _ev_service = EVENT_RG,
+   _ev_config = EVENT_CONFIG,
+   _ev_user = EVENT_USER;
+
+static const int
+   _rg_fail = RG_EFAIL,
+   _rg_success = RG_ESUCCESS,
+   _rg_edomain = RG_EDOMAIN,
+   _rg_edepend = RG_EDEPEND,
+   _rg_eabort = RG_EABORT,
+   _rg_einval = RG_EINVAL,
+   _rg_erun = RG_ERUN;
+
+static int
+   _stop_processing = 0,
+   _my_node_id = 0,
+   _node_state = 0,
+   _node_id = 0,
+   _node_clean = 0,
+   _service_owner = 0,
+   _service_last_owner = 0,
+   _service_restarts_exceeded = 0,
+   _user_request = 0,
+   _user_arg1 = 0,
+   _user_arg2 = 0,
+   _user_return = 0,
+   _rg_err = 0,
+   _event_type = 0;
+
+static char
+   *_node_name = NULL,
+   *_service_name = NULL,
+   *_service_state = NULL,
+   *_rg_err_str = "No Error";
+
+static int
+   _user_enable = RG_ENABLE,
+   _user_disable = RG_DISABLE,
+   _user_stop = RG_STOP_USER,		/* From clusvcadm */
+   _user_relo = RG_RELOCATE,
+   _user_restart = RG_RESTART;
+
+
+SLang_Intrin_Var_Type rgmanager_vars[] =
+{
+	/* Log levels (constants) */
+
+	/* Node state information */
+	MAKE_VARIABLE("NODE_ONLINE",	&_ns_online,	SLANG_INT_TYPE, 1),
+	MAKE_VARIABLE("NODE_OFFLINE",	&_ns_offline,	SLANG_INT_TYPE, 1),
+
+	/* Node event information */
+	MAKE_VARIABLE("node_self",	&_my_node_id,	SLANG_INT_TYPE, 1),
+	MAKE_VARIABLE("node_state",	&_node_state,	SLANG_INT_TYPE, 1),
+	MAKE_VARIABLE("node_id",	&_node_id,	SLANG_INT_TYPE, 1),
+	MAKE_VARIABLE("node_name",	&_node_name,	SLANG_STRING_TYPE,1),
+	MAKE_VARIABLE("node_clean",	&_node_clean,	SLANG_INT_TYPE, 1),
+
+	/* Service event information */
+	MAKE_VARIABLE("service_name",	&_service_name,	SLANG_STRING_TYPE,1),
+	MAKE_VARIABLE("service_state",	&_service_state,SLANG_STRING_TYPE,1),
+	MAKE_VARIABLE("service_owner",	&_service_owner,SLANG_INT_TYPE, 1),
+	MAKE_VARIABLE("service_last_owner", &_service_last_owner,
+		      					SLANG_INT_TYPE, 1),
+	MAKE_VARIABLE("service_restarts_exceeded", &_service_restarts_exceeded,
+		      					SLANG_INT_TYPE, 1),
+
+	/* User event information */
+	MAKE_VARIABLE("user_request",	&_user_request,	SLANG_INT_TYPE,1),
+	MAKE_VARIABLE("user_arg1",	&_user_arg1,	SLANG_INT_TYPE,1),
+	MAKE_VARIABLE("user_arg2",	&_user_arg2,	SLANG_INT_TYPE,1),
+	MAKE_VARIABLE("user_service",	&_service_name, SLANG_STRING_TYPE,1),
+	MAKE_VARIABLE("user_target",	&_service_owner,SLANG_INT_TYPE, 1),
+	/* Return code to user requests; i.e. clusvcadm */
+	MAKE_VARIABLE("user_return",	&_user_return,	SLANG_INT_TYPE, 0),
+
+	/* General event information */
+	MAKE_VARIABLE("event_type",	&_event_type,	SLANG_INT_TYPE, 1),
+	MAKE_VARIABLE("EVENT_NONE",	&_ev_none,	SLANG_INT_TYPE, 1),
+	MAKE_VARIABLE("EVENT_NODE",	&_ev_node,	SLANG_INT_TYPE, 1),
+	MAKE_VARIABLE("EVENT_CONFIG",	&_ev_config,	SLANG_INT_TYPE, 1),
+	MAKE_VARIABLE("EVENT_SERVICE",	&_ev_service,	SLANG_INT_TYPE, 1),
+	MAKE_VARIABLE("EVENT_USER",	&_ev_user,	SLANG_INT_TYPE, 1),
+
+	/* User request constants */
+	MAKE_VARIABLE("USER_ENABLE",	&_user_enable,	SLANG_INT_TYPE, 1),
+	MAKE_VARIABLE("USER_DISABLE",	&_user_disable,	SLANG_INT_TYPE, 1),
+	MAKE_VARIABLE("USER_STOP",	&_user_stop,	SLANG_INT_TYPE, 1),
+	MAKE_VARIABLE("USER_RELOCATE",	&_user_relo,	SLANG_INT_TYPE, 1),
+	MAKE_VARIABLE("USER_RESTART",	&_user_restart,	SLANG_INT_TYPE, 1),
+
+	/* Errors */
+	MAKE_VARIABLE("rg_error",	&_rg_err,	SLANG_INT_TYPE, 1),
+	MAKE_VARIABLE("rg_error_string",&_rg_err_str,	SLANG_STRING_TYPE,1),
+
+	/* From constants.c */
+	MAKE_VARIABLE("FAIL",		&_rg_fail,	SLANG_INT_TYPE, 1),
+	MAKE_VARIABLE("SUCCESS",	&_rg_success,	SLANG_INT_TYPE, 1),
+	MAKE_VARIABLE("ERR_ABORT",	&_rg_eabort,	SLANG_INT_TYPE, 1),
+	MAKE_VARIABLE("ERR_INVALID",	&_rg_einval,	SLANG_INT_TYPE, 1),
+	MAKE_VARIABLE("ERR_DEPEND",	&_rg_edepend,	SLANG_INT_TYPE, 1),
+	MAKE_VARIABLE("ERR_DOMAIN",	&_rg_edomain,	SLANG_INT_TYPE, 1),
+	MAKE_VARIABLE("ERR_RUNNING",	&_rg_erun,	SLANG_INT_TYPE, 1),
+
+	SLANG_END_INTRIN_VAR_TABLE
+};
+
+
+#define rg_error(errortype) \
+do { \
+	_rg_err = errortype; \
+	_rg_err_str = #errortype; \
+} while(0)
+
+
+int
+get_service_state_internal(char *svcName, rg_state_t *svcStatus)
+{
+	void *lock;
+	char buf[32];
+
+	get_rg_state_local(svcName, svcStatus);
+	if (svcStatus->rs_state == RG_STATE_UNINITIALIZED) {
+		if (rg_lock(svcName, &lock) < 0) {
+			errno = ENOLCK;
+			return -1;
+		}
+
+		if (get_rg_state(svcName, svcStatus) < 0) {
+			errno = ENOENT;
+			rg_unlock(svcName, lock);
+			return -1;
+		}
+
+		/* We got a copy from another node - don't flip the state */
+		if (svcStatus->rs_transition) {
+			rg_unlock(svcName, lock);
+			return 0;
+		}
+
+		/* Finish initializing the service state */
+		svcStatus->rs_transition = (uint64_t)time(NULL);
+
+		if (get_service_property(svcName, "autostart",
+					 buf, sizeof(buf)) == 0) {
+			if (buf[0] == '0' || !strcasecmp(buf, "no")) {
+				svcStatus->rs_state = RG_STATE_DISABLED;
+			} else {
+				svcStatus->rs_state = RG_STATE_STOPPED;
+			}
+		}
+
+		set_rg_state(svcName, svcStatus);
+
+		rg_unlock(svcName, lock);
+	}
+
+	return 0;
+}
+
+
+/*
+   (restarts_exceeded, restarts, last_owner, owner, state) = get_service_status(servicename)
+ */
+void
+sl_service_status(char *svcName)
+{
+	rg_state_t svcStatus;
+	int restarts_exceeded = 0;
+	char *state_str;
+
+	if (get_service_state_internal(svcName, &svcStatus) < 0) {
+		SLang_verror(SL_INTRINSIC_ERROR,
+			     "%s: Failed to get status for %s",
+			     __FUNCTION__,
+			     svcName);
+		return;
+	}
+
+	restarts_exceeded = check_restart(svcName);
+	if (SLang_push_integer(restarts_exceeded) < 0) {
+		SLang_verror(SL_INTRINSIC_ERROR,
+			     "%s: Failed to push restarts_exceeded %s",
+			     __FUNCTION__,
+			     svcName);
+		return;
+	}
+
+	if (SLang_push_integer(svcStatus.rs_restarts) < 0) {
+		SLang_verror(SL_INTRINSIC_ERROR,
+			     "%s: Failed to push restarts for %s",
+			     __FUNCTION__,
+			     svcName);
+		return;
+	}
+
+	if (SLang_push_integer((int)(svcStatus.rs_last_owner)) < 0) {
+		SLang_verror(SL_INTRINSIC_ERROR,
+			     "%s: Failed to push last owner of %s",
+			     __FUNCTION__,
+			     svcName);
+		return;
+	}
+
+	switch(svcStatus.rs_state) {
+	case RG_STATE_DISABLED:
+	case RG_STATE_STOPPED:
+	case RG_STATE_FAILED:
+	case RG_STATE_RECOVER:
+	case RG_STATE_ERROR:
+		/* There is no owner for these states.  Ever.  */
+		svcStatus.rs_owner = -1;
+	}
+
+	if (SLang_push_integer((int)(svcStatus.rs_owner)) < 0) {
+		SLang_verror(SL_INTRINSIC_ERROR,
+			     "%s: Failed to push owner of %s",
+			     __FUNCTION__,
+			     svcName);
+		return;
+	}
+
+	state_str = strdup(rg_state_str(svcStatus.rs_state));
+	if (!state_str) {
+		SLang_verror(SL_INTRINSIC_ERROR,
+			     "%s: Failed to duplicate state of %s",
+			     __FUNCTION__,
+			     svcName);
+		return;
+	}
+
+	if (SLang_push_malloced_string(state_str) < 0) {
+		SLang_verror(SL_INTRINSIC_ERROR,
+			     "%s: Failed to push state of %s",
+			     __FUNCTION__,
+			     svcName);
+		//free(state_str);
+	}
+}
+
+
+/**
+  (nofailback, restricted, ordered, nodelist) = service_domain_info(svcName);
+ */
+void
+sl_domain_info(char *svcName)
+{
+	set_type_t *nodelist = NULL;
+	int listlen;
+	char buf[64];
+	int flags = 0;
+
+	if (get_service_property(svcName, "domain", buf, sizeof(buf)) < 0) {
+		/* no nodes */
+		SLang_push_integer(0);
+
+		/* no domain? */
+/*
+		str = strdup("none");
+		if (SLang_push_malloced_string(str) < 0) {
+			free(state_str);
+			return;
+		}
+*/
+
+		/* not ordered */
+		SLang_push_integer(0);
+		/* not restricted */
+		SLang_push_integer(0);
+		/* nofailback not set */
+		SLang_push_integer(0);
+		/* don't fall through and push a second set of values */
+		return;
+	}
+
+	if (node_domain_set_safe(buf, &nodelist, &listlen, &flags) < 0) {
+		SLang_push_integer(0);
+		SLang_push_integer(0);
+		SLang_push_integer(0);
+		SLang_push_integer(0);
+		return;
+	}
+
+	SLang_push_integer(!!(flags & FOD_NOFAILBACK));
+	SLang_push_integer(!!(flags & FOD_RESTRICTED));
+	SLang_push_integer(!!(flags & FOD_ORDERED));
+
+	push_int_array(nodelist, listlen);
+	free(nodelist);
+
+/*
+	str = strdup(buf);
+	if (SLang_push_malloced_string(str) < 0) {
+		free(state_str);
+		return;
+	}
+*/
+}
+
+
+static int
+get_int_array(set_type_t **nodelist, int *len)
+{
+	SLang_Array_Type *a = NULL;
+	set_type_t *nodes = NULL;
+	int i;
+	int t, ret = -1, tmp;
+
+	if (!nodelist || !len)
+		return -1;
+
+	t = SLang_peek_at_stack();
+	if (t == SLANG_INT_TYPE) {
+
+		nodes = malloc(sizeof(set_type_t) * 1);
+		if (!nodes)
+			goto out;
+		if (SLang_pop_integer(&tmp) < 0)
+			goto out;
+
+		/* XXX gulm? */
+		nodes[0] = (uint64_t)tmp;
+		*len = 1;
+		ret = 0;
+
+	} else if (t == SLANG_ARRAY_TYPE) {
+		if (SLang_pop_array_of_type(&a, SLANG_INT_TYPE) < 0)
+			goto out;
+		if (a->num_dims > 1)
+			goto out;
+		if (a->dims[0] < 0)
+			goto out;
+		nodes = malloc(sizeof(set_type_t) * a->dims[0]);
+		if (!nodes)
+			goto out;
+		for (i = 0; i < a->dims[0]; i++) {
+			SLang_get_array_element(a, &i, &tmp);
+			/* XXX gulm? */
+			nodes[i] = (uint64_t)tmp;
+		}
+
+		*len = a->dims[0];
+		ret = 0;
+	}
+
+out:
+	if (a)
+		SLang_free_array(a);
+	if (ret == 0) {
+		*nodelist = nodes;
+	} else {
+		if (nodes)
+			free(nodes);
+	}
+	
+	return ret;
+}
+
+
+/**
+  get_service_property(service_name, property)
+ */
+char *
+sl_service_property(char *svcName, char *prop)
+{
+	char buf[96];
+
+	if (get_service_property(svcName, prop, buf, sizeof(buf)) < 0)
+		return NULL;
+
+	/* does this work or do I have to push a malloc'ed string? */
+	return strdup(buf);
+}
+
+
+/**
+  usage:
+
+  stop_service(name, disable_flag);
+ */
+int
+sl_stop_service(void)
+{
+	char *svcname = NULL;
+	int nargs, t, ret = -1;
+	int do_disable = 0;
+
+	nargs = SLang_Num_Function_Args;
+
+	/* Takes one or two args */
+	if (nargs <= 0 || nargs > 2) {
+		SLang_verror(SL_SYNTAX_ERROR,
+		     "%s: Wrong # of args (%d), must be 1 or 2\n",
+		     __FUNCTION__,
+		     nargs);
+		return -1;
+	}
+
+	if (nargs == 2) {
+		t = SLang_peek_at_stack();
+		if (t != SLANG_INT_TYPE) {
+			SLang_verror(SL_SYNTAX_ERROR,
+				     "%s: expected type %d got %d\n",
+				     __FUNCTION__, SLANG_INT_TYPE, t);
+			goto out;
+		}
+
+		if (SLang_pop_integer(&do_disable) < 0) {
+			SLang_verror(SL_SYNTAX_ERROR,
+			    "%s: Failed to pop integer from stack!\n",
+			    __FUNCTION__);
+			goto out;
+		}
+
+		--nargs;
+	}
+
+	if (nargs == 1) {
+		t = SLang_peek_at_stack();
+		if (t != SLANG_STRING_TYPE) {
+			SLang_verror(SL_SYNTAX_ERROR,
+				     "%s: expected type %d got %d\n",
+				     __FUNCTION__,
+				     SLANG_STRING_TYPE, t);
+			goto out;
+		}
+
+		if (SLpop_string(&svcname) < 0) {
+			SLang_verror(SL_SYNTAX_ERROR,
+			    "%s: Failed to pop string from stack!\n",
+			    __FUNCTION__);
+			goto out;
+		}
+	}
+
+	/* TODO: Meat of function goes here */
+	ret = service_op_stop(svcname, do_disable, _event_type);
+out:
+	if (svcname)
+		free(svcname);
+	_user_return = ret;
+	return ret;
+}
+
+
+/**
+  usage:
+
+  start_service(name, <array>ordered_node_list_allowed,
+  		      <array>node_list_illegal)
+ */
+int
+sl_start_service(void)
+{
+	char *svcname = NULL;
+	set_type_t *pref_list = NULL, *illegal_list = NULL;
+	int pref_list_len = 0, illegal_list_len = 0;
+	int nargs, t, ret = -1;
+	uint64_t newowner = 0;
+
+	nargs = SLang_Num_Function_Args;
+
+	/* Takes one, two, or three */
+	if (nargs <= 0 || nargs > 3) {
+		SLang_verror(SL_SYNTAX_ERROR,
+		     "%s: Wrong # of args (%d), must be 1 or 2\n",
+		     __FUNCTION__, nargs);
+		return -1;
+	}
+
+	if (nargs == 3) {
+		if (get_int_array(&illegal_list, &illegal_list_len) < 0)
+			goto out;
+		--nargs;
+	}
+
+	if (nargs == 2) {
+		if (get_int_array(&pref_list, &pref_list_len) < 0)
+			goto out;
+		--nargs;
+	}
+
+	if (nargs == 1) {
+		/* Just get the service name */
+		t = SLang_peek_at_stack();
+		if (t != SLANG_STRING_TYPE) {
+			SLang_verror(SL_SYNTAX_ERROR,
+				     "%s: expected type %d got %d\n",
+				     __FUNCTION__,
+				     SLANG_STRING_TYPE, t);
+			goto out;
+		}
+
+		if (SLpop_string(&svcname) < 0)
+			goto out;
+	}
+
+	/* TODO: Meat of function goes here */
+	ret = service_op_start(svcname, pref_list,
+			       pref_list_len, &newowner);
+
+	if (ret == 0 && newowner > 0)
+		ret = newowner;
+out:
+	if (svcname)
+		free(svcname);
+	if (illegal_list)
+		free(illegal_list);
+	if (pref_list)
+		free(pref_list);
+	_user_return = ret;
+	return ret;
+}
+
+
+/* Take an array of integers given its length and
+   push it on to the S/Lang stack */
+void
+push_int_array(set_type_t *stuff, int len)
+{
+	int arrlen, x;
+	SLang_Array_Type *arr;
+	int i;
+
+	arrlen = len;
+	arr = SLang_create_array(SLANG_INT_TYPE, 0, NULL, &arrlen, 1);
+	if (!arr)
+		return;
+
+	for (x = 0; x < len; x++) {
+		/* XXX gulm? */
+		i = (int) (stuff[x]);
+		SLang_set_array_element(arr, &x, &i);
+	}
+	SLang_push_array(arr, 1);
+}
+
+
+/*
+   Returns an array of rgmanager-visible nodes online.  How cool is that?
+ */
+void
+sl_nodes_online(void)
+{
+	int i, nodecount = 0;
+	set_type_t *nodes;
+
+	cluster_member_list_t *membership = member_list();
+	if (!membership)
+		return;
+	nodes = malloc(sizeof(set_type_t) * membership->cml_count);
+	if (!nodes)
+		return;
+
+	nodecount = 0;
+	for (i = 0; i < membership->cml_count; i++) {
+		if (membership->cml_members[i].cm_state &&
+		    membership->cml_members[i].cm_id != 0) {
+			nodes[nodecount] = membership->cml_members[i].cm_id;
+			++nodecount;
+		}
+	}
+	cml_free(membership);
+	push_int_array(nodes, nodecount);
+	free(nodes);
+}
+
+
+/*
+   Returns an array of rgmanager-defined services, in type:name format.
+   We allocate and free this list *once* per event to ensure we don't
+   leak memory.
+ */
+void
+sl_service_list(void)
+{
+	int svccount = _service_list_len, x = 0;
+	SLang_Array_Type *svcarray;
+
+	svcarray = SLang_create_array(SLANG_STRING_TYPE, 0, NULL, &svccount, 1);
+	if (!svcarray)
+		return;
+
+	for (; x < _service_list_len; x++) 
+		SLang_set_array_element(svcarray, &x, &_service_list[x]);
+
+	SLang_push_array(svcarray, 1);
+}
+
+
+/* s_union hook (see sets.c) */
+void
+sl_union(void)
+{
+	set_type_t *arr1 = NULL, *arr2 = NULL, *ret = NULL;
+	int a1len = 0, a2len = 0, retlen = 0;
+	int nargs = SLang_Num_Function_Args;
+
+	if (nargs != 2)
+		return;
+		
+	/* Remember: args on the stack are reversed */
+	get_int_array(&arr2, &a2len);
+	get_int_array(&arr1, &a1len);
+	s_union(arr1, a1len, arr2, a2len, &ret, &retlen);
+	push_int_array(ret, retlen);
+	if (arr1)
+		free(arr1);
+	if (arr2)
+		free(arr2);
+	if (ret)
+		free(ret);
+	return;
+}
+
+
+/* s_intersection hook (see sets.c) */
+void
+sl_intersection(void)
+{
+	set_type_t *arr1 = NULL, *arr2 = NULL, *ret = NULL;
+	int a1len = 0, a2len = 0, retlen = 0;
+	int nargs = SLang_Num_Function_Args;
+
+	if (nargs != 2)
+		return;
+		
+	/* Remember: args on the stack are reversed */
+	get_int_array(&arr2, &a2len);
+	get_int_array(&arr1, &a1len);
+	s_intersection(arr1, a1len, arr2, a2len, &ret, &retlen);
+	push_int_array(ret, retlen);
+	if (arr1)
+		free(arr1);
+	if (arr2)
+		free(arr2);
+	if (ret)
+		free(ret);
+	return;
+}
+
+
+/* s_delta hook (see sets.c) */
+void
+sl_delta(void)
+{
+	set_type_t *arr1 = NULL, *arr2 = NULL, *ret = NULL;
+	int a1len = 0, a2len = 0, retlen = 0;
+	int nargs = SLang_Num_Function_Args;
+
+	if (nargs != 2)
+		return;
+		
+	/* Remember: args on the stack are reversed */
+	get_int_array(&arr2, &a2len);
+	get_int_array(&arr1, &a1len);
+	s_delta(arr1, a1len, arr2, a2len, &ret, &retlen);
+	push_int_array(ret, retlen);
+	if (arr1)
+		free(arr1);
+	if (arr2)
+		free(arr2);
+	if (ret)
+		free(ret);
+	return;
+}
+
+
+/* s_subtract hook (see sets.c) */
+void
+sl_subtract(void)
+{
+	set_type_t *arr1 = NULL, *arr2 = NULL, *ret = NULL;
+	int a1len = 0, a2len = 0, retlen = 0;
+	int nargs = SLang_Num_Function_Args;
+
+	if (nargs != 2)
+		return;
+		
+	/* Remember: args on the stack are reversed */
+	get_int_array(&arr2, &a2len);
+	get_int_array(&arr1, &a1len);
+	s_subtract(arr1, a1len, arr2, a2len, &ret, &retlen);
+	push_int_array(ret, retlen);
+	if (arr1)
+		free(arr1);
+	if (arr2)
+		free(arr2);
+	if (ret)
+		free(ret);
+	return;
+}
+
+
+/* Shuffle array (see sets.c) */
+void
+sl_shuffle(void)
+{
+	set_type_t *arr1 = NULL;
+	int a1len = 0;
+	int nargs = SLang_Num_Function_Args;
+
+	if (nargs != 1)
+		return;
+		
+	/* Remember: args on the stack are reversed */
+	get_int_array(&arr1, &a1len);
+	s_shuffle(arr1, a1len);
+	push_int_array(arr1, a1len);
+	if (arr1)
+		free(arr1);
+	return;
+}
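
  (A sketch, not part of the patch, of how these set hooks compose in
  a script.  All operands and results are integer node-ID arrays; the
  node IDs are hypothetical.)

	variable online = nodes_online();
	variable allowed = intersection(online, [1, 2, 3]);
	variable others = subtract(online, allowed);
	variable all = union(online, [4]);
	allowed = shuffle(allowed);   % randomized start order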
+
+
+/* Converts an int array to a string so we can log it in one shot */
+static int
+array_to_string(char *buf, int buflen, set_type_t *array, int arraylen)
+{
+	char intbuf[16];
+	int x, len, remain = buflen;
+
+	memset(intbuf, 0, sizeof(intbuf));
+	memset(buf, 0, buflen);
+	len = snprintf(buf, buflen - 1, "[ ");
+	if (len >= buflen - 1)
+		return -1;
+
+	remain -= len;
+	for (x = 0; x < arraylen; x++) {
+		len = snprintf(intbuf, sizeof(intbuf) - 1, "%d ", (int)array[x]);
+		remain -= len;
+		if (remain > 0) {
+			strncat(buf, intbuf, len);
+		} else {
+			return -1;
+		}
+	}
+
+	len = snprintf(intbuf, sizeof(intbuf) - 1, "]");
+	remain -= len;
+	if (remain > 0) {
+		strncat(buf, intbuf, len);
+	} else {
+		return -1;
+	}
+	return (buflen - remain);
+}
+
+
+/**
+  Start at the end of the arg list and work backwards, prepending a string.
+  This does not support standard clulog / printf formatting; rather, we
+  just allow integers / strings to be mixed on the stack, figure out each
+  argument's type, convert it to a string, and prepend it onto our log
+  message.
+
+  The first argument must be a log level, one of:
+     LOG_DEBUG
+     ...
+     LOG_EMERG
+
+  This matches up with clulog / syslog mappings in the var table; the above
+  are constants in the S/Lang interpreter.  Any number of arguments may
+  be provided.  Examples are:
+
+    log(LOG_INFO, "String", 1, "string2");
+
+  Result:  String1string2
+
+    log(LOG_INFO, "String ", 1, " string2");
+
+  Result:  String 1 string2
+
+ */
+void
+sl_clulog(int level)
+{
+	int t, nargs, len;
+	//int level;
+	int s_intval;
+	char *s_strval;
+	set_type_t *nodes = NULL;
+	int nlen = 0;
+	char logbuf[512];
+	char tmp[256];
+	int remain = sizeof(logbuf)-2;
+
+	nargs = SLang_Num_Function_Args;
+	if (nargs < 1)
+		return;
+
+	memset(logbuf, 0, sizeof(logbuf));
+	memset(tmp, 0, sizeof(tmp));
+	logbuf[sizeof(logbuf)-1] = 0;
+	logbuf[sizeof(logbuf)-2] = '\n';
+
+	while (nargs && (t = SLang_peek_at_stack()) >= 0 && remain) {
+		switch(t) {
+		case SLANG_ARRAY_TYPE:
+			if (get_int_array(&nodes, &nlen) < 0)
+				return;
+			len = array_to_string(tmp, sizeof(tmp),
+					      nodes, nlen);
+			if (len < 0) {
+				free(nodes);
+				return;
+			}
+			free(nodes);
+			break;
+		case SLANG_INT_TYPE:
+			if (SLang_pop_integer(&s_intval) < 0)
+				return;
+			len = snprintf(tmp, sizeof(tmp) - 1, "%d", s_intval);
+			break;
+		case SLANG_STRING_TYPE:
+			if (SLpop_string(&s_strval) < 0)
+				return;
+			len = snprintf(tmp, sizeof(tmp) - 1, "%s", s_strval);
+			SLfree(s_strval);
+			break;
+		default:
+			len = snprintf(tmp, sizeof(tmp) - 1,
+				       "{UnknownType %d}", t);
+			break;
+		}
+
+		--nargs;
+
+		if (len > remain)
+			return;
+		remain -= len;
+
+		memcpy(&logbuf[remain], tmp, len);
+	}
+
+#if 0
+	printf("<%d> %s\n", level, &logbuf[remain]);
+#endif
+	clulog(level, &logbuf[remain]);
+	return;
+}
+
+
+/* Logging functions */
+void
+sl_log_debug(void)
+{
+	sl_clulog(LOG_DEBUG);
+}
+
+
+void
+sl_log_info(void)
+{
+	sl_clulog(LOG_INFO);
+}
+
+
+void
+sl_log_notice(void)
+{
+	sl_clulog(LOG_NOTICE);
+}
+
+
+void
+sl_log_warning(void)
+{
+	sl_clulog(LOG_WARNING);
+}
+
+
+void
+sl_log_err(void)
+{
+	sl_clulog(LOG_ERR);
+}
+
+
+void
+sl_log_crit(void)
+{
+	sl_clulog(LOG_CRIT);
+}
+
+
+void
+sl_log_alert(void)
+{
+	sl_clulog(LOG_ALERT);
+}
+
+
+void
+sl_log_emerg(void)
+{
+	sl_clulog(LOG_EMERG);
+}
+
+
+void
+sl_die(void)
+{
+	_stop_processing = 1;
+	return;
+}
+
+
+SLang_Intrin_Fun_Type rgmanager_slang[] =
+{
+	MAKE_INTRINSIC_0("nodes_online", sl_nodes_online, SLANG_VOID_TYPE),
+	MAKE_INTRINSIC_0("service_list", sl_service_list, SLANG_VOID_TYPE),
+
+	MAKE_INTRINSIC_SS("service_property", sl_service_property,
+			  SLANG_STRING_TYPE),
+	MAKE_INTRINSIC_S("service_domain_info", sl_domain_info, SLANG_VOID_TYPE),
+	MAKE_INTRINSIC_0("service_stop", sl_stop_service, SLANG_INT_TYPE),
+	MAKE_INTRINSIC_0("service_start", sl_start_service, SLANG_INT_TYPE),
+	MAKE_INTRINSIC_S("service_status", sl_service_status,
+			 SLANG_VOID_TYPE),
+
+	/* Node list manipulation */
+	MAKE_INTRINSIC_0("union", sl_union, SLANG_VOID_TYPE),
+	MAKE_INTRINSIC_0("intersection", sl_intersection, SLANG_VOID_TYPE),
+	MAKE_INTRINSIC_0("delta", sl_delta, SLANG_VOID_TYPE),
+	MAKE_INTRINSIC_0("subtract", sl_subtract, SLANG_VOID_TYPE),
+	MAKE_INTRINSIC_0("shuffle", sl_shuffle, SLANG_VOID_TYPE),
+
+	/* Logging */
+	MAKE_INTRINSIC_0("debug", sl_log_debug, SLANG_VOID_TYPE),
+	MAKE_INTRINSIC_0("info", sl_log_info, SLANG_VOID_TYPE),
+	MAKE_INTRINSIC_0("notice", sl_log_notice, SLANG_VOID_TYPE),
+	MAKE_INTRINSIC_0("warning", sl_log_warning, SLANG_VOID_TYPE),
+	MAKE_INTRINSIC_0("err", sl_log_err, SLANG_VOID_TYPE),
+	MAKE_INTRINSIC_0("crit", sl_log_crit, SLANG_VOID_TYPE),
+	MAKE_INTRINSIC_0("alert", sl_log_alert, SLANG_VOID_TYPE),
+	MAKE_INTRINSIC_0("emerg", sl_log_emerg, SLANG_VOID_TYPE),
+
+	MAKE_INTRINSIC_0("stop_processing", sl_die, SLANG_VOID_TYPE),
+
+	SLANG_END_INTRIN_FUN_TABLE
+};
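
  (Taken together, this table is the entire script-visible API for
  rgmanager's event scripts.  Below, a small hypothetical handler
  using several of the intrinsics; the service name is invented, and a
  zero return from service_stop is assumed to mean success.)

	variable nodes = nodes_online();
	info("nodes online: ", nodes);

	% Restart a service on a randomly chosen online node
	if (service_stop("service:test1") == 0) {
		if (service_start("service:test1", shuffle(nodes)) < 0)
			err("restart of service:test1 failed");
	}
	stop_processing();   % skip any lower-priority handlers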
+
+
+/* Hook for when we generate a script error */
+void
+rgmanager_slang_error_hook(char *errstr)
+{
+	/* Don't pass errstr as the format string itself, because it
+	   might contain "%s", for example, which would result in a
+	   crash!  Plus, we like the newline :) */
+	clulog(LOG_ERR, "[S/Lang] %s\n", errstr);
+	//raise(SIGSTOP);
+}
+
+
+
+/* ================================================================
+ * S/Lang initialization
+ * ================================================================ */
+int
+do_init_slang(void)
+{
+	SLang_init_slang();
+	SLang_init_slfile();
+
+	if (SLadd_intrin_fun_table(rgmanager_slang, NULL) < 0)
+		return 1;
+	if (SLadd_intrin_var_table(rgmanager_vars, NULL) < 0)
+		return 1;
+
+	/* TODO: Make rgmanager S/Lang conformant.  Though, it
+	   might be a poor idea to provide access to all the 
+	   S/Lang libs */
+	SLpath_set_load_path(RESOURCE_ROOTDIR);
+
+	_my_node_id = my_id();
+	__sl_initialized = 1;
+
+	SLang_Error_Hook = rgmanager_slang_error_hook;
+
+	return 0;
+}
+
+
+/*
+   Execute a script / file and return the result to the caller.
+   Log an error if we receive one.
+ */
+int
+do_slang_run(const char *file, const char *script)
+{
+	int ret = 0;
+	int tries = 0;
+
+	for (tries = 0; tries < 2; tries++) {
+		/* XXX This is here because there's a stack leak
+			that I have not been able to track down */
+		SLang_restart(1);
+		SLang_Error = 0;
+
+		if (file) 
+			ret = SLang_load_file((char *)file);
+		else
+			ret = SLang_load_string((char *)script);
+
+		if (ret == 0)
+			break;
+	}
+
+	if (ret < 0)
+		clulog(LOG_ERR, "[S/Lang] Script Execution Failure\n");
+
+	return ret;
+}
+
+
+int
+S_node_event(const char *file, const char *script, int nodeid,
+	     int state, int clean)
+{
+	int ret;
+	cluster_member_list_t *membership = member_list();
+	char *nodename;
+
+	nodename = memb_id_to_name(membership, nodeid);
+	if (nodename)
+		_node_name = strdup(nodename);
+	else
+		_node_name = strdup("unknown");
+	_node_state = state;
+	_node_clean = clean;
+	_node_id = nodeid;
+	cml_free(membership);
+
+	ret = do_slang_run(file, script);
+
+	_node_state = 0;
+	_node_clean = 0;
+	_node_id = 0;
+	if (_node_name)
+		free(_node_name);
+	_node_name = NULL;
+
+	return ret;
+}
+
+
+int
+S_service_event(const char *file, const char *script, char *name,
+	        int state, int owner, int last_owner)
+{
+	int ret;
+
+	_service_name = name;
+	_service_state = (char *)rg_state_str(state);
+	_service_owner = owner;
+	_service_last_owner = last_owner;
+	_service_restarts_exceeded = check_restart(name);
+
+	switch(state) {
+	case RG_STATE_DISABLED:
+	case RG_STATE_STOPPED:
+	case RG_STATE_FAILED:
+	case RG_STATE_RECOVER:
+	case RG_STATE_ERROR:
+		/* There is no owner for these states.  Ever.  */
+		_service_owner = -1;
+	}
+
+	ret = do_slang_run(file, script);
+
+	_service_name = NULL;
+	_service_state = 0;
+	_service_owner = 0;
+	_service_last_owner = 0;
+	_service_restarts_exceeded = 0;
+
+	return ret;
+}
+
+
+int
+S_user_event(const char *file, const char *script, char *name,
+	     int request, int arg1, int arg2, int target, int fd)
+{
+	int ret = RG_SUCCESS;
+
+	_service_name = name;
+	_service_owner = target;
+	_user_request = request;
+	_user_arg1 = arg1;
+	_user_arg2 = arg2;
+	_user_return = 0;
+
+	ret = do_slang_run(file, script);
+	if (ret < 0) {
+		_user_return = RG_ESCRIPT;
+	}
+
+	_service_name = NULL;
+	_service_owner = 0;
+	_user_request = 0;
+	_user_arg1 = 0;
+	_user_arg2 = 0;
+
+	/* XXX Send response code to caller - that 0 should be the
+	   new service owner, if there is one  */
+	if (fd >= 0) {
+		if (_user_return > 0) {
+			/* sl_start_service() squashes return code and
+			   node ID into one value.  <0 = error, >0 =
+			   success, return-value == node id running
+			   service */
+			send_ret(fd, name, 0, request, _user_return);
+		} else {
+			/* return value < 0 ... pass directly back;
+			   don't transpose */
+			send_ret(fd, name, _user_return, request, 0);
+		}
+		msg_close(fd);
+	}
+	_user_return = 0;
+	return ret;
+}
+
+
+int
+slang_do_script(event_t *pattern, event_t *ev)
+{
+	int ret = 0;
+
+	_event_type = ev->ev_type;
+
+	switch(ev->ev_type) {
+	case EVENT_NODE:
+		ret = S_node_event(
+				pattern->ev_script_file,
+				pattern->ev_script,
+				ev->ev.node.ne_nodeid,
+				ev->ev.node.ne_state,
+				ev->ev.node.ne_clean);
+		break;
+	case EVENT_RG:
+		ret = S_service_event(
+				pattern->ev_script_file,
+				pattern->ev_script,
+				ev->ev.group.rg_name,
+				ev->ev.group.rg_state,
+				ev->ev.group.rg_owner,
+				ev->ev.group.rg_last_owner);
+		break;
+	case EVENT_USER:
+		ret = S_user_event(
+				pattern->ev_script_file,
+				pattern->ev_script,
+				ev->ev.user.u_name,
+				ev->ev.user.u_request,
+				ev->ev.user.u_arg1,
+				ev->ev.user.u_arg2,
+				ev->ev.user.u_target,
+				ev->ev.user.u_fd);
+		break;
+	default:
+		break;
+	}
+
+	_event_type = EVENT_NONE;
+	return ret;
+}
+
+
+
+/**
+  Process an event given our event table and the event that
+  occurred.  Note that the caller is responsible for freeing the
+  event, so do not free (ev) here.
+ */
+int
+slang_process_event(event_table_t *event_table, event_t *ev)
+{
+	int x, y;
+	event_t *pattern;
+
+	if (!__sl_initialized)
+		do_init_slang();
+
+	/* Get the service list once before processing events */
+	if (!_service_list || !_service_list_len)
+		_service_list = get_service_names(&_service_list_len);
+
+	_stop_processing = 0;
+	for (x = 1; x <= event_table->max_prio; x++) {
+		list_for(&event_table->entries[x], pattern, y) {
+			if (event_match(pattern, ev))
+				slang_do_script(pattern, ev);
+			if (_stop_processing)
+				goto out;
+		}
+	}
+
+	/* Default level = 0 */
+	list_for(&event_table->entries[0], pattern, y) {
+		if (event_match(pattern, ev))
+			slang_do_script(pattern, ev);
+		if (_stop_processing)
+			goto out;
+	}
+
+out:
+	/* Free the service list */
+	if (_service_list) {
+		for(x = 0; x < _service_list_len; x++) {
+			free(_service_list[x]);
+		}
+		free(_service_list);
+		_service_list = NULL;
+		_service_list_len = 0;
+	}
+
+	return 0;
+}
diff --git a/rgmanager/src/daemons/test.c b/rgmanager/src/daemons/test.c
index fa77edd..1a6862c 100644
--- a/rgmanager/src/daemons/test.c
+++ b/rgmanager/src/daemons/test.c
@@ -1,3 +1,21 @@
+/*
+  Copyright Red Hat, Inc. 2004-2006
+
+  This program is free software; you can redistribute it and/or modify it
+  under the terms of the GNU General Public License as published by the
+  Free Software Foundation; either version 2, or (at your option) any
+  later version.
+
+  This program is distributed in the hope that it will be useful, but
+  WITHOUT ANY WARRANTY; without even the implied warranty of
+  MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
+  General Public License for more details.
+
+  You should have received a copy of the GNU General Public License
+  along with this program; see the file COPYING.  If not, write to the
+  Free Software Foundation, Inc.,  675 Mass Ave, Cambridge, 
+  MA 02139, USA.
+*/
 #include <libxml/parser.h>
 #include <libxml/xmlmemory.h>
 #include <libxml/xpath.h>
@@ -7,7 +25,9 @@
 #include <sys/types.h>
 #include <sys/stat.h>
 #include <list.h>
+#include <restart_counter.h>
 #include <reslist.h>
+#include <event.h>
 #include <pthread.h>
 
 #ifndef NO_CCS
@@ -66,20 +86,22 @@ test_func(int argc, char **argv)
 	fod_t *domains = NULL;
 	resource_rule_t *rulelist = NULL, *currule;
 	resource_t *reslist = NULL, *curres;
-	resource_node_t *tree = NULL;
+	resource_node_t *tree = NULL, *tmp, *rn = NULL;
 	int ccsfd, ret = 0, rules = 0;
+	event_table_t *events = NULL;
 
 	fprintf(stderr,"Running in test mode.\n");
 
 	conf_setconfig(argv[1]);
        	ccsfd = ccs_lock();
-	if (ccsfd == FAIL) {
+	if (ccsfd < 0) {
 		printf("Error parsing %s\n", argv[1]);
 		goto out;
 	}
 
 	load_resource_rules(agentpath, &rulelist);
 	construct_domains(ccsfd, &domains);
+	construct_events(ccsfd, &events);
 	load_resources(ccsfd, &reslist, &rulelist);
 	build_resource_tree(ccsfd, &tree, &rulelist, &reslist);
 
@@ -114,6 +136,11 @@ test_func(int argc, char **argv)
 			printf("=== Failover Domains ===\n");
 			print_domains(&domains);
 		}
+
+		if (events) {
+			printf("=== Event Triggers ===\n");
+			print_events(events);
+		}
 	}
 
 	ccs_unlock(ccsfd);
@@ -128,6 +155,13 @@ test_func(int argc, char **argv)
 		goto out;
 	}
 
+	list_do(&tree, tmp) {
+		if (tmp->rn_resource == curres) {
+			rn = tmp;
+			break;
+		}
+	} while (!list_done(&tree, tmp));
+
 	if (!strcmp(argv[1], "start")) {
 		printf("Starting %s...\n", argv[3]);
 
@@ -147,12 +181,29 @@ test_func(int argc, char **argv)
 		}
 		printf("Stop of %s complete\n", argv[3]);
 		goto out;
+	} else if (!strcmp(argv[1], "migrate")) {
+		printf("Migrating %s to %s...\n", argv[3], argv[4]);
+
+	#if 0
+		if (!group_migratory(curres)) {
+			printf("No can do\n");
+			ret = -1;
+			goto out;
+		}
+	#endif
+
+		if (res_exec(rn, RS_MIGRATE, argv[4], 0)) {
+			ret = -1;
+			goto out;
+		}
+		printf("Migration of %s complete\n", argv[3]);
+		goto out;
 	} else if (!strcmp(argv[1], "status")) {
 		printf("Checking status of %s...\n", argv[3]);
 
-		if (res_status(&tree, curres, NULL)) {
+		ret = res_status(&tree, curres, NULL);
+		if (ret) {
 			printf("Status check of %s failed\n", argv[3]);
-			ret = -1;
 			goto out;
 		}
 		printf("Status of %s is good\n", argv[3]);
@@ -160,6 +211,7 @@ test_func(int argc, char **argv)
 	}
 
 out:
+	deconstruct_events(&events);
 	deconstruct_domains(&domains);
 	destroy_resource_tree(&tree);
 	destroy_resources(&reslist);
@@ -175,7 +227,9 @@ tree_delta_test(int argc, char **argv)
 	resource_rule_t *rulelist = NULL, *currule, *rulelist2 = NULL;
 	resource_t *reslist = NULL, *curres, *reslist2 = NULL;
 	resource_node_t *tree = NULL, *tree2 = NULL;
-	int ccsfd, ret = 0;
+	resource_node_t *tn;
+	int ccsfd, ret = 0, need_init, need_kill;
+	char rg[64];
 
 	if (argc < 2) {
 		printf("Operation requires two arguments\n");
@@ -234,6 +288,40 @@ tree_delta_test(int argc, char **argv)
 	print_resource_tree(&tree);
 	printf("=== New Resource Tree ===\n");
 	print_resource_tree(&tree2);
+	printf("=== Operations (down-phase) ===\n");
+	list_do(&tree, tn) {
+		res_build_name(rg, sizeof(rg), tn->rn_resource);
+		need_init = 0;
+
+		/* Set state to uninitialized if we're killing a RG */
+		need_kill = 0;
+		if (tn->rn_resource->r_flags & RF_NEEDSTOP) {
+			need_kill = 1;
+			printf("[kill] ");
+		}
+
+		res_condstop(&tn, tn->rn_resource, NULL);
+	} while (!list_done(&tree, tn));
+	printf("=== Operations (up-phase) ===\n");
+	list_do(&tree2, tn) {
+		res_build_name(rg, sizeof(rg), tn->rn_resource);
+		/* New RG.  We'll need to initialize it. */
+		need_init = 0;
+		if (!(tn->rn_resource->r_flags & RF_RECONFIG) &&
+		    (tn->rn_resource->r_flags & RF_NEEDSTART))
+			need_init = 1;
+
+		if (need_init) {
+			printf("[init] ");
+			res_stop(&tn, tn->rn_resource, NULL);
+		} else {
+			res_condstart(&tn, tn->rn_resource, NULL);
+		}
+	} while (!list_done(&tree2, tn));
 
 out:
 	destroy_resource_tree(&tree2);
@@ -289,6 +377,7 @@ main(int argc, char **argv)
 			goto out;
 		} else if (!strcmp(argv[1], "delta")) {
 			shift();
+			_no_op_mode(1);
 			ret = tree_delta_test(argc, argv);
 			goto out;
 		} else {
@@ -312,5 +401,5 @@ main(int argc, char **argv)
 out:
 	xmlCleanupParser();
 	malloc_dump_table();
-	return 0;
+	return ret;
 }
diff --git a/rgmanager/src/daemons/tests/delta-test001-test002.expected b/rgmanager/src/daemons/tests/delta-test001-test002.expected
index 030c5c9..34eaf37 100644
--- a/rgmanager/src/daemons/tests/delta-test001-test002.expected
+++ b/rgmanager/src/daemons/tests/delta-test001-test002.expected
@@ -4,6 +4,14 @@ Instances: 1/1
 Agent: service.sh
 Attributes:
   name = test1 [ primary unique required ]
+  autostart = 1 [ reconfig ]
+  hardrecovery = 0 [ reconfig ]
+  exclusive = 0 [ reconfig ]
+  nfslock = 0
+  recovery = restart [ reconfig ]
+  depend_mode = hard
+  max_restarts = 0 [ reconfig ]
+  restart_expire_time = 0 [ reconfig ]
 
 === New Resource List ===
 Resource type: service [ROOT]
@@ -11,12 +19,38 @@ Instances: 1/1
 Agent: service.sh
 Attributes:
   name = test1 [ primary unique required ]
+  autostart = 1 [ reconfig ]
+  hardrecovery = 0 [ reconfig ]
+  exclusive = 0 [ reconfig ]
+  nfslock = 0
+  recovery = restart [ reconfig ]
+  depend_mode = hard
+  max_restarts = 0 [ reconfig ]
+  restart_expire_time = 0 [ reconfig ]
 
 === Old Resource Tree ===
 service {
   name = "test1";
+  autostart = "1";
+  hardrecovery = "0";
+  exclusive = "0";
+  nfslock = "0";
+  recovery = "restart";
+  depend_mode = "hard";
+  max_restarts = "0";
+  restart_expire_time = "0";
 }
 === New Resource Tree ===
 service {
   name = "test1";
+  autostart = "1";
+  hardrecovery = "0";
+  exclusive = "0";
+  nfslock = "0";
+  recovery = "restart";
+  depend_mode = "hard";
+  max_restarts = "0";
+  restart_expire_time = "0";
 }
+=== Operations (down-phase) ===
+=== Operations (up-phase) ===
diff --git a/rgmanager/src/daemons/tests/delta-test002-test003.expected b/rgmanager/src/daemons/tests/delta-test002-test003.expected
index d831641..043b74c 100644
--- a/rgmanager/src/daemons/tests/delta-test002-test003.expected
+++ b/rgmanager/src/daemons/tests/delta-test002-test003.expected
@@ -4,8 +4,30 @@ Instances: 1/1
 Agent: service.sh
 Attributes:
   name = test1 [ primary unique required ]
+  autostart = 1 [ reconfig ]
+  hardrecovery = 0 [ reconfig ]
+  exclusive = 0 [ reconfig ]
+  nfslock = 0
+  recovery = restart [ reconfig ]
+  depend_mode = hard
+  max_restarts = 0 [ reconfig ]
+  restart_expire_time = 0 [ reconfig ]
 
 === New Resource List ===
+Resource type: service [ROOT]
+Instances: 1/1
+Agent: service.sh
+Attributes:
+  name = test1 [ primary unique required ]
+  autostart = 1 [ reconfig ]
+  hardrecovery = 0 [ reconfig ]
+  exclusive = 0 [ reconfig ]
+  nfslock = 0
+  recovery = restart [ reconfig ]
+  depend_mode = hard
+  max_restarts = 0 [ reconfig ]
+  restart_expire_time = 0 [ reconfig ]
+
 Resource type: script [NEEDSTART]
 Agent: script.sh
 Attributes:
@@ -13,22 +35,36 @@ Attributes:
   file = /etc/init.d/httpd [ unique required ]
   service_name [ inherit("service%name") ]
 
-Resource type: service [ROOT]
-Instances: 1/1
-Agent: service.sh
-Attributes:
-  name = test1 [ primary unique required ]
-
 === Old Resource Tree ===
 service {
   name = "test1";
+  autostart = "1";
+  hardrecovery = "0";
+  exclusive = "0";
+  nfslock = "0";
+  recovery = "restart";
+  depend_mode = "hard";
+  max_restarts = "0";
+  restart_expire_time = "0";
 }
 === New Resource Tree ===
 service {
   name = "test1";
+  autostart = "1";
+  hardrecovery = "0";
+  exclusive = "0";
+  nfslock = "0";
+  recovery = "restart";
+  depend_mode = "hard";
+  max_restarts = "0";
+  restart_expire_time = "0";
   script [ NEEDSTART ] {
     name = "initscript";
     file = "/etc/init.d/httpd";
     service_name = "test1";
   }
 }
+=== Operations (down-phase) ===
+=== Operations (up-phase) ===
+Node script:initscript - CONDSTART
+[start] script:initscript
diff --git a/rgmanager/src/daemons/tests/delta-test003-test004.expected b/rgmanager/src/daemons/tests/delta-test003-test004.expected
index b1126a0..4bdd9ad 100644
--- a/rgmanager/src/daemons/tests/delta-test003-test004.expected
+++ b/rgmanager/src/daemons/tests/delta-test003-test004.expected
@@ -1,4 +1,18 @@
 === Old Resource List ===
+Resource type: service [ROOT]
+Instances: 1/1
+Agent: service.sh
+Attributes:
+  name = test1 [ primary unique required ]
+  autostart = 1 [ reconfig ]
+  hardrecovery = 0 [ reconfig ]
+  exclusive = 0 [ reconfig ]
+  nfslock = 0
+  recovery = restart [ reconfig ]
+  depend_mode = hard
+  max_restarts = 0 [ reconfig ]
+  restart_expire_time = 0 [ reconfig ]
+
 Resource type: script [NEEDSTOP]
 Agent: script.sh
 Attributes:
@@ -6,13 +20,21 @@ Attributes:
   file = /etc/init.d/httpd [ unique required ]
   service_name [ inherit("service%name") ]
 
+=== New Resource List ===
 Resource type: service [ROOT]
 Instances: 1/1
 Agent: service.sh
 Attributes:
   name = test1 [ primary unique required ]
+  autostart = 1 [ reconfig ]
+  hardrecovery = 0 [ reconfig ]
+  exclusive = 0 [ reconfig ]
+  nfslock = 0
+  recovery = restart [ reconfig ]
+  depend_mode = hard
+  max_restarts = 0 [ reconfig ]
+  restart_expire_time = 0 [ reconfig ]
 
-=== New Resource List ===
 Resource type: script [NEEDSTART]
 Agent: script.sh
 Attributes:
@@ -20,15 +42,17 @@ Attributes:
   file = /etc/init.d/sshd [ unique required ]
   service_name [ inherit("service%name") ]
 
-Resource type: service [ROOT]
-Instances: 1/1
-Agent: service.sh
-Attributes:
-  name = test1 [ primary unique required ]
-
 === Old Resource Tree ===
 service {
   name = "test1";
+  autostart = "1";
+  hardrecovery = "0";
+  exclusive = "0";
+  nfslock = "0";
+  recovery = "restart";
+  depend_mode = "hard";
+  max_restarts = "0";
+  restart_expire_time = "0";
   script [ NEEDSTOP ] {
     name = "initscript";
     file = "/etc/init.d/httpd";
@@ -38,9 +62,23 @@ service {
 === New Resource Tree ===
 service {
   name = "test1";
+  autostart = "1";
+  hardrecovery = "0";
+  exclusive = "0";
+  nfslock = "0";
+  recovery = "restart";
+  depend_mode = "hard";
+  max_restarts = "0";
+  restart_expire_time = "0";
   script [ NEEDSTART ] {
     name = "initscript";
     file = "/etc/init.d/sshd";
     service_name = "test1";
   }
 }
+=== Operations (down-phase) ===
+Node script:initscript - CONDSTOP
+[stop] script:initscript
+=== Operations (up-phase) ===
+Node script:initscript - CONDSTART
+[start] script:initscript
diff --git a/rgmanager/src/daemons/tests/delta-test004-test005.expected b/rgmanager/src/daemons/tests/delta-test004-test005.expected
index 9919e56..a3b8b55 100644
--- a/rgmanager/src/daemons/tests/delta-test004-test005.expected
+++ b/rgmanager/src/daemons/tests/delta-test004-test005.expected
@@ -1,18 +1,18 @@
 === Old Resource List ===
-Resource type: script
-Agent: script.sh
-Attributes:
-  name = initscript [ primary unique ]
-  file = /etc/init.d/sshd [ unique required ]
-  service_name [ inherit("service%name") ]
-
 Resource type: service [ROOT]
 Instances: 1/1
 Agent: service.sh
 Attributes:
   name = test1 [ primary unique required ]
+  autostart = 1 [ reconfig ]
+  hardrecovery = 0 [ reconfig ]
+  exclusive = 0 [ reconfig ]
+  nfslock = 0
+  recovery = restart [ reconfig ]
+  depend_mode = hard
+  max_restarts = 0 [ reconfig ]
+  restart_expire_time = 0 [ reconfig ]
 
-=== New Resource List ===
 Resource type: script
 Agent: script.sh
 Attributes:
@@ -20,11 +20,20 @@ Attributes:
   file = /etc/init.d/sshd [ unique required ]
   service_name [ inherit("service%name") ]
 
+=== New Resource List ===
 Resource type: service [ROOT]
 Instances: 1/1
 Agent: service.sh
 Attributes:
   name = test1 [ primary unique required ]
+  autostart = 1 [ reconfig ]
+  hardrecovery = 0 [ reconfig ]
+  exclusive = 0 [ reconfig ]
+  nfslock = 0
+  recovery = restart [ reconfig ]
+  depend_mode = hard
+  max_restarts = 0 [ reconfig ]
+  restart_expire_time = 0 [ reconfig ]
 
 Resource type: ip [NEEDSTART]
 Instances: 1/1
@@ -34,9 +43,24 @@ Attributes:
   monitor_link = 1
   nfslock [ inherit("service%nfslock") ]
 
+Resource type: script
+Agent: script.sh
+Attributes:
+  name = initscript [ primary unique ]
+  file = /etc/init.d/sshd [ unique required ]
+  service_name [ inherit("service%name") ]
+
 === Old Resource Tree ===
 service {
   name = "test1";
+  autostart = "1";
+  hardrecovery = "0";
+  exclusive = "0";
+  nfslock = "0";
+  recovery = "restart";
+  depend_mode = "hard";
+  max_restarts = "0";
+  restart_expire_time = "0";
   script {
     name = "initscript";
     file = "/etc/init.d/sshd";
@@ -46,9 +70,18 @@ service {
 === New Resource Tree ===
 service {
   name = "test1";
+  autostart = "1";
+  hardrecovery = "0";
+  exclusive = "0";
+  nfslock = "0";
+  recovery = "restart";
+  depend_mode = "hard";
+  max_restarts = "0";
+  restart_expire_time = "0";
   ip [ NEEDSTART ] {
     address = "192.168.1.2";
     monitor_link = "1";
+    nfslock = "0";
   }
   script {
     name = "initscript";
@@ -56,3 +89,7 @@ service {
     service_name = "test1";
   }
 }
+=== Operations (down-phase) ===
+=== Operations (up-phase) ===
+Node ip:192.168.1.2 - CONDSTART
+[start] ip:192.168.1.2
diff --git a/rgmanager/src/daemons/tests/delta-test005-test006.expected b/rgmanager/src/daemons/tests/delta-test005-test006.expected
index 879912c..4e0310c 100644
--- a/rgmanager/src/daemons/tests/delta-test005-test006.expected
+++ b/rgmanager/src/daemons/tests/delta-test005-test006.expected
@@ -1,16 +1,17 @@
 === Old Resource List ===
-Resource type: script
-Agent: script.sh
-Attributes:
-  name = initscript [ primary unique ]
-  file = /etc/init.d/sshd [ unique required ]
-  service_name [ inherit("service%name") ]
-
 Resource type: service [ROOT]
 Instances: 1/1
 Agent: service.sh
 Attributes:
   name = test1 [ primary unique required ]
+  autostart = 1 [ reconfig ]
+  hardrecovery = 0 [ reconfig ]
+  exclusive = 0 [ reconfig ]
+  nfslock = 0
+  recovery = restart [ reconfig ]
+  depend_mode = hard
+  max_restarts = 0 [ reconfig ]
+  restart_expire_time = 0 [ reconfig ]
 
 Resource type: ip [NEEDSTOP]
 Instances: 1/1
@@ -20,7 +21,6 @@ Attributes:
   monitor_link = 1
   nfslock [ inherit("service%nfslock") ]
 
-=== New Resource List ===
 Resource type: script
 Agent: script.sh
 Attributes:
@@ -28,11 +28,20 @@ Attributes:
   file = /etc/init.d/sshd [ unique required ]
   service_name [ inherit("service%name") ]
 
+=== New Resource List ===
 Resource type: service [ROOT]
 Instances: 1/1
 Agent: service.sh
 Attributes:
   name = test1 [ primary unique required ]
+  autostart = 1 [ reconfig ]
+  hardrecovery = 0 [ reconfig ]
+  exclusive = 0 [ reconfig ]
+  nfslock = 0
+  recovery = restart [ reconfig ]
+  depend_mode = hard
+  max_restarts = 0 [ reconfig ]
+  restart_expire_time = 0 [ reconfig ]
 
 Resource type: ip [NEEDSTART]
 Instances: 1/1
@@ -42,12 +51,28 @@ Attributes:
   monitor_link = yes
   nfslock [ inherit("service%nfslock") ]
 
+Resource type: script
+Agent: script.sh
+Attributes:
+  name = initscript [ primary unique ]
+  file = /etc/init.d/sshd [ unique required ]
+  service_name [ inherit("service%name") ]
+
 === Old Resource Tree ===
 service {
   name = "test1";
+  autostart = "1";
+  hardrecovery = "0";
+  exclusive = "0";
+  nfslock = "0";
+  recovery = "restart";
+  depend_mode = "hard";
+  max_restarts = "0";
+  restart_expire_time = "0";
   ip [ NEEDSTOP ] {
     address = "192.168.1.2";
     monitor_link = "1";
+    nfslock = "0";
   }
   script {
     name = "initscript";
@@ -58,9 +83,18 @@ service {
 === New Resource Tree ===
 service {
   name = "test1";
+  autostart = "1";
+  hardrecovery = "0";
+  exclusive = "0";
+  nfslock = "0";
+  recovery = "restart";
+  depend_mode = "hard";
+  max_restarts = "0";
+  restart_expire_time = "0";
   ip [ NEEDSTART ] {
     address = "192.168.1.2";
     monitor_link = "yes";
+    nfslock = "0";
   }
   script {
     name = "initscript";
@@ -68,3 +102,9 @@ service {
     service_name = "test1";
   }
 }
+=== Operations (down-phase) ===
+Node ip:192.168.1.2 - CONDSTOP
+[stop] ip:192.168.1.2
+=== Operations (up-phase) ===
+Node ip:192.168.1.2 - CONDSTART
+[start] ip:192.168.1.2
diff --git a/rgmanager/src/daemons/tests/delta-test006-test007.expected b/rgmanager/src/daemons/tests/delta-test006-test007.expected
index baa2187..bd4db79 100644
--- a/rgmanager/src/daemons/tests/delta-test006-test007.expected
+++ b/rgmanager/src/daemons/tests/delta-test006-test007.expected
@@ -1,16 +1,17 @@
 === Old Resource List ===
-Resource type: script
-Agent: script.sh
-Attributes:
-  name = initscript [ primary unique ]
-  file = /etc/init.d/sshd [ unique required ]
-  service_name [ inherit("service%name") ]
-
 Resource type: service [ROOT]
 Instances: 1/1
 Agent: service.sh
 Attributes:
   name = test1 [ primary unique required ]
+  autostart = 1 [ reconfig ]
+  hardrecovery = 0 [ reconfig ]
+  exclusive = 0 [ reconfig ]
+  nfslock = 0
+  recovery = restart [ reconfig ]
+  depend_mode = hard
+  max_restarts = 0 [ reconfig ]
+  restart_expire_time = 0 [ reconfig ]
 
 Resource type: ip [NEEDSTOP]
 Instances: 1/1
@@ -20,7 +21,6 @@ Attributes:
   monitor_link = yes
   nfslock [ inherit("service%nfslock") ]
 
-=== New Resource List ===
 Resource type: script
 Agent: script.sh
 Attributes:
@@ -28,11 +28,20 @@ Attributes:
   file = /etc/init.d/sshd [ unique required ]
   service_name [ inherit("service%name") ]
 
+=== New Resource List ===
 Resource type: service [ROOT]
 Instances: 1/1
 Agent: service.sh
 Attributes:
   name = test1 [ primary unique required ]
+  autostart = 1 [ reconfig ]
+  hardrecovery = 0 [ reconfig ]
+  exclusive = 0 [ reconfig ]
+  nfslock = 0
+  recovery = restart [ reconfig ]
+  depend_mode = hard
+  max_restarts = 0 [ reconfig ]
+  restart_expire_time = 0 [ reconfig ]
 
 Resource type: ip [NEEDSTART]
 Instances: 1/1
@@ -42,12 +51,28 @@ Attributes:
   monitor_link = yes
   nfslock [ inherit("service%nfslock") ]
 
+Resource type: script
+Agent: script.sh
+Attributes:
+  name = initscript [ primary unique ]
+  file = /etc/init.d/sshd [ unique required ]
+  service_name [ inherit("service%name") ]
+
 === Old Resource Tree ===
 service {
   name = "test1";
+  autostart = "1";
+  hardrecovery = "0";
+  exclusive = "0";
+  nfslock = "0";
+  recovery = "restart";
+  depend_mode = "hard";
+  max_restarts = "0";
+  restart_expire_time = "0";
   ip [ NEEDSTOP ] {
     address = "192.168.1.2";
     monitor_link = "yes";
+    nfslock = "0";
   }
   script {
     name = "initscript";
@@ -58,9 +83,18 @@ service {
 === New Resource Tree ===
 service {
   name = "test1";
+  autostart = "1";
+  hardrecovery = "0";
+  exclusive = "0";
+  nfslock = "0";
+  recovery = "restart";
+  depend_mode = "hard";
+  max_restarts = "0";
+  restart_expire_time = "0";
   ip [ NEEDSTART ] {
     address = "192.168.1.3";
     monitor_link = "yes";
+    nfslock = "0";
   }
   script {
     name = "initscript";
@@ -68,3 +102,9 @@ service {
     service_name = "test1";
   }
 }
+=== Operations (down-phase) ===
+Node ip:192.168.1.2 - CONDSTOP
+[stop] ip:192.168.1.2
+=== Operations (up-phase) ===
+Node ip:192.168.1.3 - CONDSTART
+[start] ip:192.168.1.3
diff --git a/rgmanager/src/daemons/tests/delta-test007-test008.expected b/rgmanager/src/daemons/tests/delta-test007-test008.expected
index 65140ea..19fe209 100644
--- a/rgmanager/src/daemons/tests/delta-test007-test008.expected
+++ b/rgmanager/src/daemons/tests/delta-test007-test008.expected
@@ -1,16 +1,17 @@
 === Old Resource List ===
-Resource type: script
-Agent: script.sh
-Attributes:
-  name = initscript [ primary unique ]
-  file = /etc/init.d/sshd [ unique required ]
-  service_name [ inherit("service%name") ]
-
 Resource type: service [ROOT]
 Instances: 1/1
 Agent: service.sh
 Attributes:
   name = test1 [ primary unique required ]
+  autostart = 1 [ reconfig ]
+  hardrecovery = 0 [ reconfig ]
+  exclusive = 0 [ reconfig ]
+  nfslock = 0
+  recovery = restart [ reconfig ]
+  depend_mode = hard
+  max_restarts = 0 [ reconfig ]
+  restart_expire_time = 0 [ reconfig ]
 
 Resource type: ip
 Instances: 1/1
@@ -20,7 +21,6 @@ Attributes:
   monitor_link = yes
   nfslock [ inherit("service%nfslock") ]
 
-=== New Resource List ===
 Resource type: script
 Agent: script.sh
 Attributes:
@@ -28,11 +28,30 @@ Attributes:
   file = /etc/init.d/sshd [ unique required ]
   service_name [ inherit("service%name") ]
 
+=== New Resource List ===
 Resource type: service [ROOT]
 Instances: 1/1
 Agent: service.sh
 Attributes:
   name = test1 [ primary unique required ]
+  autostart = 1 [ reconfig ]
+  hardrecovery = 0 [ reconfig ]
+  exclusive = 0 [ reconfig ]
+  nfslock = 0
+  recovery = restart [ reconfig ]
+  depend_mode = hard
+  max_restarts = 0 [ reconfig ]
+  restart_expire_time = 0 [ reconfig ]
+
+Resource type: fs [NEEDSTART]
+Instances: 0/1
+Agent: fs.sh
+Attributes:
+  name = mount1 [ primary ]
+  mountpoint = /mnt/cluster [ unique required ]
+  device = /dev/sdb8 [ unique required ]
+  fstype = ext3
+  nfslock [ inherit("service%nfslock") ]
 
 Resource type: ip
 Instances: 1/1
@@ -42,22 +61,28 @@ Attributes:
   monitor_link = yes
   nfslock [ inherit("service%nfslock") ]
 
-Resource type: fs [NEEDSTART]
-Instances: 0/1
-Agent: fs.sh
+Resource type: script
+Agent: script.sh
 Attributes:
-  name = mount1 [ primary ]
-  mountpoint = /mnt/cluster [ unique required ]
-  device = /dev/sdb8 [ unique required ]
-  fstype = ext3
-  nfslock [ inherit("nfslock") ]
+  name = initscript [ primary unique ]
+  file = /etc/init.d/sshd [ unique required ]
+  service_name [ inherit("service%name") ]
 
 === Old Resource Tree ===
 service {
   name = "test1";
+  autostart = "1";
+  hardrecovery = "0";
+  exclusive = "0";
+  nfslock = "0";
+  recovery = "restart";
+  depend_mode = "hard";
+  max_restarts = "0";
+  restart_expire_time = "0";
   ip {
     address = "192.168.1.3";
     monitor_link = "yes";
+    nfslock = "0";
   }
   script {
     name = "initscript";
@@ -68,9 +93,18 @@ service {
 === New Resource Tree ===
 service {
   name = "test1";
+  autostart = "1";
+  hardrecovery = "0";
+  exclusive = "0";
+  nfslock = "0";
+  recovery = "restart";
+  depend_mode = "hard";
+  max_restarts = "0";
+  restart_expire_time = "0";
   ip {
     address = "192.168.1.3";
     monitor_link = "yes";
+    nfslock = "0";
   }
   script {
     name = "initscript";
@@ -78,3 +112,5 @@ service {
     service_name = "test1";
   }
 }
+=== Operations (down-phase) ===
+=== Operations (up-phase) ===
diff --git a/rgmanager/src/daemons/tests/delta-test008-test009.expected b/rgmanager/src/daemons/tests/delta-test008-test009.expected
index d52c211..5168931 100644
--- a/rgmanager/src/daemons/tests/delta-test008-test009.expected
+++ b/rgmanager/src/daemons/tests/delta-test008-test009.expected
@@ -1,24 +1,17 @@
 === Old Resource List ===
-Resource type: script
-Agent: script.sh
-Attributes:
-  name = initscript [ primary unique ]
-  file = /etc/init.d/sshd [ unique required ]
-  service_name [ inherit("service%name") ]
-
 Resource type: service [ROOT]
 Instances: 1/1
 Agent: service.sh
 Attributes:
   name = test1 [ primary unique required ]
-
-Resource type: ip
-Instances: 1/1
-Agent: ip.sh
-Attributes:
-  address = 192.168.1.3 [ primary unique ]
-  monitor_link = yes
-  nfslock [ inherit("service%nfslock") ]
+  autostart = 1 [ reconfig ]
+  hardrecovery = 0 [ reconfig ]
+  exclusive = 0 [ reconfig ]
+  nfslock = 0
+  recovery = restart [ reconfig ]
+  depend_mode = hard
+  max_restarts = 0 [ reconfig ]
+  restart_expire_time = 0 [ reconfig ]
 
 Resource type: fs
 Instances: 0/1
@@ -28,9 +21,16 @@ Attributes:
   mountpoint = /mnt/cluster [ unique required ]
   device = /dev/sdb8 [ unique required ]
   fstype = ext3
-  nfslock [ inherit("nfslock") ]
+  nfslock [ inherit("service%nfslock") ]
+
+Resource type: ip
+Instances: 1/1
+Agent: ip.sh
+Attributes:
+  address = 192.168.1.3 [ primary unique ]
+  monitor_link = yes
+  nfslock [ inherit("service%nfslock") ]
 
-=== New Resource List ===
 Resource type: script
 Agent: script.sh
 Attributes:
@@ -38,11 +38,30 @@ Attributes:
   file = /etc/init.d/sshd [ unique required ]
   service_name [ inherit("service%name") ]
 
+=== New Resource List ===
 Resource type: service [ROOT]
 Instances: 1/1
 Agent: service.sh
 Attributes:
   name = test1 [ primary unique required ]
+  autostart = 1 [ reconfig ]
+  hardrecovery = 0 [ reconfig ]
+  exclusive = 0 [ reconfig ]
+  nfslock = 0
+  recovery = restart [ reconfig ]
+  depend_mode = hard
+  max_restarts = 0 [ reconfig ]
+  restart_expire_time = 0 [ reconfig ]
+
+Resource type: fs
+Instances: 1/1
+Agent: fs.sh
+Attributes:
+  name = mount1 [ primary ]
+  mountpoint = /mnt/cluster [ unique required ]
+  device = /dev/sdb8 [ unique required ]
+  fstype = ext3
+  nfslock [ inherit("service%nfslock") ]
 
 Resource type: ip
 Instances: 1/1
@@ -52,22 +71,28 @@ Attributes:
   monitor_link = yes
   nfslock [ inherit("service%nfslock") ]
 
-Resource type: fs
-Instances: 1/1
-Agent: fs.sh
+Resource type: script
+Agent: script.sh
 Attributes:
-  name = mount1 [ primary ]
-  mountpoint = /mnt/cluster [ unique required ]
-  device = /dev/sdb8 [ unique required ]
-  fstype = ext3
-  nfslock [ inherit("nfslock") ]
+  name = initscript [ primary unique ]
+  file = /etc/init.d/sshd [ unique required ]
+  service_name [ inherit("service%name") ]
 
 === Old Resource Tree ===
 service {
   name = "test1";
+  autostart = "1";
+  hardrecovery = "0";
+  exclusive = "0";
+  nfslock = "0";
+  recovery = "restart";
+  depend_mode = "hard";
+  max_restarts = "0";
+  restart_expire_time = "0";
   ip {
     address = "192.168.1.3";
     monitor_link = "yes";
+    nfslock = "0";
   }
   script {
     name = "initscript";
@@ -78,15 +103,25 @@ service {
 === New Resource Tree ===
 service {
   name = "test1";
+  autostart = "1";
+  hardrecovery = "0";
+  exclusive = "0";
+  nfslock = "0";
+  recovery = "restart";
+  depend_mode = "hard";
+  max_restarts = "0";
+  restart_expire_time = "0";
   fs [ NEEDSTART ] {
     name = "mount1";
     mountpoint = "/mnt/cluster";
     device = "/dev/sdb8";
     fstype = "ext3";
+    nfslock = "0";
   }
   ip {
     address = "192.168.1.3";
     monitor_link = "yes";
+    nfslock = "0";
   }
   script {
     name = "initscript";
@@ -94,3 +129,7 @@ service {
     service_name = "test1";
   }
 }
+=== Operations (down-phase) ===
+=== Operations (up-phase) ===
+Node fs:mount1 - CONDSTART
+[start] fs:mount1
diff --git a/rgmanager/src/daemons/tests/delta-test009-test010.expected b/rgmanager/src/daemons/tests/delta-test009-test010.expected
index 65f42fc..ef379fd 100644
--- a/rgmanager/src/daemons/tests/delta-test009-test010.expected
+++ b/rgmanager/src/daemons/tests/delta-test009-test010.expected
@@ -1,24 +1,17 @@
 === Old Resource List ===
-Resource type: script
-Agent: script.sh
-Attributes:
-  name = initscript [ primary unique ]
-  file = /etc/init.d/sshd [ unique required ]
-  service_name [ inherit("service%name") ]
-
 Resource type: service [ROOT]
 Instances: 1/1
 Agent: service.sh
 Attributes:
   name = test1 [ primary unique required ]
-
-Resource type: ip
-Instances: 1/1
-Agent: ip.sh
-Attributes:
-  address = 192.168.1.3 [ primary unique ]
-  monitor_link = yes
-  nfslock [ inherit("service%nfslock") ]
+  autostart = 1 [ reconfig ]
+  hardrecovery = 0 [ reconfig ]
+  exclusive = 0 [ reconfig ]
+  nfslock = 0
+  recovery = restart [ reconfig ]
+  depend_mode = hard
+  max_restarts = 0 [ reconfig ]
+  restart_expire_time = 0 [ reconfig ]
 
 Resource type: fs
 Instances: 1/1
@@ -28,9 +21,16 @@ Attributes:
   mountpoint = /mnt/cluster [ unique required ]
   device = /dev/sdb8 [ unique required ]
   fstype = ext3
-  nfslock [ inherit("nfslock") ]
+  nfslock [ inherit("service%nfslock") ]
+
+Resource type: ip
+Instances: 1/1
+Agent: ip.sh
+Attributes:
+  address = 192.168.1.3 [ primary unique ]
+  monitor_link = yes
+  nfslock [ inherit("service%nfslock") ]
 
-=== New Resource List ===
 Resource type: script
 Agent: script.sh
 Attributes:
@@ -38,18 +38,28 @@ Attributes:
   file = /etc/init.d/sshd [ unique required ]
   service_name [ inherit("service%name") ]
 
+=== New Resource List ===
 Resource type: service [ROOT]
 Instances: 1/1
 Agent: service.sh
 Attributes:
   name = test1 [ primary unique required ]
+  autostart = 1 [ reconfig ]
+  hardrecovery = 0 [ reconfig ]
+  exclusive = 0 [ reconfig ]
+  nfslock = 0
+  recovery = restart [ reconfig ]
+  depend_mode = hard
+  max_restarts = 0 [ reconfig ]
+  restart_expire_time = 0 [ reconfig ]
 
-Resource type: ip
-Instances: 1/1
-Agent: ip.sh
+Resource type: nfsexport [NEEDSTART]
+Agent: nfsexport.sh
 Attributes:
-  address = 192.168.1.3 [ primary unique ]
-  monitor_link = yes
+  name = Dummy Export [ primary ]
+  device [ inherit("device") ]
+  path [ inherit("mountpoint") ]
+  fsid [ inherit("fsid") ]
   nfslock [ inherit("service%nfslock") ]
 
 Resource type: fs
@@ -60,29 +70,45 @@ Attributes:
   mountpoint = /mnt/cluster [ unique required ]
   device = /dev/sdb8 [ unique required ]
   fstype = ext3
-  nfslock [ inherit("nfslock") ]
+  nfslock [ inherit("service%nfslock") ]
 
-Resource type: nfsexport [NEEDSTART]
-Agent: nfsexport.sh
+Resource type: ip
+Instances: 1/1
+Agent: ip.sh
 Attributes:
-  name = Dummy Export [ primary ]
-  device [ inherit("device") ]
-  path [ inherit("mountpoint") ]
-  fsid [ inherit("fsid") ]
-  nfslock [ inherit("nfslock") ]
+  address = 192.168.1.3 [ primary unique ]
+  monitor_link = yes
+  nfslock [ inherit("service%nfslock") ]
+
+Resource type: script
+Agent: script.sh
+Attributes:
+  name = initscript [ primary unique ]
+  file = /etc/init.d/sshd [ unique required ]
+  service_name [ inherit("service%name") ]
 
 === Old Resource Tree ===
 service {
   name = "test1";
+  autostart = "1";
+  hardrecovery = "0";
+  exclusive = "0";
+  nfslock = "0";
+  recovery = "restart";
+  depend_mode = "hard";
+  max_restarts = "0";
+  restart_expire_time = "0";
   fs {
     name = "mount1";
     mountpoint = "/mnt/cluster";
     device = "/dev/sdb8";
     fstype = "ext3";
+    nfslock = "0";
   }
   ip {
     address = "192.168.1.3";
     monitor_link = "yes";
+    nfslock = "0";
   }
   script {
     name = "initscript";
@@ -93,15 +119,25 @@ service {
 === New Resource Tree ===
 service {
   name = "test1";
+  autostart = "1";
+  hardrecovery = "0";
+  exclusive = "0";
+  nfslock = "0";
+  recovery = "restart";
+  depend_mode = "hard";
+  max_restarts = "0";
+  restart_expire_time = "0";
   fs {
     name = "mount1";
     mountpoint = "/mnt/cluster";
     device = "/dev/sdb8";
     fstype = "ext3";
+    nfslock = "0";
   }
   ip {
     address = "192.168.1.3";
     monitor_link = "yes";
+    nfslock = "0";
   }
   script {
     name = "initscript";
@@ -109,3 +145,5 @@ service {
     service_name = "test1";
   }
 }
+=== Operations (down-phase) ===
+=== Operations (up-phase) ===
diff --git a/rgmanager/src/daemons/tests/delta-test010-test011.expected b/rgmanager/src/daemons/tests/delta-test010-test011.expected
index 587a96e..38d7fb2 100644
--- a/rgmanager/src/daemons/tests/delta-test010-test011.expected
+++ b/rgmanager/src/daemons/tests/delta-test010-test011.expected
@@ -1,23 +1,25 @@
 === Old Resource List ===
-Resource type: script
-Agent: script.sh
-Attributes:
-  name = initscript [ primary unique ]
-  file = /etc/init.d/sshd [ unique required ]
-  service_name [ inherit("service%name") ]
-
 Resource type: service [ROOT]
 Instances: 1/1
 Agent: service.sh
 Attributes:
   name = test1 [ primary unique required ]
+  autostart = 1 [ reconfig ]
+  hardrecovery = 0 [ reconfig ]
+  exclusive = 0 [ reconfig ]
+  nfslock = 0
+  recovery = restart [ reconfig ]
+  depend_mode = hard
+  max_restarts = 0 [ reconfig ]
+  restart_expire_time = 0 [ reconfig ]
 
-Resource type: ip
-Instances: 1/1
-Agent: ip.sh
+Resource type: nfsexport
+Agent: nfsexport.sh
 Attributes:
-  address = 192.168.1.3 [ primary unique ]
-  monitor_link = yes
+  name = Dummy Export [ primary ]
+  device [ inherit("device") ]
+  path [ inherit("mountpoint") ]
+  fsid [ inherit("fsid") ]
   nfslock [ inherit("service%nfslock") ]
 
 Resource type: fs
@@ -28,18 +30,16 @@ Attributes:
   mountpoint = /mnt/cluster [ unique required ]
   device = /dev/sdb8 [ unique required ]
   fstype = ext3
-  nfslock [ inherit("nfslock") ]
+  nfslock [ inherit("service%nfslock") ]
 
-Resource type: nfsexport
-Agent: nfsexport.sh
+Resource type: ip
+Instances: 1/1
+Agent: ip.sh
 Attributes:
-  name = Dummy Export [ primary ]
-  device [ inherit("device") ]
-  path [ inherit("mountpoint") ]
-  fsid [ inherit("fsid") ]
-  nfslock [ inherit("nfslock") ]
+  address = 192.168.1.3 [ primary unique ]
+  monitor_link = yes
+  nfslock [ inherit("service%nfslock") ]
 
-=== New Resource List ===
 Resource type: script
 Agent: script.sh
 Attributes:
@@ -47,29 +47,20 @@ Attributes:
   file = /etc/init.d/sshd [ unique required ]
   service_name [ inherit("service%name") ]
 
+=== New Resource List ===
 Resource type: service [ROOT]
 Instances: 1/1
 Agent: service.sh
 Attributes:
   name = test1 [ primary unique required ]
-
-Resource type: ip
-Instances: 1/1
-Agent: ip.sh
-Attributes:
-  address = 192.168.1.3 [ primary unique ]
-  monitor_link = yes
-  nfslock [ inherit("service%nfslock") ]
-
-Resource type: fs
-Instances: 1/1
-Agent: fs.sh
-Attributes:
-  name = mount1 [ primary ]
-  mountpoint = /mnt/cluster [ unique required ]
-  device = /dev/sdb8 [ unique required ]
-  fstype = ext3
-  nfslock [ inherit("nfslock") ]
+  autostart = 1 [ reconfig ]
+  hardrecovery = 0 [ reconfig ]
+  exclusive = 0 [ reconfig ]
+  nfslock = 0
+  recovery = restart [ reconfig ]
+  depend_mode = hard
+  max_restarts = 0 [ reconfig ]
+  restart_expire_time = 0 [ reconfig ]
 
 Resource type: nfsexport
 Agent: nfsexport.sh
@@ -78,7 +69,7 @@ Attributes:
   device [ inherit("device") ]
   path [ inherit("mountpoint") ]
   fsid [ inherit("fsid") ]
-  nfslock [ inherit("nfslock") ]
+  nfslock [ inherit("service%nfslock") ]
 
 Resource type: nfsclient [NEEDSTART]
 Agent: nfsclient.sh
@@ -87,7 +78,7 @@ Attributes:
   target = @users [ required ]
   path [ inherit("path") ]
   fsid [ inherit("fsid") ]
-  nfslock [ inherit("nfsexport%nfslock") ]
+  nfslock [ inherit("service%nfslock") ]
   options = ro
 
 Resource type: nfsclient [NEEDSTART]
@@ -97,7 +88,7 @@ Attributes:
   target = @admin [ required ]
   path [ inherit("path") ]
   fsid [ inherit("fsid") ]
-  nfslock [ inherit("nfsexport%nfslock") ]
+  nfslock [ inherit("service%nfslock") ]
   options = rw
 
 Resource type: nfsclient [NEEDSTART]
@@ -107,7 +98,7 @@ Attributes:
   target = yellow [ required ]
   path [ inherit("path") ]
   fsid [ inherit("fsid") ]
-  nfslock [ inherit("nfsexport%nfslock") ]
+  nfslock [ inherit("service%nfslock") ]
   options = rw,no_root_squash
 
 Resource type: nfsclient [NEEDSTART]
@@ -117,7 +108,7 @@ Attributes:
   target = magenta [ required ]
   path [ inherit("path") ]
   fsid [ inherit("fsid") ]
-  nfslock [ inherit("nfsexport%nfslock") ]
+  nfslock [ inherit("service%nfslock") ]
   options = rw,no_root_squash
 
 Resource type: nfsclient [NEEDSTART]
@@ -127,21 +118,56 @@ Attributes:
   target = red [ required ]
   path [ inherit("path") ]
   fsid [ inherit("fsid") ]
-  nfslock [ inherit("nfsexport%nfslock") ]
+  nfslock [ inherit("service%nfslock") ]
   options = rw,no_root_squash
 
+Resource type: fs
+Instances: 1/1
+Agent: fs.sh
+Attributes:
+  name = mount1 [ primary ]
+  mountpoint = /mnt/cluster [ unique required ]
+  device = /dev/sdb8 [ unique required ]
+  fstype = ext3
+  nfslock [ inherit("service%nfslock") ]
+
+Resource type: ip
+Instances: 1/1
+Agent: ip.sh
+Attributes:
+  address = 192.168.1.3 [ primary unique ]
+  monitor_link = yes
+  nfslock [ inherit("service%nfslock") ]
+
+Resource type: script
+Agent: script.sh
+Attributes:
+  name = initscript [ primary unique ]
+  file = /etc/init.d/sshd [ unique required ]
+  service_name [ inherit("service%name") ]
+
 === Old Resource Tree ===
 service {
   name = "test1";
+  autostart = "1";
+  hardrecovery = "0";
+  exclusive = "0";
+  nfslock = "0";
+  recovery = "restart";
+  depend_mode = "hard";
+  max_restarts = "0";
+  restart_expire_time = "0";
   fs {
     name = "mount1";
     mountpoint = "/mnt/cluster";
     device = "/dev/sdb8";
     fstype = "ext3";
+    nfslock = "0";
   }
   ip {
     address = "192.168.1.3";
     monitor_link = "yes";
+    nfslock = "0";
   }
   script {
     name = "initscript";
@@ -152,25 +178,37 @@ service {
 === New Resource Tree ===
 service {
   name = "test1";
+  autostart = "1";
+  hardrecovery = "0";
+  exclusive = "0";
+  nfslock = "0";
+  recovery = "restart";
+  depend_mode = "hard";
+  max_restarts = "0";
+  restart_expire_time = "0";
   fs {
     name = "mount1";
     mountpoint = "/mnt/cluster";
     device = "/dev/sdb8";
     fstype = "ext3";
+    nfslock = "0";
     nfsexport [ NEEDSTART ] {
       name = "Dummy Export";
       device = "/dev/sdb8";
       path = "/mnt/cluster";
+      nfslock = "0";
       nfsclient {
         name = "Admin group";
         target = "@admin";
         path = "/mnt/cluster";
+        nfslock = "0";
         options = "rw";
       }
       nfsclient {
         name = "User group";
         target = "@users";
         path = "/mnt/cluster";
+        nfslock = "0";
         options = "ro";
       }
     }
@@ -178,6 +216,7 @@ service {
   ip {
     address = "192.168.1.3";
     monitor_link = "yes";
+    nfslock = "0";
   }
   script {
     name = "initscript";
@@ -185,3 +224,9 @@ service {
     service_name = "test1";
   }
 }
+=== Operations (down-phase) ===
+=== Operations (up-phase) ===
+Node nfsexport:Dummy Export - CONDSTART
+[start] nfsexport:Dummy Export
+[start] nfsclient:Admin group
+[start] nfsclient:User group
diff --git a/rgmanager/src/daemons/tests/delta-test011-test012.expected b/rgmanager/src/daemons/tests/delta-test011-test012.expected
index 516cb1a..8b9803d 100644
--- a/rgmanager/src/daemons/tests/delta-test011-test012.expected
+++ b/rgmanager/src/daemons/tests/delta-test011-test012.expected
@@ -1,34 +1,17 @@
 === Old Resource List ===
-Resource type: script
-Agent: script.sh
-Attributes:
-  name = initscript [ primary unique ]
-  file = /etc/init.d/sshd [ unique required ]
-  service_name [ inherit("service%name") ]
-
 Resource type: service [ROOT]
 Instances: 1/1
 Agent: service.sh
 Attributes:
   name = test1 [ primary unique required ]
-
-Resource type: ip
-Instances: 1/1
-Agent: ip.sh
-Attributes:
-  address = 192.168.1.3 [ primary unique ]
-  monitor_link = yes
-  nfslock [ inherit("service%nfslock") ]
-
-Resource type: fs
-Instances: 1/1
-Agent: fs.sh
-Attributes:
-  name = mount1 [ primary ]
-  mountpoint = /mnt/cluster [ unique required ]
-  device = /dev/sdb8 [ unique required ]
-  fstype = ext3
-  nfslock [ inherit("nfslock") ]
+  autostart = 1 [ reconfig ]
+  hardrecovery = 0 [ reconfig ]
+  exclusive = 0 [ reconfig ]
+  nfslock = 0
+  recovery = restart [ reconfig ]
+  depend_mode = hard
+  max_restarts = 0 [ reconfig ]
+  restart_expire_time = 0 [ reconfig ]
 
 Resource type: nfsexport
 Agent: nfsexport.sh
@@ -37,7 +20,7 @@ Attributes:
   device [ inherit("device") ]
   path [ inherit("mountpoint") ]
   fsid [ inherit("fsid") ]
-  nfslock [ inherit("nfslock") ]
+  nfslock [ inherit("service%nfslock") ]
 
 Resource type: nfsclient
 Agent: nfsclient.sh
@@ -46,7 +29,7 @@ Attributes:
   target = @users [ required ]
   path [ inherit("path") ]
   fsid [ inherit("fsid") ]
-  nfslock [ inherit("nfsexport%nfslock") ]
+  nfslock [ inherit("service%nfslock") ]
   options = ro
 
 Resource type: nfsclient
@@ -56,7 +39,7 @@ Attributes:
   target = @admin [ required ]
   path [ inherit("path") ]
   fsid [ inherit("fsid") ]
-  nfslock [ inherit("nfsexport%nfslock") ]
+  nfslock [ inherit("service%nfslock") ]
   options = rw
 
 Resource type: nfsclient
@@ -66,7 +49,7 @@ Attributes:
   target = yellow [ required ]
   path [ inherit("path") ]
   fsid [ inherit("fsid") ]
-  nfslock [ inherit("nfsexport%nfslock") ]
+  nfslock [ inherit("service%nfslock") ]
   options = rw,no_root_squash
 
 Resource type: nfsclient
@@ -76,7 +59,7 @@ Attributes:
   target = magenta [ required ]
   path [ inherit("path") ]
   fsid [ inherit("fsid") ]
-  nfslock [ inherit("nfsexport%nfslock") ]
+  nfslock [ inherit("service%nfslock") ]
   options = rw,no_root_squash
 
 Resource type: nfsclient [NEEDSTOP]
@@ -86,22 +69,18 @@ Attributes:
   target = red [ required ]
   path [ inherit("path") ]
   fsid [ inherit("fsid") ]
-  nfslock [ inherit("nfsexport%nfslock") ]
+  nfslock [ inherit("service%nfslock") ]
   options = rw,no_root_squash
 
-=== New Resource List ===
-Resource type: script
-Agent: script.sh
-Attributes:
-  name = initscript [ primary unique ]
-  file = /etc/init.d/sshd [ unique required ]
-  service_name [ inherit("service%name") ]
-
-Resource type: service [ROOT]
+Resource type: fs
 Instances: 1/1
-Agent: service.sh
+Agent: fs.sh
 Attributes:
-  name = test1 [ primary unique required ]
+  name = mount1 [ primary ]
+  mountpoint = /mnt/cluster [ unique required ]
+  device = /dev/sdb8 [ unique required ]
+  fstype = ext3
+  nfslock [ inherit("service%nfslock") ]
 
 Resource type: ip
 Instances: 1/1
@@ -111,15 +90,27 @@ Attributes:
   monitor_link = yes
   nfslock [ inherit("service%nfslock") ]
 
-Resource type: fs
+Resource type: script
+Agent: script.sh
+Attributes:
+  name = initscript [ primary unique ]
+  file = /etc/init.d/sshd [ unique required ]
+  service_name [ inherit("service%name") ]
+
+=== New Resource List ===
+Resource type: service [ROOT]
 Instances: 1/1
-Agent: fs.sh
+Agent: service.sh
 Attributes:
-  name = mount1 [ primary ]
-  mountpoint = /mnt/cluster [ unique required ]
-  device = /dev/sdb8 [ unique required ]
-  fstype = ext3
-  nfslock [ inherit("nfslock") ]
+  name = test1 [ primary unique required ]
+  autostart = 1 [ reconfig ]
+  hardrecovery = 0 [ reconfig ]
+  exclusive = 0 [ reconfig ]
+  nfslock = 0
+  recovery = restart [ reconfig ]
+  depend_mode = hard
+  max_restarts = 0 [ reconfig ]
+  restart_expire_time = 0 [ reconfig ]
 
 Resource type: nfsexport
 Agent: nfsexport.sh
@@ -128,7 +119,7 @@ Attributes:
   device [ inherit("device") ]
   path [ inherit("mountpoint") ]
   fsid [ inherit("fsid") ]
-  nfslock [ inherit("nfslock") ]
+  nfslock [ inherit("service%nfslock") ]
 
 Resource type: nfsclient
 Agent: nfsclient.sh
@@ -137,7 +128,7 @@ Attributes:
   target = @users [ required ]
   path [ inherit("path") ]
   fsid [ inherit("fsid") ]
-  nfslock [ inherit("nfsexport%nfslock") ]
+  nfslock [ inherit("service%nfslock") ]
   options = ro
 
 Resource type: nfsclient
@@ -147,7 +138,7 @@ Attributes:
   target = @admin [ required ]
   path [ inherit("path") ]
   fsid [ inherit("fsid") ]
-  nfslock [ inherit("nfsexport%nfslock") ]
+  nfslock [ inherit("service%nfslock") ]
   options = rw
 
 Resource type: nfsclient
@@ -157,7 +148,7 @@ Attributes:
   target = yellow [ required ]
   path [ inherit("path") ]
   fsid [ inherit("fsid") ]
-  nfslock [ inherit("nfsexport%nfslock") ]
+  nfslock [ inherit("service%nfslock") ]
   options = rw,no_root_squash
 
 Resource type: nfsclient
@@ -167,7 +158,7 @@ Attributes:
   target = magenta [ required ]
   path [ inherit("path") ]
   fsid [ inherit("fsid") ]
-  nfslock [ inherit("nfsexport%nfslock") ]
+  nfslock [ inherit("service%nfslock") ]
   options = rw,no_root_squash
 
 Resource type: nfsclient [NEEDSTART]
@@ -177,31 +168,68 @@ Attributes:
   target = red [ required ]
   path [ inherit("path") ]
   fsid [ inherit("fsid") ]
-  nfslock [ inherit("nfsexport%nfslock") ]
+  nfslock [ inherit("service%nfslock") ]
   options = ro
 
+Resource type: fs
+Instances: 1/1
+Agent: fs.sh
+Attributes:
+  name = mount1 [ primary ]
+  mountpoint = /mnt/cluster [ unique required ]
+  device = /dev/sdb8 [ unique required ]
+  fstype = ext3
+  nfslock [ inherit("service%nfslock") ]
+
+Resource type: ip
+Instances: 1/1
+Agent: ip.sh
+Attributes:
+  address = 192.168.1.3 [ primary unique ]
+  monitor_link = yes
+  nfslock [ inherit("service%nfslock") ]
+
+Resource type: script
+Agent: script.sh
+Attributes:
+  name = initscript [ primary unique ]
+  file = /etc/init.d/sshd [ unique required ]
+  service_name [ inherit("service%name") ]
+
 === Old Resource Tree ===
 service {
   name = "test1";
+  autostart = "1";
+  hardrecovery = "0";
+  exclusive = "0";
+  nfslock = "0";
+  recovery = "restart";
+  depend_mode = "hard";
+  max_restarts = "0";
+  restart_expire_time = "0";
   fs {
     name = "mount1";
     mountpoint = "/mnt/cluster";
     device = "/dev/sdb8";
     fstype = "ext3";
+    nfslock = "0";
     nfsexport {
       name = "Dummy Export";
       device = "/dev/sdb8";
       path = "/mnt/cluster";
+      nfslock = "0";
       nfsclient {
         name = "Admin group";
         target = "@admin";
         path = "/mnt/cluster";
+        nfslock = "0";
         options = "rw";
       }
       nfsclient {
         name = "User group";
         target = "@users";
         path = "/mnt/cluster";
+        nfslock = "0";
         options = "ro";
       }
     }
@@ -209,6 +237,7 @@ service {
   ip {
     address = "192.168.1.3";
     monitor_link = "yes";
+    nfslock = "0";
   }
   script {
     name = "initscript";
@@ -219,31 +248,44 @@ service {
 === New Resource Tree ===
 service {
   name = "test1";
+  autostart = "1";
+  hardrecovery = "0";
+  exclusive = "0";
+  nfslock = "0";
+  recovery = "restart";
+  depend_mode = "hard";
+  max_restarts = "0";
+  restart_expire_time = "0";
   fs {
     name = "mount1";
     mountpoint = "/mnt/cluster";
     device = "/dev/sdb8";
     fstype = "ext3";
+    nfslock = "0";
     nfsexport {
       name = "Dummy Export";
       device = "/dev/sdb8";
       path = "/mnt/cluster";
+      nfslock = "0";
       nfsclient {
         name = "Admin group";
         target = "@admin";
         path = "/mnt/cluster";
+        nfslock = "0";
         options = "rw";
       }
       nfsclient {
         name = "User group";
         target = "@users";
         path = "/mnt/cluster";
+        nfslock = "0";
         options = "ro";
       }
       nfsclient [ NEEDSTART ] {
         name = "red";
         target = "red";
         path = "/mnt/cluster";
+        nfslock = "0";
         options = "ro";
       }
     }
@@ -251,6 +293,7 @@ service {
   ip {
     address = "192.168.1.3";
     monitor_link = "yes";
+    nfslock = "0";
   }
   script {
     name = "initscript";
@@ -258,3 +301,7 @@ service {
     service_name = "test1";
   }
 }
+=== Operations (down-phase) ===
+=== Operations (up-phase) ===
+Node nfsclient:red - CONDSTART
+[start] nfsclient:red
diff --git a/rgmanager/src/daemons/tests/delta-test012-test013.expected b/rgmanager/src/daemons/tests/delta-test012-test013.expected
index e6c7cf7..c45223d 100644
--- a/rgmanager/src/daemons/tests/delta-test012-test013.expected
+++ b/rgmanager/src/daemons/tests/delta-test012-test013.expected
@@ -1,34 +1,17 @@
 === Old Resource List ===
-Resource type: script
-Agent: script.sh
-Attributes:
-  name = initscript [ primary unique ]
-  file = /etc/init.d/sshd [ unique required ]
-  service_name [ inherit("service%name") ]
-
 Resource type: service [ROOT]
 Instances: 1/1
 Agent: service.sh
 Attributes:
   name = test1 [ primary unique required ]
-
-Resource type: ip
-Instances: 1/1
-Agent: ip.sh
-Attributes:
-  address = 192.168.1.3 [ primary unique ]
-  monitor_link = yes
-  nfslock [ inherit("service%nfslock") ]
-
-Resource type: fs
-Instances: 1/1
-Agent: fs.sh
-Attributes:
-  name = mount1 [ primary ]
-  mountpoint = /mnt/cluster [ unique required ]
-  device = /dev/sdb8 [ unique required ]
-  fstype = ext3
-  nfslock [ inherit("nfslock") ]
+  autostart = 1 [ reconfig ]
+  hardrecovery = 0 [ reconfig ]
+  exclusive = 0 [ reconfig ]
+  nfslock = 0
+  recovery = restart [ reconfig ]
+  depend_mode = hard
+  max_restarts = 0 [ reconfig ]
+  restart_expire_time = 0 [ reconfig ]
 
 Resource type: nfsexport
 Agent: nfsexport.sh
@@ -37,7 +20,7 @@ Attributes:
   device [ inherit("device") ]
   path [ inherit("mountpoint") ]
   fsid [ inherit("fsid") ]
-  nfslock [ inherit("nfslock") ]
+  nfslock [ inherit("service%nfslock") ]
 
 Resource type: nfsclient
 Agent: nfsclient.sh
@@ -46,7 +29,7 @@ Attributes:
   target = @users [ required ]
   path [ inherit("path") ]
   fsid [ inherit("fsid") ]
-  nfslock [ inherit("nfsexport%nfslock") ]
+  nfslock [ inherit("service%nfslock") ]
   options = ro
 
 Resource type: nfsclient
@@ -56,7 +39,7 @@ Attributes:
   target = @admin [ required ]
   path [ inherit("path") ]
   fsid [ inherit("fsid") ]
-  nfslock [ inherit("nfsexport%nfslock") ]
+  nfslock [ inherit("service%nfslock") ]
   options = rw
 
 Resource type: nfsclient
@@ -66,7 +49,7 @@ Attributes:
   target = yellow [ required ]
   path [ inherit("path") ]
   fsid [ inherit("fsid") ]
-  nfslock [ inherit("nfsexport%nfslock") ]
+  nfslock [ inherit("service%nfslock") ]
   options = rw,no_root_squash
 
 Resource type: nfsclient
@@ -76,7 +59,7 @@ Attributes:
   target = magenta [ required ]
   path [ inherit("path") ]
   fsid [ inherit("fsid") ]
-  nfslock [ inherit("nfsexport%nfslock") ]
+  nfslock [ inherit("service%nfslock") ]
   options = rw,no_root_squash
 
 Resource type: nfsclient [NEEDSTOP]
@@ -86,22 +69,18 @@ Attributes:
   target = red [ required ]
   path [ inherit("path") ]
   fsid [ inherit("fsid") ]
-  nfslock [ inherit("nfsexport%nfslock") ]
+  nfslock [ inherit("service%nfslock") ]
   options = ro
 
-=== New Resource List ===
-Resource type: script
-Agent: script.sh
-Attributes:
-  name = initscript [ primary unique ]
-  file = /etc/init.d/sshd [ unique required ]
-  service_name [ inherit("service%name") ]
-
-Resource type: service [ROOT]
+Resource type: fs
 Instances: 1/1
-Agent: service.sh
+Agent: fs.sh
 Attributes:
-  name = test1 [ primary unique required ]
+  name = mount1 [ primary ]
+  mountpoint = /mnt/cluster [ unique required ]
+  device = /dev/sdb8 [ unique required ]
+  fstype = ext3
+  nfslock [ inherit("service%nfslock") ]
 
 Resource type: ip
 Instances: 1/1
@@ -111,15 +90,27 @@ Attributes:
   monitor_link = yes
   nfslock [ inherit("service%nfslock") ]
 
-Resource type: fs
+Resource type: script
+Agent: script.sh
+Attributes:
+  name = initscript [ primary unique ]
+  file = /etc/init.d/sshd [ unique required ]
+  service_name [ inherit("service%name") ]
+
+=== New Resource List ===
+Resource type: service [ROOT]
 Instances: 1/1
-Agent: fs.sh
+Agent: service.sh
 Attributes:
-  name = mount1 [ primary ]
-  mountpoint = /mnt/cluster [ unique required ]
-  device = /dev/sdb8 [ unique required ]
-  fstype = ext3
-  nfslock [ inherit("nfslock") ]
+  name = test1 [ primary unique required ]
+  autostart = 1 [ reconfig ]
+  hardrecovery = 0 [ reconfig ]
+  exclusive = 0 [ reconfig ]
+  nfslock = 0
+  recovery = restart [ reconfig ]
+  depend_mode = hard
+  max_restarts = 0 [ reconfig ]
+  restart_expire_time = 0 [ reconfig ]
 
 Resource type: nfsexport
 Agent: nfsexport.sh
@@ -128,7 +119,7 @@ Attributes:
   device [ inherit("device") ]
   path [ inherit("mountpoint") ]
   fsid [ inherit("fsid") ]
-  nfslock [ inherit("nfslock") ]
+  nfslock [ inherit("service%nfslock") ]
 
 Resource type: nfsclient
 Agent: nfsclient.sh
@@ -137,7 +128,7 @@ Attributes:
   target = @users [ required ]
   path [ inherit("path") ]
   fsid [ inherit("fsid") ]
-  nfslock [ inherit("nfsexport%nfslock") ]
+  nfslock [ inherit("service%nfslock") ]
   options = ro
 
 Resource type: nfsclient
@@ -147,7 +138,7 @@ Attributes:
   target = @admin [ required ]
   path [ inherit("path") ]
   fsid [ inherit("fsid") ]
-  nfslock [ inherit("nfsexport%nfslock") ]
+  nfslock [ inherit("service%nfslock") ]
   options = rw
 
 Resource type: nfsclient
@@ -157,7 +148,7 @@ Attributes:
   target = yellow [ required ]
   path [ inherit("path") ]
   fsid [ inherit("fsid") ]
-  nfslock [ inherit("nfsexport%nfslock") ]
+  nfslock [ inherit("service%nfslock") ]
   options = rw,no_root_squash
 
 Resource type: nfsclient
@@ -167,7 +158,7 @@ Attributes:
   target = magenta [ required ]
   path [ inherit("path") ]
   fsid [ inherit("fsid") ]
-  nfslock [ inherit("nfsexport%nfslock") ]
+  nfslock [ inherit("service%nfslock") ]
   options = rw,no_root_squash
 
 Resource type: nfsclient [NEEDSTART]
@@ -177,37 +168,75 @@ Attributes:
   target = red [ required ]
   path [ inherit("path") ]
   fsid [ inherit("fsid") ]
-  nfslock [ inherit("nfsexport%nfslock") ]
+  nfslock [ inherit("service%nfslock") ]
   options = rw
 
+Resource type: fs
+Instances: 1/1
+Agent: fs.sh
+Attributes:
+  name = mount1 [ primary ]
+  mountpoint = /mnt/cluster [ unique required ]
+  device = /dev/sdb8 [ unique required ]
+  fstype = ext3
+  nfslock [ inherit("service%nfslock") ]
+
+Resource type: ip
+Instances: 1/1
+Agent: ip.sh
+Attributes:
+  address = 192.168.1.3 [ primary unique ]
+  monitor_link = yes
+  nfslock [ inherit("service%nfslock") ]
+
+Resource type: script
+Agent: script.sh
+Attributes:
+  name = initscript [ primary unique ]
+  file = /etc/init.d/sshd [ unique required ]
+  service_name [ inherit("service%name") ]
+
 === Old Resource Tree ===
 service {
   name = "test1";
+  autostart = "1";
+  hardrecovery = "0";
+  exclusive = "0";
+  nfslock = "0";
+  recovery = "restart";
+  depend_mode = "hard";
+  max_restarts = "0";
+  restart_expire_time = "0";
   fs {
     name = "mount1";
     mountpoint = "/mnt/cluster";
     device = "/dev/sdb8";
     fstype = "ext3";
+    nfslock = "0";
     nfsexport {
       name = "Dummy Export";
       device = "/dev/sdb8";
       path = "/mnt/cluster";
+      nfslock = "0";
       nfsclient {
         name = "Admin group";
         target = "@admin";
         path = "/mnt/cluster";
+        nfslock = "0";
         options = "rw";
       }
       nfsclient {
         name = "User group";
         target = "@users";
         path = "/mnt/cluster";
+        nfslock = "0";
         options = "ro";
       }
       nfsclient [ NEEDSTOP ] {
         name = "red";
         target = "red";
         path = "/mnt/cluster";
+        nfslock = "0";
         options = "ro";
       }
     }
@@ -215,6 +244,7 @@ service {
   ip {
     address = "192.168.1.3";
     monitor_link = "yes";
+    nfslock = "0";
   }
   script {
     name = "initscript";
@@ -225,31 +255,44 @@ service {
 === New Resource Tree ===
 service {
   name = "test1";
+  autostart = "1";
+  hardrecovery = "0";
+  exclusive = "0";
+  nfslock = "0";
+  recovery = "restart";
+  depend_mode = "hard";
+  max_restarts = "0";
+  restart_expire_time = "0";
   fs {
     name = "mount1";
     mountpoint = "/mnt/cluster";
     device = "/dev/sdb8";
     fstype = "ext3";
+    nfslock = "0";
     nfsexport {
       name = "Dummy Export";
       device = "/dev/sdb8";
       path = "/mnt/cluster";
+      nfslock = "0";
       nfsclient {
         name = "Admin group";
         target = "@admin";
         path = "/mnt/cluster";
+        nfslock = "0";
         options = "rw";
       }
       nfsclient {
         name = "User group";
         target = "@users";
         path = "/mnt/cluster";
+        nfslock = "0";
         options = "ro";
       }
       nfsclient [ NEEDSTART ] {
         name = "red";
         target = "red";
         path = "/mnt/cluster";
+        nfslock = "0";
         options = "rw";
       }
     }
@@ -257,6 +300,7 @@ service {
   ip {
     address = "192.168.1.3";
     monitor_link = "yes";
+    nfslock = "0";
   }
   script {
     name = "initscript";
@@ -264,3 +308,9 @@ service {
     service_name = "test1";
   }
 }
+=== Operations (down-phase) ===
+Node nfsclient:red - CONDSTOP
+[stop] nfsclient:red
+=== Operations (up-phase) ===
+Node nfsclient:red - CONDSTART
+[start] nfsclient:red
diff --git a/rgmanager/src/daemons/tests/delta-test013-test014.expected b/rgmanager/src/daemons/tests/delta-test013-test014.expected
index 236f2be..d779448 100644
--- a/rgmanager/src/daemons/tests/delta-test013-test014.expected
+++ b/rgmanager/src/daemons/tests/delta-test013-test014.expected
@@ -1,34 +1,17 @@
 === Old Resource List ===
-Resource type: script
-Agent: script.sh
-Attributes:
-  name = initscript [ primary unique ]
-  file = /etc/init.d/sshd [ unique required ]
-  service_name [ inherit("service%name") ]
-
 Resource type: service [ROOT]
 Instances: 1/1
 Agent: service.sh
 Attributes:
   name = test1 [ primary unique required ]
-
-Resource type: ip
-Instances: 1/1
-Agent: ip.sh
-Attributes:
-  address = 192.168.1.3 [ primary unique ]
-  monitor_link = yes
-  nfslock [ inherit("service%nfslock") ]
-
-Resource type: fs
-Instances: 1/1
-Agent: fs.sh
-Attributes:
-  name = mount1 [ primary ]
-  mountpoint = /mnt/cluster [ unique required ]
-  device = /dev/sdb8 [ unique required ]
-  fstype = ext3
-  nfslock [ inherit("nfslock") ]
+  autostart = 1 [ reconfig ]
+  hardrecovery = 0 [ reconfig ]
+  exclusive = 0 [ reconfig ]
+  nfslock = 0
+  recovery = restart [ reconfig ]
+  depend_mode = hard
+  max_restarts = 0 [ reconfig ]
+  restart_expire_time = 0 [ reconfig ]
 
 Resource type: nfsexport
 Agent: nfsexport.sh
@@ -37,7 +20,7 @@ Attributes:
   device [ inherit("device") ]
   path [ inherit("mountpoint") ]
   fsid [ inherit("fsid") ]
-  nfslock [ inherit("nfslock") ]
+  nfslock [ inherit("service%nfslock") ]
 
 Resource type: nfsclient
 Agent: nfsclient.sh
@@ -46,7 +29,7 @@ Attributes:
   target = @users [ required ]
   path [ inherit("path") ]
   fsid [ inherit("fsid") ]
-  nfslock [ inherit("nfsexport%nfslock") ]
+  nfslock [ inherit("service%nfslock") ]
   options = ro
 
 Resource type: nfsclient
@@ -56,7 +39,7 @@ Attributes:
   target = @admin [ required ]
   path [ inherit("path") ]
   fsid [ inherit("fsid") ]
-  nfslock [ inherit("nfsexport%nfslock") ]
+  nfslock [ inherit("service%nfslock") ]
   options = rw
 
 Resource type: nfsclient
@@ -66,7 +49,7 @@ Attributes:
   target = yellow [ required ]
   path [ inherit("path") ]
   fsid [ inherit("fsid") ]
-  nfslock [ inherit("nfsexport%nfslock") ]
+  nfslock [ inherit("service%nfslock") ]
   options = rw,no_root_squash
 
 Resource type: nfsclient
@@ -76,7 +59,7 @@ Attributes:
   target = magenta [ required ]
   path [ inherit("path") ]
   fsid [ inherit("fsid") ]
-  nfslock [ inherit("nfsexport%nfslock") ]
+  nfslock [ inherit("service%nfslock") ]
   options = rw,no_root_squash
 
 Resource type: nfsclient
@@ -86,28 +69,18 @@ Attributes:
   target = red [ required ]
   path [ inherit("path") ]
   fsid [ inherit("fsid") ]
-  nfslock [ inherit("nfsexport%nfslock") ]
+  nfslock [ inherit("service%nfslock") ]
   options = rw
 
-=== New Resource List ===
-Resource type: script
-Agent: script.sh
-Attributes:
-  name = initscript [ primary unique ]
-  file = /etc/init.d/sshd [ unique required ]
-  service_name [ inherit("service%name") ]
-
-Resource type: service [ROOT]
-Instances: 1/1
-Agent: service.sh
-Attributes:
-  name = test1 [ primary unique required ]
-
-Resource type: service [ROOT] [NEEDSTART]
+Resource type: fs
 Instances: 1/1
-Agent: service.sh
+Agent: fs.sh
 Attributes:
-  name = test2 [ primary unique required ]
+  name = mount1 [ primary ]
+  mountpoint = /mnt/cluster [ unique required ]
+  device = /dev/sdb8 [ unique required ]
+  fstype = ext3
+  nfslock [ inherit("service%nfslock") ]
 
 Resource type: ip
 Instances: 1/1
@@ -117,33 +90,41 @@ Attributes:
   monitor_link = yes
   nfslock [ inherit("service%nfslock") ]
 
-Resource type: ip [NEEDSTART]
-Instances: 1/1
-Agent: ip.sh
+Resource type: script
+Agent: script.sh
 Attributes:
-  address = 192.168.1.4 [ primary unique ]
-  monitor_link = yes
-  nfslock [ inherit("service%nfslock") ]
+  name = initscript [ primary unique ]
+  file = /etc/init.d/sshd [ unique required ]
+  service_name [ inherit("service%name") ]
 
-Resource type: fs
+=== New Resource List ===
+Resource type: service [ROOT]
 Instances: 1/1
-Agent: fs.sh
+Agent: service.sh
 Attributes:
-  name = mount1 [ primary ]
-  mountpoint = /mnt/cluster [ unique required ]
-  device = /dev/sdb8 [ unique required ]
-  fstype = ext3
-  nfslock [ inherit("nfslock") ]
+  name = test1 [ primary unique required ]
+  autostart = 1 [ reconfig ]
+  hardrecovery = 0 [ reconfig ]
+  exclusive = 0 [ reconfig ]
+  nfslock = 0
+  recovery = restart [ reconfig ]
+  depend_mode = hard
+  max_restarts = 0 [ reconfig ]
+  restart_expire_time = 0 [ reconfig ]
 
-Resource type: fs [NEEDSTART]
+Resource type: service [ROOT] [NEEDSTART]
 Instances: 1/1
-Agent: fs.sh
+Agent: service.sh
 Attributes:
-  name = mount2 [ primary ]
-  mountpoint = /mnt/cluster2 [ unique required ]
-  device = /dev/sdb9 [ unique required ]
-  fstype = ext3
-  nfslock [ inherit("nfslock") ]
+  name = test2 [ primary unique required ]
+  autostart = 1 [ reconfig ]
+  hardrecovery = 0 [ reconfig ]
+  exclusive = 0 [ reconfig ]
+  nfslock = 0
+  recovery = restart [ reconfig ]
+  depend_mode = hard
+  max_restarts = 0 [ reconfig ]
+  restart_expire_time = 0 [ reconfig ]
 
 Resource type: nfsexport
 Agent: nfsexport.sh
@@ -152,7 +133,7 @@ Attributes:
   device [ inherit("device") ]
   path [ inherit("mountpoint") ]
   fsid [ inherit("fsid") ]
-  nfslock [ inherit("nfslock") ]
+  nfslock [ inherit("service%nfslock") ]
 
 Resource type: nfsclient
 Agent: nfsclient.sh
@@ -161,7 +142,7 @@ Attributes:
   target = @users [ required ]
   path [ inherit("path") ]
   fsid [ inherit("fsid") ]
-  nfslock [ inherit("nfsexport%nfslock") ]
+  nfslock [ inherit("service%nfslock") ]
   options = ro
 
 Resource type: nfsclient
@@ -171,7 +152,7 @@ Attributes:
   target = @admin [ required ]
   path [ inherit("path") ]
   fsid [ inherit("fsid") ]
-  nfslock [ inherit("nfsexport%nfslock") ]
+  nfslock [ inherit("service%nfslock") ]
   options = rw
 
 Resource type: nfsclient
@@ -181,7 +162,7 @@ Attributes:
   target = yellow [ required ]
   path [ inherit("path") ]
   fsid [ inherit("fsid") ]
-  nfslock [ inherit("nfsexport%nfslock") ]
+  nfslock [ inherit("service%nfslock") ]
   options = rw,no_root_squash
 
 Resource type: nfsclient
@@ -191,7 +172,7 @@ Attributes:
   target = magenta [ required ]
   path [ inherit("path") ]
   fsid [ inherit("fsid") ]
-  nfslock [ inherit("nfsexport%nfslock") ]
+  nfslock [ inherit("service%nfslock") ]
   options = rw,no_root_squash
 
 Resource type: nfsclient
@@ -201,37 +182,93 @@ Attributes:
   target = red [ required ]
   path [ inherit("path") ]
   fsid [ inherit("fsid") ]
-  nfslock [ inherit("nfsexport%nfslock") ]
+  nfslock [ inherit("service%nfslock") ]
   options = rw
 
+Resource type: fs
+Instances: 1/1
+Agent: fs.sh
+Attributes:
+  name = mount1 [ primary ]
+  mountpoint = /mnt/cluster [ unique required ]
+  device = /dev/sdb8 [ unique required ]
+  fstype = ext3
+  nfslock [ inherit("service%nfslock") ]
+
+Resource type: fs [NEEDSTART]
+Instances: 1/1
+Agent: fs.sh
+Attributes:
+  name = mount2 [ primary ]
+  mountpoint = /mnt/cluster2 [ unique required ]
+  device = /dev/sdb9 [ unique required ]
+  fstype = ext3
+  nfslock [ inherit("service%nfslock") ]
+
+Resource type: ip
+Instances: 1/1
+Agent: ip.sh
+Attributes:
+  address = 192.168.1.3 [ primary unique ]
+  monitor_link = yes
+  nfslock [ inherit("service%nfslock") ]
+
+Resource type: ip [NEEDSTART]
+Instances: 1/1
+Agent: ip.sh
+Attributes:
+  address = 192.168.1.4 [ primary unique ]
+  monitor_link = yes
+  nfslock [ inherit("service%nfslock") ]
+
+Resource type: script
+Agent: script.sh
+Attributes:
+  name = initscript [ primary unique ]
+  file = /etc/init.d/sshd [ unique required ]
+  service_name [ inherit("service%name") ]
+
 === Old Resource Tree ===
 service {
   name = "test1";
+  autostart = "1";
+  hardrecovery = "0";
+  exclusive = "0";
+  nfslock = "0";
+  recovery = "restart";
+  depend_mode = "hard";
+  max_restarts = "0";
+  restart_expire_time = "0";
   fs {
     name = "mount1";
     mountpoint = "/mnt/cluster";
     device = "/dev/sdb8";
     fstype = "ext3";
+    nfslock = "0";
     nfsexport {
       name = "Dummy Export";
       device = "/dev/sdb8";
       path = "/mnt/cluster";
+      nfslock = "0";
       nfsclient {
         name = "Admin group";
         target = "@admin";
         path = "/mnt/cluster";
+        nfslock = "0";
         options = "rw";
       }
       nfsclient {
         name = "User group";
         target = "@users";
         path = "/mnt/cluster";
+        nfslock = "0";
         options = "ro";
       }
       nfsclient {
         name = "red";
         target = "red";
         path = "/mnt/cluster";
+        nfslock = "0";
         options = "rw";
       }
     }
@@ -239,6 +276,7 @@ service {
   ip {
     address = "192.168.1.3";
     monitor_link = "yes";
+    nfslock = "0";
   }
   script {
     name = "initscript";
@@ -249,31 +287,44 @@ service {
 === New Resource Tree ===
 service {
   name = "test1";
+  autostart = "1";
+  hardrecovery = "0";
+  exclusive = "0";
+  nfslock = "0";
+  recovery = "restart";
+  depend_mode = "hard";
+  max_restarts = "0";
+  restart_expire_time = "0";
   fs {
     name = "mount1";
     mountpoint = "/mnt/cluster";
     device = "/dev/sdb8";
     fstype = "ext3";
+    nfslock = "0";
     nfsexport {
       name = "Dummy Export";
       device = "/dev/sdb8";
       path = "/mnt/cluster";
+      nfslock = "0";
       nfsclient {
         name = "Admin group";
         target = "@admin";
         path = "/mnt/cluster";
+        nfslock = "0";
         options = "rw";
       }
       nfsclient {
         name = "User group";
         target = "@users";
         path = "/mnt/cluster";
+        nfslock = "0";
         options = "ro";
       }
       nfsclient {
         name = "red";
         target = "red";
         path = "/mnt/cluster";
+        nfslock = "0";
         options = "rw";
       }
     }
@@ -281,6 +332,7 @@ service {
   ip {
     address = "192.168.1.3";
     monitor_link = "yes";
+    nfslock = "0";
   }
   script {
     name = "initscript";
@@ -290,31 +342,44 @@ service {
 }
 service [ NEEDSTART ] {
   name = "test2";
+  autostart = "1";
+  hardrecovery = "0";
+  exclusive = "0";
+  nfslock = "0";
+  recovery = "restart";
+  depend_mode = "hard";
+  max_restarts = "0";
+  restart_expire_time = "0";
   fs {
     name = "mount2";
     mountpoint = "/mnt/cluster2";
     device = "/dev/sdb9";
     fstype = "ext3";
+    nfslock = "0";
     nfsexport {
       name = "Dummy Export";
       device = "/dev/sdb9";
       path = "/mnt/cluster2";
+      nfslock = "0";
       nfsclient {
         name = "Admin group";
         target = "@admin";
         path = "/mnt/cluster2";
+        nfslock = "0";
         options = "rw";
       }
       nfsclient {
         name = "User group";
         target = "@users";
         path = "/mnt/cluster2";
+        nfslock = "0";
         options = "ro";
       }
       nfsclient {
         name = "red";
         target = "red";
         path = "/mnt/cluster2";
+        nfslock = "0";
         options = "rw";
       }
     }
@@ -322,6 +387,7 @@ service [ NEEDSTART ] {
   ip {
     address = "192.168.1.4";
     monitor_link = "yes";
+    nfslock = "0";
   }
   script {
     name = "initscript";
@@ -329,3 +395,13 @@ service [ NEEDSTART ] {
     service_name = "test2";
   }
 }
+=== Operations (down-phase) ===
+=== Operations (up-phase) ===
+[init] [stop] script:initscript
+[stop] ip:192.168.1.4
+[stop] nfsclient:red
+[stop] nfsclient:User group
+[stop] nfsclient:Admin group
+[stop] nfsexport:Dummy Export
+[stop] fs:mount2
+[stop] service:test2
diff --git a/rgmanager/src/daemons/tests/delta-test014-test015.expected b/rgmanager/src/daemons/tests/delta-test014-test015.expected
index 20d730d..6cf0310 100644
--- a/rgmanager/src/daemons/tests/delta-test014-test015.expected
+++ b/rgmanager/src/daemons/tests/delta-test014-test015.expected
@@ -1,58 +1,31 @@
 === Old Resource List ===
-Resource type: script
-Agent: script.sh
-Attributes:
-  name = initscript [ primary unique ]
-  file = /etc/init.d/sshd [ unique required ]
-  service_name [ inherit("service%name") ]
-
 Resource type: service [ROOT]
 Instances: 1/1
 Agent: service.sh
 Attributes:
   name = test1 [ primary unique required ]
+  autostart = 1 [ reconfig ]
+  hardrecovery = 0 [ reconfig ]
+  exclusive = 0 [ reconfig ]
+  nfslock = 0
+  recovery = restart [ reconfig ]
+  depend_mode = hard
+  max_restarts = 0 [ reconfig ]
+  restart_expire_time = 0 [ reconfig ]
 
 Resource type: service [ROOT]
 Instances: 1/1
 Agent: service.sh
 Attributes:
   name = test2 [ primary unique required ]
-
-Resource type: ip
-Instances: 1/1
-Agent: ip.sh
-Attributes:
-  address = 192.168.1.3 [ primary unique ]
-  monitor_link = yes
-  nfslock [ inherit("service%nfslock") ]
-
-Resource type: ip
-Instances: 1/1
-Agent: ip.sh
-Attributes:
-  address = 192.168.1.4 [ primary unique ]
-  monitor_link = yes
-  nfslock [ inherit("service%nfslock") ]
-
-Resource type: fs
-Instances: 1/1
-Agent: fs.sh
-Attributes:
-  name = mount1 [ primary ]
-  mountpoint = /mnt/cluster [ unique required ]
-  device = /dev/sdb8 [ unique required ]
-  fstype = ext3
-  nfslock [ inherit("nfslock") ]
-
-Resource type: fs
-Instances: 1/1
-Agent: fs.sh
-Attributes:
-  name = mount2 [ primary ]
-  mountpoint = /mnt/cluster2 [ unique required ]
-  device = /dev/sdb9 [ unique required ]
-  fstype = ext3
-  nfslock [ inherit("nfslock") ]
+  autostart = 1 [ reconfig ]
+  hardrecovery = 0 [ reconfig ]
+  exclusive = 0 [ reconfig ]
+  nfslock = 0
+  recovery = restart [ reconfig ]
+  depend_mode = hard
+  max_restarts = 0 [ reconfig ]
+  restart_expire_time = 0 [ reconfig ]
 
 Resource type: nfsexport
 Agent: nfsexport.sh
@@ -61,7 +34,7 @@ Attributes:
   device [ inherit("device") ]
   path [ inherit("mountpoint") ]
   fsid [ inherit("fsid") ]
-  nfslock [ inherit("nfslock") ]
+  nfslock [ inherit("service%nfslock") ]
 
 Resource type: nfsclient [NEEDSTOP]
 Agent: nfsclient.sh
@@ -70,7 +43,7 @@ Attributes:
   target = @users [ required ]
   path [ inherit("path") ]
   fsid [ inherit("fsid") ]
-  nfslock [ inherit("nfsexport%nfslock") ]
+  nfslock [ inherit("service%nfslock") ]
   options = ro
 
 Resource type: nfsclient
@@ -80,7 +53,7 @@ Attributes:
   target = @admin [ required ]
   path [ inherit("path") ]
   fsid [ inherit("fsid") ]
-  nfslock [ inherit("nfsexport%nfslock") ]
+  nfslock [ inherit("service%nfslock") ]
   options = rw
 
 Resource type: nfsclient
@@ -90,7 +63,7 @@ Attributes:
   target = yellow [ required ]
   path [ inherit("path") ]
   fsid [ inherit("fsid") ]
-  nfslock [ inherit("nfsexport%nfslock") ]
+  nfslock [ inherit("service%nfslock") ]
   options = rw,no_root_squash
 
 Resource type: nfsclient
@@ -100,7 +73,7 @@ Attributes:
   target = magenta [ required ]
   path [ inherit("path") ]
   fsid [ inherit("fsid") ]
-  nfslock [ inherit("nfsexport%nfslock") ]
+  nfslock [ inherit("service%nfslock") ]
   options = rw,no_root_squash
 
 Resource type: nfsclient
@@ -110,28 +83,28 @@ Attributes:
   target = red [ required ]
   path [ inherit("path") ]
   fsid [ inherit("fsid") ]
-  nfslock [ inherit("nfsexport%nfslock") ]
+  nfslock [ inherit("service%nfslock") ]
   options = rw
 
-=== New Resource List ===
-Resource type: script
-Agent: script.sh
-Attributes:
-  name = initscript [ primary unique ]
-  file = /etc/init.d/sshd [ unique required ]
-  service_name [ inherit("service%name") ]
-
-Resource type: service [ROOT]
+Resource type: fs
 Instances: 1/1
-Agent: service.sh
+Agent: fs.sh
 Attributes:
-  name = test1 [ primary unique required ]
+  name = mount1 [ primary ]
+  mountpoint = /mnt/cluster [ unique required ]
+  device = /dev/sdb8 [ unique required ]
+  fstype = ext3
+  nfslock [ inherit("service%nfslock") ]
 
-Resource type: service [ROOT]
+Resource type: fs
 Instances: 1/1
-Agent: service.sh
+Agent: fs.sh
 Attributes:
-  name = test2 [ primary unique required ]
+  name = mount2 [ primary ]
+  mountpoint = /mnt/cluster2 [ unique required ]
+  device = /dev/sdb9 [ unique required ]
+  fstype = ext3
+  nfslock [ inherit("service%nfslock") ]
 
 Resource type: ip
 Instances: 1/1
@@ -149,25 +122,41 @@ Attributes:
   monitor_link = yes
   nfslock [ inherit("service%nfslock") ]
 
-Resource type: fs
+Resource type: script
+Agent: script.sh
+Attributes:
+  name = initscript [ primary unique ]
+  file = /etc/init.d/sshd [ unique required ]
+  service_name [ inherit("service%name") ]
+
+=== New Resource List ===
+Resource type: service [ROOT]
 Instances: 1/1
-Agent: fs.sh
+Agent: service.sh
 Attributes:
-  name = mount1 [ primary ]
-  mountpoint = /mnt/cluster [ unique required ]
-  device = /dev/sdb8 [ unique required ]
-  fstype = ext3
-  nfslock [ inherit("nfslock") ]
+  name = test1 [ primary unique required ]
+  autostart = 1 [ reconfig ]
+  hardrecovery = 0 [ reconfig ]
+  exclusive = 0 [ reconfig ]
+  nfslock = 0
+  recovery = restart [ reconfig ]
+  depend_mode = hard
+  max_restarts = 0 [ reconfig ]
+  restart_expire_time = 0 [ reconfig ]
 
-Resource type: fs
+Resource type: service [ROOT]
 Instances: 1/1
-Agent: fs.sh
+Agent: service.sh
 Attributes:
-  name = mount2 [ primary ]
-  mountpoint = /mnt/cluster2 [ unique required ]
-  device = /dev/sdb9 [ unique required ]
-  fstype = ext3
-  nfslock [ inherit("nfslock") ]
+  name = test2 [ primary unique required ]
+  autostart = 1 [ reconfig ]
+  hardrecovery = 0 [ reconfig ]
+  exclusive = 0 [ reconfig ]
+  nfslock = 0
+  recovery = restart [ reconfig ]
+  depend_mode = hard
+  max_restarts = 0 [ reconfig ]
+  restart_expire_time = 0 [ reconfig ]
 
 Resource type: nfsexport
 Agent: nfsexport.sh
@@ -176,7 +165,7 @@ Attributes:
   device [ inherit("device") ]
   path [ inherit("mountpoint") ]
   fsid [ inherit("fsid") ]
-  nfslock [ inherit("nfslock") ]
+  nfslock [ inherit("service%nfslock") ]
 
 Resource type: nfsclient [NEEDSTART]
 Agent: nfsclient.sh
@@ -185,7 +174,7 @@ Attributes:
   target = @users [ required ]
   path [ inherit("path") ]
   fsid [ inherit("fsid") ]
-  nfslock [ inherit("nfsexport%nfslock") ]
+  nfslock [ inherit("service%nfslock") ]
   options = rw,sync
 
 Resource type: nfsclient
@@ -195,7 +184,7 @@ Attributes:
   target = @admin [ required ]
   path [ inherit("path") ]
   fsid [ inherit("fsid") ]
-  nfslock [ inherit("nfsexport%nfslock") ]
+  nfslock [ inherit("service%nfslock") ]
   options = rw
 
 Resource type: nfsclient
@@ -205,7 +194,7 @@ Attributes:
   target = yellow [ required ]
   path [ inherit("path") ]
   fsid [ inherit("fsid") ]
-  nfslock [ inherit("nfsexport%nfslock") ]
+  nfslock [ inherit("service%nfslock") ]
   options = rw,no_root_squash
 
 Resource type: nfsclient
@@ -215,7 +204,7 @@ Attributes:
   target = magenta [ required ]
   path [ inherit("path") ]
   fsid [ inherit("fsid") ]
-  nfslock [ inherit("nfsexport%nfslock") ]
+  nfslock [ inherit("service%nfslock") ]
   options = rw,no_root_squash
 
 Resource type: nfsclient
@@ -225,37 +214,93 @@ Attributes:
   target = red [ required ]
   path [ inherit("path") ]
   fsid [ inherit("fsid") ]
-  nfslock [ inherit("nfsexport%nfslock") ]
+  nfslock [ inherit("service%nfslock") ]
   options = rw
 
+Resource type: fs
+Instances: 1/1
+Agent: fs.sh
+Attributes:
+  name = mount1 [ primary ]
+  mountpoint = /mnt/cluster [ unique required ]
+  device = /dev/sdb8 [ unique required ]
+  fstype = ext3
+  nfslock [ inherit("service%nfslock") ]
+
+Resource type: fs
+Instances: 1/1
+Agent: fs.sh
+Attributes:
+  name = mount2 [ primary ]
+  mountpoint = /mnt/cluster2 [ unique required ]
+  device = /dev/sdb9 [ unique required ]
+  fstype = ext3
+  nfslock [ inherit("service%nfslock") ]
+
+Resource type: ip
+Instances: 1/1
+Agent: ip.sh
+Attributes:
+  address = 192.168.1.3 [ primary unique ]
+  monitor_link = yes
+  nfslock [ inherit("service%nfslock") ]
+
+Resource type: ip
+Instances: 1/1
+Agent: ip.sh
+Attributes:
+  address = 192.168.1.4 [ primary unique ]
+  monitor_link = yes
+  nfslock [ inherit("service%nfslock") ]
+
+Resource type: script
+Agent: script.sh
+Attributes:
+  name = initscript [ primary unique ]
+  file = /etc/init.d/sshd [ unique required ]
+  service_name [ inherit("service%name") ]
+
 === Old Resource Tree ===
 service {
   name = "test1";
+  autostart = "1";
+  hardrecovery = "0";
+  exclusive = "0";
+  nfslock = "0";
+  recovery = "restart";
+  depend_mode = "hard";
+  max_restarts = "0";
+  restart_expire_time = "0";
   fs {
     name = "mount1";
     mountpoint = "/mnt/cluster";
     device = "/dev/sdb8";
     fstype = "ext3";
+    nfslock = "0";
     nfsexport {
       name = "Dummy Export";
       device = "/dev/sdb8";
       path = "/mnt/cluster";
+      nfslock = "0";
       nfsclient {
         name = "Admin group";
         target = "@admin";
         path = "/mnt/cluster";
+        nfslock = "0";
         options = "rw";
       }
       nfsclient [ NEEDSTOP ] {
         name = "User group";
         target = "@users";
         path = "/mnt/cluster";
+        nfslock = "0";
         options = "ro";
       }
       nfsclient {
         name = "red";
         target = "red";
         path = "/mnt/cluster";
+        nfslock = "0";
         options = "rw";
       }
     }
@@ -263,6 +308,7 @@ service {
   ip {
     address = "192.168.1.3";
     monitor_link = "yes";
+    nfslock = "0";
   }
   script {
     name = "initscript";
@@ -272,31 +318,44 @@ service {
 }
 service {
   name = "test2";
+  autostart = "1";
+  hardrecovery = "0";
+  exclusive = "0";
+  nfslock = "0";
+  recovery = "restart";
+  depend_mode = "hard";
+  max_restarts = "0";
+  restart_expire_time = "0";
   fs {
     name = "mount2";
     mountpoint = "/mnt/cluster2";
     device = "/dev/sdb9";
     fstype = "ext3";
+    nfslock = "0";
     nfsexport {
       name = "Dummy Export";
       device = "/dev/sdb9";
       path = "/mnt/cluster2";
+      nfslock = "0";
       nfsclient {
         name = "Admin group";
         target = "@admin";
         path = "/mnt/cluster2";
+        nfslock = "0";
         options = "rw";
       }
       nfsclient [ NEEDSTOP ] {
         name = "User group";
         target = "@users";
         path = "/mnt/cluster2";
+        nfslock = "0";
         options = "ro";
       }
       nfsclient {
         name = "red";
         target = "red";
         path = "/mnt/cluster2";
+        nfslock = "0";
         options = "rw";
       }
     }
@@ -304,6 +363,7 @@ service {
   ip {
     address = "192.168.1.4";
     monitor_link = "yes";
+    nfslock = "0";
   }
   script {
     name = "initscript";
@@ -314,31 +374,44 @@ service {
 === New Resource Tree ===
 service {
   name = "test1";
+  autostart = "1";
+  hardrecovery = "0";
+  exclusive = "0";
+  nfslock = "0";
+  recovery = "restart";
+  depend_mode = "hard";
+  max_restarts = "0";
+  restart_expire_time = "0";
   fs {
     name = "mount1";
     mountpoint = "/mnt/cluster";
     device = "/dev/sdb8";
     fstype = "ext3";
+    nfslock = "0";
     nfsexport {
       name = "Dummy Export";
       device = "/dev/sdb8";
       path = "/mnt/cluster";
+      nfslock = "0";
       nfsclient {
         name = "Admin group";
         target = "@admin";
         path = "/mnt/cluster";
+        nfslock = "0";
         options = "rw";
       }
       nfsclient [ NEEDSTART ] {
         name = "User group";
         target = "@users";
         path = "/mnt/cluster";
+        nfslock = "0";
         options = "rw,sync";
       }
       nfsclient {
         name = "red";
         target = "red";
         path = "/mnt/cluster";
+        nfslock = "0";
         options = "rw";
       }
     }
@@ -346,6 +419,7 @@ service {
   ip {
     address = "192.168.1.3";
     monitor_link = "yes";
+    nfslock = "0";
   }
   script {
     name = "initscript";
@@ -355,31 +429,44 @@ service {
 }
 service {
   name = "test2";
+  autostart = "1";
+  hardrecovery = "0";
+  exclusive = "0";
+  nfslock = "0";
+  recovery = "restart";
+  depend_mode = "hard";
+  max_restarts = "0";
+  restart_expire_time = "0";
   fs {
     name = "mount2";
     mountpoint = "/mnt/cluster2";
     device = "/dev/sdb9";
     fstype = "ext3";
+    nfslock = "0";
     nfsexport {
       name = "Dummy Export";
       device = "/dev/sdb9";
       path = "/mnt/cluster2";
+      nfslock = "0";
       nfsclient {
         name = "Admin group";
         target = "@admin";
         path = "/mnt/cluster2";
+        nfslock = "0";
         options = "rw";
       }
       nfsclient [ NEEDSTART ] {
         name = "User group";
         target = "@users";
         path = "/mnt/cluster2";
+        nfslock = "0";
         options = "rw,sync";
       }
       nfsclient {
         name = "red";
         target = "red";
         path = "/mnt/cluster2";
+        nfslock = "0";
         options = "rw";
       }
     }
@@ -387,6 +474,7 @@ service {
   ip {
     address = "192.168.1.4";
     monitor_link = "yes";
+    nfslock = "0";
   }
   script {
     name = "initscript";
@@ -394,3 +482,13 @@ service {
     service_name = "test2";
   }
 }
+=== Operations (down-phase) ===
+Node nfsclient:User group - CONDSTOP
+[stop] nfsclient:User group
+Node nfsclient:User group - CONDSTOP
+[stop] nfsclient:User group
+=== Operations (up-phase) ===
+Node nfsclient:User group - CONDSTART
+[start] nfsclient:User group
+Node nfsclient:User group - CONDSTART
+[start] nfsclient:User group
diff --git a/rgmanager/src/daemons/tests/delta-test015-test016.expected b/rgmanager/src/daemons/tests/delta-test015-test016.expected
index 51d32ea..7139041 100644
--- a/rgmanager/src/daemons/tests/delta-test015-test016.expected
+++ b/rgmanager/src/daemons/tests/delta-test015-test016.expected
@@ -1,59 +1,32 @@
 Warning: Max references exceeded for resource address (type ip)
 === Old Resource List ===
-Resource type: script
-Agent: script.sh
-Attributes:
-  name = initscript [ primary unique ]
-  file = /etc/init.d/sshd [ unique required ]
-  service_name [ inherit("service%name") ]
-
 Resource type: service [ROOT]
 Instances: 1/1
 Agent: service.sh
 Attributes:
   name = test1 [ primary unique required ]
+  autostart = 1 [ reconfig ]
+  hardrecovery = 0 [ reconfig ]
+  exclusive = 0 [ reconfig ]
+  nfslock = 0
+  recovery = restart [ reconfig ]
+  depend_mode = hard
+  max_restarts = 0 [ reconfig ]
+  restart_expire_time = 0 [ reconfig ]
 
 Resource type: service [ROOT]
 Instances: 1/1
 Agent: service.sh
 Attributes:
   name = test2 [ primary unique required ]
-
-Resource type: ip
-Instances: 1/1
-Agent: ip.sh
-Attributes:
-  address = 192.168.1.3 [ primary unique ]
-  monitor_link = yes
-  nfslock [ inherit("service%nfslock") ]
-
-Resource type: ip
-Instances: 1/1
-Agent: ip.sh
-Attributes:
-  address = 192.168.1.4 [ primary unique ]
-  monitor_link = yes
-  nfslock [ inherit("service%nfslock") ]
-
-Resource type: fs
-Instances: 1/1
-Agent: fs.sh
-Attributes:
-  name = mount1 [ primary ]
-  mountpoint = /mnt/cluster [ unique required ]
-  device = /dev/sdb8 [ unique required ]
-  fstype = ext3
-  nfslock [ inherit("nfslock") ]
-
-Resource type: fs
-Instances: 1/1
-Agent: fs.sh
-Attributes:
-  name = mount2 [ primary ]
-  mountpoint = /mnt/cluster2 [ unique required ]
-  device = /dev/sdb9 [ unique required ]
-  fstype = ext3
-  nfslock [ inherit("nfslock") ]
+  autostart = 1 [ reconfig ]
+  hardrecovery = 0 [ reconfig ]
+  exclusive = 0 [ reconfig ]
+  nfslock = 0
+  recovery = restart [ reconfig ]
+  depend_mode = hard
+  max_restarts = 0 [ reconfig ]
+  restart_expire_time = 0 [ reconfig ]
 
 Resource type: nfsexport
 Agent: nfsexport.sh
@@ -62,7 +35,7 @@ Attributes:
   device [ inherit("device") ]
   path [ inherit("mountpoint") ]
   fsid [ inherit("fsid") ]
-  nfslock [ inherit("nfslock") ]
+  nfslock [ inherit("service%nfslock") ]
 
 Resource type: nfsclient
 Agent: nfsclient.sh
@@ -71,7 +44,7 @@ Attributes:
   target = @users [ required ]
   path [ inherit("path") ]
   fsid [ inherit("fsid") ]
-  nfslock [ inherit("nfsexport%nfslock") ]
+  nfslock [ inherit("service%nfslock") ]
   options = rw,sync
 
 Resource type: nfsclient
@@ -81,7 +54,7 @@ Attributes:
   target = @admin [ required ]
   path [ inherit("path") ]
   fsid [ inherit("fsid") ]
-  nfslock [ inherit("nfsexport%nfslock") ]
+  nfslock [ inherit("service%nfslock") ]
   options = rw
 
 Resource type: nfsclient
@@ -91,7 +64,7 @@ Attributes:
   target = yellow [ required ]
   path [ inherit("path") ]
   fsid [ inherit("fsid") ]
-  nfslock [ inherit("nfsexport%nfslock") ]
+  nfslock [ inherit("service%nfslock") ]
   options = rw,no_root_squash
 
 Resource type: nfsclient
@@ -101,7 +74,7 @@ Attributes:
   target = magenta [ required ]
   path [ inherit("path") ]
   fsid [ inherit("fsid") ]
-  nfslock [ inherit("nfsexport%nfslock") ]
+  nfslock [ inherit("service%nfslock") ]
   options = rw,no_root_squash
 
 Resource type: nfsclient
@@ -111,28 +84,28 @@ Attributes:
   target = red [ required ]
   path [ inherit("path") ]
   fsid [ inherit("fsid") ]
-  nfslock [ inherit("nfsexport%nfslock") ]
+  nfslock [ inherit("service%nfslock") ]
   options = rw
 
-=== New Resource List ===
-Resource type: script
-Agent: script.sh
-Attributes:
-  name = initscript [ primary unique ]
-  file = /etc/init.d/sshd [ unique required ]
-  service_name [ inherit("service%name") ]
-
-Resource type: service [ROOT]
+Resource type: fs
 Instances: 1/1
-Agent: service.sh
+Agent: fs.sh
 Attributes:
-  name = test1 [ primary unique required ]
+  name = mount1 [ primary ]
+  mountpoint = /mnt/cluster [ unique required ]
+  device = /dev/sdb8 [ unique required ]
+  fstype = ext3
+  nfslock [ inherit("service%nfslock") ]
 
-Resource type: service [ROOT]
+Resource type: fs
 Instances: 1/1
-Agent: service.sh
+Agent: fs.sh
 Attributes:
-  name = test2 [ primary unique required ]
+  name = mount2 [ primary ]
+  mountpoint = /mnt/cluster2 [ unique required ]
+  device = /dev/sdb9 [ unique required ]
+  fstype = ext3
+  nfslock [ inherit("service%nfslock") ]
 
 Resource type: ip
 Instances: 1/1
@@ -150,25 +123,41 @@ Attributes:
   monitor_link = yes
   nfslock [ inherit("service%nfslock") ]
 
-Resource type: fs
+Resource type: script
+Agent: script.sh
+Attributes:
+  name = initscript [ primary unique ]
+  file = /etc/init.d/sshd [ unique required ]
+  service_name [ inherit("service%name") ]
+
+=== New Resource List ===
+Resource type: service [ROOT]
 Instances: 1/1
-Agent: fs.sh
+Agent: service.sh
 Attributes:
-  name = mount1 [ primary ]
-  mountpoint = /mnt/cluster [ unique required ]
-  device = /dev/sdb8 [ unique required ]
-  fstype = ext3
-  nfslock [ inherit("nfslock") ]
+  name = test1 [ primary unique required ]
+  autostart = 1 [ reconfig ]
+  hardrecovery = 0 [ reconfig ]
+  exclusive = 0 [ reconfig ]
+  nfslock = 0
+  recovery = restart [ reconfig ]
+  depend_mode = hard
+  max_restarts = 0 [ reconfig ]
+  restart_expire_time = 0 [ reconfig ]
 
-Resource type: fs
+Resource type: service [ROOT]
 Instances: 1/1
-Agent: fs.sh
+Agent: service.sh
 Attributes:
-  name = mount2 [ primary ]
-  mountpoint = /mnt/cluster2 [ unique required ]
-  device = /dev/sdb9 [ unique required ]
-  fstype = ext3
-  nfslock [ inherit("nfslock") ]
+  name = test2 [ primary unique required ]
+  autostart = 1 [ reconfig ]
+  hardrecovery = 0 [ reconfig ]
+  exclusive = 0 [ reconfig ]
+  nfslock = 0
+  recovery = restart [ reconfig ]
+  depend_mode = hard
+  max_restarts = 0 [ reconfig ]
+  restart_expire_time = 0 [ reconfig ]
 
 Resource type: nfsexport
 Agent: nfsexport.sh
@@ -177,7 +166,7 @@ Attributes:
   device [ inherit("device") ]
   path [ inherit("mountpoint") ]
   fsid [ inherit("fsid") ]
-  nfslock [ inherit("nfslock") ]
+  nfslock [ inherit("service%nfslock") ]
 
 Resource type: nfsclient
 Agent: nfsclient.sh
@@ -186,7 +175,7 @@ Attributes:
   target = @users [ required ]
   path [ inherit("path") ]
   fsid [ inherit("fsid") ]
-  nfslock [ inherit("nfsexport%nfslock") ]
+  nfslock [ inherit("service%nfslock") ]
   options = rw,sync
 
 Resource type: nfsclient
@@ -196,7 +185,7 @@ Attributes:
   target = @admin [ required ]
   path [ inherit("path") ]
   fsid [ inherit("fsid") ]
-  nfslock [ inherit("nfsexport%nfslock") ]
+  nfslock [ inherit("service%nfslock") ]
   options = rw
 
 Resource type: nfsclient
@@ -206,7 +195,7 @@ Attributes:
   target = yellow [ required ]
   path [ inherit("path") ]
   fsid [ inherit("fsid") ]
-  nfslock [ inherit("nfsexport%nfslock") ]
+  nfslock [ inherit("service%nfslock") ]
   options = rw,no_root_squash
 
 Resource type: nfsclient
@@ -216,7 +205,7 @@ Attributes:
   target = magenta [ required ]
   path [ inherit("path") ]
   fsid [ inherit("fsid") ]
-  nfslock [ inherit("nfsexport%nfslock") ]
+  nfslock [ inherit("service%nfslock") ]
   options = rw,no_root_squash
 
 Resource type: nfsclient
@@ -226,37 +215,93 @@ Attributes:
   target = red [ required ]
   path [ inherit("path") ]
   fsid [ inherit("fsid") ]
-  nfslock [ inherit("nfsexport%nfslock") ]
+  nfslock [ inherit("service%nfslock") ]
   options = rw
 
+Resource type: fs
+Instances: 1/1
+Agent: fs.sh
+Attributes:
+  name = mount1 [ primary ]
+  mountpoint = /mnt/cluster [ unique required ]
+  device = /dev/sdb8 [ unique required ]
+  fstype = ext3
+  nfslock [ inherit("service%nfslock") ]
+
+Resource type: fs
+Instances: 1/1
+Agent: fs.sh
+Attributes:
+  name = mount2 [ primary ]
+  mountpoint = /mnt/cluster2 [ unique required ]
+  device = /dev/sdb9 [ unique required ]
+  fstype = ext3
+  nfslock [ inherit("service%nfslock") ]
+
+Resource type: ip
+Instances: 1/1
+Agent: ip.sh
+Attributes:
+  address = 192.168.1.3 [ primary unique ]
+  monitor_link = yes
+  nfslock [ inherit("service%nfslock") ]
+
+Resource type: ip
+Instances: 1/1
+Agent: ip.sh
+Attributes:
+  address = 192.168.1.4 [ primary unique ]
+  monitor_link = yes
+  nfslock [ inherit("service%nfslock") ]
+
+Resource type: script
+Agent: script.sh
+Attributes:
+  name = initscript [ primary unique ]
+  file = /etc/init.d/sshd [ unique required ]
+  service_name [ inherit("service%name") ]
+
 === Old Resource Tree ===
 service {
   name = "test1";
+  autostart = "1";
+  hardrecovery = "0";
+  exclusive = "0";
+  nfslock = "0";
+  recovery = "restart";
+  depend_mode = "hard";
+  max_restarts = "0";
+  restart_expire_time = "0";
   fs {
     name = "mount1";
     mountpoint = "/mnt/cluster";
     device = "/dev/sdb8";
     fstype = "ext3";
+    nfslock = "0";
     nfsexport {
       name = "Dummy Export";
       device = "/dev/sdb8";
       path = "/mnt/cluster";
+      nfslock = "0";
       nfsclient {
         name = "Admin group";
         target = "@admin";
         path = "/mnt/cluster";
+        nfslock = "0";
         options = "rw";
       }
       nfsclient {
         name = "User group";
         target = "@users";
         path = "/mnt/cluster";
+        nfslock = "0";
         options = "rw,sync";
       }
       nfsclient {
         name = "red";
         target = "red";
         path = "/mnt/cluster";
+        nfslock = "0";
         options = "rw";
       }
     }
@@ -264,6 +309,7 @@ service {
   ip {
     address = "192.168.1.3";
     monitor_link = "yes";
+    nfslock = "0";
   }
   script {
     name = "initscript";
@@ -273,31 +319,44 @@ service {
 }
 service {
   name = "test2";
+  autostart = "1";
+  hardrecovery = "0";
+  exclusive = "0";
+  nfslock = "0";
+  recovery = "restart";
+  depend_mode = "hard";
+  max_restarts = "0";
+  restart_expire_time = "0";
   fs {
     name = "mount2";
     mountpoint = "/mnt/cluster2";
     device = "/dev/sdb9";
     fstype = "ext3";
+    nfslock = "0";
     nfsexport {
       name = "Dummy Export";
       device = "/dev/sdb9";
       path = "/mnt/cluster2";
+      nfslock = "0";
       nfsclient {
         name = "Admin group";
         target = "@admin";
         path = "/mnt/cluster2";
+        nfslock = "0";
         options = "rw";
       }
       nfsclient {
         name = "User group";
         target = "@users";
         path = "/mnt/cluster2";
+        nfslock = "0";
         options = "rw,sync";
       }
       nfsclient {
         name = "red";
         target = "red";
         path = "/mnt/cluster2";
+        nfslock = "0";
         options = "rw";
       }
     }
@@ -305,6 +364,7 @@ service {
   ip {
     address = "192.168.1.4";
     monitor_link = "yes";
+    nfslock = "0";
   }
   script {
     name = "initscript";
@@ -315,31 +375,44 @@ service {
 === New Resource Tree ===
 service {
   name = "test1";
+  autostart = "1";
+  hardrecovery = "0";
+  exclusive = "0";
+  nfslock = "0";
+  recovery = "restart";
+  depend_mode = "hard";
+  max_restarts = "0";
+  restart_expire_time = "0";
   fs {
     name = "mount1";
     mountpoint = "/mnt/cluster";
     device = "/dev/sdb8";
     fstype = "ext3";
+    nfslock = "0";
     nfsexport {
       name = "Dummy Export";
       device = "/dev/sdb8";
       path = "/mnt/cluster";
+      nfslock = "0";
       nfsclient {
         name = "Admin group";
         target = "@admin";
         path = "/mnt/cluster";
+        nfslock = "0";
         options = "rw";
       }
       nfsclient {
         name = "User group";
         target = "@users";
         path = "/mnt/cluster";
+        nfslock = "0";
         options = "rw,sync";
       }
       nfsclient {
         name = "red";
         target = "red";
         path = "/mnt/cluster";
+        nfslock = "0";
         options = "rw";
       }
     }
@@ -347,6 +420,7 @@ service {
   ip {
     address = "192.168.1.3";
     monitor_link = "yes";
+    nfslock = "0";
   }
   script {
     name = "initscript";
@@ -356,31 +430,44 @@ service {
 }
 service {
   name = "test2";
+  autostart = "1";
+  hardrecovery = "0";
+  exclusive = "0";
+  nfslock = "0";
+  recovery = "restart";
+  depend_mode = "hard";
+  max_restarts = "0";
+  restart_expire_time = "0";
   fs {
     name = "mount2";
     mountpoint = "/mnt/cluster2";
     device = "/dev/sdb9";
     fstype = "ext3";
+    nfslock = "0";
     nfsexport {
       name = "Dummy Export";
       device = "/dev/sdb9";
       path = "/mnt/cluster2";
+      nfslock = "0";
       nfsclient {
         name = "Admin group";
         target = "@admin";
         path = "/mnt/cluster2";
+        nfslock = "0";
         options = "rw";
       }
       nfsclient {
         name = "User group";
         target = "@users";
         path = "/mnt/cluster2";
+        nfslock = "0";
         options = "rw,sync";
       }
       nfsclient {
         name = "red";
         target = "red";
         path = "/mnt/cluster2";
+        nfslock = "0";
         options = "rw";
       }
     }
@@ -388,6 +475,7 @@ service {
   ip {
     address = "192.168.1.4";
     monitor_link = "yes";
+    nfslock = "0";
   }
   script {
     name = "initscript";
@@ -395,3 +483,5 @@ service {
     service_name = "test2";
   }
 }
+=== Operations (down-phase) ===
+=== Operations (up-phase) ===
diff --git a/rgmanager/src/daemons/tests/delta-test016-test017.expected b/rgmanager/src/daemons/tests/delta-test016-test017.expected
index 6889295..0b5723b 100644
--- a/rgmanager/src/daemons/tests/delta-test016-test017.expected
+++ b/rgmanager/src/daemons/tests/delta-test016-test017.expected
@@ -1,59 +1,32 @@
 Warning: Max references exceeded for resource address (type ip)
 === Old Resource List ===
-Resource type: script
-Agent: script.sh
-Attributes:
-  name = initscript [ primary unique ]
-  file = /etc/init.d/sshd [ unique required ]
-  service_name [ inherit("service%name") ]
-
 Resource type: service [ROOT]
 Instances: 1/1
 Agent: service.sh
 Attributes:
   name = test1 [ primary unique required ]
+  autostart = 1 [ reconfig ]
+  hardrecovery = 0 [ reconfig ]
+  exclusive = 0 [ reconfig ]
+  nfslock = 0
+  recovery = restart [ reconfig ]
+  depend_mode = hard
+  max_restarts = 0 [ reconfig ]
+  restart_expire_time = 0 [ reconfig ]
 
 Resource type: service [ROOT]
 Instances: 1/1
 Agent: service.sh
 Attributes:
   name = test2 [ primary unique required ]
-
-Resource type: ip
-Instances: 1/1
-Agent: ip.sh
-Attributes:
-  address = 192.168.1.3 [ primary unique ]
-  monitor_link = yes
-  nfslock [ inherit("service%nfslock") ]
-
-Resource type: ip
-Instances: 1/1
-Agent: ip.sh
-Attributes:
-  address = 192.168.1.4 [ primary unique ]
-  monitor_link = yes
-  nfslock [ inherit("service%nfslock") ]
-
-Resource type: fs
-Instances: 1/1
-Agent: fs.sh
-Attributes:
-  name = mount1 [ primary ]
-  mountpoint = /mnt/cluster [ unique required ]
-  device = /dev/sdb8 [ unique required ]
-  fstype = ext3
-  nfslock [ inherit("nfslock") ]
-
-Resource type: fs
-Instances: 1/1
-Agent: fs.sh
-Attributes:
-  name = mount2 [ primary ]
-  mountpoint = /mnt/cluster2 [ unique required ]
-  device = /dev/sdb9 [ unique required ]
-  fstype = ext3
-  nfslock [ inherit("nfslock") ]
+  autostart = 1 [ reconfig ]
+  hardrecovery = 0 [ reconfig ]
+  exclusive = 0 [ reconfig ]
+  nfslock = 0
+  recovery = restart [ reconfig ]
+  depend_mode = hard
+  max_restarts = 0 [ reconfig ]
+  restart_expire_time = 0 [ reconfig ]
 
 Resource type: nfsexport
 Agent: nfsexport.sh
@@ -62,7 +35,7 @@ Attributes:
   device [ inherit("device") ]
   path [ inherit("mountpoint") ]
   fsid [ inherit("fsid") ]
-  nfslock [ inherit("nfslock") ]
+  nfslock [ inherit("service%nfslock") ]
 
 Resource type: nfsclient
 Agent: nfsclient.sh
@@ -71,7 +44,7 @@ Attributes:
   target = @users [ required ]
   path [ inherit("path") ]
   fsid [ inherit("fsid") ]
-  nfslock [ inherit("nfsexport%nfslock") ]
+  nfslock [ inherit("service%nfslock") ]
   options = rw,sync
 
 Resource type: nfsclient
@@ -81,7 +54,7 @@ Attributes:
   target = @admin [ required ]
   path [ inherit("path") ]
   fsid [ inherit("fsid") ]
-  nfslock [ inherit("nfsexport%nfslock") ]
+  nfslock [ inherit("service%nfslock") ]
   options = rw
 
 Resource type: nfsclient
@@ -91,7 +64,7 @@ Attributes:
   target = yellow [ required ]
   path [ inherit("path") ]
   fsid [ inherit("fsid") ]
-  nfslock [ inherit("nfsexport%nfslock") ]
+  nfslock [ inherit("service%nfslock") ]
   options = rw,no_root_squash
 
 Resource type: nfsclient
@@ -101,7 +74,7 @@ Attributes:
   target = magenta [ required ]
   path [ inherit("path") ]
   fsid [ inherit("fsid") ]
-  nfslock [ inherit("nfsexport%nfslock") ]
+  nfslock [ inherit("service%nfslock") ]
   options = rw,no_root_squash
 
 Resource type: nfsclient
@@ -111,42 +84,28 @@ Attributes:
   target = red [ required ]
   path [ inherit("path") ]
   fsid [ inherit("fsid") ]
-  nfslock [ inherit("nfsexport%nfslock") ]
+  nfslock [ inherit("service%nfslock") ]
   options = rw
 
-=== New Resource List ===
-Resource type: script
-Agent: script.sh
-Attributes:
-  name = initscript [ primary unique ]
-  file = /etc/init.d/sshd [ unique required ]
-  service_name [ inherit("service%name") ]
-
-Resource type: script [NEEDSTART]
-Agent: script.sh
-Attributes:
-  name = script2 [ primary unique ]
-  file = /etc/init.d/script2 [ unique required ]
-  service_name [ inherit("service%name") ]
-
-Resource type: script [NEEDSTART]
-Agent: script.sh
-Attributes:
-  name = script3 [ primary unique ]
-  file = /etc/init.d/script3 [ unique required ]
-  service_name [ inherit("service%name") ]
-
-Resource type: service [ROOT]
+Resource type: fs
 Instances: 1/1
-Agent: service.sh
+Agent: fs.sh
 Attributes:
-  name = test1 [ primary unique required ]
+  name = mount1 [ primary ]
+  mountpoint = /mnt/cluster [ unique required ]
+  device = /dev/sdb8 [ unique required ]
+  fstype = ext3
+  nfslock [ inherit("service%nfslock") ]
 
-Resource type: service [ROOT]
+Resource type: fs
 Instances: 1/1
-Agent: service.sh
+Agent: fs.sh
 Attributes:
-  name = test2 [ primary unique required ]
+  name = mount2 [ primary ]
+  mountpoint = /mnt/cluster2 [ unique required ]
+  device = /dev/sdb9 [ unique required ]
+  fstype = ext3
+  nfslock [ inherit("service%nfslock") ]
 
 Resource type: ip
 Instances: 1/1
@@ -164,25 +123,41 @@ Attributes:
   monitor_link = yes
   nfslock [ inherit("service%nfslock") ]
 
-Resource type: fs
+Resource type: script
+Agent: script.sh
+Attributes:
+  name = initscript [ primary unique ]
+  file = /etc/init.d/sshd [ unique required ]
+  service_name [ inherit("service%name") ]
+
+=== New Resource List ===
+Resource type: service [ROOT]
 Instances: 1/1
-Agent: fs.sh
+Agent: service.sh
 Attributes:
-  name = mount1 [ primary ]
-  mountpoint = /mnt/cluster [ unique required ]
-  device = /dev/sdb8 [ unique required ]
-  fstype = ext3
-  nfslock [ inherit("nfslock") ]
+  name = test1 [ primary unique required ]
+  autostart = 1 [ reconfig ]
+  hardrecovery = 0 [ reconfig ]
+  exclusive = 0 [ reconfig ]
+  nfslock = 0
+  recovery = restart [ reconfig ]
+  depend_mode = hard
+  max_restarts = 0 [ reconfig ]
+  restart_expire_time = 0 [ reconfig ]
 
-Resource type: fs
+Resource type: service [ROOT]
 Instances: 1/1
-Agent: fs.sh
+Agent: service.sh
 Attributes:
-  name = mount2 [ primary ]
-  mountpoint = /mnt/cluster2 [ unique required ]
-  device = /dev/sdb9 [ unique required ]
-  fstype = ext3
-  nfslock [ inherit("nfslock") ]
+  name = test2 [ primary unique required ]
+  autostart = 1 [ reconfig ]
+  hardrecovery = 0 [ reconfig ]
+  exclusive = 0 [ reconfig ]
+  nfslock = 0
+  recovery = restart [ reconfig ]
+  depend_mode = hard
+  max_restarts = 0 [ reconfig ]
+  restart_expire_time = 0 [ reconfig ]
 
 Resource type: nfsexport
 Agent: nfsexport.sh
@@ -191,7 +166,7 @@ Attributes:
   device [ inherit("device") ]
   path [ inherit("mountpoint") ]
   fsid [ inherit("fsid") ]
-  nfslock [ inherit("nfslock") ]
+  nfslock [ inherit("service%nfslock") ]
 
 Resource type: nfsclient
 Agent: nfsclient.sh
@@ -200,7 +175,7 @@ Attributes:
   target = @users [ required ]
   path [ inherit("path") ]
   fsid [ inherit("fsid") ]
-  nfslock [ inherit("nfsexport%nfslock") ]
+  nfslock [ inherit("service%nfslock") ]
   options = rw,sync
 
 Resource type: nfsclient
@@ -210,7 +185,7 @@ Attributes:
   target = @admin [ required ]
   path [ inherit("path") ]
   fsid [ inherit("fsid") ]
-  nfslock [ inherit("nfsexport%nfslock") ]
+  nfslock [ inherit("service%nfslock") ]
   options = rw
 
 Resource type: nfsclient
@@ -220,7 +195,7 @@ Attributes:
   target = yellow [ required ]
   path [ inherit("path") ]
   fsid [ inherit("fsid") ]
-  nfslock [ inherit("nfsexport%nfslock") ]
+  nfslock [ inherit("service%nfslock") ]
   options = rw,no_root_squash
 
 Resource type: nfsclient
@@ -230,7 +205,7 @@ Attributes:
   target = magenta [ required ]
   path [ inherit("path") ]
   fsid [ inherit("fsid") ]
-  nfslock [ inherit("nfsexport%nfslock") ]
+  nfslock [ inherit("service%nfslock") ]
   options = rw,no_root_squash
 
 Resource type: nfsclient
@@ -240,37 +215,107 @@ Attributes:
   target = red [ required ]
   path [ inherit("path") ]
   fsid [ inherit("fsid") ]
-  nfslock [ inherit("nfsexport%nfslock") ]
+  nfslock [ inherit("service%nfslock") ]
   options = rw
 
+Resource type: fs
+Instances: 1/1
+Agent: fs.sh
+Attributes:
+  name = mount1 [ primary ]
+  mountpoint = /mnt/cluster [ unique required ]
+  device = /dev/sdb8 [ unique required ]
+  fstype = ext3
+  nfslock [ inherit("service%nfslock") ]
+
+Resource type: fs
+Instances: 1/1
+Agent: fs.sh
+Attributes:
+  name = mount2 [ primary ]
+  mountpoint = /mnt/cluster2 [ unique required ]
+  device = /dev/sdb9 [ unique required ]
+  fstype = ext3
+  nfslock [ inherit("service%nfslock") ]
+
+Resource type: ip
+Instances: 1/1
+Agent: ip.sh
+Attributes:
+  address = 192.168.1.3 [ primary unique ]
+  monitor_link = yes
+  nfslock [ inherit("service%nfslock") ]
+
+Resource type: ip
+Instances: 1/1
+Agent: ip.sh
+Attributes:
+  address = 192.168.1.4 [ primary unique ]
+  monitor_link = yes
+  nfslock [ inherit("service%nfslock") ]
+
+Resource type: script
+Agent: script.sh
+Attributes:
+  name = initscript [ primary unique ]
+  file = /etc/init.d/sshd [ unique required ]
+  service_name [ inherit("service%name") ]
+
+Resource type: script [NEEDSTART]
+Agent: script.sh
+Attributes:
+  name = script2 [ primary unique ]
+  file = /etc/init.d/script2 [ unique required ]
+  service_name [ inherit("service%name") ]
+
+Resource type: script [NEEDSTART]
+Agent: script.sh
+Attributes:
+  name = script3 [ primary unique ]
+  file = /etc/init.d/script3 [ unique required ]
+  service_name [ inherit("service%name") ]
+
 === Old Resource Tree ===
 service {
   name = "test1";
+  autostart = "1";
+  hardrecovery = "0";
+  exclusive = "0";
+  nfslock = "0";
+  recovery = "restart";
+  depend_mode = "hard";
+  max_restarts = "0";
+  restart_expire_time = "0";
   fs {
     name = "mount1";
     mountpoint = "/mnt/cluster";
     device = "/dev/sdb8";
     fstype = "ext3";
+    nfslock = "0";
     nfsexport {
       name = "Dummy Export";
       device = "/dev/sdb8";
       path = "/mnt/cluster";
+      nfslock = "0";
       nfsclient {
         name = "Admin group";
         target = "@admin";
         path = "/mnt/cluster";
+        nfslock = "0";
         options = "rw";
       }
       nfsclient {
         name = "User group";
         target = "@users";
         path = "/mnt/cluster";
+        nfslock = "0";
         options = "rw,sync";
       }
       nfsclient {
         name = "red";
         target = "red";
         path = "/mnt/cluster";
+        nfslock = "0";
         options = "rw";
       }
     }
@@ -278,6 +323,7 @@ service {
   ip [ NEEDSTOP ] {
     address = "192.168.1.3";
     monitor_link = "yes";
+    nfslock = "0";
   }
   script {
     name = "initscript";
@@ -287,31 +333,44 @@ service {
 }
 service {
   name = "test2";
+  autostart = "1";
+  hardrecovery = "0";
+  exclusive = "0";
+  nfslock = "0";
+  recovery = "restart";
+  depend_mode = "hard";
+  max_restarts = "0";
+  restart_expire_time = "0";
   fs [ NEEDSTOP ] {
     name = "mount2";
     mountpoint = "/mnt/cluster2";
     device = "/dev/sdb9";
     fstype = "ext3";
+    nfslock = "0";
     nfsexport {
       name = "Dummy Export";
       device = "/dev/sdb9";
       path = "/mnt/cluster2";
+      nfslock = "0";
       nfsclient {
         name = "Admin group";
         target = "@admin";
         path = "/mnt/cluster2";
+        nfslock = "0";
         options = "rw";
       }
       nfsclient {
         name = "User group";
         target = "@users";
         path = "/mnt/cluster2";
+        nfslock = "0";
         options = "rw,sync";
       }
       nfsclient {
         name = "red";
         target = "red";
         path = "/mnt/cluster2";
+        nfslock = "0";
         options = "rw";
       }
     }
@@ -319,6 +378,7 @@ service {
   ip [ NEEDSTOP ] {
     address = "192.168.1.4";
     monitor_link = "yes";
+    nfslock = "0";
   }
   script {
     name = "initscript";
@@ -329,31 +389,44 @@ service {
 === New Resource Tree ===
 service {
   name = "test1";
+  autostart = "1";
+  hardrecovery = "0";
+  exclusive = "0";
+  nfslock = "0";
+  recovery = "restart";
+  depend_mode = "hard";
+  max_restarts = "0";
+  restart_expire_time = "0";
   fs {
     name = "mount1";
     mountpoint = "/mnt/cluster";
     device = "/dev/sdb8";
     fstype = "ext3";
+    nfslock = "0";
     nfsexport {
       name = "Dummy Export";
       device = "/dev/sdb8";
       path = "/mnt/cluster";
+      nfslock = "0";
       nfsclient {
         name = "Admin group";
         target = "@admin";
         path = "/mnt/cluster";
+        nfslock = "0";
         options = "rw";
       }
       nfsclient {
         name = "User group";
         target = "@users";
         path = "/mnt/cluster";
+        nfslock = "0";
         options = "rw,sync";
       }
       nfsclient {
         name = "red";
         target = "red";
         path = "/mnt/cluster";
+        nfslock = "0";
         options = "rw";
       }
     }
@@ -366,6 +439,14 @@ service {
 }
 service {
   name = "test2";
+  autostart = "1";
+  hardrecovery = "0";
+  exclusive = "0";
+  nfslock = "0";
+  recovery = "restart";
+  depend_mode = "hard";
+  max_restarts = "0";
+  restart_expire_time = "0";
   script {
     name = "initscript";
     file = "/etc/init.d/sshd";
@@ -373,32 +454,38 @@ service {
     ip [ NEEDSTART ] {
       address = "192.168.1.3";
       monitor_link = "yes";
+      nfslock = "0";
     }
     fs [ NEEDSTART ] {
       name = "mount2";
       mountpoint = "/mnt/cluster2";
       device = "/dev/sdb9";
       fstype = "ext3";
+      nfslock = "0";
       nfsexport {
         name = "Dummy Export";
         device = "/dev/sdb9";
         path = "/mnt/cluster2";
+        nfslock = "0";
         nfsclient {
           name = "Admin group";
           target = "@admin";
           path = "/mnt/cluster2";
+          nfslock = "0";
           options = "rw";
         }
         nfsclient {
           name = "User group";
           target = "@users";
           path = "/mnt/cluster2";
+          nfslock = "0";
           options = "rw,sync";
         }
         nfsclient {
           name = "red";
           target = "red";
           path = "/mnt/cluster2";
+          nfslock = "0";
           options = "rw";
         }
       }
@@ -406,10 +493,12 @@ service {
     script [ NEEDSTART ] {
       name = "script2";
       file = "/etc/init.d/script2";
+      service_name = "test2";
     }
     ip [ NEEDSTART ] {
       address = "192.168.1.4";
       monitor_link = "yes";
+      nfslock = "0";
     }
   }
   script [ NEEDSTART ] {
@@ -418,3 +507,29 @@ service {
     service_name = "test2";
   }
 }
+=== Operations (down-phase) ===
+Node ip:192.168.1.3 - CONDSTOP
+[stop] ip:192.168.1.3
+Node fs:mount2 - CONDSTOP
+[stop] nfsclient:red
+[stop] nfsclient:User group
+[stop] nfsclient:Admin group
+[stop] nfsexport:Dummy Export
+[stop] fs:mount2
+Node ip:192.168.1.4 - CONDSTOP
+[stop] ip:192.168.1.4
+=== Operations (up-phase) ===
+Node ip:192.168.1.3 - CONDSTART
+[start] ip:192.168.1.3
+Node fs:mount2 - CONDSTART
+[start] fs:mount2
+[start] nfsexport:Dummy Export
+[start] nfsclient:Admin group
+[start] nfsclient:User group
+[start] nfsclient:red
+Node script:script2 - CONDSTART
+[start] script:script2
+Node ip:192.168.1.4 - CONDSTART
+[start] ip:192.168.1.4
+Node script:script3 - CONDSTART
+[start] script:script3
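The delta expectations now spell out the conditional recovery plan, and the
ordering matters: the down-phase stops each changed subtree leaf-first (the
nfsclients, then nfsexport:Dummy Export, then fs:mount2), while the up-phase
restarts it root-first. When a deliberate behavior change alters this plan,
the expectation can be regenerated with the same invocation runtests.sh uses
(a sketch, assuming it is run from rgmanager/src/daemons/tests):

  # Review the output before committing it as the new .expected file.
  ../rg_test ../../resources delta test016.conf test017.conf \
      > delta-test016-test017.expected
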
diff --git a/rgmanager/src/daemons/tests/delta-test017-test018.expected b/rgmanager/src/daemons/tests/delta-test017-test018.expected
new file mode 100644
index 0000000..6670d36
--- /dev/null
+++ b/rgmanager/src/daemons/tests/delta-test017-test018.expected
@@ -0,0 +1,558 @@
+=== Old Resource List ===
+Resource type: service [ROOT]
+Instances: 1/1
+Agent: service.sh
+Attributes:
+  name = test1 [ primary unique required ]
+  autostart = 1 [ reconfig ]
+  hardrecovery = 0 [ reconfig ]
+  exclusive = 0 [ reconfig ]
+  nfslock = 0
+  recovery = restart [ reconfig ]
+  depend_mode = hard
+  max_restarts = 0 [ reconfig ]
+  restart_expire_time = 0 [ reconfig ]
+
+Resource type: service [ROOT]
+Instances: 1/1
+Agent: service.sh
+Attributes:
+  name = test2 [ primary unique required ]
+  autostart = 1 [ reconfig ]
+  hardrecovery = 0 [ reconfig ]
+  exclusive = 0 [ reconfig ]
+  nfslock = 0
+  recovery = restart [ reconfig ]
+  depend_mode = hard
+  max_restarts = 0 [ reconfig ]
+  restart_expire_time = 0 [ reconfig ]
+
+Resource type: nfsexport
+Agent: nfsexport.sh
+Attributes:
+  name = Dummy Export [ primary ]
+  device [ inherit("device") ]
+  path [ inherit("mountpoint") ]
+  fsid [ inherit("fsid") ]
+  nfslock [ inherit("service%nfslock") ]
+
+Resource type: nfsclient
+Agent: nfsclient.sh
+Attributes:
+  name = User group [ primary unique ]
+  target = @users [ required ]
+  path [ inherit("path") ]
+  fsid [ inherit("fsid") ]
+  nfslock [ inherit("service%nfslock") ]
+  options = rw,sync
+
+Resource type: nfsclient
+Agent: nfsclient.sh
+Attributes:
+  name = Admin group [ primary unique ]
+  target = @admin [ required ]
+  path [ inherit("path") ]
+  fsid [ inherit("fsid") ]
+  nfslock [ inherit("service%nfslock") ]
+  options = rw
+
+Resource type: nfsclient
+Agent: nfsclient.sh
+Attributes:
+  name = yellow [ primary unique ]
+  target = yellow [ required ]
+  path [ inherit("path") ]
+  fsid [ inherit("fsid") ]
+  nfslock [ inherit("service%nfslock") ]
+  options = rw,no_root_squash
+
+Resource type: nfsclient
+Agent: nfsclient.sh
+Attributes:
+  name = magenta [ primary unique ]
+  target = magenta [ required ]
+  path [ inherit("path") ]
+  fsid [ inherit("fsid") ]
+  nfslock [ inherit("service%nfslock") ]
+  options = rw,no_root_squash
+
+Resource type: nfsclient
+Agent: nfsclient.sh
+Attributes:
+  name = red [ primary unique ]
+  target = red [ required ]
+  path [ inherit("path") ]
+  fsid [ inherit("fsid") ]
+  nfslock [ inherit("service%nfslock") ]
+  options = rw
+
+Resource type: fs
+Instances: 1/1
+Agent: fs.sh
+Attributes:
+  name = mount1 [ primary ]
+  mountpoint = /mnt/cluster [ unique required ]
+  device = /dev/sdb8 [ unique required ]
+  fstype = ext3
+  nfslock [ inherit("service%nfslock") ]
+
+Resource type: fs
+Instances: 1/1
+Agent: fs.sh
+Attributes:
+  name = mount2 [ primary ]
+  mountpoint = /mnt/cluster2 [ unique required ]
+  device = /dev/sdb9 [ unique required ]
+  fstype = ext3
+  nfslock [ inherit("service%nfslock") ]
+
+Resource type: ip
+Instances: 1/1
+Agent: ip.sh
+Attributes:
+  address = 192.168.1.3 [ primary unique ]
+  monitor_link = yes
+  nfslock [ inherit("service%nfslock") ]
+
+Resource type: ip
+Instances: 1/1
+Agent: ip.sh
+Attributes:
+  address = 192.168.1.4 [ primary unique ]
+  monitor_link = yes
+  nfslock [ inherit("service%nfslock") ]
+
+Resource type: script
+Agent: script.sh
+Attributes:
+  name = initscript [ primary unique ]
+  file = /etc/init.d/sshd [ unique required ]
+  service_name [ inherit("service%name") ]
+
+Resource type: script
+Agent: script.sh
+Attributes:
+  name = script2 [ primary unique ]
+  file = /etc/init.d/script2 [ unique required ]
+  service_name [ inherit("service%name") ]
+
+Resource type: script
+Agent: script.sh
+Attributes:
+  name = script3 [ primary unique ]
+  file = /etc/init.d/script3 [ unique required ]
+  service_name [ inherit("service%name") ]
+
+=== New Resource List ===
+Resource type: clusterfs [NEEDSTART]
+Agent: clusterfs.sh
+Attributes:
+  name = argle [ primary ]
+  mountpoint = /mnt/cluster3 [ unique required ]
+  device = /dev/sdb10 [ unique required ]
+  nfslock [ inherit("service%nfslock") ]
+
+Resource type: service [ROOT]
+Instances: 1/1
+Agent: service.sh
+Attributes:
+  name = test1 [ primary unique required ]
+  autostart = 1 [ reconfig ]
+  hardrecovery = 0 [ reconfig ]
+  exclusive = 0 [ reconfig ]
+  nfslock = 0
+  recovery = restart [ reconfig ]
+  depend_mode = hard
+  max_restarts = 0 [ reconfig ]
+  restart_expire_time = 0 [ reconfig ]
+
+Resource type: service [ROOT]
+Instances: 1/1
+Agent: service.sh
+Attributes:
+  name = test2 [ primary unique required ]
+  autostart = 1 [ reconfig ]
+  hardrecovery = 0 [ reconfig ]
+  exclusive = 0 [ reconfig ]
+  nfslock = 0
+  recovery = restart [ reconfig ]
+  depend_mode = hard
+  max_restarts = 0 [ reconfig ]
+  restart_expire_time = 0 [ reconfig ]
+
+Resource type: nfsexport
+Agent: nfsexport.sh
+Attributes:
+  name = Dummy Export [ primary ]
+  device [ inherit("device") ]
+  path [ inherit("mountpoint") ]
+  fsid [ inherit("fsid") ]
+  nfslock [ inherit("service%nfslock") ]
+
+Resource type: nfsclient
+Agent: nfsclient.sh
+Attributes:
+  name = User group [ primary unique ]
+  target = @users [ required ]
+  path [ inherit("path") ]
+  fsid [ inherit("fsid") ]
+  nfslock [ inherit("service%nfslock") ]
+  options = rw,sync
+
+Resource type: nfsclient
+Agent: nfsclient.sh
+Attributes:
+  name = Admin group [ primary unique ]
+  target = @admin [ required ]
+  path [ inherit("path") ]
+  fsid [ inherit("fsid") ]
+  nfslock [ inherit("service%nfslock") ]
+  options = rw
+
+Resource type: nfsclient
+Agent: nfsclient.sh
+Attributes:
+  name = yellow [ primary unique ]
+  target = yellow [ required ]
+  path [ inherit("path") ]
+  fsid [ inherit("fsid") ]
+  nfslock [ inherit("service%nfslock") ]
+  options = rw,no_root_squash
+
+Resource type: nfsclient
+Agent: nfsclient.sh
+Attributes:
+  name = magenta [ primary unique ]
+  target = magenta [ required ]
+  path [ inherit("path") ]
+  fsid [ inherit("fsid") ]
+  nfslock [ inherit("service%nfslock") ]
+  options = rw,no_root_squash
+
+Resource type: nfsclient
+Agent: nfsclient.sh
+Attributes:
+  name = red [ primary unique ]
+  target = red [ required ]
+  path [ inherit("path") ]
+  fsid [ inherit("fsid") ]
+  nfslock [ inherit("service%nfslock") ]
+  options = rw
+
+Resource type: fs
+Instances: 1/1
+Agent: fs.sh
+Attributes:
+  name = mount1 [ primary ]
+  mountpoint = /mnt/cluster [ unique required ]
+  device = /dev/sdb8 [ unique required ]
+  fstype = ext3
+  nfslock [ inherit("service%nfslock") ]
+
+Resource type: fs
+Instances: 1/1
+Agent: fs.sh
+Attributes:
+  name = mount2 [ primary ]
+  mountpoint = /mnt/cluster2 [ unique required ]
+  device = /dev/sdb9 [ unique required ]
+  fstype = ext3
+  nfslock [ inherit("service%nfslock") ]
+
+Resource type: ip
+Instances: 1/1
+Agent: ip.sh
+Attributes:
+  address = 192.168.1.3 [ primary unique ]
+  monitor_link = yes
+  nfslock [ inherit("service%nfslock") ]
+
+Resource type: ip
+Instances: 1/1
+Agent: ip.sh
+Attributes:
+  address = 192.168.1.4 [ primary unique ]
+  monitor_link = yes
+  nfslock [ inherit("service%nfslock") ]
+
+Resource type: script
+Agent: script.sh
+Attributes:
+  name = initscript [ primary unique ]
+  file = /etc/init.d/sshd [ unique required ]
+  service_name [ inherit("service%name") ]
+
+Resource type: script
+Agent: script.sh
+Attributes:
+  name = script2 [ primary unique ]
+  file = /etc/init.d/script2 [ unique required ]
+  service_name [ inherit("service%name") ]
+
+Resource type: script
+Agent: script.sh
+Attributes:
+  name = script3 [ primary unique ]
+  file = /etc/init.d/script3 [ unique required ]
+  service_name [ inherit("service%name") ]
+
+=== Old Resource Tree ===
+service {
+  name = "test1";
+  autostart = "1";
+  hardrecovery = "0";
+  exclusive = "0";
+  nfslock = "0";
+  recovery = "restart";
+  depend_mode = "hard";
+  max_restarts = "0";
+  restart_expire_time = "0";
+  fs {
+    name = "mount1";
+    mountpoint = "/mnt/cluster";
+    device = "/dev/sdb8";
+    fstype = "ext3";
+    nfslock = "0";
+    nfsexport {
+      name = "Dummy Export";
+      device = "/dev/sdb8";
+      path = "/mnt/cluster";
+      nfslock = "0";
+      nfsclient {
+        name = "Admin group";
+        target = "@admin";
+        path = "/mnt/cluster";
+        nfslock = "0";
+        options = "rw";
+      }
+      nfsclient {
+        name = "User group";
+        target = "@users";
+        path = "/mnt/cluster";
+        nfslock = "0";
+        options = "rw,sync";
+      }
+      nfsclient {
+        name = "red";
+        target = "red";
+        path = "/mnt/cluster";
+        nfslock = "0";
+        options = "rw";
+      }
+    }
+  }
+  script {
+    name = "initscript";
+    file = "/etc/init.d/sshd";
+    service_name = "test1";
+  }
+}
+service {
+  name = "test2";
+  autostart = "1";
+  hardrecovery = "0";
+  exclusive = "0";
+  nfslock = "0";
+  recovery = "restart";
+  depend_mode = "hard";
+  max_restarts = "0";
+  restart_expire_time = "0";
+  script {
+    name = "initscript";
+    file = "/etc/init.d/sshd";
+    service_name = "test2";
+    ip {
+      address = "192.168.1.3";
+      monitor_link = "yes";
+      nfslock = "0";
+    }
+    fs {
+      name = "mount2";
+      mountpoint = "/mnt/cluster2";
+      device = "/dev/sdb9";
+      fstype = "ext3";
+      nfslock = "0";
+      nfsexport {
+        name = "Dummy Export";
+        device = "/dev/sdb9";
+        path = "/mnt/cluster2";
+        nfslock = "0";
+        nfsclient {
+          name = "Admin group";
+          target = "@admin";
+          path = "/mnt/cluster2";
+          nfslock = "0";
+          options = "rw";
+        }
+        nfsclient {
+          name = "User group";
+          target = "@users";
+          path = "/mnt/cluster2";
+          nfslock = "0";
+          options = "rw,sync";
+        }
+        nfsclient {
+          name = "red";
+          target = "red";
+          path = "/mnt/cluster2";
+          nfslock = "0";
+          options = "rw";
+        }
+      }
+    }
+    script {
+      name = "script2";
+      file = "/etc/init.d/script2";
+      service_name = "test2";
+    }
+    ip {
+      address = "192.168.1.4";
+      monitor_link = "yes";
+      nfslock = "0";
+    }
+  }
+  script {
+    name = "script3";
+    file = "/etc/init.d/script3";
+    service_name = "test2";
+  }
+}
+=== New Resource Tree ===
+service {
+  name = "test1";
+  autostart = "1";
+  hardrecovery = "0";
+  exclusive = "0";
+  nfslock = "0";
+  recovery = "restart";
+  depend_mode = "hard";
+  max_restarts = "0";
+  restart_expire_time = "0";
+  fs {
+    name = "mount1";
+    mountpoint = "/mnt/cluster";
+    device = "/dev/sdb8";
+    fstype = "ext3";
+    nfslock = "0";
+    nfsexport {
+      name = "Dummy Export";
+      device = "/dev/sdb8";
+      path = "/mnt/cluster";
+      nfslock = "0";
+      nfsclient {
+        name = "Admin group";
+        target = "@admin";
+        path = "/mnt/cluster";
+        nfslock = "0";
+        options = "rw";
+      }
+      nfsclient {
+        name = "User group";
+        target = "@users";
+        path = "/mnt/cluster";
+        nfslock = "0";
+        options = "rw,sync";
+      }
+      nfsclient {
+        name = "red";
+        target = "red";
+        path = "/mnt/cluster";
+        nfslock = "0";
+        options = "rw";
+      }
+    }
+  }
+  script {
+    name = "initscript";
+    file = "/etc/init.d/sshd";
+    service_name = "test1";
+    clusterfs [ NEEDSTART ] {
+      name = "argle";
+      mountpoint = "/mnt/cluster3";
+      device = "/dev/sdb10";
+      nfslock = "0";
+    }
+  }
+}
+service {
+  name = "test2";
+  autostart = "1";
+  hardrecovery = "0";
+  exclusive = "0";
+  nfslock = "0";
+  recovery = "restart";
+  depend_mode = "hard";
+  max_restarts = "0";
+  restart_expire_time = "0";
+  script {
+    name = "initscript";
+    file = "/etc/init.d/sshd";
+    service_name = "test2";
+    clusterfs [ NEEDSTART ] {
+      name = "argle";
+      mountpoint = "/mnt/cluster3";
+      device = "/dev/sdb10";
+      nfslock = "0";
+    }
+    ip {
+      address = "192.168.1.3";
+      monitor_link = "yes";
+      nfslock = "0";
+    }
+    fs {
+      name = "mount2";
+      mountpoint = "/mnt/cluster2";
+      device = "/dev/sdb9";
+      fstype = "ext3";
+      nfslock = "0";
+      nfsexport {
+        name = "Dummy Export";
+        device = "/dev/sdb9";
+        path = "/mnt/cluster2";
+        nfslock = "0";
+        nfsclient {
+          name = "Admin group";
+          target = "@admin";
+          path = "/mnt/cluster2";
+          nfslock = "0";
+          options = "rw";
+        }
+        nfsclient {
+          name = "User group";
+          target = "@users";
+          path = "/mnt/cluster2";
+          nfslock = "0";
+          options = "rw,sync";
+        }
+        nfsclient {
+          name = "red";
+          target = "red";
+          path = "/mnt/cluster2";
+          nfslock = "0";
+          options = "rw";
+        }
+      }
+    }
+    script {
+      name = "script2";
+      file = "/etc/init.d/script2";
+      service_name = "test2";
+    }
+    ip {
+      address = "192.168.1.4";
+      monitor_link = "yes";
+      nfslock = "0";
+    }
+  }
+  script {
+    name = "script3";
+    file = "/etc/init.d/script3";
+    service_name = "test2";
+  }
+}
+=== Operations (down-phase) ===
+=== Operations (up-phase) ===
+Node clusterfs:argle - CONDSTART
+[start] clusterfs:argle
+Node clusterfs:argle - CONDSTART
+[start] clusterfs:argle
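clusterfs:argle is conditionally started twice because the plan is computed
per service tree and the new clusterfs resource is referenced from both
test1 and test2. Repeating the start is presumably harmless for a cluster
filesystem, since the agent (clusterfs.sh) can treat an already-mounted
device as a no-op, but that is the agent's concern rather than this test's.
To see both entries in isolation:

  ../rg_test ../../resources delta test017.conf test018.conf | grep 'clusterfs:argle'
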
diff --git a/rgmanager/src/daemons/tests/runtests.sh b/rgmanager/src/daemons/tests/runtests.sh
index 3fa0c5b..76b548b 100755
--- a/rgmanager/src/daemons/tests/runtests.sh
+++ b/rgmanager/src/daemons/tests/runtests.sh
@@ -23,7 +23,7 @@ echo "Running sanity+memory leak checks on rgmanager tree operations..."
 for t in $TESTS; do
 	echo -n "  Checking $t.conf..."
 	../rg_test ../../resources test $t.conf > $t.out 2> $t.out.stderr
-	diff -w $t.expected $t.out
+	diff -uw $t.expected $t.out
 	if [ $? -ne 0 ]; then
 		echo "FAILED"
 		echo "*** Basic Test $t failed"
@@ -56,7 +56,7 @@ for t in $TESTS; do
 		for svc in $SERVICES; do
 			../rg_test ../../resources noop $t.conf $phase service $svc >> $t.$phase.out 2> $t.$phase.out.stderr
 		done
-		diff -w $t.$phase.expected $t.$phase.out
+		diff -uw $t.$phase.expected $t.$phase.out
 		if [ $? -ne 0 ]; then
 			echo "FAILED"
 			echo "*** Start Test $t failed"
@@ -89,7 +89,7 @@ for t in $TESTS; do
 	echo -n "  Checking delta between $prev and $t..."
 	../rg_test ../../resources delta \
 		$prev.conf $t.conf > delta-$prev-$t.out 2> delta-$prev-$t.out.stderr
-	diff -w delta-$prev-$t.expected delta-$prev-$t.out 
+	diff -uw delta-$prev-$t.expected delta-$prev-$t.out 
 	if [ $? -ne 0 ]; then
 		echo "FAILED"
 		echo "*** Differential test between $prev and $t failed"
diff --git a/rgmanager/src/daemons/tests/test001.expected b/rgmanager/src/daemons/tests/test001.expected
index bc080ad..43d90e8 100644
--- a/rgmanager/src/daemons/tests/test001.expected
+++ b/rgmanager/src/daemons/tests/test001.expected
@@ -4,8 +4,29 @@ Instances: 1/1
 Agent: service.sh
 Attributes:
   name = test1 [ primary unique required ]
+  autostart = 1 [ reconfig ]
+  hardrecovery = 0 [ reconfig ]
+  exclusive = 0 [ reconfig ]
+  nfslock = 0
+  recovery = restart [ reconfig ]
+  depend_mode = hard
+  max_restarts = 0 [ reconfig ]
+  restart_expire_time = 0 [ reconfig ]
 
 === Resource Tree ===
 service {
   name = "test1";
+  autostart = "1";
+  hardrecovery = "0";
+  exclusive = "0";
+  nfslock = "0";
+  recovery = "restart";
+  depend_mode = "hard";
+  max_restarts = "0";
+  restart_expire_time = "0";
 }
+=== Event Triggers ===
+Event Priority Level 100:
+  Name: Default
+    (Any event)
+    File: /usr/share/cluster/default_event_script.sl
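Every single-config expectation now ends with an Event Triggers section.
With no event configuration in the test .conf files, rg_test reports one
default trigger at priority level 100, backed by
/usr/share/cluster/default_event_script.sl and matching any event. To list
the registered triggers for a config (assuming the same relative paths
runtests.sh uses):

  ../rg_test ../../resources test test001.conf | grep -A4 'Event Triggers'
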
diff --git a/rgmanager/src/daemons/tests/test002.expected b/rgmanager/src/daemons/tests/test002.expected
index b21c247..09b160f 100644
--- a/rgmanager/src/daemons/tests/test002.expected
+++ b/rgmanager/src/daemons/tests/test002.expected
@@ -4,8 +4,29 @@ Instances: 1/1
 Agent: service.sh
 Attributes:
   name = test1 [ primary unique required ]
+  autostart = 1 [ reconfig ]
+  hardrecovery = 0 [ reconfig ]
+  exclusive = 0 [ reconfig ]
+  nfslock = 0
+  recovery = restart [ reconfig ]
+  depend_mode = hard
+  max_restarts = 0 [ reconfig ]
+  restart_expire_time = 0 [ reconfig ]
 
 === Resource Tree ===
 service {
   name = "test1";
+  autostart = "1";
+  hardrecovery = "0";
+  exclusive = "0";
+  nfslock = "0";
+  recovery = "restart";
+  depend_mode = "hard";
+  max_restarts = "0";
+  restart_expire_time = "0";
 }
+=== Event Triggers ===
+Event Priority Level 100:
+  Name: Default
+    (Any event)
+    File: /usr/share/cluster/default_event_script.sl
diff --git a/rgmanager/src/daemons/tests/test003.expected b/rgmanager/src/daemons/tests/test003.expected
index 0ff5d58..b39a24e 100644
--- a/rgmanager/src/daemons/tests/test003.expected
+++ b/rgmanager/src/daemons/tests/test003.expected
@@ -1,4 +1,18 @@
 === Resources List ===
+Resource type: service [ROOT]
+Instances: 1/1
+Agent: service.sh
+Attributes:
+  name = test1 [ primary unique required ]
+  autostart = 1 [ reconfig ]
+  hardrecovery = 0 [ reconfig ]
+  exclusive = 0 [ reconfig ]
+  nfslock = 0
+  recovery = restart [ reconfig ]
+  depend_mode = hard
+  max_restarts = 0 [ reconfig ]
+  restart_expire_time = 0 [ reconfig ]
+
 Resource type: script
 Agent: script.sh
 Attributes:
@@ -6,18 +20,25 @@ Attributes:
   file = /etc/init.d/httpd [ unique required ]
   service_name [ inherit("service%name") ]
 
-Resource type: service [ROOT]
-Instances: 1/1
-Agent: service.sh
-Attributes:
-  name = test1 [ primary unique required ]
-
 === Resource Tree ===
 service {
   name = "test1";
+  autostart = "1";
+  hardrecovery = "0";
+  exclusive = "0";
+  nfslock = "0";
+  recovery = "restart";
+  depend_mode = "hard";
+  max_restarts = "0";
+  restart_expire_time = "0";
   script {
     name = "initscript";
     file = "/etc/init.d/httpd";
     service_name = "test1";
   }
 }
+=== Event Triggers ===
+Event Priority Level 100:
+  Name: Default
+    (Any event)
+    File: /usr/share/cluster/default_event_script.sl
diff --git a/rgmanager/src/daemons/tests/test004.expected b/rgmanager/src/daemons/tests/test004.expected
index 334cc12..bc2a264 100644
--- a/rgmanager/src/daemons/tests/test004.expected
+++ b/rgmanager/src/daemons/tests/test004.expected
@@ -1,4 +1,18 @@
 === Resources List ===
+Resource type: service [ROOT]
+Instances: 1/1
+Agent: service.sh
+Attributes:
+  name = test1 [ primary unique required ]
+  autostart = 1 [ reconfig ]
+  hardrecovery = 0 [ reconfig ]
+  exclusive = 0 [ reconfig ]
+  nfslock = 0
+  recovery = restart [ reconfig ]
+  depend_mode = hard
+  max_restarts = 0 [ reconfig ]
+  restart_expire_time = 0 [ reconfig ]
+
 Resource type: script
 Agent: script.sh
 Attributes:
@@ -6,18 +20,25 @@ Attributes:
   file = /etc/init.d/sshd [ unique required ]
   service_name [ inherit("service%name") ]
 
-Resource type: service [ROOT]
-Instances: 1/1
-Agent: service.sh
-Attributes:
-  name = test1 [ primary unique required ]
-
 === Resource Tree ===
 service {
   name = "test1";
+  autostart = "1";
+  hardrecovery = "0";
+  exclusive = "0";
+  nfslock = "0";
+  recovery = "restart";
+  depend_mode = "hard";
+  max_restarts = "0";
+  restart_expire_time = "0";
   script {
     name = "initscript";
     file = "/etc/init.d/sshd";
     service_name = "test1";
   }
 }
+=== Event Triggers ===
+Event Priority Level 100:
+  Name: Default
+    (Any event)
+    File: /usr/share/cluster/default_event_script.sl
diff --git a/rgmanager/src/daemons/tests/test005.expected b/rgmanager/src/daemons/tests/test005.expected
index eea6f20..b05749a 100644
--- a/rgmanager/src/daemons/tests/test005.expected
+++ b/rgmanager/src/daemons/tests/test005.expected
@@ -1,16 +1,17 @@
 === Resources List ===
-Resource type: script
-Agent: script.sh
-Attributes:
-  name = initscript [ primary unique ]
-  file = /etc/init.d/sshd [ unique required ]
-  service_name [ inherit("service%name") ]
-
 Resource type: service [ROOT]
 Instances: 1/1
 Agent: service.sh
 Attributes:
   name = test1 [ primary unique required ]
+  autostart = 1 [ reconfig ]
+  hardrecovery = 0 [ reconfig ]
+  exclusive = 0 [ reconfig ]
+  nfslock = 0
+  recovery = restart [ reconfig ]
+  depend_mode = hard
+  max_restarts = 0 [ reconfig ]
+  restart_expire_time = 0 [ reconfig ]
 
 Resource type: ip
 Instances: 1/1
@@ -20,12 +21,28 @@ Attributes:
   monitor_link = 1
   nfslock [ inherit("service%nfslock") ]
 
+Resource type: script
+Agent: script.sh
+Attributes:
+  name = initscript [ primary unique ]
+  file = /etc/init.d/sshd [ unique required ]
+  service_name [ inherit("service%name") ]
+
 === Resource Tree ===
 service {
   name = "test1";
+  autostart = "1";
+  hardrecovery = "0";
+  exclusive = "0";
+  nfslock = "0";
+  recovery = "restart";
+  depend_mode = "hard";
+  max_restarts = "0";
+  restart_expire_time = "0";
   ip {
     address = "192.168.1.2";
     monitor_link = "1";
+    nfslock = "0";
   }
   script {
     name = "initscript";
@@ -33,3 +50,8 @@ service {
     service_name = "test1";
   }
 }
+=== Event Triggers ===
+Event Priority Level 100:
+  Name: Default
+    (Any event)
+    File: /usr/share/cluster/default_event_script.sl
diff --git a/rgmanager/src/daemons/tests/test006.expected b/rgmanager/src/daemons/tests/test006.expected
index 71714d8..2b77f91 100644
--- a/rgmanager/src/daemons/tests/test006.expected
+++ b/rgmanager/src/daemons/tests/test006.expected
@@ -1,16 +1,17 @@
 === Resources List ===
-Resource type: script
-Agent: script.sh
-Attributes:
-  name = initscript [ primary unique ]
-  file = /etc/init.d/sshd [ unique required ]
-  service_name [ inherit("service%name") ]
-
 Resource type: service [ROOT]
 Instances: 1/1
 Agent: service.sh
 Attributes:
   name = test1 [ primary unique required ]
+  autostart = 1 [ reconfig ]
+  hardrecovery = 0 [ reconfig ]
+  exclusive = 0 [ reconfig ]
+  nfslock = 0
+  recovery = restart [ reconfig ]
+  depend_mode = hard
+  max_restarts = 0 [ reconfig ]
+  restart_expire_time = 0 [ reconfig ]
 
 Resource type: ip
 Instances: 1/1
@@ -20,12 +21,28 @@ Attributes:
   monitor_link = yes
   nfslock [ inherit("service%nfslock") ]
 
+Resource type: script
+Agent: script.sh
+Attributes:
+  name = initscript [ primary unique ]
+  file = /etc/init.d/sshd [ unique required ]
+  service_name [ inherit("service%name") ]
+
 === Resource Tree ===
 service {
   name = "test1";
+  autostart = "1";
+  hardrecovery = "0";
+  exclusive = "0";
+  nfslock = "0";
+  recovery = "restart";
+  depend_mode = "hard";
+  max_restarts = "0";
+  restart_expire_time = "0";
   ip {
     address = "192.168.1.2";
     monitor_link = "yes";
+    nfslock = "0";
   }
   script {
     name = "initscript";
@@ -33,3 +50,8 @@ service {
     service_name = "test1";
   }
 }
+=== Event Triggers ===
+Event Priority Level 100:
+  Name: Default
+    (Any event)
+    File: /usr/share/cluster/default_event_script.sl
diff --git a/rgmanager/src/daemons/tests/test007.expected b/rgmanager/src/daemons/tests/test007.expected
index 243f89f..ea4bcf1 100644
--- a/rgmanager/src/daemons/tests/test007.expected
+++ b/rgmanager/src/daemons/tests/test007.expected
@@ -1,16 +1,17 @@
 === Resources List ===
-Resource type: script
-Agent: script.sh
-Attributes:
-  name = initscript [ primary unique ]
-  file = /etc/init.d/sshd [ unique required ]
-  service_name [ inherit("service%name") ]
-
 Resource type: service [ROOT]
 Instances: 1/1
 Agent: service.sh
 Attributes:
   name = test1 [ primary unique required ]
+  autostart = 1 [ reconfig ]
+  hardrecovery = 0 [ reconfig ]
+  exclusive = 0 [ reconfig ]
+  nfslock = 0
+  recovery = restart [ reconfig ]
+  depend_mode = hard
+  max_restarts = 0 [ reconfig ]
+  restart_expire_time = 0 [ reconfig ]
 
 Resource type: ip
 Instances: 1/1
@@ -20,12 +21,28 @@ Attributes:
   monitor_link = yes
   nfslock [ inherit("service%nfslock") ]
 
+Resource type: script
+Agent: script.sh
+Attributes:
+  name = initscript [ primary unique ]
+  file = /etc/init.d/sshd [ unique required ]
+  service_name [ inherit("service%name") ]
+
 === Resource Tree ===
 service {
   name = "test1";
+  autostart = "1";
+  hardrecovery = "0";
+  exclusive = "0";
+  nfslock = "0";
+  recovery = "restart";
+  depend_mode = "hard";
+  max_restarts = "0";
+  restart_expire_time = "0";
   ip {
     address = "192.168.1.3";
     monitor_link = "yes";
+    nfslock = "0";
   }
   script {
     name = "initscript";
@@ -33,3 +50,8 @@ service {
     service_name = "test1";
   }
 }
+=== Event Triggers ===
+Event Priority Level 100:
+  Name: Default
+    (Any event)
+    File: /usr/share/cluster/default_event_script.sl
diff --git a/rgmanager/src/daemons/tests/test008.expected b/rgmanager/src/daemons/tests/test008.expected
index 256fa11..82877fe 100644
--- a/rgmanager/src/daemons/tests/test008.expected
+++ b/rgmanager/src/daemons/tests/test008.expected
@@ -1,16 +1,27 @@
 === Resources List ===
-Resource type: script
-Agent: script.sh
-Attributes:
-  name = initscript [ primary unique ]
-  file = /etc/init.d/sshd [ unique required ]
-  service_name [ inherit("service%name") ]
-
 Resource type: service [ROOT]
 Instances: 1/1
 Agent: service.sh
 Attributes:
   name = test1 [ primary unique required ]
+  autostart = 1 [ reconfig ]
+  hardrecovery = 0 [ reconfig ]
+  exclusive = 0 [ reconfig ]
+  nfslock = 0
+  recovery = restart [ reconfig ]
+  depend_mode = hard
+  max_restarts = 0 [ reconfig ]
+  restart_expire_time = 0 [ reconfig ]
+
+Resource type: fs
+Instances: 0/1
+Agent: fs.sh
+Attributes:
+  name = mount1 [ primary ]
+  mountpoint = /mnt/cluster [ unique required ]
+  device = /dev/sdb8 [ unique required ]
+  fstype = ext3
+  nfslock [ inherit("service%nfslock") ]
 
 Resource type: ip
 Instances: 1/1
@@ -20,22 +31,28 @@ Attributes:
   monitor_link = yes
   nfslock [ inherit("service%nfslock") ]
 
-Resource type: fs
-Instances: 0/1
-Agent: fs.sh
+Resource type: script
+Agent: script.sh
 Attributes:
-  name = mount1 [ primary ]
-  mountpoint = /mnt/cluster [ unique required ]
-  device = /dev/sdb8 [ unique required ]
-  fstype = ext3
-  nfslock [ inherit("nfslock") ]
+  name = initscript [ primary unique ]
+  file = /etc/init.d/sshd [ unique required ]
+  service_name [ inherit("service%name") ]
 
 === Resource Tree ===
 service {
   name = "test1";
+  autostart = "1";
+  hardrecovery = "0";
+  exclusive = "0";
+  nfslock = "0";
+  recovery = "restart";
+  depend_mode = "hard";
+  max_restarts = "0";
+  restart_expire_time = "0";
   ip {
     address = "192.168.1.3";
     monitor_link = "yes";
+    nfslock = "0";
   }
   script {
     name = "initscript";
@@ -43,3 +60,8 @@ service {
     service_name = "test1";
   }
 }
+=== Event Triggers ===
+Event Priority Level 100:
+  Name: Default
+    (Any event)
+    File: /usr/share/cluster/default_event_script.sl
diff --git a/rgmanager/src/daemons/tests/test009.expected b/rgmanager/src/daemons/tests/test009.expected
index 52e66df..0c4c274 100644
--- a/rgmanager/src/daemons/tests/test009.expected
+++ b/rgmanager/src/daemons/tests/test009.expected
@@ -1,16 +1,27 @@
 === Resources List ===
-Resource type: script
-Agent: script.sh
-Attributes:
-  name = initscript [ primary unique ]
-  file = /etc/init.d/sshd [ unique required ]
-  service_name [ inherit("service%name") ]
-
 Resource type: service [ROOT]
 Instances: 1/1
 Agent: service.sh
 Attributes:
   name = test1 [ primary unique required ]
+  autostart = 1 [ reconfig ]
+  hardrecovery = 0 [ reconfig ]
+  exclusive = 0 [ reconfig ]
+  nfslock = 0
+  recovery = restart [ reconfig ]
+  depend_mode = hard
+  max_restarts = 0 [ reconfig ]
+  restart_expire_time = 0 [ reconfig ]
+
+Resource type: fs
+Instances: 1/1
+Agent: fs.sh
+Attributes:
+  name = mount1 [ primary ]
+  mountpoint = /mnt/cluster [ unique required ]
+  device = /dev/sdb8 [ unique required ]
+  fstype = ext3
+  nfslock [ inherit("service%nfslock") ]
 
 Resource type: ip
 Instances: 1/1
@@ -20,28 +31,35 @@ Attributes:
   monitor_link = yes
   nfslock [ inherit("service%nfslock") ]
 
-Resource type: fs
-Instances: 1/1
-Agent: fs.sh
+Resource type: script
+Agent: script.sh
 Attributes:
-  name = mount1 [ primary ]
-  mountpoint = /mnt/cluster [ unique required ]
-  device = /dev/sdb8 [ unique required ]
-  fstype = ext3
-  nfslock [ inherit("nfslock") ]
+  name = initscript [ primary unique ]
+  file = /etc/init.d/sshd [ unique required ]
+  service_name [ inherit("service%name") ]
 
 === Resource Tree ===
 service {
   name = "test1";
+  autostart = "1";
+  hardrecovery = "0";
+  exclusive = "0";
+  nfslock = "0";
+  recovery = "restart";
+  depend_mode = "hard";
+  max_restarts = "0";
+  restart_expire_time = "0";
   fs {
     name = "mount1";
     mountpoint = "/mnt/cluster";
     device = "/dev/sdb8";
     fstype = "ext3";
+    nfslock = "0";
   }
   ip {
     address = "192.168.1.3";
     monitor_link = "yes";
+    nfslock = "0";
   }
   script {
     name = "initscript";
@@ -49,3 +67,8 @@ service {
     service_name = "test1";
   }
 }
+=== Event Triggers ===
+Event Priority Level 100:
+  Name: Default
+    (Any event)
+    File: /usr/share/cluster/default_event_script.sl
diff --git a/rgmanager/src/daemons/tests/test010.expected b/rgmanager/src/daemons/tests/test010.expected
index 04f1812..bd89cb3 100644
--- a/rgmanager/src/daemons/tests/test010.expected
+++ b/rgmanager/src/daemons/tests/test010.expected
@@ -1,23 +1,25 @@
 === Resources List ===
-Resource type: script
-Agent: script.sh
-Attributes:
-  name = initscript [ primary unique ]
-  file = /etc/init.d/sshd [ unique required ]
-  service_name [ inherit("service%name") ]
-
 Resource type: service [ROOT]
 Instances: 1/1
 Agent: service.sh
 Attributes:
   name = test1 [ primary unique required ]
+  autostart = 1 [ reconfig ]
+  hardrecovery = 0 [ reconfig ]
+  exclusive = 0 [ reconfig ]
+  nfslock = 0
+  recovery = restart [ reconfig ]
+  depend_mode = hard
+  max_restarts = 0 [ reconfig ]
+  restart_expire_time = 0 [ reconfig ]
 
-Resource type: ip
-Instances: 1/1
-Agent: ip.sh
+Resource type: nfsexport
+Agent: nfsexport.sh
 Attributes:
-  address = 192.168.1.3 [ primary unique ]
-  monitor_link = yes
+  name = Dummy Export [ primary ]
+  device [ inherit("device") ]
+  path [ inherit("mountpoint") ]
+  fsid [ inherit("fsid") ]
   nfslock [ inherit("service%nfslock") ]
 
 Resource type: fs
@@ -28,29 +30,45 @@ Attributes:
   mountpoint = /mnt/cluster [ unique required ]
   device = /dev/sdb8 [ unique required ]
   fstype = ext3
-  nfslock [ inherit("nfslock") ]
+  nfslock [ inherit("service%nfslock") ]
 
-Resource type: nfsexport
-Agent: nfsexport.sh
+Resource type: ip
+Instances: 1/1
+Agent: ip.sh
 Attributes:
-  name = Dummy Export [ primary ]
-  device [ inherit("device") ]
-  path [ inherit("mountpoint") ]
-  fsid [ inherit("fsid") ]
-  nfslock [ inherit("nfslock") ]
+  address = 192.168.1.3 [ primary unique ]
+  monitor_link = yes
+  nfslock [ inherit("service%nfslock") ]
+
+Resource type: script
+Agent: script.sh
+Attributes:
+  name = initscript [ primary unique ]
+  file = /etc/init.d/sshd [ unique required ]
+  service_name [ inherit("service%name") ]
 
 === Resource Tree ===
 service {
   name = "test1";
+  autostart = "1";
+  hardrecovery = "0";
+  exclusive = "0";
+  nfslock = "0";
+  recovery = "restart";
+  depend_mode = "hard";
+  max_restarts = "0";
+  restart_expire_time = "0";
   fs {
     name = "mount1";
     mountpoint = "/mnt/cluster";
     device = "/dev/sdb8";
     fstype = "ext3";
+    nfslock = "0";
   }
   ip {
     address = "192.168.1.3";
     monitor_link = "yes";
+    nfslock = "0";
   }
   script {
     name = "initscript";
@@ -58,3 +76,8 @@ service {
     service_name = "test1";
   }
 }
+=== Event Triggers ===
+Event Priority Level 100:
+  Name: Default
+    (Any event)
+    File: /usr/share/cluster/default_event_script.sl
diff --git a/rgmanager/src/daemons/tests/test011.expected b/rgmanager/src/daemons/tests/test011.expected
index 18b745b..6ce15c4 100644
--- a/rgmanager/src/daemons/tests/test011.expected
+++ b/rgmanager/src/daemons/tests/test011.expected
@@ -1,34 +1,17 @@
 === Resources List ===
-Resource type: script
-Agent: script.sh
-Attributes:
-  name = initscript [ primary unique ]
-  file = /etc/init.d/sshd [ unique required ]
-  service_name [ inherit("service%name") ]
-
 Resource type: service [ROOT]
 Instances: 1/1
 Agent: service.sh
 Attributes:
   name = test1 [ primary unique required ]
-
-Resource type: ip
-Instances: 1/1
-Agent: ip.sh
-Attributes:
-  address = 192.168.1.3 [ primary unique ]
-  monitor_link = yes
-  nfslock [ inherit("service%nfslock") ]
-
-Resource type: fs
-Instances: 1/1
-Agent: fs.sh
-Attributes:
-  name = mount1 [ primary ]
-  mountpoint = /mnt/cluster [ unique required ]
-  device = /dev/sdb8 [ unique required ]
-  fstype = ext3
-  nfslock [ inherit("nfslock") ]
+  autostart = 1 [ reconfig ]
+  hardrecovery = 0 [ reconfig ]
+  exclusive = 0 [ reconfig ]
+  nfslock = 0
+  recovery = restart [ reconfig ]
+  depend_mode = hard
+  max_restarts = 0 [ reconfig ]
+  restart_expire_time = 0 [ reconfig ]
 
 Resource type: nfsexport
 Agent: nfsexport.sh
@@ -37,7 +20,7 @@ Attributes:
   device [ inherit("device") ]
   path [ inherit("mountpoint") ]
   fsid [ inherit("fsid") ]
-  nfslock [ inherit("nfslock") ]
+  nfslock [ inherit("service%nfslock") ]
 
 Resource type: nfsclient
 Agent: nfsclient.sh
@@ -46,7 +29,7 @@ Attributes:
   target = @users [ required ]
   path [ inherit("path") ]
   fsid [ inherit("fsid") ]
-  nfslock [ inherit("nfsexport%nfslock") ]
+  nfslock [ inherit("service%nfslock") ]
   options = ro
 
 Resource type: nfsclient
@@ -56,7 +39,7 @@ Attributes:
   target = @admin [ required ]
   path [ inherit("path") ]
   fsid [ inherit("fsid") ]
-  nfslock [ inherit("nfsexport%nfslock") ]
+  nfslock [ inherit("service%nfslock") ]
   options = rw
 
 Resource type: nfsclient
@@ -66,7 +49,7 @@ Attributes:
   target = yellow [ required ]
   path [ inherit("path") ]
   fsid [ inherit("fsid") ]
-  nfslock [ inherit("nfsexport%nfslock") ]
+  nfslock [ inherit("service%nfslock") ]
   options = rw,no_root_squash
 
 Resource type: nfsclient
@@ -76,7 +59,7 @@ Attributes:
   target = magenta [ required ]
   path [ inherit("path") ]
   fsid [ inherit("fsid") ]
-  nfslock [ inherit("nfsexport%nfslock") ]
+  nfslock [ inherit("service%nfslock") ]
   options = rw,no_root_squash
 
 Resource type: nfsclient
@@ -86,31 +69,68 @@ Attributes:
   target = red [ required ]
   path [ inherit("path") ]
   fsid [ inherit("fsid") ]
-  nfslock [ inherit("nfsexport%nfslock") ]
+  nfslock [ inherit("service%nfslock") ]
   options = rw,no_root_squash
 
+Resource type: fs
+Instances: 1/1
+Agent: fs.sh
+Attributes:
+  name = mount1 [ primary ]
+  mountpoint = /mnt/cluster [ unique required ]
+  device = /dev/sdb8 [ unique required ]
+  fstype = ext3
+  nfslock [ inherit("service%nfslock") ]
+
+Resource type: ip
+Instances: 1/1
+Agent: ip.sh
+Attributes:
+  address = 192.168.1.3 [ primary unique ]
+  monitor_link = yes
+  nfslock [ inherit("service%nfslock") ]
+
+Resource type: script
+Agent: script.sh
+Attributes:
+  name = initscript [ primary unique ]
+  file = /etc/init.d/sshd [ unique required ]
+  service_name [ inherit("service%name") ]
+
 === Resource Tree ===
 service {
   name = "test1";
+  autostart = "1";
+  hardrecovery = "0";
+  exclusive = "0";
+  nfslock = "0";
+  recovery = "restart";
+  depend_mode = "hard";
+  max_restarts = "0";
+  restart_expire_time = "0";
   fs {
     name = "mount1";
     mountpoint = "/mnt/cluster";
     device = "/dev/sdb8";
     fstype = "ext3";
+    nfslock = "0";
     nfsexport {
       name = "Dummy Export";
       device = "/dev/sdb8";
       path = "/mnt/cluster";
+      nfslock = "0";
       nfsclient {
         name = "Admin group";
         target = "@admin";
         path = "/mnt/cluster";
+        nfslock = "0";
         options = "rw";
       }
       nfsclient {
         name = "User group";
         target = "@users";
         path = "/mnt/cluster";
+        nfslock = "0";
         options = "ro";
       }
     }
@@ -118,6 +138,7 @@ service {
   ip {
     address = "192.168.1.3";
     monitor_link = "yes";
+    nfslock = "0";
   }
   script {
     name = "initscript";
@@ -125,3 +146,8 @@ service {
     service_name = "test1";
   }
 }
+=== Event Triggers ===
+Event Priority Level 100:
+  Name: Default
+    (Any event)
+    File: /usr/share/cluster/default_event_script.sl
diff --git a/rgmanager/src/daemons/tests/test012.expected b/rgmanager/src/daemons/tests/test012.expected
index 1ab2548..97b07cc 100644
--- a/rgmanager/src/daemons/tests/test012.expected
+++ b/rgmanager/src/daemons/tests/test012.expected
@@ -1,34 +1,17 @@
 === Resources List ===
-Resource type: script
-Agent: script.sh
-Attributes:
-  name = initscript [ primary unique ]
-  file = /etc/init.d/sshd [ unique required ]
-  service_name [ inherit("service%name") ]
-
 Resource type: service [ROOT]
 Instances: 1/1
 Agent: service.sh
 Attributes:
   name = test1 [ primary unique required ]
-
-Resource type: ip
-Instances: 1/1
-Agent: ip.sh
-Attributes:
-  address = 192.168.1.3 [ primary unique ]
-  monitor_link = yes
-  nfslock [ inherit("service%nfslock") ]
-
-Resource type: fs
-Instances: 1/1
-Agent: fs.sh
-Attributes:
-  name = mount1 [ primary ]
-  mountpoint = /mnt/cluster [ unique required ]
-  device = /dev/sdb8 [ unique required ]
-  fstype = ext3
-  nfslock [ inherit("nfslock") ]
+  autostart = 1 [ reconfig ]
+  hardrecovery = 0 [ reconfig ]
+  exclusive = 0 [ reconfig ]
+  nfslock = 0
+  recovery = restart [ reconfig ]
+  depend_mode = hard
+  max_restarts = 0 [ reconfig ]
+  restart_expire_time = 0 [ reconfig ]
 
 Resource type: nfsexport
 Agent: nfsexport.sh
@@ -37,7 +20,7 @@ Attributes:
   device [ inherit("device") ]
   path [ inherit("mountpoint") ]
   fsid [ inherit("fsid") ]
-  nfslock [ inherit("nfslock") ]
+  nfslock [ inherit("service%nfslock") ]
 
 Resource type: nfsclient
 Agent: nfsclient.sh
@@ -46,7 +29,7 @@ Attributes:
   target = @users [ required ]
   path [ inherit("path") ]
   fsid [ inherit("fsid") ]
-  nfslock [ inherit("nfsexport%nfslock") ]
+  nfslock [ inherit("service%nfslock") ]
   options = ro
 
 Resource type: nfsclient
@@ -56,7 +39,7 @@ Attributes:
   target = @admin [ required ]
   path [ inherit("path") ]
   fsid [ inherit("fsid") ]
-  nfslock [ inherit("nfsexport%nfslock") ]
+  nfslock [ inherit("service%nfslock") ]
   options = rw
 
 Resource type: nfsclient
@@ -66,7 +49,7 @@ Attributes:
   target = yellow [ required ]
   path [ inherit("path") ]
   fsid [ inherit("fsid") ]
-  nfslock [ inherit("nfsexport%nfslock") ]
+  nfslock [ inherit("service%nfslock") ]
   options = rw,no_root_squash
 
 Resource type: nfsclient
@@ -76,7 +59,7 @@ Attributes:
   target = magenta [ required ]
   path [ inherit("path") ]
   fsid [ inherit("fsid") ]
-  nfslock [ inherit("nfsexport%nfslock") ]
+  nfslock [ inherit("service%nfslock") ]
   options = rw,no_root_squash
 
 Resource type: nfsclient
@@ -86,37 +69,75 @@ Attributes:
   target = red [ required ]
   path [ inherit("path") ]
   fsid [ inherit("fsid") ]
-  nfslock [ inherit("nfsexport%nfslock") ]
+  nfslock [ inherit("service%nfslock") ]
   options = ro
 
+Resource type: fs
+Instances: 1/1
+Agent: fs.sh
+Attributes:
+  name = mount1 [ primary ]
+  mountpoint = /mnt/cluster [ unique required ]
+  device = /dev/sdb8 [ unique required ]
+  fstype = ext3
+  nfslock [ inherit("service%nfslock") ]
+
+Resource type: ip
+Instances: 1/1
+Agent: ip.sh
+Attributes:
+  address = 192.168.1.3 [ primary unique ]
+  monitor_link = yes
+  nfslock [ inherit("service%nfslock") ]
+
+Resource type: script
+Agent: script.sh
+Attributes:
+  name = initscript [ primary unique ]
+  file = /etc/init.d/sshd [ unique required ]
+  service_name [ inherit("service%name") ]
+
 === Resource Tree ===
 service {
   name = "test1";
+  autostart = "1";
+  hardrecovery = "0";
+  exclusive = "0";
+  nfslock = "0";
+  recovery = "restart";
+  depend_mode = "hard";
+  max_restarts = "0";
+  restart_expire_time = "0";
   fs {
     name = "mount1";
     mountpoint = "/mnt/cluster";
     device = "/dev/sdb8";
     fstype = "ext3";
+    nfslock = "0";
     nfsexport {
       name = "Dummy Export";
       device = "/dev/sdb8";
       path = "/mnt/cluster";
+      nfslock = "0";
       nfsclient {
         name = "Admin group";
         target = "@admin";
         path = "/mnt/cluster";
+        nfslock = "0";
         options = "rw";
       }
       nfsclient {
         name = "User group";
         target = "@users";
         path = "/mnt/cluster";
+        nfslock = "0";
         options = "ro";
       }
       nfsclient {
         name = "red";
         target = "red";
         path = "/mnt/cluster";
+        nfslock = "0";
         options = "ro";
       }
     }
@@ -124,6 +145,7 @@ service {
   ip {
     address = "192.168.1.3";
     monitor_link = "yes";
+    nfslock = "0";
   }
   script {
     name = "initscript";
@@ -131,3 +153,8 @@ service {
     service_name = "test1";
   }
 }
+=== Event Triggers ===
+Event Priority Level 100:
+  Name: Default
+    (Any event)
+    File: /usr/share/cluster/default_event_script.sl
diff --git a/rgmanager/src/daemons/tests/test013.expected b/rgmanager/src/daemons/tests/test013.expected
index 1fc484a..07564ff 100644
--- a/rgmanager/src/daemons/tests/test013.expected
+++ b/rgmanager/src/daemons/tests/test013.expected
@@ -1,34 +1,17 @@
 === Resources List ===
-Resource type: script
-Agent: script.sh
-Attributes:
-  name = initscript [ primary unique ]
-  file = /etc/init.d/sshd [ unique required ]
-  service_name [ inherit("service%name") ]
-
 Resource type: service [ROOT]
 Instances: 1/1
 Agent: service.sh
 Attributes:
   name = test1 [ primary unique required ]
-
-Resource type: ip
-Instances: 1/1
-Agent: ip.sh
-Attributes:
-  address = 192.168.1.3 [ primary unique ]
-  monitor_link = yes
-  nfslock [ inherit("service%nfslock") ]
-
-Resource type: fs
-Instances: 1/1
-Agent: fs.sh
-Attributes:
-  name = mount1 [ primary ]
-  mountpoint = /mnt/cluster [ unique required ]
-  device = /dev/sdb8 [ unique required ]
-  fstype = ext3
-  nfslock [ inherit("nfslock") ]
+  autostart = 1 [ reconfig ]
+  hardrecovery = 0 [ reconfig ]
+  exclusive = 0 [ reconfig ]
+  nfslock = 0
+  recovery = restart [ reconfig ]
+  depend_mode = hard
+  max_restarts = 0 [ reconfig ]
+  restart_expire_time = 0 [ reconfig ]
 
 Resource type: nfsexport
 Agent: nfsexport.sh
@@ -37,7 +20,7 @@ Attributes:
   device [ inherit("device") ]
   path [ inherit("mountpoint") ]
   fsid [ inherit("fsid") ]
-  nfslock [ inherit("nfslock") ]
+  nfslock [ inherit("service%nfslock") ]
 
 Resource type: nfsclient
 Agent: nfsclient.sh
@@ -46,7 +29,7 @@ Attributes:
   target = @users [ required ]
   path [ inherit("path") ]
   fsid [ inherit("fsid") ]
-  nfslock [ inherit("nfsexport%nfslock") ]
+  nfslock [ inherit("service%nfslock") ]
   options = ro
 
 Resource type: nfsclient
@@ -56,7 +39,7 @@ Attributes:
   target = @admin [ required ]
   path [ inherit("path") ]
   fsid [ inherit("fsid") ]
-  nfslock [ inherit("nfsexport%nfslock") ]
+  nfslock [ inherit("service%nfslock") ]
   options = rw
 
 Resource type: nfsclient
@@ -66,7 +49,7 @@ Attributes:
   target = yellow [ required ]
   path [ inherit("path") ]
   fsid [ inherit("fsid") ]
-  nfslock [ inherit("nfsexport%nfslock") ]
+  nfslock [ inherit("service%nfslock") ]
   options = rw,no_root_squash
 
 Resource type: nfsclient
@@ -76,7 +59,7 @@ Attributes:
   target = magenta [ required ]
   path [ inherit("path") ]
   fsid [ inherit("fsid") ]
-  nfslock [ inherit("nfsexport%nfslock") ]
+  nfslock [ inherit("service%nfslock") ]
   options = rw,no_root_squash
 
 Resource type: nfsclient
@@ -86,37 +69,75 @@ Attributes:
   target = red [ required ]
   path [ inherit("path") ]
   fsid [ inherit("fsid") ]
-  nfslock [ inherit("nfsexport%nfslock") ]
+  nfslock [ inherit("service%nfslock") ]
   options = rw
 
+Resource type: fs
+Instances: 1/1
+Agent: fs.sh
+Attributes:
+  name = mount1 [ primary ]
+  mountpoint = /mnt/cluster [ unique required ]
+  device = /dev/sdb8 [ unique required ]
+  fstype = ext3
+  nfslock [ inherit("service%nfslock") ]
+
+Resource type: ip
+Instances: 1/1
+Agent: ip.sh
+Attributes:
+  address = 192.168.1.3 [ primary unique ]
+  monitor_link = yes
+  nfslock [ inherit("service%nfslock") ]
+
+Resource type: script
+Agent: script.sh
+Attributes:
+  name = initscript [ primary unique ]
+  file = /etc/init.d/sshd [ unique required ]
+  service_name [ inherit("service%name") ]
+
 === Resource Tree ===
 service {
   name = "test1";
+  autostart = "1";
+  hardrecovery = "0";
+  exclusive = "0";
+  nfslock = "0";
+  recovery = "restart";
+  depend_mode = "hard";
+  max_restarts = "0";
+  restart_expire_time = "0";
   fs {
     name = "mount1";
     mountpoint = "/mnt/cluster";
     device = "/dev/sdb8";
     fstype = "ext3";
+    nfslock = "0";
     nfsexport {
       name = "Dummy Export";
       device = "/dev/sdb8";
       path = "/mnt/cluster";
+      nfslock = "0";
       nfsclient {
         name = "Admin group";
         target = "@admin";
         path = "/mnt/cluster";
+        nfslock = "0";
         options = "rw";
       }
       nfsclient {
         name = "User group";
         target = "@users";
         path = "/mnt/cluster";
+        nfslock = "0";
         options = "ro";
       }
       nfsclient {
         name = "red";
         target = "red";
         path = "/mnt/cluster";
+        nfslock = "0";
         options = "rw";
       }
     }
@@ -124,6 +145,7 @@ service {
   ip {
     address = "192.168.1.3";
     monitor_link = "yes";
+    nfslock = "0";
   }
   script {
     name = "initscript";
@@ -131,3 +153,8 @@ service {
     service_name = "test1";
   }
 }
+=== Event Triggers ===
+Event Priority Level 100:
+  Name: Default
+    (Any event)
+    File: /usr/share/cluster/default_event_script.sl
diff --git a/rgmanager/src/daemons/tests/test014.expected b/rgmanager/src/daemons/tests/test014.expected
index 1954786..c7db038 100644
--- a/rgmanager/src/daemons/tests/test014.expected
+++ b/rgmanager/src/daemons/tests/test014.expected
@@ -1,58 +1,31 @@
 === Resources List ===
-Resource type: script
-Agent: script.sh
-Attributes:
-  name = initscript [ primary unique ]
-  file = /etc/init.d/sshd [ unique required ]
-  service_name [ inherit("service%name") ]
-
 Resource type: service [ROOT]
 Instances: 1/1
 Agent: service.sh
 Attributes:
   name = test1 [ primary unique required ]
+  autostart = 1 [ reconfig ]
+  hardrecovery = 0 [ reconfig ]
+  exclusive = 0 [ reconfig ]
+  nfslock = 0
+  recovery = restart [ reconfig ]
+  depend_mode = hard
+  max_restarts = 0 [ reconfig ]
+  restart_expire_time = 0 [ reconfig ]
 
 Resource type: service [ROOT]
 Instances: 1/1
 Agent: service.sh
 Attributes:
   name = test2 [ primary unique required ]
-
-Resource type: ip
-Instances: 1/1
-Agent: ip.sh
-Attributes:
-  address = 192.168.1.3 [ primary unique ]
-  monitor_link = yes
-  nfslock [ inherit("service%nfslock") ]
-
-Resource type: ip
-Instances: 1/1
-Agent: ip.sh
-Attributes:
-  address = 192.168.1.4 [ primary unique ]
-  monitor_link = yes
-  nfslock [ inherit("service%nfslock") ]
-
-Resource type: fs
-Instances: 1/1
-Agent: fs.sh
-Attributes:
-  name = mount1 [ primary ]
-  mountpoint = /mnt/cluster [ unique required ]
-  device = /dev/sdb8 [ unique required ]
-  fstype = ext3
-  nfslock [ inherit("nfslock") ]
-
-Resource type: fs
-Instances: 1/1
-Agent: fs.sh
-Attributes:
-  name = mount2 [ primary ]
-  mountpoint = /mnt/cluster2 [ unique required ]
-  device = /dev/sdb9 [ unique required ]
-  fstype = ext3
-  nfslock [ inherit("nfslock") ]
+  autostart = 1 [ reconfig ]
+  hardrecovery = 0 [ reconfig ]
+  exclusive = 0 [ reconfig ]
+  nfslock = 0
+  recovery = restart [ reconfig ]
+  depend_mode = hard
+  max_restarts = 0 [ reconfig ]
+  restart_expire_time = 0 [ reconfig ]
 
 Resource type: nfsexport
 Agent: nfsexport.sh
@@ -61,7 +34,7 @@ Attributes:
   device [ inherit("device") ]
   path [ inherit("mountpoint") ]
   fsid [ inherit("fsid") ]
-  nfslock [ inherit("nfslock") ]
+  nfslock [ inherit("service%nfslock") ]
 
 Resource type: nfsclient
 Agent: nfsclient.sh
@@ -70,7 +43,7 @@ Attributes:
   target = @users [ required ]
   path [ inherit("path") ]
   fsid [ inherit("fsid") ]
-  nfslock [ inherit("nfsexport%nfslock") ]
+  nfslock [ inherit("service%nfslock") ]
   options = ro
 
 Resource type: nfsclient
@@ -80,7 +53,7 @@ Attributes:
   target = @admin [ required ]
   path [ inherit("path") ]
   fsid [ inherit("fsid") ]
-  nfslock [ inherit("nfsexport%nfslock") ]
+  nfslock [ inherit("service%nfslock") ]
   options = rw
 
 Resource type: nfsclient
@@ -90,7 +63,7 @@ Attributes:
   target = yellow [ required ]
   path [ inherit("path") ]
   fsid [ inherit("fsid") ]
-  nfslock [ inherit("nfsexport%nfslock") ]
+  nfslock [ inherit("service%nfslock") ]
   options = rw,no_root_squash
 
 Resource type: nfsclient
@@ -100,7 +73,7 @@ Attributes:
   target = magenta [ required ]
   path [ inherit("path") ]
   fsid [ inherit("fsid") ]
-  nfslock [ inherit("nfsexport%nfslock") ]
+  nfslock [ inherit("service%nfslock") ]
   options = rw,no_root_squash
 
 Resource type: nfsclient
@@ -110,37 +83,93 @@ Attributes:
   target = red [ required ]
   path [ inherit("path") ]
   fsid [ inherit("fsid") ]
-  nfslock [ inherit("nfsexport%nfslock") ]
+  nfslock [ inherit("service%nfslock") ]
   options = rw
 
+Resource type: fs
+Instances: 1/1
+Agent: fs.sh
+Attributes:
+  name = mount1 [ primary ]
+  mountpoint = /mnt/cluster [ unique required ]
+  device = /dev/sdb8 [ unique required ]
+  fstype = ext3
+  nfslock [ inherit("service%nfslock") ]
+
+Resource type: fs
+Instances: 1/1
+Agent: fs.sh
+Attributes:
+  name = mount2 [ primary ]
+  mountpoint = /mnt/cluster2 [ unique required ]
+  device = /dev/sdb9 [ unique required ]
+  fstype = ext3
+  nfslock [ inherit("service%nfslock") ]
+
+Resource type: ip
+Instances: 1/1
+Agent: ip.sh
+Attributes:
+  address = 192.168.1.3 [ primary unique ]
+  monitor_link = yes
+  nfslock [ inherit("service%nfslock") ]
+
+Resource type: ip
+Instances: 1/1
+Agent: ip.sh
+Attributes:
+  address = 192.168.1.4 [ primary unique ]
+  monitor_link = yes
+  nfslock [ inherit("service%nfslock") ]
+
+Resource type: script
+Agent: script.sh
+Attributes:
+  name = initscript [ primary unique ]
+  file = /etc/init.d/sshd [ unique required ]
+  service_name [ inherit("service%name") ]
+
 === Resource Tree ===
 service {
   name = "test1";
+  autostart = "1";
+  hardrecovery = "0";
+  exclusive = "0";
+  nfslock = "0";
+  recovery = "restart";
+  depend_mode = "hard";
+  max_restarts = "0";
+  restart_expire_time = "0";
   fs {
     name = "mount1";
     mountpoint = "/mnt/cluster";
     device = "/dev/sdb8";
     fstype = "ext3";
+    nfslock = "0";
     nfsexport {
       name = "Dummy Export";
       device = "/dev/sdb8";
       path = "/mnt/cluster";
+      nfslock = "0";
       nfsclient {
         name = "Admin group";
         target = "@admin";
         path = "/mnt/cluster";
+        nfslock = "0";
         options = "rw";
       }
       nfsclient {
         name = "User group";
         target = "@users";
         path = "/mnt/cluster";
+        nfslock = "0";
         options = "ro";
       }
       nfsclient {
         name = "red";
         target = "red";
         path = "/mnt/cluster";
+        nfslock = "0";
         options = "rw";
       }
     }
@@ -148,6 +177,7 @@ service {
   ip {
     address = "192.168.1.3";
     monitor_link = "yes";
+    nfslock = "0";
   }
   script {
     name = "initscript";
@@ -157,31 +187,44 @@ service {
 }
 service {
   name = "test2";
+  autostart = "1";
+  hardrecovery = "0";
+  exclusive = "0";
+  nfslock = "0";
+  recovery = "restart";
+  depend_mode = "hard";
+  max_restarts = "0";
+  restart_expire_time = "0";
   fs {
     name = "mount2";
     mountpoint = "/mnt/cluster2";
     device = "/dev/sdb9";
     fstype = "ext3";
+    nfslock = "0";
     nfsexport {
       name = "Dummy Export";
       device = "/dev/sdb9";
       path = "/mnt/cluster2";
+      nfslock = "0";
       nfsclient {
         name = "Admin group";
         target = "@admin";
         path = "/mnt/cluster2";
+        nfslock = "0";
         options = "rw";
       }
       nfsclient {
         name = "User group";
         target = "@users";
         path = "/mnt/cluster2";
+        nfslock = "0";
         options = "ro";
       }
       nfsclient {
         name = "red";
         target = "red";
         path = "/mnt/cluster2";
+        nfslock = "0";
         options = "rw";
       }
     }
@@ -189,6 +232,7 @@ service {
   ip {
     address = "192.168.1.4";
     monitor_link = "yes";
+    nfslock = "0";
   }
   script {
     name = "initscript";
@@ -196,3 +240,8 @@ service {
     service_name = "test2";
   }
 }
+=== Event Triggers ===
+Event Priority Level 100:
+  Name: Default
+    (Any event)
+    File: /usr/share/cluster/default_event_script.sl
diff --git a/rgmanager/src/daemons/tests/test015.expected b/rgmanager/src/daemons/tests/test015.expected
index 9663d54..6ac189b 100644
--- a/rgmanager/src/daemons/tests/test015.expected
+++ b/rgmanager/src/daemons/tests/test015.expected
@@ -1,58 +1,31 @@
 === Resources List ===
-Resource type: script
-Agent: script.sh
-Attributes:
-  name = initscript [ primary unique ]
-  file = /etc/init.d/sshd [ unique required ]
-  service_name [ inherit("service%name") ]
-
 Resource type: service [ROOT]
 Instances: 1/1
 Agent: service.sh
 Attributes:
   name = test1 [ primary unique required ]
+  autostart = 1 [ reconfig ]
+  hardrecovery = 0 [ reconfig ]
+  exclusive = 0 [ reconfig ]
+  nfslock = 0
+  recovery = restart [ reconfig ]
+  depend_mode = hard
+  max_restarts = 0 [ reconfig ]
+  restart_expire_time = 0 [ reconfig ]
 
 Resource type: service [ROOT]
 Instances: 1/1
 Agent: service.sh
 Attributes:
   name = test2 [ primary unique required ]
-
-Resource type: ip
-Instances: 1/1
-Agent: ip.sh
-Attributes:
-  address = 192.168.1.3 [ primary unique ]
-  monitor_link = yes
-  nfslock [ inherit("service%nfslock") ]
-
-Resource type: ip
-Instances: 1/1
-Agent: ip.sh
-Attributes:
-  address = 192.168.1.4 [ primary unique ]
-  monitor_link = yes
-  nfslock [ inherit("service%nfslock") ]
-
-Resource type: fs
-Instances: 1/1
-Agent: fs.sh
-Attributes:
-  name = mount1 [ primary ]
-  mountpoint = /mnt/cluster [ unique required ]
-  device = /dev/sdb8 [ unique required ]
-  fstype = ext3
-  nfslock [ inherit("nfslock") ]
-
-Resource type: fs
-Instances: 1/1
-Agent: fs.sh
-Attributes:
-  name = mount2 [ primary ]
-  mountpoint = /mnt/cluster2 [ unique required ]
-  device = /dev/sdb9 [ unique required ]
-  fstype = ext3
-  nfslock [ inherit("nfslock") ]
+  autostart = 1 [ reconfig ]
+  hardrecovery = 0 [ reconfig ]
+  exclusive = 0 [ reconfig ]
+  nfslock = 0
+  recovery = restart [ reconfig ]
+  depend_mode = hard
+  max_restarts = 0 [ reconfig ]
+  restart_expire_time = 0 [ reconfig ]
 
 Resource type: nfsexport
 Agent: nfsexport.sh
@@ -61,7 +34,7 @@ Attributes:
   device [ inherit("device") ]
   path [ inherit("mountpoint") ]
   fsid [ inherit("fsid") ]
-  nfslock [ inherit("nfslock") ]
+  nfslock [ inherit("service%nfslock") ]
 
 Resource type: nfsclient
 Agent: nfsclient.sh
@@ -70,7 +43,7 @@ Attributes:
   target = @users [ required ]
   path [ inherit("path") ]
   fsid [ inherit("fsid") ]
-  nfslock [ inherit("nfsexport%nfslock") ]
+  nfslock [ inherit("service%nfslock") ]
   options = rw,sync
 
 Resource type: nfsclient
@@ -80,7 +53,7 @@ Attributes:
   target = @admin [ required ]
   path [ inherit("path") ]
   fsid [ inherit("fsid") ]
-  nfslock [ inherit("nfsexport%nfslock") ]
+  nfslock [ inherit("service%nfslock") ]
   options = rw
 
 Resource type: nfsclient
@@ -90,7 +63,7 @@ Attributes:
   target = yellow [ required ]
   path [ inherit("path") ]
   fsid [ inherit("fsid") ]
-  nfslock [ inherit("nfsexport%nfslock") ]
+  nfslock [ inherit("service%nfslock") ]
   options = rw,no_root_squash
 
 Resource type: nfsclient
@@ -100,7 +73,7 @@ Attributes:
   target = magenta [ required ]
   path [ inherit("path") ]
   fsid [ inherit("fsid") ]
-  nfslock [ inherit("nfsexport%nfslock") ]
+  nfslock [ inherit("service%nfslock") ]
   options = rw,no_root_squash
 
 Resource type: nfsclient
@@ -110,37 +83,93 @@ Attributes:
   target = red [ required ]
   path [ inherit("path") ]
   fsid [ inherit("fsid") ]
-  nfslock [ inherit("nfsexport%nfslock") ]
+  nfslock [ inherit("service%nfslock") ]
   options = rw
 
+Resource type: fs
+Instances: 1/1
+Agent: fs.sh
+Attributes:
+  name = mount1 [ primary ]
+  mountpoint = /mnt/cluster [ unique required ]
+  device = /dev/sdb8 [ unique required ]
+  fstype = ext3
+  nfslock [ inherit("service%nfslock") ]
+
+Resource type: fs
+Instances: 1/1
+Agent: fs.sh
+Attributes:
+  name = mount2 [ primary ]
+  mountpoint = /mnt/cluster2 [ unique required ]
+  device = /dev/sdb9 [ unique required ]
+  fstype = ext3
+  nfslock [ inherit("service%nfslock") ]
+
+Resource type: ip
+Instances: 1/1
+Agent: ip.sh
+Attributes:
+  address = 192.168.1.3 [ primary unique ]
+  monitor_link = yes
+  nfslock [ inherit("service%nfslock") ]
+
+Resource type: ip
+Instances: 1/1
+Agent: ip.sh
+Attributes:
+  address = 192.168.1.4 [ primary unique ]
+  monitor_link = yes
+  nfslock [ inherit("service%nfslock") ]
+
+Resource type: script
+Agent: script.sh
+Attributes:
+  name = initscript [ primary unique ]
+  file = /etc/init.d/sshd [ unique required ]
+  service_name [ inherit("service%name") ]
+
 === Resource Tree ===
 service {
   name = "test1";
+  autostart = "1";
+  hardrecovery = "0";
+  exclusive = "0";
+  nfslock = "0";
+  recovery = "restart";
+  depend_mode = "hard";
+  max_restarts = "0";
+  restart_expire_time = "0";
   fs {
     name = "mount1";
     mountpoint = "/mnt/cluster";
     device = "/dev/sdb8";
     fstype = "ext3";
+    nfslock = "0";
     nfsexport {
       name = "Dummy Export";
       device = "/dev/sdb8";
       path = "/mnt/cluster";
+      nfslock = "0";
       nfsclient {
         name = "Admin group";
         target = "@admin";
         path = "/mnt/cluster";
+        nfslock = "0";
         options = "rw";
       }
       nfsclient {
         name = "User group";
         target = "@users";
         path = "/mnt/cluster";
+        nfslock = "0";
         options = "rw,sync";
       }
       nfsclient {
         name = "red";
         target = "red";
         path = "/mnt/cluster";
+        nfslock = "0";
         options = "rw";
       }
     }
@@ -148,6 +177,7 @@ service {
   ip {
     address = "192.168.1.3";
     monitor_link = "yes";
+    nfslock = "0";
   }
   script {
     name = "initscript";
@@ -157,31 +187,44 @@ service {
 }
 service {
   name = "test2";
+  autostart = "1";
+  hardrecovery = "0";
+  exclusive = "0";
+  nfslock = "0";
+  recovery = "restart";
+  depend_mode = "hard";
+  max_restarts = "0";
+  restart_expire_time = "0";
   fs {
     name = "mount2";
     mountpoint = "/mnt/cluster2";
     device = "/dev/sdb9";
     fstype = "ext3";
+    nfslock = "0";
     nfsexport {
       name = "Dummy Export";
       device = "/dev/sdb9";
       path = "/mnt/cluster2";
+      nfslock = "0";
       nfsclient {
         name = "Admin group";
         target = "@admin";
         path = "/mnt/cluster2";
+        nfslock = "0";
         options = "rw";
       }
       nfsclient {
         name = "User group";
         target = "@users";
         path = "/mnt/cluster2";
+        nfslock = "0";
         options = "rw,sync";
       }
       nfsclient {
         name = "red";
         target = "red";
         path = "/mnt/cluster2";
+        nfslock = "0";
         options = "rw";
       }
     }
@@ -189,6 +232,7 @@ service {
   ip {
     address = "192.168.1.4";
     monitor_link = "yes";
+    nfslock = "0";
   }
   script {
     name = "initscript";
@@ -196,3 +240,8 @@ service {
     service_name = "test2";
   }
 }
+=== Event Triggers ===
+Event Priority Level 100:
+  Name: Default
+    (Any event)
+    File: /usr/share/cluster/default_event_script.sl
diff --git a/rgmanager/src/daemons/tests/test016.expected b/rgmanager/src/daemons/tests/test016.expected
index 5d8923f..1ce2579 100644
--- a/rgmanager/src/daemons/tests/test016.expected
+++ b/rgmanager/src/daemons/tests/test016.expected
@@ -1,59 +1,32 @@
 Warning: Max references exceeded for resource address (type ip)
 === Resources List ===
-Resource type: script
-Agent: script.sh
-Attributes:
-  name = initscript [ primary unique ]
-  file = /etc/init.d/sshd [ unique required ]
-  service_name [ inherit("service%name") ]
-
 Resource type: service [ROOT]
 Instances: 1/1
 Agent: service.sh
 Attributes:
   name = test1 [ primary unique required ]
+  autostart = 1 [ reconfig ]
+  hardrecovery = 0 [ reconfig ]
+  exclusive = 0 [ reconfig ]
+  nfslock = 0
+  recovery = restart [ reconfig ]
+  depend_mode = hard
+  max_restarts = 0 [ reconfig ]
+  restart_expire_time = 0 [ reconfig ]
 
 Resource type: service [ROOT]
 Instances: 1/1
 Agent: service.sh
 Attributes:
   name = test2 [ primary unique required ]
-
-Resource type: ip
-Instances: 1/1
-Agent: ip.sh
-Attributes:
-  address = 192.168.1.3 [ primary unique ]
-  monitor_link = yes
-  nfslock [ inherit("service%nfslock") ]
-
-Resource type: ip
-Instances: 1/1
-Agent: ip.sh
-Attributes:
-  address = 192.168.1.4 [ primary unique ]
-  monitor_link = yes
-  nfslock [ inherit("service%nfslock") ]
-
-Resource type: fs
-Instances: 1/1
-Agent: fs.sh
-Attributes:
-  name = mount1 [ primary ]
-  mountpoint = /mnt/cluster [ unique required ]
-  device = /dev/sdb8 [ unique required ]
-  fstype = ext3
-  nfslock [ inherit("nfslock") ]
-
-Resource type: fs
-Instances: 1/1
-Agent: fs.sh
-Attributes:
-  name = mount2 [ primary ]
-  mountpoint = /mnt/cluster2 [ unique required ]
-  device = /dev/sdb9 [ unique required ]
-  fstype = ext3
-  nfslock [ inherit("nfslock") ]
+  autostart = 1 [ reconfig ]
+  hardrecovery = 0 [ reconfig ]
+  exclusive = 0 [ reconfig ]
+  nfslock = 0
+  recovery = restart [ reconfig ]
+  depend_mode = hard
+  max_restarts = 0 [ reconfig ]
+  restart_expire_time = 0 [ reconfig ]
 
 Resource type: nfsexport
 Agent: nfsexport.sh
@@ -62,7 +35,7 @@ Attributes:
   device [ inherit("device") ]
   path [ inherit("mountpoint") ]
   fsid [ inherit("fsid") ]
-  nfslock [ inherit("nfslock") ]
+  nfslock [ inherit("service%nfslock") ]
 
 Resource type: nfsclient
 Agent: nfsclient.sh
@@ -71,7 +44,7 @@ Attributes:
   target = @users [ required ]
   path [ inherit("path") ]
   fsid [ inherit("fsid") ]
-  nfslock [ inherit("nfsexport%nfslock") ]
+  nfslock [ inherit("service%nfslock") ]
   options = rw,sync
 
 Resource type: nfsclient
@@ -81,7 +54,7 @@ Attributes:
   target = @admin [ required ]
   path [ inherit("path") ]
   fsid [ inherit("fsid") ]
-  nfslock [ inherit("nfsexport%nfslock") ]
+  nfslock [ inherit("service%nfslock") ]
   options = rw
 
 Resource type: nfsclient
@@ -91,7 +64,7 @@ Attributes:
   target = yellow [ required ]
   path [ inherit("path") ]
   fsid [ inherit("fsid") ]
-  nfslock [ inherit("nfsexport%nfslock") ]
+  nfslock [ inherit("service%nfslock") ]
   options = rw,no_root_squash
 
 Resource type: nfsclient
@@ -101,7 +74,7 @@ Attributes:
   target = magenta [ required ]
   path [ inherit("path") ]
   fsid [ inherit("fsid") ]
-  nfslock [ inherit("nfsexport%nfslock") ]
+  nfslock [ inherit("service%nfslock") ]
   options = rw,no_root_squash
 
 Resource type: nfsclient
@@ -111,37 +84,93 @@ Attributes:
   target = red [ required ]
   path [ inherit("path") ]
   fsid [ inherit("fsid") ]
-  nfslock [ inherit("nfsexport%nfslock") ]
+  nfslock [ inherit("service%nfslock") ]
   options = rw
 
+Resource type: fs
+Instances: 1/1
+Agent: fs.sh
+Attributes:
+  name = mount1 [ primary ]
+  mountpoint = /mnt/cluster [ unique required ]
+  device = /dev/sdb8 [ unique required ]
+  fstype = ext3
+  nfslock [ inherit("service%nfslock") ]
+
+Resource type: fs
+Instances: 1/1
+Agent: fs.sh
+Attributes:
+  name = mount2 [ primary ]
+  mountpoint = /mnt/cluster2 [ unique required ]
+  device = /dev/sdb9 [ unique required ]
+  fstype = ext3
+  nfslock [ inherit("service%nfslock") ]
+
+Resource type: ip
+Instances: 1/1
+Agent: ip.sh
+Attributes:
+  address = 192.168.1.3 [ primary unique ]
+  monitor_link = yes
+  nfslock [ inherit("service%nfslock") ]
+
+Resource type: ip
+Instances: 1/1
+Agent: ip.sh
+Attributes:
+  address = 192.168.1.4 [ primary unique ]
+  monitor_link = yes
+  nfslock [ inherit("service%nfslock") ]
+
+Resource type: script
+Agent: script.sh
+Attributes:
+  name = initscript [ primary unique ]
+  file = /etc/init.d/sshd [ unique required ]
+  service_name [ inherit("service%name") ]
+
 === Resource Tree ===
 service {
   name = "test1";
+  autostart = "1";
+  hardrecovery = "0";
+  exclusive = "0";
+  nfslock = "0";
+  recovery = "restart";
+  depend_mode = "hard";
+  max_restarts = "0";
+  restart_expire_time = "0";
   fs {
     name = "mount1";
     mountpoint = "/mnt/cluster";
     device = "/dev/sdb8";
     fstype = "ext3";
+    nfslock = "0";
     nfsexport {
       name = "Dummy Export";
       device = "/dev/sdb8";
       path = "/mnt/cluster";
+      nfslock = "0";
       nfsclient {
         name = "Admin group";
         target = "@admin";
         path = "/mnt/cluster";
+        nfslock = "0";
         options = "rw";
       }
       nfsclient {
         name = "User group";
         target = "@users";
         path = "/mnt/cluster";
+        nfslock = "0";
         options = "rw,sync";
       }
       nfsclient {
         name = "red";
         target = "red";
         path = "/mnt/cluster";
+        nfslock = "0";
         options = "rw";
       }
     }
@@ -149,6 +178,7 @@ service {
   ip {
     address = "192.168.1.3";
     monitor_link = "yes";
+    nfslock = "0";
   }
   script {
     name = "initscript";
@@ -158,31 +188,44 @@ service {
 }
 service {
   name = "test2";
+  autostart = "1";
+  hardrecovery = "0";
+  exclusive = "0";
+  nfslock = "0";
+  recovery = "restart";
+  depend_mode = "hard";
+  max_restarts = "0";
+  restart_expire_time = "0";
   fs {
     name = "mount2";
     mountpoint = "/mnt/cluster2";
     device = "/dev/sdb9";
     fstype = "ext3";
+    nfslock = "0";
     nfsexport {
       name = "Dummy Export";
       device = "/dev/sdb9";
       path = "/mnt/cluster2";
+      nfslock = "0";
       nfsclient {
         name = "Admin group";
         target = "@admin";
         path = "/mnt/cluster2";
+        nfslock = "0";
         options = "rw";
       }
       nfsclient {
         name = "User group";
         target = "@users";
         path = "/mnt/cluster2";
+        nfslock = "0";
         options = "rw,sync";
       }
       nfsclient {
         name = "red";
         target = "red";
         path = "/mnt/cluster2";
+        nfslock = "0";
         options = "rw";
       }
     }
@@ -190,6 +233,7 @@ service {
   ip {
     address = "192.168.1.4";
     monitor_link = "yes";
+    nfslock = "0";
   }
   script {
     name = "initscript";
@@ -197,3 +241,8 @@ service {
     service_name = "test2";
   }
 }
+=== Event Triggers ===
+Event Priority Level 100:
+  Name: Default
+    (Any event)
+    File: /usr/share/cluster/default_event_script.sl
diff --git a/rgmanager/src/daemons/tests/test017.expected b/rgmanager/src/daemons/tests/test017.expected
index a1d0d6f..4727803 100644
--- a/rgmanager/src/daemons/tests/test017.expected
+++ b/rgmanager/src/daemons/tests/test017.expected
@@ -1,72 +1,31 @@
 === Resources List ===
-Resource type: script
-Agent: script.sh
-Attributes:
-  name = initscript [ primary unique ]
-  file = /etc/init.d/sshd [ unique required ]
-  service_name [ inherit("service%name") ]
-
-Resource type: script
-Agent: script.sh
-Attributes:
-  name = script2 [ primary unique ]
-  file = /etc/init.d/script2 [ unique required ]
-  service_name [ inherit("service%name") ]
-
-Resource type: script
-Agent: script.sh
-Attributes:
-  name = script3 [ primary unique ]
-  file = /etc/init.d/script3 [ unique required ]
-  service_name [ inherit("service%name") ]
-
 Resource type: service [ROOT]
 Instances: 1/1
 Agent: service.sh
 Attributes:
   name = test1 [ primary unique required ]
+  autostart = 1 [ reconfig ]
+  hardrecovery = 0 [ reconfig ]
+  exclusive = 0 [ reconfig ]
+  nfslock = 0
+  recovery = restart [ reconfig ]
+  depend_mode = hard
+  max_restarts = 0 [ reconfig ]
+  restart_expire_time = 0 [ reconfig ]
 
 Resource type: service [ROOT]
 Instances: 1/1
 Agent: service.sh
 Attributes:
   name = test2 [ primary unique required ]
-
-Resource type: ip
-Instances: 1/1
-Agent: ip.sh
-Attributes:
-  address = 192.168.1.3 [ primary unique ]
-  monitor_link = yes
-  nfslock [ inherit("service%nfslock") ]
-
-Resource type: ip
-Instances: 1/1
-Agent: ip.sh
-Attributes:
-  address = 192.168.1.4 [ primary unique ]
-  monitor_link = yes
-  nfslock [ inherit("service%nfslock") ]
-
-Resource type: fs
-Instances: 1/1
-Agent: fs.sh
-Attributes:
-  name = mount1 [ primary ]
-  mountpoint = /mnt/cluster [ unique required ]
-  device = /dev/sdb8 [ unique required ]
-  fstype = ext3
-  nfslock [ inherit("nfslock") ]
-
-Resource type: fs
-Instances: 1/1
-Agent: fs.sh
-Attributes:
-  name = mount2 [ primary ]
-  mountpoint = /mnt/cluster2 [ unique required ]
-  device = /dev/sdb9 [ unique required ]
-  fstype = ext3
-  nfslock [ inherit("nfslock") ]
+  autostart = 1 [ reconfig ]
+  hardrecovery = 0 [ reconfig ]
+  exclusive = 0 [ reconfig ]
+  nfslock = 0
+  recovery = restart [ reconfig ]
+  depend_mode = hard
+  max_restarts = 0 [ reconfig ]
+  restart_expire_time = 0 [ reconfig ]
 
 Resource type: nfsexport
 Agent: nfsexport.sh
@@ -75,7 +34,7 @@ Attributes:
   device [ inherit("device") ]
   path [ inherit("mountpoint") ]
   fsid [ inherit("fsid") ]
-  nfslock [ inherit("nfslock") ]
+  nfslock [ inherit("service%nfslock") ]
 
 Resource type: nfsclient
 Agent: nfsclient.sh
@@ -84,7 +43,7 @@ Attributes:
   target = @users [ required ]
   path [ inherit("path") ]
   fsid [ inherit("fsid") ]
-  nfslock [ inherit("nfsexport%nfslock") ]
+  nfslock [ inherit("service%nfslock") ]
   options = rw,sync
 
 Resource type: nfsclient
@@ -94,7 +53,7 @@ Attributes:
   target = @admin [ required ]
   path [ inherit("path") ]
   fsid [ inherit("fsid") ]
-  nfslock [ inherit("nfsexport%nfslock") ]
+  nfslock [ inherit("service%nfslock") ]
   options = rw
 
 Resource type: nfsclient
@@ -104,7 +63,7 @@ Attributes:
   target = yellow [ required ]
   path [ inherit("path") ]
   fsid [ inherit("fsid") ]
-  nfslock [ inherit("nfsexport%nfslock") ]
+  nfslock [ inherit("service%nfslock") ]
   options = rw,no_root_squash
 
 Resource type: nfsclient
@@ -114,7 +73,7 @@ Attributes:
   target = magenta [ required ]
   path [ inherit("path") ]
   fsid [ inherit("fsid") ]
-  nfslock [ inherit("nfsexport%nfslock") ]
+  nfslock [ inherit("service%nfslock") ]
   options = rw,no_root_squash
 
 Resource type: nfsclient
@@ -124,37 +83,107 @@ Attributes:
   target = red [ required ]
   path [ inherit("path") ]
   fsid [ inherit("fsid") ]
-  nfslock [ inherit("nfsexport%nfslock") ]
+  nfslock [ inherit("service%nfslock") ]
   options = rw
 
+Resource type: fs
+Instances: 1/1
+Agent: fs.sh
+Attributes:
+  name = mount1 [ primary ]
+  mountpoint = /mnt/cluster [ unique required ]
+  device = /dev/sdb8 [ unique required ]
+  fstype = ext3
+  nfslock [ inherit("service%nfslock") ]
+
+Resource type: fs
+Instances: 1/1
+Agent: fs.sh
+Attributes:
+  name = mount2 [ primary ]
+  mountpoint = /mnt/cluster2 [ unique required ]
+  device = /dev/sdb9 [ unique required ]
+  fstype = ext3
+  nfslock [ inherit("service%nfslock") ]
+
+Resource type: ip
+Instances: 1/1
+Agent: ip.sh
+Attributes:
+  address = 192.168.1.3 [ primary unique ]
+  monitor_link = yes
+  nfslock [ inherit("service%nfslock") ]
+
+Resource type: ip
+Instances: 1/1
+Agent: ip.sh
+Attributes:
+  address = 192.168.1.4 [ primary unique ]
+  monitor_link = yes
+  nfslock [ inherit("service%nfslock") ]
+
+Resource type: script
+Agent: script.sh
+Attributes:
+  name = initscript [ primary unique ]
+  file = /etc/init.d/sshd [ unique required ]
+  service_name [ inherit("service%name") ]
+
+Resource type: script
+Agent: script.sh
+Attributes:
+  name = script2 [ primary unique ]
+  file = /etc/init.d/script2 [ unique required ]
+  service_name [ inherit("service%name") ]
+
+Resource type: script
+Agent: script.sh
+Attributes:
+  name = script3 [ primary unique ]
+  file = /etc/init.d/script3 [ unique required ]
+  service_name [ inherit("service%name") ]
+
 === Resource Tree ===
 service {
   name = "test1";
+  autostart = "1";
+  hardrecovery = "0";
+  exclusive = "0";
+  nfslock = "0";
+  recovery = "restart";
+  depend_mode = "hard";
+  max_restarts = "0";
+  restart_expire_time = "0";
   fs {
     name = "mount1";
     mountpoint = "/mnt/cluster";
     device = "/dev/sdb8";
     fstype = "ext3";
+    nfslock = "0";
     nfsexport {
       name = "Dummy Export";
       device = "/dev/sdb8";
       path = "/mnt/cluster";
+      nfslock = "0";
       nfsclient {
         name = "Admin group";
         target = "@admin";
         path = "/mnt/cluster";
+        nfslock = "0";
         options = "rw";
       }
       nfsclient {
         name = "User group";
         target = "@users";
         path = "/mnt/cluster";
+        nfslock = "0";
         options = "rw,sync";
       }
       nfsclient {
         name = "red";
         target = "red";
         path = "/mnt/cluster";
+        nfslock = "0";
         options = "rw";
       }
     }
@@ -167,6 +196,14 @@ service {
 }
 service {
   name = "test2";
+  autostart = "1";
+  hardrecovery = "0";
+  exclusive = "0";
+  nfslock = "0";
+  recovery = "restart";
+  depend_mode = "hard";
+  max_restarts = "0";
+  restart_expire_time = "0";
   script {
     name = "initscript";
     file = "/etc/init.d/sshd";
@@ -174,32 +211,38 @@ service {
     ip {
       address = "192.168.1.3";
       monitor_link = "yes";
+      nfslock = "0";
     }
     fs {
       name = "mount2";
       mountpoint = "/mnt/cluster2";
       device = "/dev/sdb9";
       fstype = "ext3";
+      nfslock = "0";
       nfsexport {
         name = "Dummy Export";
         device = "/dev/sdb9";
         path = "/mnt/cluster2";
+        nfslock = "0";
         nfsclient {
           name = "Admin group";
           target = "@admin";
           path = "/mnt/cluster2";
+          nfslock = "0";
           options = "rw";
         }
         nfsclient {
           name = "User group";
           target = "@users";
           path = "/mnt/cluster2";
+          nfslock = "0";
           options = "rw,sync";
         }
         nfsclient {
           name = "red";
           target = "red";
           path = "/mnt/cluster2";
+          nfslock = "0";
           options = "rw";
         }
       }
@@ -207,10 +250,12 @@ service {
     script {
       name = "script2";
       file = "/etc/init.d/script2";
+      service_name = "test2";
     }
     ip {
       address = "192.168.1.4";
       monitor_link = "yes";
+      nfslock = "0";
     }
   }
   script {
@@ -219,3 +264,8 @@ service {
     service_name = "test2";
   }
 }
+=== Event Triggers ===
+Event Priority Level 100:
+  Name: Default
+    (Any event)
+    File: /usr/share/cluster/default_event_script.sl
diff --git a/rgmanager/src/daemons/tests/test018.expected b/rgmanager/src/daemons/tests/test018.expected
new file mode 100644
index 0000000..225b150
--- /dev/null
+++ b/rgmanager/src/daemons/tests/test018.expected
@@ -0,0 +1,291 @@
+=== Resources List ===
+Resource type: clusterfs
+Agent: clusterfs.sh
+Attributes:
+  name = argle [ primary ]
+  mountpoint = /mnt/cluster3 [ unique required ]
+  device = /dev/sdb10 [ unique required ]
+  nfslock [ inherit("service%nfslock") ]
+
+Resource type: service [ROOT]
+Instances: 1/1
+Agent: service.sh
+Attributes:
+  name = test1 [ primary unique required ]
+  autostart = 1 [ reconfig ]
+  hardrecovery = 0 [ reconfig ]
+  exclusive = 0 [ reconfig ]
+  nfslock = 0
+  recovery = restart [ reconfig ]
+  depend_mode = hard
+  max_restarts = 0 [ reconfig ]
+  restart_expire_time = 0 [ reconfig ]
+
+Resource type: service [ROOT]
+Instances: 1/1
+Agent: service.sh
+Attributes:
+  name = test2 [ primary unique required ]
+  autostart = 1 [ reconfig ]
+  hardrecovery = 0 [ reconfig ]
+  exclusive = 0 [ reconfig ]
+  nfslock = 0
+  recovery = restart [ reconfig ]
+  depend_mode = hard
+  max_restarts = 0 [ reconfig ]
+  restart_expire_time = 0 [ reconfig ]
+
+Resource type: nfsexport
+Agent: nfsexport.sh
+Attributes:
+  name = Dummy Export [ primary ]
+  device [ inherit("device") ]
+  path [ inherit("mountpoint") ]
+  fsid [ inherit("fsid") ]
+  nfslock [ inherit("service%nfslock") ]
+
+Resource type: nfsclient
+Agent: nfsclient.sh
+Attributes:
+  name = User group [ primary unique ]
+  target = @users [ required ]
+  path [ inherit("path") ]
+  fsid [ inherit("fsid") ]
+  nfslock [ inherit("service%nfslock") ]
+  options = rw,sync
+
+Resource type: nfsclient
+Agent: nfsclient.sh
+Attributes:
+  name = Admin group [ primary unique ]
+  target = @admin [ required ]
+  path [ inherit("path") ]
+  fsid [ inherit("fsid") ]
+  nfslock [ inherit("service%nfslock") ]
+  options = rw
+
+Resource type: nfsclient
+Agent: nfsclient.sh
+Attributes:
+  name = yellow [ primary unique ]
+  target = yellow [ required ]
+  path [ inherit("path") ]
+  fsid [ inherit("fsid") ]
+  nfslock [ inherit("service%nfslock") ]
+  options = rw,no_root_squash
+
+Resource type: nfsclient
+Agent: nfsclient.sh
+Attributes:
+  name = magenta [ primary unique ]
+  target = magenta [ required ]
+  path [ inherit("path") ]
+  fsid [ inherit("fsid") ]
+  nfslock [ inherit("service%nfslock") ]
+  options = rw,no_root_squash
+
+Resource type: nfsclient
+Agent: nfsclient.sh
+Attributes:
+  name = red [ primary unique ]
+  target = red [ required ]
+  path [ inherit("path") ]
+  fsid [ inherit("fsid") ]
+  nfslock [ inherit("service%nfslock") ]
+  options = rw
+
+Resource type: fs
+Instances: 1/1
+Agent: fs.sh
+Attributes:
+  name = mount1 [ primary ]
+  mountpoint = /mnt/cluster [ unique required ]
+  device = /dev/sdb8 [ unique required ]
+  fstype = ext3
+  nfslock [ inherit("service%nfslock") ]
+
+Resource type: fs
+Instances: 1/1
+Agent: fs.sh
+Attributes:
+  name = mount2 [ primary ]
+  mountpoint = /mnt/cluster2 [ unique required ]
+  device = /dev/sdb9 [ unique required ]
+  fstype = ext3
+  nfslock [ inherit("service%nfslock") ]
+
+Resource type: ip
+Instances: 1/1
+Agent: ip.sh
+Attributes:
+  address = 192.168.1.3 [ primary unique ]
+  monitor_link = yes
+  nfslock [ inherit("service%nfslock") ]
+
+Resource type: ip
+Instances: 1/1
+Agent: ip.sh
+Attributes:
+  address = 192.168.1.4 [ primary unique ]
+  monitor_link = yes
+  nfslock [ inherit("service%nfslock") ]
+
+Resource type: script
+Agent: script.sh
+Attributes:
+  name = initscript [ primary unique ]
+  file = /etc/init.d/sshd [ unique required ]
+  service_name [ inherit("service%name") ]
+
+Resource type: script
+Agent: script.sh
+Attributes:
+  name = script2 [ primary unique ]
+  file = /etc/init.d/script2 [ unique required ]
+  service_name [ inherit("service%name") ]
+
+Resource type: script
+Agent: script.sh
+Attributes:
+  name = script3 [ primary unique ]
+  file = /etc/init.d/script3 [ unique required ]
+  service_name [ inherit("service%name") ]
+
+=== Resource Tree ===
+service {
+  name = "test1";
+  autostart = "1";
+  hardrecovery = "0";
+  exclusive = "0";
+  nfslock = "0";
+  recovery = "restart";
+  depend_mode = "hard";
+  max_restarts = "0";
+  restart_expire_time = "0";
+  fs {
+    name = "mount1";
+    mountpoint = "/mnt/cluster";
+    device = "/dev/sdb8";
+    fstype = "ext3";
+    nfslock = "0";
+    nfsexport {
+      name = "Dummy Export";
+      device = "/dev/sdb8";
+      path = "/mnt/cluster";
+      nfslock = "0";
+      nfsclient {
+        name = "Admin group";
+        target = "@admin";
+        path = "/mnt/cluster";
+        nfslock = "0";
+        options = "rw";
+      }
+      nfsclient {
+        name = "User group";
+        target = "@users";
+        path = "/mnt/cluster";
+        nfslock = "0";
+        options = "rw,sync";
+      }
+      nfsclient {
+        name = "red";
+        target = "red";
+        path = "/mnt/cluster";
+        nfslock = "0";
+        options = "rw";
+      }
+    }
+  }
+  script {
+    name = "initscript";
+    file = "/etc/init.d/sshd";
+    service_name = "test1";
+    clusterfs {
+      name = "argle";
+      mountpoint = "/mnt/cluster3";
+      device = "/dev/sdb10";
+      nfslock = "0";
+    }
+  }
+}
+service {
+  name = "test2";
+  autostart = "1";
+  hardrecovery = "0";
+  exclusive = "0";
+  nfslock = "0";
+  recovery = "restart";
+  depend_mode = "hard";
+  max_restarts = "0";
+  restart_expire_time = "0";
+  script {
+    name = "initscript";
+    file = "/etc/init.d/sshd";
+    service_name = "test2";
+    clusterfs {
+      name = "argle";
+      mountpoint = "/mnt/cluster3";
+      device = "/dev/sdb10";
+      nfslock = "0";
+    }
+    ip {
+      address = "192.168.1.3";
+      monitor_link = "yes";
+      nfslock = "0";
+    }
+    fs {
+      name = "mount2";
+      mountpoint = "/mnt/cluster2";
+      device = "/dev/sdb9";
+      fstype = "ext3";
+      nfslock = "0";
+      nfsexport {
+        name = "Dummy Export";
+        device = "/dev/sdb9";
+        path = "/mnt/cluster2";
+        nfslock = "0";
+        nfsclient {
+          name = "Admin group";
+          target = "@admin";
+          path = "/mnt/cluster2";
+          nfslock = "0";
+          options = "rw";
+        }
+        nfsclient {
+          name = "User group";
+          target = "@users";
+          path = "/mnt/cluster2";
+          nfslock = "0";
+          options = "rw,sync";
+        }
+        nfsclient {
+          name = "red";
+          target = "red";
+          path = "/mnt/cluster2";
+          nfslock = "0";
+          options = "rw";
+        }
+      }
+    }
+    script {
+      name = "script2";
+      file = "/etc/init.d/script2";
+      service_name = "test2";
+    }
+    ip {
+      address = "192.168.1.4";
+      monitor_link = "yes";
+      nfslock = "0";
+    }
+  }
+  script {
+    name = "script3";
+    file = "/etc/init.d/script3";
+    service_name = "test2";
+  }
+}
+=== Event Triggers ===
+Event Priority Level 100:
+  Name: Default
+    (Any event)
+    File: /usr/share/cluster/default_event_script.sl
diff --git a/rgmanager/src/resources/Makefile b/rgmanager/src/resources/Makefile
index d543468..c1f4a71 100644
--- a/rgmanager/src/resources/Makefile
+++ b/rgmanager/src/resources/Makefile
@@ -21,11 +21,12 @@ RESOURCES=fs.sh service.sh ip.sh nfsclient.sh nfsexport.sh \
 	script.sh netfs.sh clusterfs.sh smb.sh \
 	apache.sh openldap.sh samba.sh mysql.sh \
 	postgres-8.sh tomcat-5.sh lvm.sh lvm_by_lv.sh lvm_by_vg.sh \
-	SAPInstance SAPDatabase oracledb.sh
+	SAPInstance SAPDatabase named.sh \
+	oracledb.sh
 
 METADATA=apache.metadata openldap.metadata samba.metadata \
 	mysql.metadata postgres-8.metadata tomcat-5.metadata \
-	lvm.metadata
+	named.metadata lvm.metadata
 
 TARGETS=${RESOURCES} ocf-shellfuncs svclib_nfslock
 
@@ -34,6 +35,9 @@ UTIL_TARGETS= \
 	utils/httpd-parse-config.pl utils/tomcat-parse-config.pl \
 	utils/member_util.sh
 
+EVENT_TARGETS= \
+	default_event_script.sl
+
 all:
 
 install: all
@@ -44,6 +48,7 @@ install: all
 	install $(TARGETS) ${sharedir}
 	install $(UTIL_TARGETS) ${sharedir}/utils
 	install -m 644 $(METADATA) ${sharedir}
+	install -m 644 $(EVENT_TARGETS) ${sharedir}
 
 uninstall:
 	${UNINSTALL} ${UTIL_TARGETS} ${sharedir}/utils
diff --git a/rgmanager/src/resources/clusterfs.sh b/rgmanager/src/resources/clusterfs.sh
index 0bf14f0..bf2a3d1 100755
--- a/rgmanager/src/resources/clusterfs.sh
+++ b/rgmanager/src/resources/clusterfs.sh
@@ -366,7 +366,7 @@ mountInUse () {
 	typeset junk
 
 	if [ $# -ne 2 ]; then
-		logAndPrint $LOG_ERR "Usage: mountInUse device mount_point".
+		ocf_log err "Usage: mountInUse device mount_point".
 		return $FAIL
 	fi
 
@@ -408,7 +408,7 @@ isMounted () {
 		ocf_log err "isMounted: Could not match $1 with a real device"
 		return $FAIL
 	fi
-	mp=$2
+	mp=$(readlink -f $2)
 	
 	while read tmp_dev tmp_mp
 	do
@@ -444,14 +444,14 @@ isAlive()
 	declare rw
 	
 	if [ $# -ne 1 ]; then
-	        logAndPrint $LOG_ERR "Usage: isAlive mount_point"
+	        ocf_log err "Usage: isAlive mount_point"
 		return $FAIL
 	fi
 	mount_point=$1
 	
 	test -d $mount_point
 	if [ $? -ne 0 ]; then
-		logAndPrint $LOG_ERR "$mount_point is not a directory"
+		ocf_log err "$mount_point is not a directory"
 		return $FAIL
 	fi
 	
@@ -738,7 +738,7 @@ Cannot mount $dev on $mp, the device or mount point is already in use!"
 	#
 	# Mount the device
 	#
-	logAndPrint $LOG_DEBUG "mount $fstype_option $mount_options $dev $mp"
+	ocf_log debug "mount $fstype_option $mount_options $dev $mp"
 	mount $fstype_option $mount_options $dev $mp
 	ret_val=$?
 	if [ $ret_val -ne 0 ]; then
@@ -761,6 +761,7 @@ stopFilesystem() {
 	typeset -i try=1
 	typeset -i max_tries=3		# how many times to try umount
 	typeset -i sleep_time=2		# time between each umount failure
+	typeset -i refs=0
 	typeset done=""
 	typeset umount_failed=""
 	typeset force_umount=""
@@ -815,6 +816,18 @@ stop: Could not match $OCF_RESKEY_device with a real device"
 		esac
 	fi
 
+	#
+	# Check the rgmanager-supplied reference count if one exists.
+	# If the reference count is <= 1, we can safely proceed
+	#
+	if [ -n "$OCF_RESKEY_RGMANAGER_meta_refcnt" ]; then
+		refs=$OCF_RESKEY_RGMANAGER_meta_refcnt
+		if [ $refs -gt 1 ]; then
+			((refs--))
+			ocf_log debug "Not unmounting $OCF_RESOURCE_INSTANCE - still in use by $refs other service(s)"
+			return $OCF_SUCCESS
+		fi
+	fi
 
 	#
 	# Always do this hackery on clustered file systems.
@@ -825,10 +838,10 @@ stop: Could not match $OCF_RESKEY_device with a real device"
 		mkdir -p $mp/.clumanager/statd
 		pkill -KILL -x lockd
 		# Copy out the notify list; our 
-			# IPs are already torn down
-			if notify_list_store $mp/.clumanager/statd; then
-				notify_list_broadcast $mp/.clumanager/statd
-			fi
+		# IPs are already torn down
+		if notify_list_store $mp/.clumanager/statd; then
+			notify_list_broadcast $mp/.clumanager/statd
+		fi
 	fi
 
 	# Always invalidate buffers on clusterfs resources
@@ -857,7 +870,7 @@ stop: Could not match $OCF_RESKEY_device with a real device"
 			sync; sync; sync
 			ocf_log info "unmounting $dev ($mp)"
 
-			umount $dev
+			umount $mp
 			if  [ $? -eq 0 ]; then
 				umount_failed=
 				done=$YES
@@ -907,8 +920,20 @@ stop: Could not match $OCF_RESKEY_device with a real device"
 
 case $1 in
 start)
-	startFilesystem
-	exit $?
+	declare tries=0
+	declare rv
+
+	while [ $tries -lt 3 ]; do
+		startFilesystem
+		rv=$?
+		if [ $rv -eq 0 ]; then
+			exit 0
+		fi
+
+		((tries++))
+		sleep 3
+	done
+	exit $rv
 	;;
 stop)
 	stopFilesystem
@@ -916,16 +941,12 @@ stop)
 	;;
 status|monitor)
   	isMounted ${OCF_RESKEY_device} ${OCF_RESKEY_mountpoint}
- 	if [ $? -ne $YES ]; then
-		ocf_log err "fs:${OCF_RESKEY_name}: ${OCF_RESKEY_device} is not mounted on ${OCF_RESKEY_mountpoint}"
-		exit $OCF_ERR_GENERIC
-	fi
+ 	[ $? -ne $YES ] && exit $OCF_ERR_GENERIC
 
  	isAlive ${OCF_RESKEY_mountpoint}
- 	[ $? -eq $YES ] && exit 0
-
-	ocf_log err "clusterfs:${OCF_RESKEY_name}: Mount point is not accessible!"
-	exit $OCF_ERR_GENERIC
+ 	[ $? -ne $YES ] && exit $OCF_ERR_GENERIC
+ 	
+	exit 0
 	;;
 restart)
 	stopFilesystem
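
For illustration, a minimal bash sketch of the reference-count guard
added to stopFilesystem above.  The refcnt value and the ocf_log
helper are stand-ins for what rgmanager supplies at runtime:

    #!/bin/bash
    # Stand-ins for the rgmanager-provided environment and helpers.
    OCF_SUCCESS=0
    ocf_log() { echo "<$1> $2"; }

    # Skip the unmount while other services still hold a reference.
    maybe_skip_umount() {
        local refs=${OCF_RESKEY_RGMANAGER_meta_refcnt:-0}
        if [ "$refs" -gt 1 ]; then
            refs=$((refs - 1))
            ocf_log debug "Not unmounting - still in use by $refs other service(s)"
            return $OCF_SUCCESS
        fi
        return 1    # caller falls through to the real umount
    }

    OCF_RESKEY_RGMANAGER_meta_refcnt=2   # e.g. two services share the mount
    maybe_skip_umount || echo "would umount now"
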
diff --git a/rgmanager/src/resources/default_event_script.sl b/rgmanager/src/resources/default_event_script.sl
new file mode 100644
index 0000000..e961266
--- /dev/null
+++ b/rgmanager/src/resources/default_event_script.sl
@@ -0,0 +1,314 @@
+define node_in_set(node_list, node)
+{
+	variable x, len;
+
+	len = length(node_list);
+	for (x = 0; x < len; x++) {
+		if (node_list[x] == node)
+			return 1;
+	}
+
+	return 0;
+}
+
+define move_or_start(service, node_list)
+{
+	variable len;
+	variable state, owner;
+	variable depends;
+
+	depends = service_property(service, "depend");
+	if (depends != "") {
+		(owner, state) = service_status(depends);
+		if (owner < 0) {
+			debug(service, " is not runnable; dependency not met");
+			return ERR_DEPEND;
+		}
+	}
+
+	(owner, state) = service_status(service);
+	debug("Evaluating ", service, " state=", state, " owner=", owner);
+
+	len = length(node_list);
+	if (len == 0) {
+		debug(service, " is not runnable");
+		return ERR_DOMAIN;
+	}
+
+	if (((event_type != EVENT_USER) and (state == "disabled")) or (state == "failed")) {
+		%
+		% Commenting out this block will -not- allow you to
+		% recover failed services from event scripts.  Sorry.
+		% All it will get you is a false log message about
+		% starting this service.
+		%
+		% You may enable disabled services, but I recommend
+		% against it.
+		%
+		debug(service, " is not runnable");
+		return -1;
+	}
+
+	if (node_list[0] == owner) {
+		debug(service, " is already running on best node");
+		return ERR_RUNNING;
+	}
+
+	if ((owner >= 0) and (node_in_set(node_list, owner) == 1)) {
+		notice("Moving ", service, " from ", owner,
+		       " to ", node_list);
+		if (service_stop(service) < 0) {
+			return ERR_ABORT;
+		}
+	} else {
+		notice("Starting ", service, " on ", node_list);
+	}
+
+	return service_start(service, node_list);
+}
+
+
+%
+% Returns the set of online nodes in preferred/shuffled order which
+% are allowed to run this service.  Gives highest preference to current
+% owner if nofailback is specified.
+% 
+define allowed_nodes(service)
+{
+	variable anodes;
+	variable online;
+	variable nodes_domain;
+	variable ordered, restricted, nofailback;
+	variable state, owner;
+	variable depends;
+
+	(nofailback, restricted, ordered, nodes_domain) =
+			service_domain_info(service);
+
+	(owner, state) = service_status(service);
+
+	anodes = nodes_online();
+
+	% Shuffle the array so we don't start all services on the same
+	% node.  TODO - add RR, Least-services, placement policies...
+	online = shuffle(anodes);
+
+	if (restricted == 1) {
+		anodes = intersection(nodes_domain, online);
+	} else {
+		% Ordered failover domains (nodes_domain) unioned with the
+		% online nodes basically just reorders the online node list
+		% according to failover domain priority rules.
+		anodes = union(intersection(nodes_domain, online),
+			       online);
+	}
+
+	if ((nofailback == 1) or (ordered == 0)) {
+		
+		if ((owner < 0) or (node_in_set(anodes, owner) == 0)) {
+			return anodes;
+		}
+		
+		% Because union takes left as priority, we can
+		% return the union of the current owner with the
+		% allowed node list.  This means the service will
+		% remain on the same node it's currently on.
+		return union(owner, anodes);
+	}
+
+	return anodes;
+}
+
+
+define default_node_event_handler()
+{
+	variable services = service_list();
+	variable x;
+	variable nodes;
+
+	% debug("Executing default node event handler");
+	for (x = 0; x < length(services); x++) {
+		nodes = allowed_nodes(services[x]);
+		()=move_or_start(services[x], nodes);
+	}
+}
+
+
+define default_service_event_handler()
+{
+	variable services = service_list();
+	variable x;
+	variable depends;
+	variable depend_mode;
+	variable policy;
+	variable nodes;
+	variable tmp;
+	variable owner;
+	variable state;
+
+	% debug("Executing default service event handler");
+
+	if (service_state == "recovering") {
+
+		policy = service_property(service_name, "recovery");
+		debug("Recovering",
+		      " Service: ", service_name,
+		      " Last owner: ", service_last_owner,
+		      " Policy: ", policy,
+		      " RTE: ", service_restarts_exceeded);
+
+		if (policy == "disable") {
+			() = service_stop(service_name, 1);
+			return;
+		}
+
+		nodes = allowed_nodes(service_name);
+		if (policy == "restart" and service_restarts_exceeded == 0) {
+			nodes = union(service_last_owner, nodes);
+		} else {
+			% relocate 
+			tmp = subtract(nodes, service_last_owner);
+			if (length(tmp) == 0) {
+				() = service_stop(service_name,0);
+				return;
+			}
+
+			nodes = union(tmp, service_last_owner);
+		}
+
+		()=move_or_start(service_name, nodes);
+
+		return;
+	}
+
+	for (x = 0; x < length(services); x++) {
+		if (service_name == services[x]) {
+			% don't do anything to ourself! 
+			continue;
+		}
+
+		%
+		% Simplistic dependency handling
+		%
+		depends = service_property(services[x], "depend");
+		depend_mode = service_property(services[x], "depend_mode");
+
+		% No dependency; do nothing
+		if (depends != service_name) {
+			continue;
+		}
+
+		(owner, state) = service_status(services[x]);
+		if ((service_state == "started") and (owner < 0) and
+		    (state == "stopped")) {
+			info("Dependency met; starting ", services[x]);
+			nodes = allowed_nodes(services[x]);
+			()=move_or_start(services[x], nodes);
+		}
+
+		% service died - stop service(s) that depend on the dead
+		if ((service_owner < 0) and (owner >= 0) and
+		    (depend_mode != "soft")) {
+			info("Dependency lost; stopping ", services[x]);
+			()=service_stop(services[x]);
+		}
+	}
+}
+
+define default_config_event_handler()
+{
+	% debug("Executing default config event handler");
+}
+
+define default_user_event_handler()
+{
+	variable ret;
+	variable nodes;
+	variable reordered;
+	variable x;
+	variable target = user_target;
+	variable found = 0;
+	variable owner, state;
+
+	nodes = allowed_nodes(service_name);
+	(owner, state) = service_status(service_name);
+
+	if (user_request == USER_RESTART) {
+
+		if (owner >= 0) {
+			reordered = union(owner, nodes);
+			nodes = reordered;
+		}
+
+		notice("Stopping ", service_name, " for relocate to ", nodes);
+
+		found = service_stop(service_name);
+		if (found < 0) {
+			return ERR_ABORT;
+		}
+
+		ret = move_or_start(service_name, nodes);
+
+	} else if ((user_request == USER_RELOCATE) or 
+		   (user_request == USER_ENABLE)) {
+
+		if (user_target > 0) {
+			for (x = 0; x < length(nodes); x++) {
+				%
+				% Put the preferred node at the front of the 
+				% list for a user-relocate operation
+				%
+				if (nodes[x] == user_target) {
+					reordered = union(user_target, nodes);
+					nodes = reordered;
+					found = 1;
+				}
+			}
+	
+			if (found == 0) {
+				warning("User specified node ", user_target,
+					" is offline");
+			}
+		}
+
+		if ((owner >= 0) and (user_request == USER_RELOCATE)) {
+			if (service_stop(service_name) < 0) {
+				return ERR_ABORT;
+			}
+
+			%
+			% The current owner shouldn't be the default
+			% for a relocate operation
+			%
+			reordered = subtract(nodes, owner);
+			nodes = union(reordered, owner);
+		}
+
+		ret = move_or_start(service_name, nodes);
+
+	} else if (user_request == USER_DISABLE) {
+
+		ret = service_stop(service_name, 1);
+
+	} else if (user_request == USER_STOP) {
+
+		ret = service_stop(service_name);
+
+	} 
+
+	%
+	% todo - migrate
+	%
+
+	return ret;
+}
+
+if (event_type == EVENT_NODE)
+	default_node_event_handler();
+if (event_type == EVENT_SERVICE)
+	default_service_event_handler();
+if (event_type == EVENT_CONFIG)
+	default_config_event_handler();
+if (event_type == EVENT_USER)
+	user_return=default_user_event_handler();
+
diff --git a/rgmanager/src/resources/fs.sh b/rgmanager/src/resources/fs.sh
index a1e589a..b3381c3 100755
--- a/rgmanager/src/resources/fs.sh
+++ b/rgmanager/src/resources/fs.sh
@@ -529,7 +529,7 @@ isMounted () {
 			"fs (isMounted): Could not match $1 with a real device"
 		return $FAIL
 	fi
-	mp=$2
+	mp=$(readlink -f $2)
 	
 	while read tmp_dev tmp_mp
 	do
@@ -797,21 +797,14 @@ activeMonitor() {
 
 
 #
-# Enable quotas on the mount point if the user requested them
+# Decide which quota options are enabled and return a string 
+# which we can pass to quotaon
 #
-enable_fs_quotas()
+quota_opts()
 {
-	declare -i need_check=0
-	declare -i rv
 	declare quotaopts=""
-	declare mopt
 	declare opts=$1
-	declare mp=$2
-
-	if [ -z "`which quotaon`" ]; then
-		ocf_log err "quotaon not found in $PATH"
-		return $OCF_ERR_GENERIC
-	fi
+	declare mopt
 
 	for mopt in `echo $opts | sed -e s/,/\ /g`; do
 		case $mopt in
@@ -830,6 +823,33 @@ enable_fs_quotas()
 		esac
 	done
 
+	echo $quotaopts
+	return 0
+}
+
+
+
+#
+# Enable quotas on the mount point if the user requested them
+#
+enable_fs_quotas()
+{
+	declare -i need_check=0
+	declare -i rv
+	declare quotaopts=""
+	declare mopt
+	declare opts=$1
+	declare mp=$2
+
+	if [ -z "`which quotaon`" ]; then
+		ocf_log err "quotaon not found in $PATH"
+		return $OCF_ERR_GENERIC
+	fi
+
+	quotaopts=$(quota_opts $opts)
+
+	ocf_log info "quotaopts = $quotaopts"
+
 	[ -z "$quotaopts" ] && return 0
 
 	# Ok, create quota files if they don't exist
@@ -1089,6 +1109,7 @@ stopFilesystem() {
 	typeset force_umount=""
 	typeset self_fence=""
 	typeset fstype=""
+	typeset quotaopts=""
 
 
 	#
@@ -1154,11 +1175,15 @@ stop: Could not match $OCF_RESKEY_device with a real device"
 			;;
 		$YES)
 			sync; sync; sync
-			ocf_log info "unmounting $mp"
+			quotaopts=$(quota_opts $OCF_RESKEY_options)
+			if [ -n "$quotaopts" ]; then
+				ocf_log debug "Turning off quotas for $mp"
+				quotaoff -$quotaopts $mp &> /dev/null
+			fi
 
 			activeMonitor stop || return $OCF_ERR_GENERIC
 
-			quotaoff -gu $mp &> /dev/null
+			ocf_log info "unmounting $mp"
 			umount $mp
 			if  [ $? -eq 0 ]; then
 				umount_failed=
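
A sketch of the quota_opts idea above: derive quotaon/quotaoff flags
from a comma-separated mount-option string.  The usrquota->u and
grpquota->g mapping is assumed here, since the hunk elides the case
body:

    #!/bin/bash
    quota_opts() {
        local quotaopts="" mopt
        for mopt in $(echo "$1" | sed -e 's/,/ /g'); do
            case $mopt in
            quota|usrquota|uquota)
                quotaopts="u$quotaopts" ;;
            grpquota|gquota)
                quotaopts="g$quotaopts" ;;
            esac
        done
        echo $quotaopts
    }

    q=$(quota_opts "rw,usrquota,grpquota")
    [ -n "$q" ] && echo "would run: quotaoff -$q /mnt/cluster"
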
diff --git a/rgmanager/src/resources/netfs.sh b/rgmanager/src/resources/netfs.sh
index 15759bf..c683ee6 100755
--- a/rgmanager/src/resources/netfs.sh
+++ b/rgmanager/src/resources/netfs.sh
@@ -335,7 +335,7 @@ isMounted () {
 	fi
 
 	fullpath=$1
-	mp=$2
+	mp=$(readlink -f $2)
 
 	while read tmp_fullpath tmp_mp
 	do
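
The same one-line readlink change appears in fs.sh, clusterfs.sh and
netfs.sh: canonicalizing the configured mount point lets a symlinked
path match the entry the kernel reports.  A quick demonstration,
assuming /tmp is writable:

    #!/bin/bash
    # A symlinked mount point never string-matches the canonical path,
    # which is what /proc/mounts reports; readlink -f fixes the compare.
    mkdir -p /tmp/demo/real
    ln -sfn /tmp/demo/real /tmp/demo/link

    configured=/tmp/demo/link
    mp=$(readlink -f "$configured")
    echo "configured: $configured"   # /tmp/demo/link
    echo "canonical:  $mp"           # /tmp/demo/real
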
diff --git a/rgmanager/src/resources/ocf-shellfuncs b/rgmanager/src/resources/ocf-shellfuncs
index eb0147f..98156c0 100755
--- a/rgmanager/src/resources/ocf-shellfuncs
+++ b/rgmanager/src/resources/ocf-shellfuncs
@@ -174,6 +174,10 @@ ocf_log() {
 	esac
 
 	pretty_echo $__OCF_PRIO "$__OCF_MSG"
+
+	if [ -z "`which clulog 2> /dev/null`" ]; then
+		return 0
+	fi
 	clulog -p $__LOG_PID -n $__LOG_NAME \
 		-s $__OCF_PRIO_N "$__OCF_MSG"
 }
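
The guard added to ocf_log keeps the agents usable on hosts without
rgmanager's clulog binary: the message has already been echoed, so
returning early just skips the cluster-wide log.  The pattern in
isolation, using clulog's -s priority flag as shown above:

    #!/bin/bash
    log_msg() {
        echo "$2"                              # always print locally
        if [ -z "$(which clulog 2>/dev/null)" ]; then
            return 0                           # no cluster logger present
        fi
        clulog -s "$1" "$2"                    # forward to rgmanager's log
    }
    log_msg 7 "hello from a node without clulog"
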
diff --git a/rgmanager/src/resources/script.sh b/rgmanager/src/resources/script.sh
index 9a9455c..0141c80 100755
--- a/rgmanager/src/resources/script.sh
+++ b/rgmanager/src/resources/script.sh
@@ -115,5 +115,5 @@ ${OCF_RESKEY_file} $1
 declare -i rv=$?
 if [ $rv -ne 0 ]; then
 	ocf_log err "script:$OCF_RESKEY_name: $1 of $OCF_RESKEY_file failed (returned $rv)"
-	return $OCF_ERR_GENERIC
+	exit $OCF_ERR_GENERIC
 fi
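
The one-word fix above matters more than it looks: at the top level of
an executed script, bash's return is an error, so the old code never
propagated OCF_ERR_GENERIC to rgmanager.  A tiny reproduction:

    #!/bin/bash
    # 'return' is only valid inside a function or a sourced script.
    return 1 2>/dev/null || echo "return failed at top level; use exit"
    exit 0
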
diff --git a/rgmanager/src/resources/service.sh b/rgmanager/src/resources/service.sh
index da89301..339657d 100755
--- a/rgmanager/src/resources/service.sh
+++ b/rgmanager/src/resources/service.sh
@@ -1,8 +1,26 @@
 #!/bin/bash
 
 #
-# Dummy OCF script for resource group; the OCF spec doesn't support abstract
-# resources. ;(
+#  Copyright Red Hat, Inc. 2004-2006
+#
+#  This program is free software; you can redistribute it and/or modify it
+#  under the terms of the GNU General Public License as published by the
+#  Free Software Foundation; either version 2, or (at your option) any
+#  later version.
+#
+#  This program is distributed in the hope that it will be useful, but
+#  WITHOUT ANY WARRANTY; without even the implied warranty of
+#  MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
+#  General Public License for more details.
+#
+#  You should have received a copy of the GNU General Public License
+#  along with this program; see the file COPYING.  If not, write to the
+#  Free Software Foundation, Inc.,  675 Mass Ave, Cambridge, 
+#  MA 02139, USA.
+#
+
+#
+# Dummy OCF script for resource group
 #
 
 # Grab nfs lock tricks if available
@@ -38,7 +56,7 @@ meta_data()
             <content type="string"/>
         </parameter>
     
-        <parameter name="domain">
+        <parameter name="domain" reconfig="1">
             <longdesc lang="en">
                 Fail over domains define lists of cluster members
                 to try in the event that a resource group fails.
@@ -49,7 +67,7 @@ meta_data()
             <content type="string"/>
         </parameter>
 
-        <parameter name="autostart">
+        <parameter name="autostart" reconfig="1">
             <longdesc lang="en">
 	    	If set to yes, this resource group will automatically be started
 		after the cluster forms a quorum.  If set to no, this resource
@@ -59,10 +77,10 @@ meta_data()
             <shortdesc lang="en">
 	    	Automatic start after quorum formation
             </shortdesc>
-            <content type="boolean"/>
+            <content type="boolean" default="1"/>
         </parameter>
 
-        <parameter name="hardrecovery">
+        <parameter name="hardrecovery" reconfig="1">
             <longdesc lang="en">
 	    	If set to yes, the last owner will reboot if this resource
 		group fails to stop cleanly, thus allowing the resource
@@ -74,10 +92,10 @@ meta_data()
             <shortdesc lang="en">
 	    	Reboot if stop phase fails
             </shortdesc>
-            <content type="boolean"/>
+            <content type="boolean" default="0"/>
         </parameter>
 
-        <parameter name="exclusive">
+        <parameter name="exclusive" reconfig="1">
             <longdesc lang="en">
 	    	If set, this resource group will only relocate to
 		nodes which have no other resource groups running in the
@@ -91,7 +109,7 @@ meta_data()
             <shortdesc lang="en">
 	        Exclusive resource group
             </shortdesc>
-            <content type="boolean"/>
+            <content type="boolean" default="0"/>
         </parameter>
 
 	<parameter name="nfslock">
@@ -107,10 +125,10 @@ meta_data()
 	    <shortdesc lang="en">
 	        Enable NFS lock workarounds
 	    </shortdesc>
-	    <content type="boolean"/>
+	    <content type="boolean" default="0"/>
 	</parameter>
                 
-        <parameter name="recovery">
+        <parameter name="recovery" reconfig="1">
             <longdesc lang="en">
 	        This currently has three possible options: "restart" tries
 		to restart failed parts of this resource group locally before
@@ -123,8 +141,60 @@ meta_data()
             <shortdesc lang="en">
 	    	Failure recovery policy
             </shortdesc>
+            <content type="string" default="restart"/>
+        </parameter>
+
+        <parameter name="depend">
+            <longdesc lang="en">
+		Top-level service this depends on, in "service:name" format.
+            </longdesc>
+            <shortdesc lang="en">
+		Service dependency; will not start without the specified
+		service running.
+            </shortdesc>
             <content type="string"/>
         </parameter>
+
+        <parameter name="depend_mode">
+            <longdesc lang="en">
+	    	Dependency mode
+            </longdesc>
+            <shortdesc lang="en">
+		Service dependency mode.
+		hard - This service is stopped/started if its dependency
+		       is stopped/started
+		soft - This service only depends on the other service for
+		       initial startup.  If the other service stops, this
+		       service is not stopped.
+            </shortdesc>
+            <content type="string" default="hard"/>
+        </parameter>
+
+        <parameter name="max_restarts">
+            <longdesc lang="en">
+	    	Maximum restarts for this service.
+            </longdesc>
+            <shortdesc lang="en">
+	    	Maximum restarts for this service.
+            </shortdesc>
+            <content type="string" default="0"/>
+        </parameter>
+
+        <parameter name="restart_expire_time">
+            <longdesc lang="en">
+	    	Restart expiration time
+            </longdesc>
+            <shortdesc lang="en">
+	    	Restart expiration time.  A restart is forgotten
+		after this time.  When combined with the max_restarts
+		option, this lets administrators specify a threshold
+		for when to fail over services.  If max_restarts
+		is exceeded within this expiration time, the service
+		is relocated instead of restarted again.
+            </shortdesc>
+            <content type="string" default="0"/>
+        </parameter>
+
     </parameters>
 
     <actions>
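
Taken together, the new depend, depend_mode, max_restarts, and
restart_expire_time parameters above could be exercised by a cluster.conf
fragment like the following (service names and values are invented for
illustration; only the attribute names come from the metadata above):

    <service name="web" autostart="1" recovery="restart"
             depend="service:db" depend_mode="soft"
             max_restarts="3" restart_expire_time="300">
        ...
    </service>

With these values, up to three restarts are attempted within a 300-second
window before the service is relocated, and service:db is only required for
the initial startup.
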
@@ -135,10 +205,11 @@ meta_data()
         <action name="status" timeout="5" interval="1h"/>
         <action name="monitor" timeout="5" interval="1h"/>
 
+        <action name="reconfig" timeout="5"/>
         <action name="recover" timeout="5"/>
         <action name="reload" timeout="5"/>
         <action name="meta-data" timeout="5"/>
-        <action name="verify-all" timeout="5"/>
+        <action name="validate-all" timeout="5"/>
     </actions>
     
     <special tag="rgmanager">
@@ -149,7 +220,7 @@ meta_data()
         <child type="netfs" start="4" stop="6"/>
 	<child type="nfsexport" start="5" stop="5"/>
 
-	<child type="nfsclient" start="6" stop=""/>
+	<child type="nfsclient" start="6" stop="4"/>
 
         <child type="ip" start="7" stop="2"/>
         <child type="smb" start="8" stop="3"/>
@@ -166,6 +237,7 @@ EOT
 #
 case $1 in
 	start)
+		[ -d "/var/run/cluster/rgmanager" ] && touch "/var/run/cluster/rgmanager/$OCF_RESOURCE_INSTANCE"
 		#
 		# XXX If this is set, we kill lockd.  If there is no
 		# child IP address, then clients will NOT get the reclaim
@@ -180,6 +252,7 @@ case $1 in
 		exit 0
 		;;
 	stop)
+		[ -d "/var/run/cluster/rgmanager" ] && rm -f "/var/run/cluster/rgmanager/$OCF_RESOURCE_INSTANCE"
 		exit 0
 		;;
 	recover|restart)
@@ -195,7 +268,10 @@ case $1 in
 		meta_data
 		exit 0
 		;;
-	verify-all)
+	validate-all)
+		exit 0
+		;;
+	reconfig)
 		exit 0
 		;;
 	*)
diff --git a/rgmanager/src/resources/svclib_nfslock b/rgmanager/src/resources/svclib_nfslock
index de996b1..2101e1e 100644
--- a/rgmanager/src/resources/svclib_nfslock
+++ b/rgmanager/src/resources/svclib_nfslock
@@ -1,5 +1,22 @@
 #!/bin/bash
 #
+#  Copyright Red Hat Inc., 2006
+#
+#  This program is free software; you can redistribute it and/or modify it
+#  under the terms of the GNU General Public License as published by the
+#  Free Software Foundation; either version 2, or (at your option) any
+#  later version.
+#
+#  This program is distributed in the hope that it will be useful, but
+#  WITHOUT ANY WARRANTY; without even the implied warranty of
+#  MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
+#  General Public License for more details.
+#
+#  You should have received a copy of the GNU General Public License
+#  along with this program; see the file COPYING.  If not, write to the
+#  Free Software Foundation, Inc.,  675 Mass Ave, Cambridge, 
+#  MA 02139, USA.
+#
 # Do reclaim-broadcasts when we kill lockd during shutdown/startup
 # of a cluster service.
 #
@@ -163,6 +180,17 @@ notify_list_broadcast()
 	declare lockd_pid=$(pidof lockd)
 	declare nl_dir=$1
 
+	# First of all, send lockd a SIGKILL.  We hope nfsd is running.
+	# If it is, this will cause lockd to reset the grace period for
+	# lock reclaiming.
+	if [ -n "$lockd_pid" ]; then
+		ocf_log info "Asking lockd to drop locks (pid $lockd_pid)"
+		kill -9 $lockd_pid
+	else
+		ocf_log warning "lockd not running; cannot notify clients"
+		return 1
+	fi
+	
         while read dev family addr maskbits; do
 		if [ "$family" != "inet" ]; then
 			continue
diff --git a/rgmanager/src/resources/utils/named-parse-config.pl b/rgmanager/src/resources/utils/named-parse-config.pl
new file mode 100644
index 0000000..4ac39c3
--- /dev/null
+++ b/rgmanager/src/resources/utils/named-parse-config.pl
@@ -0,0 +1,26 @@
+#!/usr/bin/perl -w
+
+##
+##  parse named.conf (from stdin) and add options from cluster.conf
+##  
+##  ./named-parse-config.pl "directory" "pid-file" "listen-on"
+##
+use strict;
+
+if ($#ARGV < 2) {
+	die ("Not enough arguments");
+}
+
+while (my $line = <STDIN>) {
+	chomp($line);
+	$line =~ s/(.*?)\s*$/$1/;
+	if ($line =~ /^\s*options\s+\{/) {
+		print $line, "\n";
+		print "\tdirectory \"$ARGV[0]\";\n";
+		print "\tpid-file \"$ARGV[1]\";\n";
+		print "\tlisten-on { $ARGV[2] };\n";
+	} else {
+		print $line, "\n";
+	}
+}
+
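
Per its header comment, the script reads named.conf on stdin and injects the
three settings into the options block. A hypothetical invocation (paths and
listen-on address invented for illustration):

    ./named-parse-config.pl "/var/named" "/var/run/named.pid" "10.0.0.1;" \
        < /etc/named.conf > /tmp/named.conf.generated
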
diff --git a/rgmanager/src/resources/utils/ra-skelet.sh b/rgmanager/src/resources/utils/ra-skelet.sh
index 67630e2..5530ae6 100644
--- a/rgmanager/src/resources/utils/ra-skelet.sh
+++ b/rgmanager/src/resources/utils/ra-skelet.sh
@@ -65,7 +65,7 @@ stop_generic()
 	if [ ! -d "/proc/$pid" ]; then
 		return 0;
 	fi
-                                
+
 	kill -TERM "$pid"
 
 	if [ $? -ne 0 ]; then
diff --git a/rgmanager/src/utils/clusvcadm.c b/rgmanager/src/utils/clusvcadm.c
index a608374..0701c18 100644
--- a/rgmanager/src/utils/clusvcadm.c
+++ b/rgmanager/src/utils/clusvcadm.c
@@ -222,7 +222,8 @@ int
 main(int argc, char **argv)
 {
 	extern char *optarg;
-	char *svcname=NULL, nodename[256];
+	char *svcname=NULL, nodename[256], tmp[96];
+	char *printname=NULL;
 	int opt;
 	int msgfd = -1, fd, fod = 0;
 	SmMessageSt msg;
@@ -314,6 +315,12 @@ main(int argc, char **argv)
 		return 1;
 	}
 
+	if (strncmp(svcname, "service:", 8)) {
+		snprintf(tmp, sizeof(tmp), "service:%s", svcname);
+		printname = svcname;
+		svcname = tmp;
+	}
+
 	/* No login */
 	fd = clu_connect(RG_SERVICE_GROUP, 0);
 	if (fd < 0) {
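
With the strncmp block above, a bare name passed to clusvcadm is qualified
with the "service:" prefix for the request, while printname preserves what
the user typed for output. Both of these hypothetical invocations now
address the same resource group:

    clusvcadm -r web -m node2           # sent as service:web, printed as "web"
    clusvcadm -r service:web -m node2
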
@@ -343,15 +350,18 @@ main(int argc, char **argv)
 	build_message(&msg, action, svcname, svctarget, fod, 0);
 
 	if (action != RG_RELOCATE) {
-		printf("Member %s %s %s", nodename, actionstr, svcname);
+		printf("Member %s %s %s", nodename, actionstr,
+		       printname?printname:svcname);
 		printf("...");
 		fflush(stdout);
 		msgfd = msg_open(msgtarget, RG_PORT, 0, 5);
 	} else {
 		if (node_specified)
-			printf("Trying to relocate %s to %s", svcname, nodename);
+			printf("Trying to relocate %s to %s",
+			       printname?printname:svcname, nodename);
 		else
-			printf("Trying to relocate %s", svcname);
+			printf("Trying to relocate %s",
+			       printname?printname:svcname);
 		printf("...");
 		fflush(stdout);
 		msgfd = msg_open(me, RG_PORT, 0, 5);
@@ -395,7 +405,7 @@ main(int argc, char **argv)
 	/* Decode */
 	swab_SmMessageSt(&msg);
 	switch (msg.sm_data.d_ret) {
-	case SUCCESS:
+	case RG_SUCCESS:
 		printf("success\n");
 
 		/* Non-start/relo request: done */
@@ -417,31 +427,8 @@ main(int argc, char **argv)
 	    	printf("Service %s is now running on %s\n", svcname, 
 			memb_id_to_name(membership, msg.sm_data.d_svcOwner));
 		break;
-	case RG_EFAIL:
-		printf("failed\n");
-		break;
-	case RG_ENODEDEATH:
-		printf("node processing request died\n");
-		printf("(Status unknown)\n");
-		break;
-	case RG_EABORT:
-		printf("cancelled by resource manager\n");
-		break;
-	case RG_ENOSERVICE:
-		printf("failed: Service does not exist\n");
-		break;
-	case RG_EDEADLCK:
-		printf("failed: Operation would deadlock\n");
-		break;
-	case RG_EAGAIN:
-		printf("failed: Try again (resource groups locked)\n");
-		break;
-	case RG_ERUN:
-		printf("failed: Service is already running\n");
-		return 0;
-		break;
 	default:
-		printf("failed: unknown reason %d\n", msg.sm_data.d_ret);
+		printf("%s\n", rg_strerror(msg.sm_data.d_ret));
 		break;
 	}
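
The per-code printf branches collapse into a single rg_strerror() lookup
(defined elsewhere in the tree). A sketch of the mapping they encoded, built
from the messages in the removed cases -- not the project's actual
implementation:

    /* Sketch only; the real rg_strerror() lives elsewhere in the tree. */
    const char *rg_strerror_sketch(int err)
    {
            switch (err) {
            case RG_EFAIL:      return "failed";
            case RG_ENODEDEATH: return "node processing request died (Status unknown)";
            case RG_EABORT:     return "cancelled by resource manager";
            case RG_ENOSERVICE: return "failed: Service does not exist";
            case RG_EDEADLCK:   return "failed: Operation would deadlock";
            case RG_EAGAIN:     return "failed: Try again (resource groups locked)";
            case RG_ERUN:       return "failed: Service is already running";
            default:            return "failed: unknown reason";
            }
    }

One behavioral subtlety worth noting: the removed RG_ERUN case returned 0
from main() immediately; after this change that early return is gone, so
RG_ERUN presumably follows the common exit path like any other error.
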
 


hooks/post-receive
--
Cluster Project

