Each Rice node defines a RiceCacheExporterFactoryBean. This factory bean creates a RiceCacheAdministratorImpl, which wraps an OSCache caching implementation and registers a RiceDistributedCacheListener with it. The RiceDistributedCacheListener is responsible for distributing local cache events to a topic and for receiving remote cache events from that same topic.
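To make the ownership explicit, the following is a compilable, stub-based sketch of that arrangement. Every class and method name in it is a hypothetical stand-in for illustration only; on a real node this assembly is performed by the RiceCacheExporterFactoryBean defined in the Spring context, not by hand-written code like this.

```java
import java.io.Serializable;

// Stub standing in for the OSCache-backed administrator the factory bean creates.
class CacheAdministratorStub {
    private CacheListenerStub listener;
    void setListener(CacheListenerStub listener) { this.listener = listener; }
    void flushAll() {
        // A real flush would clear OSCache entries; here we only show that the
        // listener is told about the local event so it can broadcast it.
        if (listener != null) {
            listener.publishLocalEvent("flushAll");
        }
    }
}

// Stub standing in for RiceDistributedCacheListener's two responsibilities.
class CacheListenerStub {
    void publishLocalEvent(Serializable event) {
        System.out.println("publish to topic: " + event);   // local event -> bus
    }
    void applyRemoteEvent(Serializable event) {
        System.out.println("apply from topic: " + event);   // bus event -> local cache
    }
}

public class CacheWiringSketch {
    public static void main(String[] args) {
        // Conceptually what the factory bean does: build the administrator and
        // attach the distributed listener to it.
        CacheAdministratorStub admin = new CacheAdministratorStub();
        admin.setListener(new CacheListenerStub());
        admin.flushAll();
    }
}
```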
OSCache initializes the RiceDistributedCacheListener (which is an OSCache AbstractBroadcastingListener and/or CacheEntryEventListener). The RiceDistributedCacheListener then registers itself on the bus as a Java topic service under the name specified in the OSCache properties, which the RiceCacheAdministratorImpl sets (by default, "OSCacheNotificationService"). This name has no namespace and is therefore "global".
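For illustration, the registration is driven by OSCache configuration along these lines. The cache.event.listeners key is a standard OSCache property, while the listener's fully qualified class name and the topic-name key shown here are assumptions included only to indicate where the service name would be supplied.

```java
import java.util.Properties;

public class OSCacheNotificationProps {
    public static void main(String[] args) {
        Properties p = new Properties();
        // Standard OSCache key: which event listener(s) to attach to the cache.
        // The fully qualified class name below is an assumption about packaging.
        p.setProperty("cache.event.listeners",
                "org.kuali.rice.RiceDistributedCacheListener");
        // Hypothetical key for the topic/service name the listener registers
        // under on the bus; the default value is the one noted in the text.
        p.setProperty("cache.notification.service.name",
                "OSCacheNotificationService");
        p.list(System.out);
    }
}
```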
When a given node flushes its cache, the RiceDistributedCacheListener detects this and sends an asynchronous cache flush message to the topic, that is, to every other OSCacheNotificationService registered on the bus. Conversely, the RiceDistributedCacheListener's invoke method is called to handle events coming from the bus; the implementation simply unpacks the OSCache ClusterNotification and delivers it to the superclass via handleClusterNotification.
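That round trip can be sketched as follows. The sendNotification/handleClusterNotification split mirrors the description above, but the types here are simplified stand-ins: the real listener extends OSCache's AbstractBroadcastingListener, and the topic proxy and invoke signature are loose approximations of the KSB service interfaces rather than the actual API.

```java
import java.io.Serializable;

// Stand-in for OSCache's ClusterNotification payload.
class ClusterNotificationStub implements Serializable {
    final int type;          // e.g. a "flush entry" or "flush group" code
    final Serializable data; // e.g. the cache key or group name
    ClusterNotificationStub(int type, Serializable data) {
        this.type = type;
        this.data = data;
    }
}

// Stand-in for the bus topic proxy ("OSCacheNotificationService" on every node).
interface NotificationTopic {
    void invoke(Serializable payload); // asynchronous, fire-and-forget
}

// Simplified model of RiceDistributedCacheListener's two directions.
class DistributedCacheListenerSketch {
    private final NotificationTopic topic;
    DistributedCacheListenerSketch(NotificationTopic topic) { this.topic = topic; }

    // Outbound: called when the local cache is flushed (sendNotification in the
    // real class); the notification is pushed to the shared topic.
    void sendNotification(ClusterNotificationStub notification) {
        topic.invoke(notification);
    }

    // Inbound: the bus calls invoke() with a remote notification; we unpack it
    // and hand it off (handleClusterNotification in the real superclass).
    void invoke(Serializable payload) {
        handleClusterNotification((ClusterNotificationStub) payload);
    }

    void handleClusterNotification(ClusterNotificationStub n) {
        System.out.println("applying remote cache event of type " + n.type
                + " for " + n.data);
    }
}

public class CacheFlushFlowSketch {
    public static void main(String[] args) {
        // Wire the "remote" side directly to the "local" side to show the loop.
        DistributedCacheListenerSketch remote =
                new DistributedCacheListenerSketch(payload -> { /* terminal node */ });
        DistributedCacheListenerSketch local =
                new DistributedCacheListenerSketch(remote::invoke);
        local.sendNotification(new ClusterNotificationStub(1, "documentTypeCache"));
    }
}
```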
If a topic can be invoked by client nodes (as opposed to only by the standalone server, which presumably has all of the client keys imported), then every other node in the cluster that has registered a service in the topic must cooperate. Because the current bus security mechanism is keystore-based and point-to-point, this is only tenable for small clusters. For larger clusters (more than two or three nodes), cross-importing keystores becomes a significant burden: every pair of nodes must exchange keys, so the number of trust relationships grows quadratically with cluster size. Applications that have not been configured to trust each other simply will not receive these notifications.
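To make that scaling concrete, each unordered pair of nodes must cross-import keys once, so an n-node cluster needs n(n-1)/2 exchanges:

```java
public class KeyExchangeCount {
    public static void main(String[] args) {
        // Each unordered pair of nodes must cross-import keys once,
        // so an n-node cluster needs n * (n - 1) / 2 exchanges.
        for (int n = 2; n <= 10; n += 2) {
            System.out.println(n + " nodes -> " + (n * (n - 1) / 2) + " key exchanges");
        }
    }
}
```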
You may be able to work around this by ensuring that actions which trigger cluster cache notifications (for example, uploading new document types) occur only on the standalone server.