MessageProcessingFailed is now used instead of CustomProcessingFailed, since it makes more sense to handle it like a conventional handler error
NuGet dependencies updated
A handler running while the Bus shutdown was being initiated could not send messages because the Bus was signaled as "Stopped" too early
Added Abc.Zebus.Persistence.Tests to the InternalsVisibleTo list to prepare the release of the Persistence
Added Abc.Zebus.Persistence to the InternalsVisibleTo list to prepare the release of the Persistence
Sending a message with a null Routing Key now throws an explicit exception (instead of NullReferenceException)
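A minimal sketch of the behaviour described above; the event type, its routing member, and the surrounding publisher class are illustrative assumptions, not part of the public contract:

```csharp
using Abc.Zebus;
using Abc.Zebus.Routing;

// Hypothetical routable event used only for this sketch.
[Routable]
public class OrderAccepted : IEvent
{
    [RoutingPosition(1)]
    public string MarketId { get; set; }
}

public class OrderPublisher
{
    private readonly IBus _bus;

    public OrderPublisher(IBus bus)
    {
        _bus = bus;
    }

    public void PublishWithMissingRoutingKey()
    {
        // MarketId is left null: the Bus now rejects the message with an
        // explicit exception instead of failing with a NullReferenceException.
        _bus.Publish(new OrderAccepted());
    }
}
```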
Abc.Zebus.Lotus.CustomProcessingFailed is now mutable, allowing users to pool it
Moved Scan\Pipes to Dispatch\Pipes (theoretically a breaking change, but the API is quite internal)
Removed RoutingType since it wasn't used
The new MarkPeerAsRespondingCommand / MarkPeerAsNotRespondingCommand commands allow marking a Peer as (not) responding (NOT a standard operation, use with care), as sketched below
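A sketch of how one of the new commands might be sent; the namespace and constructor shape are assumptions for this example and should be checked against the actual command definitions:

```csharp
using Abc.Zebus;
using Abc.Zebus.Directory; // assumed namespace for the new commands

public class PeerResponsivenessOverride
{
    private readonly IBus _bus;

    public PeerResponsivenessOverride(IBus bus)
    {
        _bus = bus;
    }

    public void ForceNotResponding(PeerId peerId)
    {
        // Assumed constructor shape (the command presumably identifies the
        // target Peer). NOT a standard operation, use with care.
        _bus.Send(new MarkPeerAsNotRespondingCommand(peerId));
    }
}
```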
The Persistence is now acked when a message cannot be deserialized, to prevent the Persistence from sending it over and over
A race condition could prevent the Bus from starting properly
Send() will throw if the target Peer is not responding (see the sketch below)
IProvideQueueLength now exposes a Purge() method that is called when the queue length exceeds the configured thresholds
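A sketch of the new Send() behaviour; the surrounding class is illustrative, and the concrete exception type is not listed in this entry, so a broad catch is used here purely for demonstration:

```csharp
using System;
using System.Threading.Tasks;
using Abc.Zebus;

public class CommandSender
{
    private readonly IBus _bus;

    public CommandSender(IBus bus)
    {
        _bus = bus;
    }

    public async Task SendOrLogAsync(ICommand command)
    {
        try
        {
            await _bus.Send(command);
        }
        catch (Exception ex)
        {
            // Send() now throws when the target Peer is flagged as not
            // responding, instead of silently queueing the command.
            Console.WriteLine("Could not send " + command.GetType().Name + ": " + ex.Message);
        }
    }
}
```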
Fixed thread-safety issue in MessageDispatch.SetHandled
The new SubscriptionModeAttribute allows controlling automatic subscriptions more explicitly
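A sketch of the intended usage; the namespace, the SubscriptionMode.Manual value and the event type are assumptions for this example:

```csharp
using Abc.Zebus;

// Hypothetical event used only for this sketch.
public class StockUpdated : IEvent
{
}

// Assumed usage: the handler opts out of the automatic subscription that
// would normally be created at startup, so the subscription has to be
// created explicitly (e.g. via bus.Subscribe(...)) when it is needed.
[SubscriptionMode(SubscriptionMode.Manual)]
public class StockUpdatedHandler : IMessageHandler<StockUpdated>
{
    public void Handle(StockUpdated message)
    {
        // Handling code goes here.
    }
}
```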
The "HANDLE" log is now accurate for async
Split the "HANDLE" log into "RECV" and "HANDLE", making the distinction between the time a message is received and the time it is handled by user code
Directories no longer decommission other Directories or themselves
Starting multiple Buses on the same machine simultaneously could result in identical message ids
The tree-backed local Directory cache is now fully operational (routing performance improvement, faster routing rules updates, smaller memory footprint, etc.)
Dynamic subscriptions for outgoing messages can be disabled on the Cassandra Directory implementation to handle massive dynamic subscriptions (not recommended)
The SocketConnected/SocketDisconnected feature was removed (it was largely undocumented and unused, which is why the removal only made it into a minor version)
The local Directory cache no longer loses subscriptions when a Peer is decommissioned
Reduced the Directory cache memory footprint
Fixed a bug in the Directory cache that prevented multiple Peers from receiving the same messages
Messages received from the Directory during the Registration procedure could be lost
The Directory server now deletes existing dynamic subscriptions when a Peer registers
The Directory server now handles PeerSubscriptionsForTypesUpdated with "null" BindingKeys
The project is now built/tested on AppVeyor
When creating two identical dynamic subscriptions, disposing one no longer disposes the other (see the sketch below)
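A sketch of the fixed behaviour, assuming the Subscribe overload that returns an IDisposable and the Subscription.Any<T> helper; the event type is illustrative:

```csharp
using System;
using Abc.Zebus;

public class DuplicateSubscriptionsDemo
{
    // Hypothetical event used only for this sketch.
    public class StockUpdated : IEvent
    {
    }

    public static void Run(IBus bus)
    {
        // Two identical dynamic subscriptions.
        IDisposable first = bus.Subscribe(Subscription.Any<StockUpdated>());
        IDisposable second = bus.Subscribe(Subscription.Any<StockUpdated>());

        // Disposing the first subscription no longer tears down the second:
        // StockUpdated keeps being delivered until 'second' is also disposed.
        first.Dispose();
    }
}
```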