Releases: kamon-io/kamon-akka
v2.0.1 - Akka 2.6 and Artery Support
Akka 2.6 and Artery Support
Long awaited and finally out! Automatic context propagation across Artery connections is included in this release. It also arrives right after the Akka 2.6 release, and the module has been updated for compatibility with it.
v2.0.0-RC3 - Minor Fixes
Changes since RC2:
Fixes
- Active actors were not being tracked properly, causing the value to grow to infinity and beyond due to missing guards on the monitors for un-instrumented actors.
- Improved the way dropped messages are counted to avoid "fake" pending messages in actor groups. This is still not perfect: if many messages are sent to an actor while it is stopping, the pending messages count might not clear out all of them, but it is the best we can do at the moment. We'll keep track of further improvements here: https://github.com/kamon-io/kamon-akka/issues/54
- Use the "typed" path (e.g.
systemName/user/ActorType/ChildType
) for auto-grouping filtering instead of the actual actor path.
v2.0.0-RC2 - Active Actors Tracking
Fixes
- Ensure that router cells are not counted as active Actors in the Actor System metrics.
v2.0.0-RC1 - Auto Grouping and Upgrade to 2.0.0-RC1
Highlights
Filters Relocation
Starting with Kamon 2.0, we no longer force people to define all filters under kamon.util.filters. Instead, filters can live wherever they make more sense, as long as they preserve the includes/excludes structure, so all Akka-related filters are now under the kamon.instrumentation.akka.filters path. Here is an excerpt from the reference configuration:
kamon.instrumentation.akka {
  filters {

    groups {
      # Special filter used for auto-grouping.
      auto-grouping {
        includes = [ "*/user/**" ]
        excludes = [ ]
      }
    }

    # Decides how Actors are going to be tracked and traced.
    actors {
      doomsday-wildcard = off

      # Decides which actors will have metric tracking enabled.
      track {
        includes = [ ]
        excludes = [ "*/system/**", "*/user/IO-**" ]
      }

      # Decides which actors generate Spans for the messages they process.
      trace {
        includes = [ "*/user/**", "*/system/sharding**" ]
        excludes = [ ]
      }

      # Decides which actors generate Spans for the messages they process,
      # even if that requires them to start a new trace.
      start-trace {
        includes = [ ]
        excludes = [ ]
      }
    }

    # Decides which routers should have metric tracking enabled.
    routers {
      includes = [ ]
      excludes = [ ]
    }

    # Decides which dispatchers should have metric tracking enabled.
    dispatchers {
      includes = [ "**" ]
      excludes = [ ]
    }
  }
}
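Custom groups follow the same includes/excludes shape as the auto-grouping filter above. As a rough sketch (the group name and path pattern here are hypothetical placeholders, not part of the reference configuration), a manually defined group could look like this:

kamon.instrumentation.akka.filters.groups {
  # Hypothetical group tracking all actors under */user/workers/** together.
  worker-actors {
    includes = [ "*/user/workers/**" ]
    excludes = [ ]
  }
}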
Actors auto-grouping
In all these years working on Akka instrumentation, we realized that in most cases people don't want to monitor actors individually, especially when there are thousands of actors in a system. Instead, people want to group those actors together for tracking metrics, something we have been offering as Actor Groups for some time already, although it requires manual configuration. As it turns out, most of the time these groups are created for actors of the same type at the same level of the actor tree, so as of this release the instrumentation can do that automatically for you!
With auto-grouping enabled, every actor that is not being explicitly monitored (either individually, as part of a manually defined group, or because it belongs to a router) is eligible for auto-grouping. The group names are based on the actor classes instead of their names, so you can expect to see something like user/ParentActorClass/ChildActorClass as the group names.
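If you want to keep some actors out of auto-grouping, the auto-grouping filter from the excerpt above can be tuned. A minimal sketch, assuming a hypothetical */user/internal/** subtree you want to leave out:

kamon.instrumentation.akka.filters.groups.auto-grouping {
  includes = [ "*/user/**" ]
  # Hypothetical exclusion: actors under */user/internal/** will not be auto-grouped.
  excludes = [ "*/user/internal/**" ]
}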
Doomsday Filter
A common practice we have seen among users is adding a ** pattern to the tracked actors filter for testing purposes, which is sort of fine, but then makes their systems explode or run into serious issues in production. Starting with this release, ** is no longer allowed on the tracked actors filter nor on the start-trace filter, unless you explicitly enable it via the kamon.instrumentation.akka.filters.actors.doomsday-wildcard setting. In most cases you will not need to do this, since auto-grouping and targeting specific actors with the filters should be more than enough.
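If you really do need the wildcard, a minimal sketch of enabling it could look like this (the filter values shown are just an example):

kamon.instrumentation.akka.filters.actors {
  # Explicitly allow the "**" wildcard on the track filter.
  doomsday-wildcard = on

  track {
    includes = [ "**" ]
    excludes = [ ]
  }
}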
Single Artifact for Akka 2.4 and Akka 2.5
We are now publishing a single artifact called kamon-akka which contains the instrumentation for both Akka versions.
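As a rough sbt sketch (the io.kamon organization and the exact version string are assumptions based on Kamon's usual publishing scheme, not something stated in this release note):

// Assumed coordinates: one dependency covering both Akka 2.4 and Akka 2.5.
libraryDependencies += "io.kamon" %% "kamon-akka" % "2.0.0-RC1"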
v1.1.4 - Minor Fixes and Improvements
Fixes
- Ensures that dropped messages are taken into account in the Actor Group metrics when any of the members is stopped. This was reported in #52 and fixed by @mladens via 982bfd5
Improvements
- Starting with this release, Spans will only be created if there is a current trace happening (previously, all actors matching the "akka.actor.traced-actor" filter would generate Spans, even if that meant starting a new trace). This ensures that we will not generate trillions of Spans because of scheduled actions, receive timeouts and so on. Also, given this change in behavior, we are now tracing all user and sharding-related actors by default; see the sketch below. Contributed by @ivantopo via #50.
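As a sketch of what the traced-actor filter could look like in a 1.1.x setup, assuming it lives under kamon.util.filters like other Kamon 1.x filters (the exact key and the patterns shown are assumptions, not taken from this release note):

kamon.util.filters {
  "akka.actor.traced-actor" {
    # Assumed defaults: trace user actors and sharding-related actors.
    includes = [ "*/user/**", "*/system/sharding**" ]
    excludes = [ ]
  }
}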
v1.1.3 - Minor Fixes
Fixes
- There was a memory leak when a PinnedDispatcher was created and shut down several times. This was reported in #48 and fixed by @mladens via #49.
- There was a small update to the way an Actor's class name is picked up; previously many actor class names would be reported as TypedCreatorFunctionConsumer instead of the actual actor class. This was contributed by @ivantopo via a2b0a12
v1.1.2 - Creating Actor Groups Programmatically
There is only one small feature in this release: defining actor groups programmatically. Up until now, the only way to define actor groups has been via the filters configuration, but in preparation for the upcoming Cluster Sharding support in kamon-akka-remote we now allow adding actor groups like this:
// add a new actor group
Akka.addActorGroup("group-by-code", new GlobPathFilter("*/user/some/path/*"))
// remove the actor group definition
Akka.removeActorGroup("group-by-code")
Keep in mind that removing an actor group definition will only affect definitions added programmatically, and removing a definition doesn't remove the metrics from actors that are already members of the group.
v1.1.1 - Router Metrics Bugfix
There was a small bug in the selection of the dispatcher name when tracking routers/routees: if there was a deployment configuration (under akka.actor.deployment) for the routees, like this one:
akka {
  actor {
    deployment {
      /picking-the-right-dispatcher-in-pool-router {
        router = round-robin-pool
        resizer = {
          lower-bound = 5
          upper-bound = 64
          messages-per-resize = 20
        }
      }
      "/picking-the-right-dispatcher-in-pool-router/*" {
        dispatcher = custom-dispatcher
      }
    }
  }
}

custom-dispatcher {
  executor = "thread-pool-executor"
  type = PinnedDispatcher
}
Then at the time the router was created, the dispatcher name would be akka.actor.default-dispatcher, but when the routees were created the dispatcher would be properly set to custom-dispatcher. This caused two different sets of metrics to be created for each router. With these changes, the instrumentation will try to resolve any deployment configuration for the routees when instrumenting the router and use that dispatcher instead of the default one.
Known limitations:
If someone gets fancy about separating the routees' dispatchers with more complicated deployment configurations (e.g. having one dispatcher for router/$a and another for router/$b), the bug might kick in again, although that seems like a rather uncommon situation.
v1.1.0 - Router Metrics Improvements and Dropping Akka 2.3 Support
Router Metrics Improvements
This release introduces a few changes related to router metrics:
- Introduced two new metrics: akka.router.pending-messages, a range sampler tracking how many messages are waiting to be processed across all routees of a router, and akka.router.members, a range sampler tracking the number of routees in a router.
- All routers now have routerClass and routeeClass tags.
- The dispatcher tag used on BalancingPool routers has been fixed. For this type of router the routees always get a special dispatcher assigned, but the router itself could run on a different dispatcher (usually the default dispatcher). This duplicated the number of metrics created for BalancingDispatchers and left half of them alive when removing the router. Since this change, the router actor's dispatcher tag value will match the dispatcher name of the routees, which is not 100% accurate, but we can live with that.
Backwards incompatible changes:
The akka.group.mailbox-size metric was renamed to akka.group.pending-messages to stay consistent with the previous change (pending messages is much closer to reality than mailbox size, since there is no single mailbox there). The semantics remain the same; only the name changed.
Dropping Akka 2.3 Support
Starting with this release we are no longer publishing artifacts for Akka 2.3 and have completely removed the related project from the codebase. Akka 2.3 reached end-of-life when Akka 2.5 was released, over a year ago, and Akka 2.4 has since reached EOL as well. For now we are only dropping support for Akka 2.3 and plan to continue supporting Akka 2.4 as long as the maintenance burden remains low.
v1.0.1 - Minor Fixes
Fixes:
- A NullPointerException would be thrown if akka.actor.serialize-messages=on; more details in kamon-io/kamon-akka-remote#9. This only affects Akka 2.5, which creates a copy of the original envelope when doing the round trip to check serialization. Solved by ensuring that copies of the envelope carry the corresponding context as well (see the sketch below).
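For reference, the Akka setting under which this showed up is the standard serialization check toggle (typically only enabled for testing); a minimal sketch of turning it on:

akka {
  actor {
    # Forces every message through a serialization round trip,
    # which is where the NullPointerException used to appear.
    serialize-messages = on
  }
}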