exception in recalculateGroupVolume #124
Seems like it couldn't find a zone for the given player (meaning the player was not the coordinator), but it received a last-change event from it. I also think that the zone UUID isn't always the coordinator player's UUID. The scenario is where you started a group, but the coordinator player dropped out, was removed, or was demoted for some reason. This is probably a bug, and the zone needs to be found via other means (basically, find the zone that this player is a member of).
I think I understand, somewhat. I don't have any Sonos devices and am trying to maintain some code that was written by someone else. The application runs 24x7 to monitor and control the devices, so it's very possible that a coordinator player drops at some point. The exception gets trapped in the main app, but by that point it seems to be too late to do anything about it.
When I look at that traceback, it is the zone lookup near the top of recalculateGroupVolume (the line that reads .members, per the traceback) that is throwing the exception. That would imply that this.uuid is no longer the uuid of a zone, because the lookup by that uuid finds nothing, hence undefined. I guess that happens when a player drops and a topology change event occurs. If a topology change changed system.zones, but the group this player was in did not get updated in a timely manner for some reason, then that is exactly the behavior you would expect. The other case, if I understand Jishi correctly, is that the uuid of the group is the uuid of the coordinator. If the coordinator uuid in the zone was changed, but the code that calls this method assumed this player is still the coordinator, then you would get the same result. A possible fix, as Jishi suggested, would be to look in the members of all zones for the uuid in question when we can't find a zone with this uuid; see the sketch after this comment. For reference, part of the function: function recalculateGroupVolume() { if (relevantMembers.length === 0) {
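As a rough illustration of that fallback, here is a minimal sketch. It is not the actual sonos-discovery implementation: the model shapes assumed here (this.system.zones, zone.uuid, zone.members, member.uuid) come from this thread and the traceback, not from the library source.

```js
// Hypothetical sketch of the fallback lookup discussed above.
// The model shape (system.zones, zone.uuid, zone.members, member.uuid)
// is assumed from this thread, not verified against sonos-discovery.
function findZoneForPlayer(system, playerUuid) {
  // Fast path: a zone whose uuid matches the player's uuid.
  let zone = system.zones.find((z) => z.uuid === playerUuid);
  if (!zone) {
    // Fallback: the zone uuid can go stale after a coordinator change,
    // so search every zone's member list for this player instead.
    zone = system.zones.find((z) =>
      z.members.some((member) => member.uuid === playerUuid)
    );
  }
  return zone; // may still be undefined while a topology change is in flight
}

function recalculateGroupVolume() {
  const zone = findZoneForPlayer(this.system, this.uuid);
  if (!zone) {
    // Topology not settled yet: skip this notification rather than throw.
    return;
  }
  const relevantMembers = zone.members;
  if (relevantMembers.length === 0) {
    return;
  }
  // ...existing group volume calculation over relevantMembers...
}
```

Returning early instead of throwing matches the observation above that the event can arrive before the topology has settled.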
I don't recall fully how the model looks, but I think the zone might have a coordinator property that should be correct? If not, maybe it should have one, since the coordinator can change. Failing that, you might as well always look through the whole zone for any member match; the performance hit is negligible IMHO.
The zone does have a coordinator property. It is (or should be) maintained by topology change notifications. What I don't know is whether, when something that is a coordinator drops, the system model state gets updated correctly in all cases. Based on the behavior the OP is describing, it looks like the model state is not consistent: when I look at the calling code, it is calling this method on what it thinks is the coordinator, but it can't be the coordinator if you can't find a zone with its uuid in this method.
I think this merely stems from a misinterpretation on my part. Each zone has a UUID in the topology, which in 99% of cases is the same as the coordinator's, but in the event that a coordinator is demoted or removed from a group, another player is promoted to coordinator and the old UUID sticks. I didn't anticipate that. The last-change events are most likely always fired by the coordinator, but my lookup that uses the zone uuid can then be wrong in some cases. Just searching for the zone whose coordinator uuid matches should then be sufficient. And yes, the coordinator demotion/promotion should trigger a topology change, and thus also update the zone information. There might be some race condition, though, where it would actually fail to find a zone (because the topology change is delayed), but then we can scrap the notification altogether. I wrote this 10 years ago, so my memory is a bit dim on the details :D
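A similarly hedged sketch of the coordinator-based lookup described here; zone.coordinator.uuid is assumed from the comments above about the zone's coordinator property and is not checked against the actual model.

```js
// Hypothetical: find the zone whose current coordinator is the player
// that fired the last-change event. zone.coordinator.uuid is an assumed shape.
function findZoneByCoordinator(system, coordinatorUuid) {
  return system.zones.find(
    (z) => z.coordinator && z.coordinator.uuid === coordinatorUuid
  );
}
```

If this still returns undefined because the topology change is delayed (the race condition mentioned above), dropping the notification, as in the earlier sketch, seems safer than throwing.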
In a program that makes use of this node module, we're seeing the following exception:
TypeError: Cannot read properties of undefined (reading 'members')
    at Player.recalculateGroupVolume (/var/polyglot/pg3/ns/000db9532fec_7/node_modules/sonos-discovery/lib/prototypes/Player/recalculateGroupVolume.js:4:32)
    at NotificationListener.notificationHandler (/var/polyglot/pg3/ns/000db9532fec_7/node_modules/sonos-discovery/lib/models/Player.js:370:25)
    at NotificationListener.emit (node:events:525:35)
    at /var/polyglot/pg3/ns/000db9532fec_7/node_modules/sonos-discovery/lib/NotificationListener.js:125:17
    at process.processTicksAndRejections (node:internal/process/task_queues:95:5)
This is happening every few days, so it is repeatable, but it is not reproducible on demand.