Add Roborock Core as a platform #691
It should work, but you have to set it up using map_source:
camera: image.s7_roborock_downstairs I suppose calibration points won't be added to the core, right? |
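For reference, a minimal card configuration along those lines might look like this (entity ids are illustrative, taken from the comment above; calibration still has to be provided separately, as discussed further down):

```yaml
type: custom:xiaomi-vacuum-map-card
entity: vacuum.s7_roborock_downstairs       # illustrative vacuum entity
map_source:
  # the official Roborock integration exposes the map as an image entity;
  # the card accepts it in the same place a camera entity would go
  camera: image.s7_roborock_downstairs
```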
Ah, that did it - I guess I should have read the docs better. I do plan to try to get calibration points added to core as an extra state attribute on the image entity; I think I'll be able to get that approved |
In case of failure I think it won't be hard to inject appropriate code using a custom integration. Please include me in a potential PR regarding this feature |
Can do. Anything else that would be helpful for me to expose? I have just started diving into your code, but one thing that is important with Roborock vacuums is that commands apply to the currently selected map, while room ids are not unique across maps. So if I click room id 12 on the downstairs map to clean it while the upstairs map is selected as my current map, it will attempt to clean room id 12 upstairs. Is there a means on the card to set the map you are interacting with as the current map? |
It is possible to add configs for multiple maps. More info e.g. here: #248 |
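Presumably this refers to the card's additional map presets; a rough sketch, assuming the `additional_presets` option and purely illustrative entity ids:

```yaml
type: custom:xiaomi-vacuum-map-card
entity: vacuum.roborock
map_source:
  camera: camera.map_downstairs
calibration_source:
  camera: true            # calibration points taken from the camera's attributes
additional_presets:
  - preset_name: Upstairs
    map_source:
      camera: camera.map_upstairs
    calibration_source:
      camera: true
```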
Could you perhaps give more detail on how you are getting the map coordinates? I am using the new native Roborock integration. I have the image showing up but the calibration is off because I am not sure how to map the dock location to the image map. |
@dkirby-ms he modified the code of the Roborock integration |
Is there any ongoing development on this issue? |
I am unfortunately a bit blocked. Not much I can do at the moment and core devs have to make decisions. |
Is there a recommended workaround for generating calibration points at the moment? Docs point me towards https://github.com/PiotrMachowski/Home-Assistant-custom-components-Xiaomi-Cloud-Map-Extractor but I don't have a Xiaomi account as I've been using the Roborock app. |
You can use this integration instead: |
I did just switch to the official integration because I was having issues with that one and was hoping this would be more stable. Is the humbertogontijo version preferred for the vacuum map card? |
@jason-curtis at this moment the official integration doesn't provide data that is necessary to use the map functionality in this card |
I as well very much look forward to this card supporting the official Roborock integration |
@PiotrMachowski since our original plan failed, would a service call work? I.e. you could call vacuum.send_command with a command like get_map_card_info and get all of the initial info you need, like calibration points, room dimensions, etc.? Then we could still use the image entity to update what the map looks like? |
@Lash-L I think a service call should be ok, but I also thought about implementing a dedicated WS API method. The downside of this approach is that it probably won't be possible for users to use it manually. It would also be nice to make it possible for the card to be notified that something has changed in the calibration (I think this happens quite often during map building). Could it be solved by generating an event when it happens? |
Hi! First of all, thanks for the integrations! @Lash-L Could you better explain how to get the calibration points so users can do it in the meantime? |
I think conceptually that's okay - I don't know if I really have time for it right now, whereas a service might be easier. I could also cache the latest map data and the service could return the latest map data, if you think that would be better?
@carlos-48 You can't. I modified things inside the actual code base in my development environment |
Don't worry about it, at this moment I'm rewriting Map Extractor, then I'll have to adjust the card, so you have plenty of time. I think a service call should be enough for my purposes |
Hey there, I have a suggestion regarding this. When you were developing the custom integration you had more freedom to do whatever you pleased, on your own terms, right? Why not bring the custom integration up to date, making it on par with the core integration? That way you could implement all the features at any time without having to go through the core devs themselves. This way the custom integration is always the "same" as the core, but at the same time you are able to implement whatever you want at whatever time. This is a newbie talking btw, I am just thinking out loud! Because so far I had 3 options, in my case with a Roborock:
|
I wanted to use this card so badly that I was willing to forsake the Roborock app and use Xiaomi Home instead (and therefore lose the camera view and pictures of detected objects). However, the newest Roborock S8 MaxV, which has already been out for quite a while now, is not supported in the Xiaomi Home app, so I cannot use it there. Now there doesn't seem to be any option available for me to get a vacuum card working... I am left without any working option, it seems... |
The official integration uses an image entity |
Is there any progress here? |
Thanks @pedro639 |
The HACS custom integration is not working. I command the robot to do something and it goes out and straight back to the dock |
Agree with @borgqueenx, the custom Roborock integrations are crashing all the time, and then there is no way to make good use of this fantastic card |
Hi all, just to get things right:
Is this all correct? So would it be possible to e.g. create some script that updates this in the config automatically for me and then use this map? Thanks a lot, this looks like an amazing extension 👍 Willing to help to get this running with Roborock core. |
Not really, they have not been added to the HA core because of technical reasons. More info here
Calibration points are calculated based on map dimensions - they are not returned by any Roborock command
You can add them manually to the card's configuration. It is also possible to use any entity as a source of calibration.
I don't think it is possible at this moment |
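A sketch of the two approaches mentioned above (all coordinates and entity ids below are placeholders, not values measured for any particular vacuum):

```yaml
# Variant 1: calibration points added manually to the card's configuration
calibration_source:
  calibration_points:
    - vacuum: {x: 25500, y: 25500}   # point in the vacuum's coordinate system
      map: {x: 466, y: 1889}         # the same point in image pixels
    - vacuum: {x: 26500, y: 25500}
      map: {x: 730, y: 1889}
    - vacuum: {x: 25500, y: 26500}
      map: {x: 466, y: 1625}

# Variant 2: read the calibration from another entity's attribute
# calibration_source:
#   entity: sensor.roborock_calibration      # hypothetical helper entity
#   attribute: calibration_points
```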
Thanks for the clarification @PiotrMachowski. I tried to catch up on the existing issues; I see that a general solution would be the best way to go, but... it will also take more time, it's not even a complete proposal yet, so we are far away from an implementation. Referring to @Lash-L's comment (home-assistant/core#105424 (comment)):
what about doing it "old-fashioned" and just writing the additional properties to a YAML file and using this as an "input" for the map? |
Any kind of IO is probably a no-go for me, imo. I have had so little time to actually work on any of my hobby projects - but I would be happy to accept a workaround fix.

What I had in mind is a new command, something like GET_CALIBRATION. Ideally, this would be cached and loaded, but since this shouldn't need to be done frequently, it can just do it all once there.

Although - it might make more sense just to add a function like "update_map_data" on the api object, and then, in core, when the map data is parsed, it calls .update_map_data(map_data) and stores it there. Then, when the GET_CALIBRATION command is sent, it returns the calibration data that is stored.

And as a clarification - GET_CALIBRATION cannot be a new service, but rather a new command inside the roborock python package. |
A new command should be ok for me as well - it can be cached, e.g., in a trigger-based template sensor, and this new entity can then be used by the card as a source of calibration (see the sketch below). |
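A very rough sketch of what such a trigger-based template sensor could look like (the trigger, entity names, and the way the command response would end up in the attribute are all assumptions, since the exact mechanism was still being discussed here):

```yaml
template:
  - trigger:
      # hypothetical: refresh whenever the map image entity updates
      - platform: state
        entity_id: image.s7_roborock_downstairs
    sensor:
      - name: "Roborock calibration points"
        state: "{{ now().isoformat() }}"   # just a timestamp of the last refresh
        attributes:
          # placeholder - this is where the cached result of the
          # get_map_calibration command would be stored
          calibration_points: "[]"
```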
Anything I can do to make some progress on this? |
Workaround description: #754 (comment) |
Up. Any news on that? |
Hey @PiotrMachowski, the calibration custom command finally got added. Soon(TM), users will be able to send a get_map_calibration command to the vacuum and get the calibration points of their active map. How would you like to proceed from here? |
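Once released, sending that command should presumably be a standard `vacuum.send_command` call (note that, as discussed further down, the service call itself does not return the response to the caller):

```yaml
service: vacuum.send_command
target:
  entity_id: vacuum.roborock_s7     # illustrative entity id
data:
  command: get_map_calibration
```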
That's great news @Lash-L! Does calling this command introduce any additional load on the API/HA (like a new map download/parsing), or is it just retrieving a set of already-calculated values? |
It unfortunately does a new map download and parse. It was tricky, as I couldn't pass the data down from HA because core devs don't want any code in core meant just to support a custom component. My understanding is that it is basically a one-time call for each map, right? If that's not the case I can try something else |
I can't really tell, I do not own a vacuum that is compatible with the Roborock app 😉 but as long as the position of the map itself doesn't change on the image, then I suppose everything should be fine. Is it possible to detect the moment when this call should be performed? (detect that the map has changed) |
I think I can make a better version of it in the future that would run every time the image updates, i.e. update the image through HA core -> pass the data back to the python library -> store it -> when a command is sent, return the value.

Is there anything else other than calibration points that you would need that you don't have access to now? For now, is it good enough to just assume calibration points don't change? And we can update again when I build the 'better' system? |
You could potentially try to move the map parsing from HA to the python library itself. In that case you should be able to store the latest data inside the library and return it to HA using commands.
The other part that would be nice to have is rooms, for automatic room cleaning mode generation (room names, IDs and coordinates).
I think it should be ok for most cases, the part that I worry about is changing maps for multiple floor support. |
If I did this, what would be the ideal way for you to get the data on the frontend? Still a command? I realized a slight hiccup in my original thought process: the send_command service call doesn't actually return anything and, per core devs, it should never return anything. In the Lovelace card, are you able to access the actual Python objects? I assume not |
The ideal way would be to add
Why? There are commands that return responses on the Xiaomi vacuum side, I thought
No, there is no chance for something like that |
I'm not sure if I can get an 'extra' service call approved in core review
Are you sure? Send command seems to also not return anything here
Figured that |
I tried that and got rejected. home-assistant/core#112431 |
If I may ask the question: if I switch maps, e.g. to a different floor, the calibration points need to be updated at the same time as the map image in order not to break the card. It would be very helpful to include the name of the map along with the metadata, to allow using different display offsets in the card. |
I'll try to work with anything you will be able to provide 🙂 As a last resort I thought about adding some sneaky monkey patching using a custom integration, but I would really prefer not to do it.
It doesn't on the HA side, but the miio library returns data
Generally the answer you got would make sense in an ideal world, but unfortunately it also limits power users 😞 |
Yeah, so Roborock can do that too, and that's what happens in the library - but you aren't able to access that in any way through the Lovelace card, right? |
Exactly |
Integration repository
https://www.home-assistant.io/integrations/roborock/
Supported features
Checklist
piotr.machowski.dev [at] gmail.com
(Retrieving entities info; please provide your GitHub username in the email)
Vacuum entity/entities
n/a
Service calls
n/a
Other info
Creating this issue here to keep track of my work and ask questions. I plan to do the incorporation myself so I have left service calls and entities blank.
Roborock core holds the map in an image entity instead of a camera entity, so I got the calibration points from the map parser's .calibration() and set up the following:
However, the image does not show up
Before I went any further I wanted to check with you that images are supported by the card.