-
If I wanted to lift and shift the RF2 data, is the RF2 data kept only in the Resources directory? If I copied the workspace directory to a new server and started Snow Owl with a bind mount, would I see the transferred RF2 data?
-
Hi @Cv66user, Not sure I understand your question. We do not store RF2 data directly; we index all incoming RF2 rows during SNOMED CT RF2 import into several Elasticsearch indices and use those documents for searching, etc. You can bind the Elasticsearch indices directory to a path on your local computer and move that folder to another computer afterwards if you want to. I hope this helps. Cheers,
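A minimal sketch of such a bind mount, assuming the b2ihealthcare/snow-owl-oss image and the container-side indices path discussed below in this thread (the host path is only an example):

    # map a host directory onto the container's Elasticsearch indices directory
    docker run -d --name snowOwlServer -p 8080:8080 \
      -v /srv/snowowl-indexes:/usr/share/snowowl/resources/indexes/elastic-snowowl \
      b2ihealthcare/snow-owl-oss:7.17.0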
-
Thank you. I spun up Snow Owl and imported the RF2 data inside the container. If the container is destroyed I will lose this data. So I thought I could copy the data out of the container with docker cp and then use docker run with a -v bind mount pointing to it, which is what I am ultimately trying to achieve. It sounds like once imported, the data goes into a proprietary-format resources folder, which is what I'm having trouble mounting.
-
Thank you for your advice. In my host's /usr/share/ I don't have a snowowl directory or an elasticsearch folder. I started Snow Owl with "docker run -d --name snowOwlServer -p 8080:8080 b2ihealthcare/snow-owl-oss:7.17.0" and then ingested the RF2 data via the API OK. If I mount the elastic folder with "docker run --name test -v /var/rf2datanew/resources/indexes/elastic-snowowl:/usr/share/snowowl/resources/indexes/elastic-snowowl/", I now get an Access Denied error: "Caused by: java.nio.file.AccessDeniedException: /usr/share/snowowl/resources/indexes/elastic-snowowl/data". That path refers to the container path? I observe we specify snowowl:root, so I tried "chown -R snowowl:root elastic-snowowl/" as well as "chown -R myUserName elastic-snowowl/", but I still get Access Denied.
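One way to see which numeric owner the mounted files need is to compare ownership inside the container with the host directory (a sketch, assuming the snowOwlServer container name from above; ls -ln prints numeric UIDs/GIDs):

    # numeric ownership of the resources directory inside the running container
    docker exec snowOwlServer ls -ln /usr/share/snowowl/resources
    # numeric ownership of the host directory intended for the bind mount
    ls -ln /var/rf2datanew/resources/indexes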
-
Hi @Cv66user, If you omit any bind mount or volume, the imported data lives only inside the container and is lost when the container is removed. Yes, if you mount an external directory you need to properly configure ownership of the directory structure.
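A minimal sketch of such a mount, reusing the host path from your earlier message (the container name here is just an example):

    docker run -d --name snowowl-mounted -p 8080:8080 \
      -v /var/rf2datanew/resources:/usr/share/snowowl/resources \
      b2ihealthcare/snow-owl-oss:7.17.0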
This will mount the entire resource folder under Snow Owl to your selected host path. I hope this helps. Cheers,
-
Thank you for the advice. I have now attempted the following:
    chown -R 1000:0 /var/rf2datanew/resources
    docker stop theWorkingSnowOwlContainer
Starting the container with the mount then fails with:
    java.lang.RuntimeException: Couldn't start embedded elasticsearch
But I am able to start the original working container at will?
-
What does your docker-compose file look like? Or how did you start Snow Owl?
Our example explicitly mounts the data folder to a folder on the host machine so you are never going to lose any data when destroying the container.
https://github.com/b2ihealthcare/snow-owl/blob/8.x/docker/docker-compose.yml#L26
If you have manually created a container and imported RF2 data, then you can use the docker cp command to copy the folder contents to an external location and then mount it to the elasticsearch service as you described it. I hope this helps.
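A sketch of that workflow, assuming the snowOwlServer container and the host path used earlier in this thread:

    # copy the resources folder out of the manually created container to the host
    docker cp snowOwlServer:/usr/share/snowowl/resources /var/rf2datanew/
    # give the copied files the numeric ownership tried earlier in the thread
    chown -R 1000:0 /var/rf2datanew/resources
    # then start a new container with that folder bind-mounted, as in the earlier example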