[BUG]: File not found when running examples #79
Comments
Hello @andieich, I think we need to improve our docs: the README should put an emphasis on initialising the submodules. They are mentioned, but we do not specifically stipulate the initialisation part. As for the inference failure, it seems like there is an issue with our script somewhere; it is hard to tell without more logs. What you could try is the Faster Inference with Custom Inferencer section at the end of https://github.com/colour-science/colour-checker-detection/blob/develop/colour_checker_detection/examples/examples_detection_inference.ipynb This bypasses the need for a subprocessed script if you are not concerned about licensing issues. Worth trying, at least to confirm that YOLOv8, etc. work!
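For reference, the submodule initialisation mentioned above is done with standard Git commands (not specific to this repository); a typical sketch:

```shell
# Clone and initialise submodules in one step...
git clone --recurse-submodules https://github.com/colour-science/colour-checker-detection.git

# ...or, for an existing clone, fetch the submodules afterwards:
cd colour-checker-detection
git submodule update --init --recursive
```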
Thanks a lot for your explanations.
Could you point out what I am doing wrong?
There is no good way to get the images when installing with Pip; I never really thought about it because I always expected people would clone the repository or change the examples. They are quite heavy and it would be unreasonable to have them in the PyPI package. Out of curiosity, which model did you download?
Our tests are still passing as of yesterday: https://github.com/colour-science/colour-checker-detection/actions/runs/8595516611. I'm starting to think that you might not have pulled the right model: https://huggingface.co/colour-science/colour-checker-detection-models/resolve/main/models/colour-checker-detection-l-seg.pt
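As a quick sanity check, the snippets later in this thread expect the model file in a cache directory under the home folder; a minimal sketch (standard library only) that prints where the file should live and whether it is present, so you can download the URL above into that path:

```python
import os

# URL from the comment above; the cache path matches the one used by the
# custom inferencer snippet later in this thread.
URL = (
    "https://huggingface.co/colour-science/colour-checker-detection-models/"
    "resolve/main/models/colour-checker-detection-l-seg.pt"
)

target = os.path.join(
    os.path.expanduser("~"),
    ".colour-science",
    "colour-checker-detection",
    os.path.basename(URL),
)

print(target)
print("present:", os.path.exists(target))
```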
Thanks for your reply. I redid the examples and it still doesn't work, so I must be doing something wrong. Here is the example notebook I modified to use the weights you referred to. Do you know what might be the issue?
I continued to play around with the package. Do you think it might have something to do with resizing the images? I mean, could the package expect a different size of the image/detected chart?
The model should resize all the input to 1280px, so I don't think this is a resolution issue. I have the feeling it could be related to the way the images are read. Do you have OpenImageIO or Imageio installed?
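A quick way to answer that question is to check which optional image I/O backends are importable; a minimal sketch using only the standard library (the names below are the import names, which can differ from the PyPI package names):

```python
import importlib.util

# find_spec() reports whether a module can be imported without importing it.
for name in ("OpenImageIO", "imageio"):
    available = importlib.util.find_spec(name) is not None
    print(f"{name}: {'installed' if available else 'missing'}")
```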
I finally had time to continue with the script. When I use
the detection works as desired. Do you have an idea what I can try? The model works but not within The only thing I changed within the
But this cannot be the issue since the same path works in I'm using
And I found out one more thing. When I just run a part of the
it only works if
When I manually pass the image paths to
When I comment out the line
But when I use this adapted function here:
Finally, I made more progress. In
Oh wow, this is so weird! What hardware are you running on? Asking because in the example I set
I'm on an M1 also! Would it be possible to do a
Sure, here's the
Would it please be possible, if you don't mind, as a test, to install via
Just so that we have the same stack.
OK, I will do that. Which Python version do you use? And do you run the terminal in Rosetta mode?
Nope, arm64 all the way!
@KelSolaar: OK, and which Python version?
Python 3.11!
I installed everything like you (and additionally
Thank you! Would you happen to have the images too, so that I can test on exactly the same data?
Actually, scratch that, I managed to repro, hang on tight.
I have updated the example notebook, two takeaways:

The code is now as follows:

import os

import colour
import numpy as np
from ultralytics import YOLO


def inferencer_agpl(image, **kwargs):
    model = YOLO(
        os.path.join(
            os.path.expanduser("~"),
            ".colour-science",
            "colour-checker-detection",
            "colour-checker-detection-l-seg.pt",
        ),
    )

    data = []

    # NOTE: YOLOv8 expects "BGR" arrays.
    if isinstance(image, np.ndarray):
        image = image[..., ::-1]
        image = colour.io.convert_bit_depth(image, np.uint8.__name__)

    # `device=0` for CUDA GPU
    for result in model(image):
        if result.boxes is None:
            continue

        if result.masks is None:
            continue

        data_boxes = result.boxes.data
        data_masks = result.masks.data

        for i in range(data_boxes.shape[0]):
            data.append(
                (
                    data_boxes[i, 4].cpu().numpy(),
                    data_boxes[i, 5].cpu().numpy(),
                    data_masks[i].data.cpu().numpy(),
                )
            )

    return data

I read the image directly before using the
Let me know how it goes!
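The two conversions at the top of that inferencer can be illustrated standalone. This is a sketch using plain NumPy in place of `colour.io.convert_bit_depth` (whose rounding behaviour may differ slightly):

```python
import numpy as np

# The inferencer applies two conversions before handing the array to YOLOv8:
# an RGB -> BGR channel flip, then float [0, 1] -> uint8 [0, 255].
rgb = np.array([[[1.0, 0.5, 0.0]]])  # single orange-ish pixel, float RGB

bgr = rgb[..., ::-1]  # reverse the channel axis

as_uint8 = np.around(bgr * 255).astype(np.uint8)

print(as_uint8)  # [[[  0 128 255]]]
```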
Thanks! Yes, sorry, I didn't explain, but I used your two example images.
Now it is:
The addition of
I think this might have to do with the version of
I also tried the updated code (without the
Let's try to get on the same baseline, would it be possible to use the latest
I took your notebook and reduced it to something that works for me with the aforementioned fixes: test_mps_detection.ipynb.zip
Let's try to get this one working!
Description
I am trying to run the examples_detection_segmentation.ipynb notebook.

First, for some reason, the example images were not downloaded, and the directory to which ROOT_RESOURCES_EXAMPLES refers is empty. I manually downloaded the images and changed the ROOT_RESOURCES_EXAMPLES variable to refer to the downloaded photos.

Now to my actual problem. An error occurs when I do the "detection" step: apparently, the results file in a temporary directory is not found. The images were correctly plotted in the previous part (caption: "Images").

Do you have an idea what I might try?
Code for Reproduction
No response
Exception Message
No response
Environment Information
No response