
Collect requirements for downsampled movies coming off of motion correction #412

Open · danielsf opened this issue Jan 20, 2022 · 4 comments

@danielsf (Contributor) commented Jan 20, 2022

Various members of the science team have expressed displeasure with the webm movies automatically produced as part of our motion correction pipeline. See, for instance,

AllenInstitute/brain_observatory_qc#16

We should reach out to them to ask about their specific concerns and determine what artifacts we can reasonably produce and store to address their needs.

@danielsf (Contributor, Author)

For my own purposes, I wrote a quick module to produce mp4 videos of pre- and post-motion-correction movies:

https://github.com/AllenInstitute/ophys_etl_pipelines/tree/danielsf/dev/downsampled/video

I will try to get this reviewed and merged before this ticket is adopted (we're probably going to need it to help evaluate motion correction configurations for the 2022 labeling effort). Hopefully it can serve as a starting point for this discussion.
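
For anyone skimming this thread, here is a minimal sketch of the general idea (not the actual module linked above): temporally downsample a movie stored in HDF5 and encode it as mp4. It assumes h5py, numpy, and imageio with the ffmpeg backend installed, and that the movie lives in a dataset named "data" with shape (n_frames, n_rows, n_cols); the function name and default frame rates are illustrative only.

import h5py
import numpy as np
import imageio

def write_downsampled_mp4(h5_path, output_path,
                          input_hz=31.0, output_hz=4.0):
    """Average consecutive frames to downsample a movie in time,
    then encode the result as an mp4 (hypothetical helper)."""
    bin_size = max(1, int(input_hz // output_hz))
    with h5py.File(h5_path, "r") as in_file:
        # assumes the movie is stored in a dataset named "data"
        movie = in_file["data"][()]
    # drop trailing frames so the movie divides evenly into bins,
    # then average each bin of consecutive frames
    n_bins = movie.shape[0] // bin_size
    movie = movie[:n_bins * bin_size]
    movie = movie.reshape(n_bins, bin_size, *movie.shape[1:]).mean(axis=1)
    # rescale to uint8 so the video encoder can handle the frames
    lo, hi = movie.min(), movie.max()
    frames = ((movie - lo) / max(hi - lo, 1e-9) * 255).astype(np.uint8)
    with imageio.get_writer(output_path, fps=output_hz,
                            codec="libx264") as writer:
        for frame in frames:
            writer.append_data(frame)

Averaging (rather than decimating) frames is what makes the downsampled movie useful for QC: it suppresses shot noise while preserving slow motion artifacts.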

@danielsf (Contributor, Author)

When I asked Nick Mei why we adopted webm in the first place, he said it was due to its felicitous compression characteristics and the fact that it is royalty-free. He pointed me to this resource:

https://blog.pogrebnyak.info/codecs-and-containers-explained/

@danielsf (Contributor, Author) commented Feb 3, 2022

This is related to #425 (the difference being that #425 reflects a specific request made during one of our SSF check-in meetings).

@danielsf (Contributor, Author) commented Feb 7, 2022

Running ophys_etl.modules.video.side_by_side_video (as implemented in #433) with the following input JSON

{
  "input_frame_rate_hz": 6.0,
  "kernel_size": 2,
  "kernel_type": "mean",
  "left_video_path": "/allen/programs/braintv/production/neuralcoding/prod55/specimen_734689833/ophys_session_806855673/ophys_experiment_806928824/806855673_plane1.h5",
  "log_level": "INFO",
  "lower_quantile": 0.0,
  "n_parallel_workers": 5,
  "output_frame_rate_hz": 2.0,
  "output_path": "/allen/programs/mindscope/workgroups/surround/motion_correction_labeling_2022/806928824/806928824_2Hz_side_by_side_with_reticle.tiff",
  "quality": 7,
  "reticle": true,
  "right_video_path": "/allen/programs/mindscope/workgroups/surround/motion_correction_labeling_2022/806928824/806928824_motion_corrected_video.h5",
  "speed_up_factor": 1,
  "upper_quantile": 1.0,
  "video_dtype": "uint16"
}

appears to produce the product the scientists want to use for motion correction QC. We should consider replacing our current webm generation code in suite2p_registration with this module.
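
For the record, modules in ophys_etl_pipelines are typically driven through argschema, so (assuming the module in #433 follows that convention; its exact CLI is defined there, and the paths here are placeholders) the invocation would look something like:

python -m ophys_etl.modules.video.side_by_side_video --input_json /path/to/the_json_above.json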
