ENH: Add lineplot visualization code. #17
Conversation
I think this function could be simplified. First, we could restrict it to plotting only one temporal frequency, which would make it easier to use in a tutorial. Second, I think the user should pass the filter definitions explicitly. Together these changes would simplify the code a lot: the user would be in charge of passing the definitions of the filters corresponding to each matrix column.
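A minimal sketch of what that could look like (the helper name is hypothetical; the definition keys follow pymoten's `temporal_freq`/`spatial_freq`/`direction` convention):

```python
# Hypothetical sketch: the caller passes filter definitions explicitly and
# restricts the plot to a single temporal frequency. The helper returns the
# matrix column indices (and definitions) matching that frequency.
def select_filters_at_tf(filter_definitions, temporal_freq):
    """Return column indices and definitions for one temporal frequency."""
    idx = [i for i, d in enumerate(filter_definitions)
           if d['temporal_freq'] == temporal_freq]
    return idx, [filter_definitions[i] for i in idx]

# Toy filter definitions standing in for pyramid.filters
filters = [
    {'temporal_freq': 0.0, 'spatial_freq': 2.0, 'direction': 0.0},
    {'temporal_freq': 2.0, 'spatial_freq': 2.0, 'direction': 0.0},
    {'temporal_freq': 2.0, 'spatial_freq': 2.0, 'direction': 180.0},
]
idx, defs = select_filters_at_tf(filters, 2.0)
print(idx)  # → [1, 2]
```

The plotting function would then only need to handle the columns it was given, rather than slicing the full pyramid internally.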
The RGB composite would then be created from the animations and the stimulus background using a different function. I think it'd be nice to include an example, too. If you save the animation explicitly as an MP4, you can then embed it into the sphinx gallery (e.g. at the end of this example: https://gallantlab.github.io/pymoten/_sources/auto_examples/introduction/demo_show_filters.rst.txt). Re the MATLAB code: I added a link to the README.
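A minimal sketch of saving an animation explicitly for sphinx-gallery embedding (a stand-in matplotlib animation is used in place of the real moten.viz output, and the file name is arbitrary; falls back to GIF when ffmpeg is not installed):

```python
import os
import matplotlib
matplotlib.use('Agg')  # headless rendering
import numpy as np
import matplotlib.pyplot as plt
from matplotlib import animation

# Stand-in animation; the real one would come from the viz function in this PR.
fig, ax = plt.subplots()
im = ax.imshow(np.random.rand(8, 8), animated=True)

def update(i):
    im.set_array(np.random.rand(8, 8))
    return (im,)

anim = animation.FuncAnimation(fig, update, frames=5, blit=True)

# Save explicitly so sphinx-gallery can embed the resulting file.
if animation.writers.is_available('ffmpeg'):
    outfile = 'demo_moten_values.mp4'
    anim.save(outfile, writer='ffmpeg', fps=24)
else:
    outfile = 'demo_moten_values.gif'
    anim.save(outfile, writer=animation.PillowWriter(fps=24))

saved = os.path.exists(outfile) and os.path.getsize(outfile) > 0
```

The saved file can then be referenced from the example's reStructuredText so it renders inline in the gallery page.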
NB: This can be used to normalize activations (e.g. issue #17)
I think simplifications of the code depend on what you want to put on the user, and I think there is also some irreducible complexity. Let me explain.

In the case of plotting one temporal frequency: the code already has an option for that via a keyword argument.

Another complexity the code currently addresses is that for temporal frequencies greater than zero, there can be two filters (for opposing directions of motion at the same temporal and spatial frequency) that map to lines drawn in identical places. Thus, the code checks for this and gives the user an option for how to combine the lines (via another keyword argument).

A further complexity is that the zero-temporal-frequency filters are plotted in a different format than the rest (as dots rather than lines, because they are simple Gaussians with no orientation). Again, I don't think there is a way to do this substantially more compactly than it is done now.

We could scrap the idea of doing the RGB plot altogether, but having worked with these features a lot, I have found substantial value in having a way to plot the full model overlaid on the stimulus. The alternatives to the RGB plot all involve multiple panels (e.g. one per temporal frequency). These can be constructed with the code as is (with a simple call to plt.subplots() and a loop), but I have found them more difficult to parse once constructed; there is too much looking around at different images and panels for my taste.

One could remove the selection of temporal and spatial frequencies, but that seems like a common use case that is useful to have inside the function. If it's not inside the function, I suspect that your example lines would precede the function call quite often, which feels like making the end user do more work than necessary.

In terms of working with filter definitions: both filter definitions and parameters are properties of the pyramid class, so both seem equally accessible.
I think the logic for combining 180-degree-off motion directions is more straightforward when working with parameters than with filters.

On where to put these functions: I did consider making it a method of the pyramid class, but I thought it would be natural to look for it in the viz module (since that module already exists), so that's where I put it. I could go either way. Perhaps all functions in the viz module could be made methods of the pyramid class?

For a demo, how would you like it written? Since it requires feature values, should it compute those values (and replicate what is in the existing example)?
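To illustrate why parameters make the direction-combining logic easy (parameter key names as in pymoten; the mean-of-absolute-values combining rule is just one option):

```python
import numpy as np

# Toy per-filter parameters: two filters differ only by 180 degrees in
# direction, so their overlay lines land in identical places.
params = [
    {'spatial_freq': 2.0, 'temporal_freq': 2.0, 'direction': 45.0},
    {'spatial_freq': 2.0, 'temporal_freq': 2.0, 'direction': 225.0},
    {'spatial_freq': 4.0, 'temporal_freq': 2.0, 'direction': 90.0},
]
values = np.array([1.0, 3.0, 2.0])  # one feature value per filter

# Group columns whose directions are congruent mod 180 at the same
# spatial and temporal frequency.
groups = {}
for col, p in enumerate(params):
    key = (p['spatial_freq'], p['temporal_freq'], p['direction'] % 180)
    groups.setdefault(key, []).append(col)

# Combine each group's values into a single value per drawn line.
combined = {key: np.abs(values[cols]).mean() for key, cols in groups.items()}
print(combined)  # → {(2.0, 2.0, 45.0): 2.0, (4.0, 2.0, 90.0): 2.0}
```

Doing the same thing from the filter kernels would require reconstructing the direction pairing that the parameters state directly.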
Added code to create static and movie visualizations of the line-overlay plot of the motion energy feature values that I have used in talks. It needs a bit more testing and an example. A basic example building on the extant example would be:
```python
import moten
from scipy.stats import zscore
from IPython.display import HTML

video_file = 'http://anwarnunez.github.io/downloads/avsnr150s24fps_tiny.mp4'
luminance_images = moten.io.video2luminance(video_file, nimages=100)
nimages, vdim, hdim = luminance_images.shape

pyramid = moten.get_default_pyramid(vhsize=(vdim, hdim), fps=24)
moten_features = pyramid.project_stimulus(luminance_images)
me_z = zscore(moten_features, axis=0)

anim = moten.viz.plot_moten_value_movie(luminance_images, me_z, pyramid.parameters,
                                        vmin=-3, vmax=3)
HTML(anim.to_html5_video())
```
As a side note, this specific visualization doesn't look entirely amazing, I think because of the scaling issue with motion energy features that I've mentioned before: big filters yield much bigger values than small filters and are thus difficult to display in the same plot. That is why the features are z-scored in this example, but z-scoring can do weird things with small datasets (here, 100 frames, about 4 seconds). It produces nicer visualizations with larger datasets.
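The scaling point can be illustrated with toy numbers (a numpy-only z-score on a synthetic feature matrix, not actual motion energy values):

```python
import numpy as np

rng = np.random.default_rng(0)
n_frames = 100
# Toy feature matrix: the first column mimics a large, low-spatial-frequency
# filter (large raw values), the second a small, high-frequency filter.
features = np.column_stack([
    1000.0 * rng.random(n_frames),
    1.0 * rng.random(n_frames),
])

# Per-column z-scoring (equivalent to scipy.stats.zscore(..., axis=0))
# puts both filters on a comparable scale for display.
z = (features - features.mean(axis=0)) / features.std(axis=0)
# After z-scoring, both columns have zero mean and unit standard deviation.
```

With only ~100 frames, though, the per-column mean and standard deviation are noisy estimates, which is what makes z-scored visualizations of short clips look odd.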
Side note II: this repo should also reference the original MATLAB code that Shinji and I shared a few years back. I added a line to moten/core.py; it might be nice to mention it in the README.rst too.