Evaluation with custom dataset #38
@mhmd-mst
Thanks for your reply,
Thanks again for your help.
`fake_recon_index = db` — The `query_shape[0]`, `dummy_db_shape[0]`, and `data_shape[0]` all refer to the number of fingerprints you have. In other words, it is the data length, not a batch size. FYI, #17 explains the search process for a sequence query.
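To make the "data length, not batch size" point concrete, here is a minimal sketch. The file name `query.mm` and the shape `(9, 128)` come from the discussion in this thread; the array below just emulates the memmap, it is not the repo's actual loading code.

```python
import numpy as np

# Hypothetical setup from this thread: 3 query audio files,
# 3 fingerprints per file, embedding dimension 128.
# In the repo the fingerprints live in a np.memmap (e.g. query.mm);
# a plain array stands in for it here.
query = np.zeros((9, 128), dtype=np.float32)

# query_shape[0] is the total number of fingerprints (data length),
# not a batch size.
query_shape = query.shape
assert query_shape[0] == 9  # 3 files x 3 fingerprints per file
```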
In the code I noticed that if, for example, the query set contains 3 audio files, the `query.mm` file will hold the fingerprints of all of them. If `query_shape` of all fingerprints is `(9, 128)`, i.e. 3 fingerprints per audio, and I choose `test_seq_len` as '3', then the `np.dot()` used in the `enumerate(candidates)` loop will not be as meaningful as the case in #17, because now a sequence of fingerprints can span two queries. Am I right in this interpretation?
@mhmd-mst `test_ids = [0, 3, 6]  # start_id of query sequence from each file` — with this `test_ids`, the
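The `test_ids = [0, 3, 6]` above can be derived mechanically. This is a hedged sketch (not the repo's actual code) of one way to compute start ids so that every length-`test_seq_len` sequence stays inside a single file; `fps_per_file` is an assumed name for the per-file fingerprint counts from this thread's example.

```python
# Assumed setup from the thread: 3 files, 3 fingerprints per file.
fps_per_file = [3, 3, 3]   # number of fingerprints in each query file
test_seq_len = 3

test_ids = []
start = 0
for n in fps_per_file:
    # emit every start id whose sequence fits entirely within this file
    for s in range(start, start + n - test_seq_len + 1):
        test_ids.append(s)
    start += n

print(test_ids)  # [0, 3, 6]
```

With `test_seq_len = 3` and 3 fingerprints per file, each file contributes exactly one valid start id, so no query sequence crosses a file boundary.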
Yes. If you used `test_ids = [0, 3, 6]` with
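For readers following along, here is a hedged sketch of the sequence-search idea referenced in #17 (not the repo's exact implementation): each candidate start position in the db is scored by the inner product over a window of `test_seq_len` consecutive fingerprints, and the best-scoring offset wins. The data here is synthetic.

```python
import numpy as np

rng = np.random.default_rng(0)
db = rng.standard_normal((100, 128)).astype(np.float32)
db /= np.linalg.norm(db, axis=1, keepdims=True)   # unit-norm fingerprints

test_seq_len = 3
# Pretend the query sequence is the db segment starting at offset 10.
q_seq = db[10:10 + test_seq_len]

candidates = range(db.shape[0] - test_seq_len + 1)
scores = [np.dot(q_seq.flatten(), db[c:c + test_seq_len].flatten())
          for c in candidates]
best = int(np.argmax(scores))
print(best)  # the true offset 10 should score highest
```

This also illustrates the concern raised above: if the candidate window is allowed to start at an id that crosses a file boundary, the summed inner product mixes fingerprints from two different recordings, which is why `test_ids` restricts the start positions.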
Hello,
It depends on the dimension of the fingerprint and how to define the
Hello, I am using a pretrained model that you mentioned in one of your issues, and I want to use it on my own dataset. I modified the dataset class:
and the `get_data_source` function in `generate.py`:
I am using a sample of 3 db audio files and 2 query audio files.
Thanks in advance, and thanks for your great work.