1 data point per document vs 1 data point per package #18
Currently we are returning the final response at the package level instead of the document/module level, so the user will get one data point with either approach: (a) the 1-document-1-embedding approach, or (b) the 1-package-1-embedding approach.
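As a rough sketch of the difference between the two options (the `embed()` function below is a placeholder, not the actual pipeline code):

```python
import numpy as np

def embed(text: str) -> np.ndarray:
    """Placeholder embedding function (hypothetical, for illustration only)."""
    rng = np.random.default_rng(abs(hash(text)) % (2**32))
    return rng.standard_normal(384)  # e.g. a 384-dimensional sentence embedding

package_docs = {
    "EpiNow2": ["vignette text ...", "reference page text ..."],
    "finalsize": ["vignette text ...", "reference page text ..."],
}

# (a) 1-document-1-embedding: one vector (and one data point) per document
per_document = {
    (pkg, i): embed(doc)
    for pkg, docs in package_docs.items()
    for i, doc in enumerate(docs)
}

# (b) 1-package-1-embedding: documents aggregated (here: averaged) into one vector
per_package = {
    pkg: np.mean([embed(d) for d in docs], axis=0)
    for pkg, docs in package_docs.items()
}
```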
Based on what is more relevant for Epi package search we can take either route. From an embedding/visualisation point of view, I would note that other approaches for aggregating the embeddings of a single package are also possible.
This was discussed today with the WHO Collaboratory team at our monthly stand-up and there was not a very strong push to either side. There seemed to be a small preference for returning the specific tool (i.e., 1 data point per package), with the caveat that we should then also indicate which document led to the high score. One potential option may also be to create and present both alternatives and see which one gathers more positive user feedback.
I also agree that we should have results drill down to the module, though I'm not sure how many embeddings this implies. From my experiments (https://github.com/paulkorir/working-with-embeddings/blob/master/experiments.py) I would imagine that you will only have one embedding model. I could be wrong.
OK. I've been getting up to speed with the topic and it seems to me that applying an embedding model to a set of documents results in a set of vectors. There is only ever one embedding model at play. This embedding model can work at the level of words, sentences, or documents. Therefore, the decision to be made is which level of embedding will be most useful. In my opinion, we should try them all and examine the results to select the best one.
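For instance, a minimal document-level sketch using the sentence-transformers library (the model choice is illustrative, not necessarily what our pipeline uses):

```python
from sentence_transformers import SentenceTransformer

# One embedding model, applied at the document level: each input string
# (a whole document) is mapped to a single fixed-size vector.
model = SentenceTransformer("all-MiniLM-L6-v2")  # illustrative model choice

documents = [
    "EpiNow2 estimates the time-varying reproduction number ...",
    "finalsize calculates the final size of an epidemic ...",
]
embeddings = model.encode(documents)  # shape: (n_documents, 384)
```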
For what it's worth, the 2D map as we have talked about it until now has always been at the package level. If we do it at the document level, we may get a cluster for one package, or we may not.
I hear you. I believe that it may be most useful for the user to see the viz in terms of the final tool they need, not necessarily the package. In any case, it would be useful for the user to toggle between the package and module level. At the package level they will know what to install but at the module level they will know which function to run. |
I don't think we can identify specific functions with the initial infrastructure because the source data (= the documentation we feed to the language model) is not structured by function. What can be done is what Dina proposed: we return the package name and a link to the source document that led us to return this result. From there, the user can read the document and see how they can perform their task, which will often be a combination of steps/functions. In a future version, we can try to make a "best guess" at the function call(s) to perform the queried task, but I believe that is a distinct issue, likely something that will require using the generative feature of our language model. We can open a new issue to track this.
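As a hedged illustration, a returned record could look something like this (field names hypothetical, not the actual API):

```python
# One search hit: a package-level answer plus a link to the document
# that produced the score, so the user can drill down from there.
result = {
    "package": "finalsize",
    "score": 0.87,  # similarity between the query and the best document
    "best_document": "https://epiverse-trace.github.io/finalsize/articles/finalsize.html",
}
```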
Can this be solved at the level of documentation extraction? It could be substantially easier to do this during data extraction than downstream during search. |
No, because a large portion of the source documents do not present the tool by function but by task or topic and these tasks usually involve multiple functions. See for example https://epiforecasts.io/EpiNow2/articles/estimate_infections_workflow.html or https://epiverse-trace.github.io/finalsize/articles/finalsize.html |
I see. That makes sense. However, I thought that the reference documentation (e.g. https://epiverse-trace.github.io/finalsize/reference/dot-final_size.html) would also be included. These would be at the function level. |
This one is even better and it is pertinent to a single function: https://epiverse-trace.github.io/finalsize/reference/final_size.html. |
Yes, both this and the other type of document I shared are included. But it is unclear which ones will usually lead to better results. Which is why I propose we delay this specific feature until we have good results at the package level and we can identify which document (reference manual or articles/vignettes) produced these results. Since it seems we are slightly deviating from the initial conversation, I have opened #21. In this issue, let's try to stick to the initial question: how to go from multiple documents / package to 1 point per package? Should we concatenate documents before feeding them to the LM? Should we compute embeddings per document but only return the one with the highest score? etc. |
The current approach that Avinash is using to summarise the multiple documents into a single data point is to average the embeddings. I had a quick look at the approach using a PCA to generate the map (to be refined in epiverse-connect/epiverse-map#10) and the various documents for a given package (one colour for each package on the plot) are spread across the map. I'm therefore afraid that we get averages that do not represent what we want. I wonder if we could get better results by, for example, concatenating documents before embedding them, or keeping only the best-matching document for each package.
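A minimal sketch of that averaging-plus-PCA pipeline, assuming per-document embeddings are already computed (all inputs below are placeholders):

```python
import numpy as np
from sklearn.decomposition import PCA

# Per-document embeddings grouped by package (placeholder random values)
rng = np.random.default_rng(0)
doc_embeddings = {
    "EpiNow2": rng.standard_normal((12, 384)),   # 12 documents
    "finalsize": rng.standard_normal((4, 384)),  # 4 documents
    "epiparameter": rng.standard_normal((7, 384)),
}

# 1 point per package: average the document embeddings (the current approach)
package_vecs = {pkg: vecs.mean(axis=0) for pkg, vecs in doc_embeddings.items()}

# Project the package-level vectors to 2D for the map
coords = PCA(n_components=2).fit_transform(np.stack(list(package_vecs.values())))
```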
The primary concern with concatenating documents is that the resulting embeddings will be heavily influenced by the number of documents in each folder. Folders with more documents will have a disproportionate impact on the overall representation. |
Noted. However, the search process tries to find the set of vectors which match most closely with the search vector, so provided that the resulting embeddings are non-random (and they should be, because of the encoded semantic content), the number of embeddings per folder should not be a problem. It would be useful to check whether the search results are distorted by a disproportionate number of embeddings.
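A minimal sketch of that nearest-vector search, assuming document embeddings are already stacked into a matrix (cosine similarity via normalised dot products):

```python
import numpy as np

def cosine_search(query_vec: np.ndarray, doc_vecs: np.ndarray, k: int = 5) -> np.ndarray:
    """Return the indices of the k documents closest to the query vector."""
    q = query_vec / np.linalg.norm(query_vec)
    d = doc_vecs / np.linalg.norm(doc_vecs, axis=1, keepdims=True)
    scores = d @ q                        # cosine similarity per document
    return np.argsort(scores)[::-1][:k]  # indices of the top-k matches
```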
I don't follow why this would be the case. As long as we have a single vector / package, all packages should have the same weight, no? |
This again came up in a discussion with @avinashladdha:
Do we want to have 1 data point per document or 1 data point per package? How to make this happen?
From a user point of view, it probably makes more sense to have a single data point (single point on the map & single search answer) per package.
Currently, we have multiple documents per package so if we wanted to have 1 point per package, how can we do this?
I don't know if it makes sense to concatenate all documents to have a single document per package, as we may end up averaging points with a large amount of variability and end up with an average that is not meaningful.
@avinashladdha mentioned we could have a post-process deduplication step where we keep only the best score / best matching document for each package. Are there any downsides to this approach? How could we apply something similar to the map?
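A sketch of that deduplication step, assuming scored per-document hits tagged with their package (all values below are made up):

```python
# Keep only the best-scoring document per package (post-process deduplication)
hits = [
    {"package": "finalsize", "doc": "reference/final_size.html", "score": 0.91},
    {"package": "finalsize", "doc": "articles/finalsize.html", "score": 0.87},
    {"package": "EpiNow2", "doc": "articles/estimate_infections_workflow.html", "score": 0.66},
]

best_per_package: dict[str, dict] = {}
for hit in hits:
    pkg = hit["package"]
    if pkg not in best_per_package or hit["score"] > best_per_package[pkg]["score"]:
        best_per_package[pkg] = hit

# One result per package, ranked by its best document's score
results = sorted(best_per_package.values(), key=lambda h: h["score"], reverse=True)
```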