The bulk of the code lives in the Jupyter notebook Multimodal Learning. Submission.pdf contains the written report. The most interesting file is LiMBeRModel.py; the simplest is FlickrEval.py. For the daring, some self-contained environments are included. Good luck.
noahmfoster/MultiModalSocrates