20211017-20211030.md

Opinion Sharing


Courses and Talks

  • 2nd 3DReps workshop @ ICCV_2021

We are organizing the Second #3DReps workshop at @ICCV_2021 to bring together researchers working on learned 3D representations. We have an amazing lineup of invited speakers and posters. Please join us tomorrow (Oct 17)! Full schedule here: https://ivl.cs.brown.edu/3DReps/

Invited speakers include Katerina Fragkiadaki, Noah Goodman, @lipmanya, Jitendra Malik, Dmitriy Smirnov, @SongShuran, and Andrea Vedaldi @Oxford_VGG.

We are also organizing breakout sessions for poster presenters on three topics: Humans and Robots, Surfaces and Points, and Generation and Recognition. Join us on http://Gather.town for this interactive breakout.

Made possible thanks to my wonderful co-organizers @FidlerSanja, Leo Guibas, @orlitany, @tanner_schmidt_, @vincesitzmann, @shubhtuls, and @GordonWetzstein.

reference: https://twitter.com/drsrinathsridha/status/1449527589362671617?s=20

  • CVEU workshop @ ICCV_2021

CVEU (Creative Video Editing and Understanding) workshop has another exciting keynote speaker! Professor @magrawala from Stanford University @Stanford will give a talk. Title: Making (and Breaking) Video. Time: 13:45-14:30 PST

For more detailed information, visit our website: https://cveu.github.io

The talk will take place on October 17, 2021.

reference: https://twitter.com/cveu_workshop/status/1446407246431281159?s=20

  • 3DGV

Our next speaker is Rana Hanocka (Chicago).

Talk title: "Neural 3D Reconstruction".


Host: Shubham Tulsiani (CMU). Panelists: Or Litany (NVIDIA).

Time: 10/20, 11:00 am PDT / 2:00 pm EDT

Link: https://www.youtube.com/watch?v=3MUE75qGg7s

reference: https://twitter.com/qixing_huang/status/1450544323595018251?s=20

  • Multi-Object Tracking

Next Tue at 5 pm, we'll host Laura Leal-Taixé (@lealtaixe) from the Technical University of Munich, who will talk about new paradigms in formulating multi-object tracking: "Shifting Paradigms in Multi-Object Tracking".

zoom info: [email protected] or just DM!

reference: https://twitter.com/KuisAICenter/status/1451842254759464960?s=20


Research Highlights and Discussion

  • MuJoCo physics simulator

    DeepMind

    We’ve acquired the MuJoCo physics simulator (http://mujoco.org) and are making it free for all, to support research everywhere. MuJoCo is a fast, powerful, easy-to-use, and soon to be open-source simulation tool, designed for robotics research: http://dpmd.ai/mujoco-blog

    • Sam Aardvark: Wow. Fantastic. One question: why would you do this? Do you make money if it’s open source? If so, how? If not, you obviously want a return on your investment, how are you getting it?

      • Josh Marlow: DeepMind also benefits from the research produced by other labs. Making research infrastructure more accessible -> more research -> more papers -> more work they can potentially incorporate into their own. Also, if they use this internally, they benefit from oss contributions.
    • Yuke Zhu: Kudos to @DeepMind, MuJoCo (physics engine behind robosuite) is now free and soon open-source to all. We hope that it will expedite reproducible and open robotics research further. Check out the list of great work from the Robot Learning community using robosuite here: http://robosuite.ai/docs/references.html

    reference: https://twitter.com/DeepMind/status/1450118090143014913?s=20

  • robosuite

    Yuke Zhu

    robosuite updates. After eight months of dev effort, excited to release our v1.3 version! We integrate advanced graphics renderers with our simulation framework and provide vision APIs to bridge robot perception and decision-making research. Try it out! https://robosuite.ai

    I am incredibly proud that this version received contributions from two undergraduate scholars, Abhishek and Divyansh (@DIVYANSHJHA), along with our core team @josiah_is_wong @AjayMandlekar @RobobertoMM. We wholeheartedly welcome the community to contribute to robosuite!

    reference: https://twitter.com/yukez/status/1450519455042260996?s=20
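
    As a quick illustration of the camera-observation workflow highlighted in the release, here is a minimal sketch using robosuite's standard make API; the task, robot, and camera names ("Lift", "Panda", "agentview") are common defaults chosen for the example, not anything specific to v1.3.

    ```python
    # Minimal robosuite sketch: create a vision-enabled manipulation environment
    # and step it with random actions.
    import numpy as np
    import robosuite as suite

    env = suite.make(
        env_name="Lift",              # any registered task
        robots="Panda",               # single-arm robot
        has_renderer=False,           # no on-screen window
        has_offscreen_renderer=True,  # needed for camera observations
        use_camera_obs=True,          # return RGB frames in the observation dict
        camera_names="agentview",
    )

    obs = env.reset()
    print(obs["agentview_image"].shape)   # e.g. (256, 256, 3) RGB frame

    for _ in range(10):
        action = np.random.uniform(-1, 1, env.action_dim)  # random control command
        obs, reward, done, info = env.step(action)
    env.close()
    ```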

  • Equivariant Flows

    Pim de Haan

    Our paper on equivariant flows for sampling from quantum field lattice theories got accepted to the NeurIPS ML4Physical Sciences workshop! Neural nets can now sample from much larger lattices. With Corrado Rainone, Miranda Cheng and Roberto Bondesan.


    The model is by design equivariant with respect to the lattice symmetries and the sign-flip symmetry of the phi^4 theory we experiment with. In contrast, RealNVP never learns all the symmetries and predicts a different likelihood for symmetry-transformed samples.


    After ages of trying deep models, we found that a continuous normalizing flow with a shallow vector field model works best. We use a Fourier feature basis and then apply a single time- and location-dependent weight tensor. Does reverse KL work better with shallow nets?

    I'm particularly excited about this because, after studying physics but only ever doing research in ML, I finally got the opportunity to do a tiny bit of research in physics. The circle is round.

    reference: https://twitter.com/pimdehaan/status/1451608955252350977?s=20
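
    To make the symmetry argument concrete, here is a toy sketch of a Z2 (sign-flip) equivariant vector field for a continuous normalizing flow: antisymmetrizing an arbitrary network g gives v(-phi, t) = -v(phi, t), so symmetry-transformed samples receive the same likelihood. This only illustrates the general idea and is not the architecture from the paper.

    ```python
    # Toy Z2-equivariant vector field for a continuous normalizing flow.
    # Antisymmetrizing g makes the field odd in phi by construction.
    import torch
    import torch.nn as nn

    class OddVectorField(nn.Module):
        def __init__(self, dim, hidden=64):
            super().__init__()
            self.g = nn.Sequential(
                nn.Linear(dim + 1, hidden), nn.Tanh(), nn.Linear(hidden, dim)
            )

        def forward(self, t, phi):
            t = t.expand(phi.shape[0], 1)
            g_pos = self.g(torch.cat([phi, t], dim=-1))
            g_neg = self.g(torch.cat([-phi, t], dim=-1))
            return 0.5 * (g_pos - g_neg)   # odd: v(-phi, t) = -v(phi, t)

    field = OddVectorField(dim=16)
    phi = torch.randn(4, 16)
    t = torch.zeros(1, 1)
    assert torch.allclose(field(t, phi), -field(t, -phi))
    ```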

  • PyTorch 1.10

    PyTorch

    ICYMI: PyTorch 1.10 was released last Thursday. Here are some highlights of the release; stay tuned for tweet threads in the next couple of weeks delving deeper into these new features!

    • CUDA Graphs are now in beta. They allow you to capture (and replay!) static CUDA workloads without needing to relaunch kernels, leading to massive overhead reductions. Our integration allows for seamless interop between CUDA graphs and the rest of your model.
    • FX, an easy-to-use Python platform for writing Python-to-Python transforms of PyTorch programs, is now stable. FX makes it easy to programmatically do things like fusing convolution with batch norm. Stay tuned for FX examples of cool things that users have built!
    • nn.Module parametrization (moving from beta to stable) allows you to implement reparametrizations in a user-extensible manner. For example, you can apply spectral normalization or enforce that a parameter is orthogonal! See https://twitter.com/rasbt/status/1451542449130811399 for an example.
    • Conjugation of complex tensors has been turned from an O(N) copy into a constant-time operation (just like transpose). This allows conjugation to be fused with other PyTorch operators, like matmuls, for as much as a 50% increase in speed and 30% in memory savings!
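
    A minimal sketch of the parametrization feature mentioned above, using the orthogonal parametrization utility added in 1.10 (the layer size and training step are illustrative):

    ```python
    # Sketch of the stable nn.Module parametrization API in PyTorch 1.10:
    # constrain a linear layer's weight to remain orthogonal during training.
    import torch
    import torch.nn as nn
    from torch.nn.utils import parametrize
    from torch.nn.utils.parametrizations import orthogonal

    layer = nn.Linear(8, 8)
    orthogonal(layer, "weight")   # weight is now recomputed from unconstrained parameters

    print(parametrize.is_parametrized(layer))                 # True
    W = layer.weight
    print(torch.allclose(W @ W.T, torch.eye(8), atol=1e-5))   # True: weight is orthogonal

    # Gradients flow through the parametrization, so ordinary training just works.
    opt = torch.optim.SGD(layer.parameters(), lr=1e-2)
    loss = layer(torch.randn(4, 8)).pow(2).mean()
    loss.backward()
    opt.step()
    ```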

  • Canonical Capsules

    Andrea Tagliasacchi

    Have you ever wondered what "capsules" are in deep learning? For primary capsules, it is actually very very simple, and explained excellently by Weiwei in his upcoming talk at #NeurIPS21, linked below: https://youtu.be/tUQJV2W7Z8g https://canonical-capsules.github.io

    But why should you care? ...well, if you work on point clouds this architecture achieves state-of-the-art on a variety of classical (and new!) tasks.

    And what is the secret sauce? Realizing that popular 3D datasets provide you with a MASSIVE inductive bias!! Just look at how aligned these models are... smells like a human in the loop.

    However, by designing an architecture that automatically learns how to transform an object into a learnt canonical frame (in a fully self-supervised fashion, of course!), we obtain results that are vastly superior to recent state-of-the-art methods, again without the need for pre-aligned datasets.

    reference: https://twitter.com/taiyasaki/status/1453903951519125505?s=20
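
    The self-supervised signal described above can be illustrated with a rotation-consistency objective: a canonicalizer should map a shape and a randomly rotated copy of it to the same canonical output, so no pre-aligned data is required. The toy sketch below uses a placeholder per-point MLP and is not the Canonical Capsules architecture or loss.

    ```python
    # Toy illustration of self-supervised canonicalization: train a network so
    # that a point cloud and a randomly rotated copy map to the same canonical
    # output. Placeholder per-point MLP, NOT the Canonical Capsules model.
    import torch
    import torch.nn as nn

    class ToyCanonicalizer(nn.Module):
        def __init__(self):
            super().__init__()
            self.net = nn.Sequential(nn.Linear(3, 64), nn.ReLU(), nn.Linear(64, 3))

        def forward(self, pts):        # pts: (B, N, 3)
            return self.net(pts)       # per-point map into a "canonical" frame

    def random_rotation(batch):
        # Random orthogonal matrices via QR (may include reflections; fine for a toy).
        q, _ = torch.linalg.qr(torch.randn(batch, 3, 3))
        return q

    model = ToyCanonicalizer()
    pts = torch.randn(8, 1024, 3)                    # stand-in for real shapes
    R = random_rotation(8)
    canon_a = model(pts)
    canon_b = model(torch.einsum("bij,bnj->bni", R, pts))
    loss = (canon_a - canon_b).pow(2).mean()         # canonical outputs should agree
    loss.backward()
    ```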

  • Photorealistic Full Body Avatars

    Alexander Richard

    Ever wished you could connect with your loved ones across a distance and still feel as if you were in the same space? We are working to make this come true in our lab in Pittsburgh! Some of our work on photorealistic full body avatars has just been shared at Facebook Connect.

    reference: https://twitter.com/AlexRichardCS/status/1453808262437089284?s=20

  • Graph Neural Diffusion

    Hannes Stärk

    New video! James Rowbottom and @b_p_chamberlain explain their paper "GRAND: Graph Neural Diffusion" and their #NeurIPS2021 paper "Beltrami Flow and Neural Diffusion on Graphs" out of @mmbronstein's @TwitterResearch group! They introduce a new GNN based on diffusion operators on graphs, which generalizes many types of common GNNs! Join these GraphML discussions on Tuesdays: https://hannes-stark.com/logag-reading-group.

    reference: https://twitter.com/HannesStaerk/status/1453778816187387908?s=20
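
    The diffusion view is easy to sketch: GRAND-style models treat feature propagation as integrating a diffusion equation on the graph. The toy example below runs explicit Euler steps of plain heat diffusion, dX/dt = (A_hat - I) X, with a fixed normalized adjacency; the learned, attention-based diffusivity used in the papers is omitted.

    ```python
    # Toy graph diffusion as a GNN building block: explicit Euler integration of
    # dX/dt = (A_hat - I) X with a symmetrically normalized adjacency A_hat.
    import torch

    def normalized_adjacency(edge_index, num_nodes):
        # Dense A_hat = D^{-1/2} A D^{-1/2} from an edge list (small graphs only).
        A = torch.zeros(num_nodes, num_nodes)
        A[edge_index[0], edge_index[1]] = 1.0
        deg = A.sum(dim=1).clamp(min=1.0)
        d_inv_sqrt = deg.pow(-0.5)
        return d_inv_sqrt[:, None] * A * d_inv_sqrt[None, :]

    def diffuse(X, A_hat, steps=10, dt=0.1):
        for _ in range(steps):
            X = X + dt * (A_hat @ X - X)   # one Euler step of the heat equation
        return X

    edge_index = torch.tensor([[0, 1, 1, 2], [1, 0, 2, 1]])   # tiny path graph 0-1-2
    X = torch.randn(3, 4)                                     # node features
    A_hat = normalized_adjacency(edge_index, num_nodes=3)
    print(diffuse(X, A_hat))   # features smoothed toward their neighbors
    ```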

  • Latent Explorer Achiever

    Deepak Pathak

    How could we enable an agent to perform many tasks? Supervising for every new task is impractical.

    We present Latent Explorer Achiever (LEXA) that explores by discovering goals far beyond the frontier and then achieves test tasks, specified via images, in a zero-shot manner.

    video link: https://twitter.com/pathak2206/status/1450953076936957954
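
    As a generic illustration of image-specified goals (not LEXA's actual objective), a goal-conditioned agent can score progress as similarity between embeddings of the current observation and the goal image; the encoder below is a placeholder for whatever latent components a particular method uses.

    ```python
    # Hypothetical sketch of image-goal conditioning: reward is the negative
    # distance between embeddings of the current frame and the goal image.
    import torch
    import torch.nn as nn

    class ConvEncoder(nn.Module):
        def __init__(self, embed_dim=128):
            super().__init__()
            self.net = nn.Sequential(
                nn.Conv2d(3, 32, 4, stride=2), nn.ReLU(),
                nn.Conv2d(32, 64, 4, stride=2), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                nn.Linear(64, embed_dim),
            )

        def forward(self, img):        # img: (B, 3, H, W) in [0, 1]
            return self.net(img)

    encoder = ConvEncoder()
    obs = torch.rand(1, 3, 64, 64)     # current camera frame
    goal = torch.rand(1, 3, 64, 64)    # task specified purely as an image
    reward = -torch.norm(encoder(obs) - encoder(goal), dim=-1)
    print(reward)                      # closer embeddings -> higher reward
    ```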

  • Differentiable simulation for cutting

    Animesh Garg

    Want to try out differentiable simulation for cutting? Check out the code release: https://github.com/NVlabs/DiSECt

    Also included is a dataset of force profiles, meshes, & node motions from high-quality Ansys LS-DYNA simulations.

    RSS paper: https://diff-cutting-sim.github.io (thanks @eric_heiden)

  • MultiBench

    Paul Liang

    Excited to present MultiBench, a large-scale benchmark for multimodal representation learning across affective computing, healthcare, robotics, finance, HCI, and multimedia domains at #NeurIPS2021 benchmarks track! 🎉

    paper: https://arxiv.org/abs/2107.07502 code: https://github.com/pliang279/MultiBench

    reference: https://twitter.com/pliang279/status/1454161847268134914
