-
Thoughts on AI Technology
There's still this illusion that "AI" is somewhat intelligent: these are models fitted to large datasets using non-linear optimization, in the hope of replicating their distributions. While these are incredibly useful tools, there is nothing here we would consider human-like intelligence.
-
Theodore Galanos: GPT-3 has definitely shown instances of human-like intelligence, although I doubt that qualification is all-important for AGI. I agree it's not specifically the model or the architecture, but I do entirely believe it is the 'large datasets', specifically the language ones.
- John Manoochehri: inb4 it's like something computational which we have detected in human intelligence.
- Theodore Galanos: Very far from an expert, but my view is generalization to, if not arbitrary, then rather novel downstream tasks with very little experience (and I believe generalization is a standard aspect of human intelligence?). Few-shot, zero-shot, and prompt engineering have shown glimpses of that (see the prompt sketch after this thread).
- John Manoochehri: Except it's the opposite: all the ML training req'd is exactly what we have to assume humans don't have, at least in the very prominent case of language acquisition.
-
Zayd: True intelligence is unsupervised learning. GPT-3 is good at data (and combinations of data) it has seen; it can't go beyond that.
reference: https://twitter.com/MattNiessner/status/1434504873073156105?s=20
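To make the zero-shot vs. few-shot point in the thread concrete, here is a minimal, purely illustrative sketch of the two prompt styles. It only builds strings; no model or API is called, and the translation task and examples are arbitrary choices for illustration.
```python
# Illustrative only: zero-shot vs. few-shot prompts of the kind the thread refers to.
# No model is queried here; this just shows what a model would be conditioned on.
task = "Translate English to French:"
examples = [("cheese", "fromage"), ("good morning", "bonjour")]  # in-context examples
query = "sea otter"

zero_shot_prompt = f"{task}\n{query} ->"
few_shot_prompt = (
    task + "\n"
    + "\n".join(f"{en} -> {fr}" for en, fr in examples)
    + f"\n{query} ->"
)

print(zero_shot_prompt)
print("---")
# In the few-shot case the model sees a handful of worked examples in its context
# window and generalizes from them without any weight updates.
print(few_shot_prompt)
```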
-
-
How to Find an Internship
How to find a research internship? Doing an internship helps
• expand your network
• explore new topics!
• learn new research skills
• earn more money!
But how can we find good internship opportunities?
1. Find a supportive advisor
Choose an advisor who encourages/supports/allows students to do summer internships. Some may require you to work on funded projects during the summers or to delay graduation should you do internships. Reach out to prior students and learn more!
2. Stay connected
Many internship opportunities are not broadcast through public postings but via emails among faculty members. Stay connected so that you get the best info (e.g., ask your friends/advisor to forward relevant postings).
3. Get on Twitter
Many great internship (or job) opportunities are advertised on Twitter.
4. Apply broadly
You never know whether your expertise/interests match one of the teams in industrial labs (their focus may vary dramatically each year). Remember, you cannot get an offer for a position you did not apply to!
5. Apply early
Most positions are filled on a rolling basis. If you apply late, you may find that most of them have already been taken.
6. Send cold emails
If you find researchers whose interests match yours very well, don't be shy and send them cold emails!
More on cold emails: https://twitter.com/jbhuang0604/status/1420611695848869892
7. Extend and finish your project
Summer goes by very fast. Very often you will need a few extra months to finish the work and turn it into a good paper. This involves your advisor's support, company policy (e.g., code/data release or IP), and immigration issues (e.g., CPT).
reference: https://twitter.com/jbhuang0604/status/1438337355031719941
-
Questions About the Review System
So my question is: is @openreviewnet going to survey reviewers/authors to try to improve their system? And what improvements are currently being engineered? I'd gladly contribute $$ to see it improve over time.
reference: https://twitter.com/taiyasaki/status/1434188996377145347?s=20
-
On the Transition from School to Work
After advising PhD & Master students for over a decade, there is one thing I find most students need to unlearn: the half-ass work mentality acquired during years of tests and homework. Let me explain. 1/N
For most of their education, students are evaluated using tests & homework. We are all familiar with the process. The student is asked to do some work; they turn it in, and get a grade (e.g., a B+, B, A, etc.) 2/N
But when they make it to grad school (or to their first job) it is quite different. Once they turn in work, they are not given a grade. They are given feedback and asked to do the work again. Sometimes several times. 3/N
Many students struggle enormously with this change. They are not trained to do the same thing over and over until they get it right. They are trained to do better than average on their first try and to forget about what they did as soon as they turn their work in. 4/N
So students show up with a first draft of a paper and tell you “I am done!” So you give them feedback and ask them to keep working on it. And you do this again, and again, and tension can begin to build. 5/N
Some students struggle to understand this, and will keep the paper hostage for many months thinking: this time I will be really done (I'll get an A). 6/N
At that point, I find it useful to have this conversation: “Look, the paper will need to go through 20 to 30 iterations to be completed. If each iteration takes 2 months, then we will be ready in more than three years. It is up to you.” 7/N
It is a tough aha moment that helps them realize work is always unfinished. You are there to help improve the work, not to grade it. And that only by iterating with others will they manage to make a sword out of the lump of steel. 8/N
But I don't blame them. I also had to learn that lesson. Because most of us grow up in an educational system that grades us on our first try, instead of pushing us to do the same thing over multiple times. 9/N
As an entrepreneur, I see this in my company as well, with young hires who try to do "good enough" work and push back against feedback. If they struggle to learn this lesson for too long, it is a bad sign. 10/N
Could we prepare people better? I think so. Having some activities that do not end in a grade, but that are iterated on over long periods of time, is something we could add to earlier stages of education. 11/N
For the time being, if you are a student reading this: know this is coming and prepare yourself for it. Also, if you don't encounter it, your mentors may not be helping you grow. Keep at it. Great work is never finished, only improved gradually over time. /END
reference: https://twitter.com/cesifoti/status/1437043299852996608?s=20
-
How to Avoid Common Problems in Paper Writing
Active voice: Friends don't let friends use passive voice! Using passive voice hides the subject and creates ambiguous, indirect, and wordy sentences. Things don't "get done" by themselves. Take responsibility for what you do and use active voice whenever possible.
Statements in positive form: Tell your readers "what is" instead of "what is not". not honest -> dishonest; did not remember -> forgot; did not pay any attention to -> ignored; did not have much confidence in -> distrusted
Which (non-restrictive) vs. That (restrictive): Since a WHICH adjective clause is non-essential and non-defining, go on a "which hunt" and break long sentences down into simpler ones. More on restrictive/non-restrictive adjective clauses: https://youtu.be/NjTM4booWHo
Respectively: Do not ask your readers to solve mental correspondence problems. Revise the sentence to get rid of "respectively".
Fancy words: I used to think using fancy words makes the paper more "academic", but I now prefer simplicity and clarity.
Needless words: Replace needless words with simple ones!
Remove vague pronoun references: Find and remove all the ambiguous pronoun references in your paper, e.g., that, this, it, these, those. Replace these vague pronoun references with a SPECIFIC noun or noun phrase.
Few / A few / Quite a few: Be specific about the quantity you intend to describe. A few = some (not many, but some); Few = "only" a few (a small number of); Quite a few = many.
Note that: Avoid "Note that" and "It should be noted that." Readers don't like to be frequently reminded to pay attention.
Resources: Check out the many wonderful pieces of writing advice on the web! http://jlakes.org/ch/web/The-elements-of-style.pdf https://vision.sjtu.edu.cn/writing.html https://taoxie.cs.illinois.edu/advice.htm
Case: Often redundant! Before: In many cases, the rooms lacked air conditioning. After: Many of the rooms lacked air conditioning.
Before: It has rarely been the case that any mistake has been made. After: Few mistakes have been made.
Source: The Elements of Style
However: We often use "however" to mean "nevertheless/in spite of that" in a paper. Do NOT start a sentence with "however" in that sense, to avoid confusion. Why? When "however" starts a sentence, it means "in whatever manner/way" or "to whatever degree/extent".
Transitions: Use transitions to connect sentences. • Space: above, below, inside • Cause and effect: as a result, because, since • Similarity: as, likewise, similarly • Contrast: although, however, on the other hand, in contrast Source: https://stanford.edu/class/ee267/WIM/writing_style_guide.pdf
-
Original Twitter link (from Wojciech Jarosz):
https://twitter.com/wkjarosz/status/1437282272663724039
-
New Tutorial! Add Visual SLAM capabilities to the ROS1 Navigation Stack. Test Spatial Intelligence on new or existing #robots via simple ROS integration with SLAMcore algorithms. #ROS #VisualSLAM #Navigation #tutorial
- Andrew Davison: New tutorial on how to easily integrate SLAMcore's visual SLAM SDK with the ROS1 Navigation Stack.
reference: https://twitter.com/SLAMcoreLtd/status/1435566081511395333?s=20
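As a rough idea of what "simple ROS integration" of a visual-SLAM pose source tends to look like (this is a generic sketch, not the SLAMcore tutorial; the topic names and the base_link frame are assumptions), a small relay node can republish a SLAM pose as odometry for the ROS1 navigation stack:
```python
#!/usr/bin/env python
# Generic sketch (NOT from the SLAMcore tutorial): republish a visual-SLAM pose
# as an Odometry message that the ROS1 navigation stack can consume.
# Topic names ("/slam/pose", "/odom") and the "base_link" frame are assumptions.
import rospy
from geometry_msgs.msg import PoseStamped
from nav_msgs.msg import Odometry

def on_pose(msg):
    odom = Odometry()
    odom.header = msg.header           # keep the SLAM timestamp and frame
    odom.child_frame_id = "base_link"  # assumed robot body frame
    odom.pose.pose = msg.pose
    odom_pub.publish(odom)

if __name__ == "__main__":
    rospy.init_node("slam_pose_to_odom")
    odom_pub = rospy.Publisher("/odom", Odometry, queue_size=10)
    rospy.Subscriber("/slam/pose", PoseStamped, on_pose)
    rospy.spin()
```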
-
Next Thurs 11am PT I'll be giving a free lecture from my parallel computing class about why CPUs and GPUs are so fast. It will be live, synchronous, and include Q/A and small-group breakouts.
Limited to first 500.
Date and time: Fri, September 17, 2021, 2:00 AM – 4:00 AM CST. Registration: https://www.eventbrite.com/e/sample-live-virtual-lecture-a-modern-multi-core-processor-tickets-170395993002
reference: https://twitter.com/kayvonf/status/1436212406196183045
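Not lecture material, but a quick way to feel one of the themes (throughput-oriented, data-parallel execution): compare a scalar Python loop with NumPy's vectorized equivalent, which maps onto tight, SIMD-friendly machine code.
```python
# Toy illustration (not from the lecture): the same reduction done one element at a
# time vs. vectorized. The speed gap hints at why wide, parallel hardware pays off.
import time
import numpy as np

x = np.random.rand(10_000_000)

t0 = time.perf_counter()
total_loop = 0.0
for v in x:              # scalar loop: one element per iteration, interpreter overhead
    total_loop += v * v
t1 = time.perf_counter()

total_vec = float(np.dot(x, x))  # vectorized: compiled, SIMD-friendly inner loop
t2 = time.perf_counter()

print(f"python loop: {t1 - t0:6.2f} s")
print(f"numpy dot:   {t2 - t1:6.4f} s   results match: {np.isclose(total_loop, total_vec)}")
```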
-
"Introduction to Deep Learning -- 170 Video Lectures from 𝐀daptive Linear Neurons to 𝐙ero-shot Classification with Transformers" Just organized all DL-related videos I recorded in 2021. Hoping it might be useful for one or the other person out there: homepage:https://sebastianraschka.com/blog/2021/dl-course.html
-
New ICCV 2021 oral paper with JohnXu_2015! Texformer estimates high-quality 3D human texture from a single image. The Transformer-based method allows efficient information interchange between the image space and the UV texture space.
Project page and code: https://mmlab-ntu.com/project/texformer/
reference: https://twitter.com/ccloy/status/1435192176577695746
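The tweet does not spell out the architecture, but "information interchange between image space and UV texture space" suggests cross-attention between the two token sets. Below is a generic PyTorch sketch of such a block, purely as an illustration of the idea; it is not the actual Texformer model, and all shapes, dimensions, and names are assumptions.
```python
# Generic cross-attention sketch (NOT the Texformer architecture): UV-space query
# tokens attend to image-space tokens so texture predictions can pull information
# from the input image. All dimensions and names here are illustrative assumptions.
import torch
import torch.nn as nn

class ImageToUVCrossAttention(nn.Module):
    def __init__(self, dim=256, heads=8):
        super().__init__()
        self.attn = nn.MultiheadAttention(embed_dim=dim, num_heads=heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, uv_tokens, image_tokens):
        # uv_tokens:    (B, N_uv,  dim) tokens on a flattened UV texture grid
        # image_tokens: (B, N_img, dim) tokens from an image feature encoder
        attended, _ = self.attn(query=uv_tokens, key=image_tokens, value=image_tokens)
        return self.norm(uv_tokens + attended)  # residual + norm, standard Transformer pattern

# toy usage
block = ImageToUVCrossAttention()
uv = torch.randn(2, 32 * 32, 256)    # 32x32 UV grid, flattened
img = torch.randn(2, 14 * 14, 256)   # 14x14 image feature map, flattened
print(block(uv, img).shape)          # torch.Size([2, 1024, 256])
```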
-
Our partners at @UKAEAofficial and @CreatecCumbria brought Spot to @SellafieldLtd to demonstrate how Spot can automate nuclear inspections, support decommissioning, and reduce risk for people in hazardous environments. Watch the full video: https://www.youtube.com/watch?v=hmDQTYNlG4c&ab_channel=Createc
reference: https://twitter.com/BostonDynamics/status/1435589380438102019?s=20
-
Our ICCV2021 paper "Talk-to-Edit: Fine-Grained Facial Editing via Dialog":
Website: https://mmlab-ntu.com/project/talkedit/…
Code: https://github.com/yumingj/Talk-to-Edit…
Fine-grained attribute manipulation through interactive dialog.
CelebA-Dialog, a large-scale visual-language facial editing dataset.
reference:https://twitter.com/liuziwei7/status/1436161234781421578
-
I'm excited to share our SIGGRAPH Asia 2021 paper Fast Volume Rendering with Spatiotemporal Reservoir Resampling (Volumetric ReSTIR). This is a collaboration between the Utah Graphics Lab (@cem_yuksel) and NVIDIA (@cwyman). We introduce fast volume rendering with spatiotemporal reservoir resampling to achieve low-noise, interactive volumetric path tracing in complex scenes with arbitrary dynamic lighting, volumetric emission, and volume animations.
Project Webpage: https://dqlin.xyz/pubs/2021-sa-VOR/
reference: https://twitter.com/DaqiLin/status/1436503002265710593?s=20
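For readers unfamiliar with the term, "reservoir resampling" builds on weighted reservoir sampling: candidates are streamed through a tiny "reservoir", and reservoirs can be merged, which is what enables spatial and temporal reuse. The sketch below shows only that generic primitive, not the paper's volumetric renderer.
```python
# Generic weighted reservoir sampling sketch (not the paper's renderer): keep one
# sample from a stream, selected with probability proportional to its weight, and
# allow two reservoirs to be merged for spatiotemporal reuse.
import random

class Reservoir:
    def __init__(self):
        self.sample = None   # currently selected candidate
        self.w_sum = 0.0     # running sum of all candidate weights seen
        self.count = 0       # number of candidates seen

    def update(self, candidate, weight):
        self.w_sum += weight
        self.count += 1
        # Replace the kept sample with probability weight / w_sum; over the whole
        # stream each candidate ends up selected proportionally to its weight.
        if self.w_sum > 0 and random.random() < weight / self.w_sum:
            self.sample = candidate

    def merge(self, other):
        # Treat the other reservoir's kept sample as one candidate carrying its
        # total weight; this is what lets neighbors / previous frames be reused.
        if other.sample is not None:
            self.count += other.count - 1
            self.update(other.sample, other.w_sum)

# toy usage: two independent streams, then reuse one reservoir in the other
r_current, r_previous = Reservoir(), Reservoir()
for i in range(16):
    r_current.update(("candidate", i), weight=1.0 + 0.1 * i)
    r_previous.update(("candidate", 100 + i), weight=0.5)
r_current.merge(r_previous)
print(r_current.sample, round(r_current.w_sum, 2), r_current.count)
```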
-
Our code for HyperNeRF is now available!
Like Nerfies, we've included a Colab demo so that you can try out a basic version of our method right in your browser.
We’ve enhanced the dataset processing Colab--it can now handle non-face sequences out of the box. Try it out on various scenes! Like before, the demo is limited by the lower-end GPUs and TPUs available through Colab. Check out the repository for instructions on how to train a fully-featured model on your own machines.
Project Page: https://hypernerf.github.io/
Github: https://github.com/google/hypernerf
reference: https://twitter.com/KeunhongP/status/1436505902387843072?s=20
-
Smash through Apple's 5-meter LiDAR limit! How did we do it? Our fusion 3D reconstruction engine leverages both photo-based and LiDAR-based processing. This is essential for capturing architectural exteriors. View it on Sketchfab: https://sketchfab.com/3d-models/salems-riverfront-carousel-building-ca2fdcf32a48475b8066fd6cbb358869
reference: https://twitter.com/EveryPointIO/status/1436391883870142464?s=20
-
I am excited to announce our latest work on #agile #quadrotor control under uncertainty. L1-NMPC allows us to fly high-speed trajectories with unknown payloads, including ice-cold beer!
Paper: http://rpg.ifi.uzh.ch/docs/Arxiv21_L1_Drew.pdf
Video: https://www.youtube.com/watch?v=8oB1rG5iYc4&ab_channel=UZHRoboticsandPerceptionGroup
reference: https://twitter.com/davsca1/status/1437456243745042441?s=20
-
1.1 million points from scanning with Ray-Bans?! Check out @jonstephens85 getting hands-on with the new #RayBanStories. 3D scanning is getting interesting! #3DScanning #3DModeling #SmartGlasses
- Jonathan Stephens: Sooooo many! Already brainstormed a few use cases for equipment inspection, product logging, etc. just the value of video with voice annotation is huge.
reference: https://twitter.com/EveryPointIO/status/1438647907951923202?s=20
-
Beyond happy to announce our @ICCV_2021 Oral paper:"Panoptic Narrative Grounding"
homepage: https://bcv-uniandes.github.io/panoptic-narrative-grounding/
-
"A Farewell to the Bias-Variance Tradeoff?An Overview of the Theory of Overparameterized Machine Learning" (https://arxiv.org/abs/2109.02355). Good reference and pointers for updating my model evaluation & bias-variance trade-off intro slides for teaching in a few weeks.