diff --git a/contents/awards.md b/contents/awards.md
index d2aefe9..25e87cc 100644
--- a/contents/awards.md
+++ b/contents/awards.md
@@ -1,7 +1,7 @@
+- **Dean's List (All Semesters)** at University of Minnesota Twin Cities.
-
-
+- Special Corporate Scholarship, 2023. **(1/30)**
- School Special Academic Scholarship, 2023. **(1%)**
diff --git a/contents/config.yml b/contents/config.yml
index 3371666..6cafcaa 100644
--- a/contents/config.yml
+++ b/contents/config.yml
@@ -1,6 +1,6 @@
 title: Congrui Yin's Homepage
-page-top-title: Congrui Yin (殷骢睿)
-top-section-bg-text: An LLM Designer and Efficient Machine Learning System Nerd
-home-subtitle: Congrui Yin | 殷骢睿
-copyright-text: '© Congrui Yin 2023. All Rights Reserved.'
+page-top-title: Jerry (Congrui) Yin's Homepage
+top-section-bg-text: An NLP & MLSys Nerd
+home-subtitle: Jerry (Congrui) Yin (殷骢睿)
+copyright-text: '© Congrui Yin 2024. All Rights Reserved.'
diff --git a/contents/home.md b/contents/home.md
index 19f753e..da266b3 100644
--- a/contents/home.md
+++ b/contents/home.md
@@ -1,19 +1,48 @@
+#### News
+
+I’m actively applying for MLSys Ph.D. positions for Fall 2025! If you need a student who is familiar with both NLP and computer systems and has extensive industry experience, feel free to contact me!
+
+
 #### Biography
-I am currently a junior Undergraduate Student pursuing a bachelor's Degree in computer science at College of Liberal Arts, University of Minnesota Twins Cities.
+I am currently a senior undergraduate student pursuing a bachelor's degree in computer science at the College of Liberal Arts, University of Minnesota Twin Cities, supervised by Prof. [Zirui Liu](https://zirui-ray-liu.github.io/). In the summer of 2023, I visited [TsinghuaNLP](https://github.com/thunlp) and conducted research under Prof. [Zhiyuan Liu](https://nlp.csai.tsinghua.edu.cn/~lzy/).
+
+I have experience in NLP and computer systems (both architecture and high-performance machine learning systems), along with extensive industry research internship experience. This includes:
+
+* Participating in the pretraining of the Yi-Large model at 01.AI.
+* Contributing to the ML infrastructure for the pretraining of the on-device small model [MiniCPM-2B](https://huggingface.co/collections/openbmb/minicpm-2b-65d48bf958302b9fd25b698f) at ModelBest (with TsinghuaNLP).
+* Participating in the finetuning of the CodeLLM [Raccoon](https://raccoon.sensetime.com/code) (Copilot-like) at SenseTime (with CUHK MMLab).
+
+#### Research Interests
+
+My current passion revolves around building **EFFICIENT** system solutions to AGI and LLMs (VLMs) for **RELIABLE** hardware design. This includes:
+
+1. Machine Learning Systems
+    * Training: designing more effective training systems and algorithms; examples include [BMTrain](https://github.com/OpenBMB/BMTrain).
+    * Quantization (e.g., attempting to finetune Llama 3.1 405B on a single A100 80GB GPU); another example is [IAPT](https://arxiv.org/pdf/2405.18203).
+    * Long-context inference: an example is [Cross Layer Attention](https://github.com/JerryYin777/Cross-Layer-Attention).
+2. LLMs (VLMs) for **RELIABLE** Hardware Design
+    * Synthesizing common knowledge for CodeLLM pretraining and finetuning, exploring the boundary capabilities of LLMs/VLMs for hardware design (e.g., pretraining/finetuning a VerilogLLM).
+    * Aligning simulation code with waveform image data to finetune a VerilogVLM.
+
+#### Misc
+
+Before transferring to the University of Minnesota, I studied at Nanchang University, majoring in Artificial Intelligence in a top-tier class with a School Academic Special Scholarship. I was the leader of the Nanchang University Supercomputer Cluster Team ([NCUSCC](https://ncuscc.github.io/)), with experience at ASC22 and SC23 (IndySCC).
+
+I am passionate about open source and firmly believe in its potential to disseminate knowledge widely, leverage technology to bring innovation to the world, and contribute to the advancement of human society. I am proud to have garnered over **1k stars** and over **250 followers** on [GitHub](https://github.com/JerryYin777). I occasionally share my explorations in machine learning systems and LLMs on [Zhihu](https://www.zhihu.com/people/ycr222/posts) in Mandarin.
+
+#### Contact
-My research interests lie in Large Multimodal Models (LMMs) and their application in diverse practical scenarios, such as biological and system large models. My focus also extends to developing efficient machine learning systems aimed at expediting the training and inference processing of LMMs (especially LLMs), leveraging expertise in high-performance computing and distributed systems.
+✉️ [yin00486 [at] umn.edu](mailto:yin00486@umn.edu)
-Before transferring to the University of Minnesota, I studied at Nanchang University, majoring in Artificial Intelligence in a top-tier class with a School Academic Special Scholarship. I was honored to be advised by Professor [Zichen Xu](https://good.ncu.edu.cn/Pages/Professor.html) at [GOOD LAB](https://good.ncu.edu.cn) starting from March 2022, where my focus was on solving data-centric challenges and building efficient and reliable systems. I was the leader of Nanchang University Supercomputer Cluster Team ([NCUSCC](https://hpc.ncuscc.tech/)) Leader, with experience of ASC22 and SC23(IndySCC).
+
-I was also fortunately recruited as a research assistant at **TOP** NLP Lab [TsinghuaNLP](https://github.com/thunlp) in Beijing from July to September 2023, advised by Professor [Zhiyuan Liu](https://nlp.csai.tsinghua.edu.cn/~lzy/), trying to build efficient distributed large language model training framework [BMTrain](https://github.com/OpenBMB/BMTrain) and Develop 10B Chinese LLM [CPM-Bee](https://github.com/OpenBMB/CPM-Bee/blob/main/README_en.md).
+
-I am passionate about open source and firmly believe in its potential to disseminate knowledge widely, leverage technology to lead innovation to the world and contribute to the advancement of human society. I am proud to have garnered over **1000 stars** and acquired **155 followers** on GitHub. It is gratifying to know that my open-source projects have benefitted numerous individuals, and I have personally gained valuable knowledge from the open-source community.
+
-#### Contact
-* Github: [JerryYin777](https://github.com/JerryYin777)
+
diff --git a/contents/project.md b/contents/project.md
new file mode 100644
index 0000000..89fcff6
--- /dev/null
+++ b/contents/project.md
@@ -0,0 +1,16 @@
+
+
+
+
diff --git a/contents/publications.md b/contents/publications.md
index d1113ee..b9d73aa 100644
--- a/contents/publications.md
+++ b/contents/publications.md
@@ -1,4 +1,7 @@
+For the full paper list (not much yet, but I'm sure there will be more great work in the future), please refer to my [Google Scholar](https://scholar.google.com/citations?user=7gsdLw4AAAAJ&hl=en).
-- *X. Gao, W. Zhu, J. Gao and C. Yin. (2023). F-PABEE: Flexible-Patience-Based Early Exiting For Single-Label and Multi-Label Text Classification Tasks. 2023 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP).* [[Paper]](https://ieeexplore.ieee.org/abstract/document/10095864)
+- *W. Zhu, Y. Ni, C. Yin, A. Tian, X. Wang, G. Xie. (2024). IAPT: Instance-Aware Prompt Tuning for Large Language Models. The 62nd Annual Meeting of the Association for Computational Linguistics (ACL 2024).* [[Paper]](https://arxiv.org/pdf/2405.18203)
-- *C. Yin. (2023). Multi-scale and multi-task learning for human audio forensics based on convolutional networks. International Conference on Image, Signal Processing, and Pattern Recognition (ISPP 2023).* [[Paper]](https://doi.org/10.1117/12.2681344)
\ No newline at end of file
+- *X. Gao, W. Zhu, J. Gao and C. Yin. (2023). F-PABEE: Flexible-Patience-Based Early Exiting For Single-Label and Multi-Label Text Classification Tasks. 2023 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP 2023).* [[Paper]](https://ieeexplore.ieee.org/abstract/document/10095864)
+
+- *C. Yin. (2023). Multi-scale and multi-task learning for human audio forensics based on convolutional networks. International Conference on Image, Signal Processing, and Pattern Recognition (ISPP 2023).* [[Paper]](https://doi.org/10.1117/12.2681344)
\ No newline at end of file
diff --git a/contents/service.md b/contents/service.md
new file mode 100644
index 0000000..292c193
--- /dev/null
+++ b/contents/service.md
@@ -0,0 +1,3 @@
+
+- Reviewer for EMNLP 2024
+- Reviewer for ACL 2024
\ No newline at end of file
diff --git a/index.html b/index.html
index 4741ec5..84d199d 100644
--- a/index.html
+++ b/index.html
@@ -3,7 +3,7 @@
-
+