LinkedIn GitHub Google Scholar 知乎

News

I am actively looking for Machine Learning Systems and o1/MLLM/LLM industry/research opportunities. If you need a reliable teammate who is familiar with both NLP and computer systems and has extensive industry experience, feel free to Contact Me!

Biography

My name is Jerry Yin. I am currently a senior undergraduate student pursuing a bachelor's degree in Computer Science at the College of Liberal Arts, University of Minnesota Twin Cities, supervised by Prof. Zirui Liu and Jiayi Yuan (Ph.D. candidate). In the summer of 2023, I visited TsinghuaNLP and conducted research under Prof. Zhiyuan Liu, where I received invaluable mentorship from Weilin Zhao (Ph.D. candidate) and Xu Han (Research Assistant Professor), for which I am deeply grateful.

I have experience in NLP and computer systems (both architecture and high-performance machine learning systems), along with extensive industry research internship experience, including:

  • Participating in the pretraining of the Yi-Lightning model at 01.AI.
  • Contributing to the ML infrastructure for pretraining the foundation model at ModelBest (with TsinghuaNLP).
  • Participating in the fine-tuning of the CodeLLM Raccoon (a Copilot-like assistant) at SenseTime (with CUHK MMLab).

Research Interests

My current passion revolves around building EFFICIENT system solutions toward AGI (right now I am especially interested in ML infrastructure for o1-like models), including:

  1. Machine Learning Systems
    • Training: design more effective training systems and algorithms; examples include BMTrain.
    • Parameter-Efficient Fine-Tuning (PEFT): improve LoRA-like architectures and low-bit model compression; examples include IAPT.
    • Long-context inference: examples include Cross-Layer Attention (a minimal sketch of the idea follows this list).
  2. LLMs & LLM applications
    • CodeLLM
    • Foundation LLMs (Yi-Lightning)
    • RAG (GraphRAG): examples include PaperHelper.
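
Below is a minimal, self-contained sketch of the cross-layer KV-sharing idea, written for this README as an illustration rather than taken from my Cross-Layer-Attention repo: a pair of adjacent attention layers shares a single K/V projection, so during decoding only one KV cache entry per pair of layers is needed, roughly halving KV-cache memory for a sharing factor of 2. Layer norms, MLPs, and real cache management are omitted, and names like `CLABlock` are illustrative only.

```python
# Illustrative sketch of Cross-Layer Attention (CLA) with sharing factor 2:
# two attention layers reuse one K/V projection (and hence one KV cache).
import torch
import torch.nn as nn
import torch.nn.functional as F


class CLABlock(nn.Module):
    """Two attention layers sharing a single K/V projection (CLA factor 2)."""

    def __init__(self, d_model: int, n_heads: int):
        super().__init__()
        self.n_heads = n_heads
        self.head_dim = d_model // n_heads
        # Each layer keeps its own Q and output projections ...
        self.q_projs = nn.ModuleList([nn.Linear(d_model, d_model) for _ in range(2)])
        self.out_projs = nn.ModuleList([nn.Linear(d_model, d_model) for _ in range(2)])
        # ... but K/V come from one shared projection.
        self.kv_proj = nn.Linear(d_model, 2 * d_model)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, t, d = x.shape
        # K/V are computed once for the whole pair of layers; at inference time
        # this is the single tensor pair that would be cached.
        k, v = self.kv_proj(x).chunk(2, dim=-1)
        k = k.view(b, t, self.n_heads, self.head_dim).transpose(1, 2)
        v = v.view(b, t, self.n_heads, self.head_dim).transpose(1, 2)
        for q_proj, out_proj in zip(self.q_projs, self.out_projs):
            q = q_proj(x).view(b, t, self.n_heads, self.head_dim).transpose(1, 2)
            attn = F.scaled_dot_product_attention(q, k, v, is_causal=True)
            x = x + out_proj(attn.transpose(1, 2).reshape(b, t, d))
        return x


if __name__ == "__main__":
    block = CLABlock(d_model=256, n_heads=8)
    out = block(torch.randn(2, 16, 256))
    print(out.shape)  # torch.Size([2, 16, 256])
```

The memory saving comes entirely from the shared `kv_proj`: when decoding token by token, only one K/V tensor pair per pair of layers has to live in the cache, instead of one per layer.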

Misc

  • Before transferring to the University of Minnesota, I studied at Nanchang University, majoring in Artificial Intelligence in a top-tier class with a School Academic Special Scholarship. I was the leader of the Nanchang University Supercomputer Cluster Team (NCUSCC), with experience in ASC22 and SC23 (IndySCC).

  • I am passionate about open source and firmly believe in its potential to disseminate knowledge widely, drive technological innovation, and contribute to the advancement of human society. I am proud to have garnered over 4k stars and over 370 followers on GitHub. I occasionally share my explorations in machine learning systems and LLMs on 知乎 (Zhihu) in Mandarin.

  • I love playing League of Legends and Valorant!

Contact

  • I am very enthusiastic about discussing academic questions or any interesting project-related topics! If you'd like to have a discussion or collaborate, feel free to contact me via email at any time.

  • If you're an undergraduate student feeling uncertain about your path, especially one from a diverse background, and would like to learn from my experience, I warmly welcome you to reach out as well!

  • ✉️ yin00486 [at] umn.edu

Pinned

  1. OpenBMB/BMTrain

    Efficient training (including pre-training and fine-tuning) for big models.

  2. CGCL-codes/naturalcc

    NaturalCC: an open-source toolkit for code intelligence.

  3. bklieger-groq/g1

    g1: using Llama-3.1 70b on Groq to create o1-like reasoning chains.

  4. NanoGPT-Pytorch2.0-Implementation

    My NanoGPT implementation written when PyTorch 2.0 was about to be released; faster and simpler, and a good tutorial for learning GPT.

  5. Cross-Layer-Attention

    Self-reproduction code for the paper "Reducing Transformer Key-Value Cache Size with Cross-Layer Attention" (MIT CSAIL).

  6. PaperHelper

    PaperHelper: a knowledge-based LLM QA paper-reading assistant with reliable references.