diff --git a/contents/home.md b/contents/home.md
index da266b3..481188a 100644
--- a/contents/home.md
+++ b/contents/home.md
@@ -17,11 +17,11 @@ I have experience in NLP and computer systems(both architecture and high perform
My current passion revolves around building **EFFICIENT** system solutions to AGI and LLM(VLM) for **RELIABLE** Hardware Design, which includes:
-1. Machine Learning System
+1. Machine Learning System
* Training: Design more effective training systems and algorithms; examples include [BMTrain](https://github.com/OpenBMB/BMTrain).
* Quantization (e.g. attempting to finetune Llama 3.1 405B on a single A100 80GB GPU); another example is [IAPT](https://arxiv.org/pdf/2405.18203).
* Long-context inference: an example is [Cross Layer Attention](https://github.com/JerryYin777/Cross-Layer-Attention).
-2. LLM(VLM) for RELIABLE Hardware Design
+2. LLM(VLM) for RELIABLE Hardware Design
* Synthesise common knowledge for pretraining and finetuning CodeLLMs, exploring the boundary capabilities of LLMs/VLMs for hardware design (e.g. pretraining/finetuning a VerilogLLM).
* Align simulation code with waveform image data to finetune a VerilogVLM.
diff --git a/contents/publications.md b/contents/publications.md
index b9d73aa..1a4c34a 100644
--- a/contents/publications.md
+++ b/contents/publications.md
@@ -1,4 +1,4 @@
-For full paper list (not now, but I'm sure there will be more great work in the future), please refer to my [Google Scholar](https://scholar.google.com/citations?user=7gsdLw4AAAAJ&hl=en)
+For a full paper list (not much now, but I'm sure there will be more great work in the future), please refer to my [Google Scholar](https://scholar.google.com/citations?user=7gsdLw4AAAAJ&hl=en).
- *W. Zhu, Y. Ni, C. Yin, A. Tian, X. Wang, G. Xie. (2024). IAPT: Instance-Aware Prompt Tuning for Large Language Models. The 62nd Annual Meeting of the Association for Computational Linguistics (ACL 2024).* [[Paper]](https://arxiv.org/pdf/2405.18203)