From 7a9a02a655a1544d8866553ef4a2747a373535b8 Mon Sep 17 00:00:00 2001
From: mgqa34
Date: Wed, 6 Mar 2024 15:42:49 +0800
Subject: [PATCH 1/4] update readme

Signed-off-by: mgqa34
---
 README.md | 5 +++--
 1 file changed, 3 insertions(+), 2 deletions(-)

diff --git a/README.md b/README.md
index 79fabf4..02d1f5b 100644
--- a/README.md
+++ b/README.md
@@ -17,13 +17,14 @@ FATE-LLM is a framework to support federated learning for large language models(
 
 ### Standalone deployment
 Please refer to [FATE-Standalone deployment](https://github.com/FederatedAI/FATE#standalone-deployment).
-Deploy FATE-Standalone version with 1.11.3 <= version < 2.0, then copy directory `python/fate_llm` to `{fate_install}/fate/python/fate_llm`
+* To deploy FATE-LLM v2.0, deploy FATE-Standalone with version >= 2.1, then copy directory `python/fate_llm` to `{fate_install}/fate/python/fate_llm`
+* To deploy FATE-LLM v1.x, deploy FATE-Standalone with 1.11.3 <= version < 2.0, then copy directory `python/fate_llm` to `{fate_install}/fate/python/fate_llm`
 
 ### Cluster deployment
 Use [FATE-LLM deployment packages](https://github.com/FederatedAI/FATE/wiki/Download#llm%E9%83%A8%E7%BD%B2%E5%8C%85) to deploy, refer to [FATE-Cluster deployment](https://github.com/FederatedAI/FATE#cluster-deployment) for more deployment details.
 
 ## Quick Start
 - [Federated ChatGLM3-6B Training](./doc/tutorial/parameter_efficient_llm/ChatGLM3-6B_ds.ipynb)
-- [Builtin Models In PELLM](./doc/tutorial/builtin_models.md)
+- [Builtin Models In PELLM](./doc/tutorial/builtin_pellm_models.md)
 - [Offsite Tuning Tutorial](./doc/tutorial/offsite_tuning/Offsite_tuning_tutorial.ipynb)
 - [FedKSeed](./doc/tutorial/fedkseed/fedkseed-example.ipynb)
\ No newline at end of file

From 10643aa77e190668d65bb3dc07bd15374e0310e8 Mon Sep 17 00:00:00 2001
From: mgqa34
Date: Wed, 6 Mar 2024 15:44:51 +0800
Subject: [PATCH 2/4] update builtin_pellm_models doc

Signed-off-by: mgqa34
---
 doc/tutorial/builtin_pellm_models.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/doc/tutorial/builtin_pellm_models.md b/doc/tutorial/builtin_pellm_models.md
index 70c3f37..a812102 100644
--- a/doc/tutorial/builtin_pellm_models.md
+++ b/doc/tutorial/builtin_pellm_models.md
@@ -6,7 +6,7 @@ After reading the training tutorial above, it's easy to use other models listing
 
 
 | Model | ModuleName | ClassName | DataSetName |
-| -------------- | ----------------- | --------------| --------------- | |
+| -------------- | ----------------- | --------------| --------------- |
 | Qwen2 | pellm.qwen | Qwen | prompt_dataset |
 | Bloom-7B1 | pellm.bloom | Bloom | prompt_dataset |
 | LLaMA-2-7B | pellm.llama | LLaMa | prompt_dataset |

From 9de48e8cfb865a9bcd7f731e528d718b3bd65a92 Mon Sep 17 00:00:00 2001
From: mgqa34
Date: Wed, 6 Mar 2024 15:45:37 +0800
Subject: [PATCH 3/4] fix builtin_pellm_models doc

Signed-off-by: mgqa34
---
 doc/tutorial/builtin_pellm_models.md | 1 -
 1 file changed, 1 deletion(-)

diff --git a/doc/tutorial/builtin_pellm_models.md b/doc/tutorial/builtin_pellm_models.md
index a812102..e2a3d49 100644
--- a/doc/tutorial/builtin_pellm_models.md
+++ b/doc/tutorial/builtin_pellm_models.md
@@ -12,7 +12,6 @@ After reading the training tutorial above, it's easy to use other models listing
 | LLaMA-2-7B | pellm.llama | LLaMa | prompt_dataset |
 | LLaMA-7B | pellm.llama | LLaMa | prompt_dataset |
 | ChatGLM3-6B | pellm.chatglm | ChatGLM | prompt_dataset |
-| ChatGLM-6B | pellm.chatglm | ChatGLM | prompt_dataset |
 | GPT-2 | pellm.gpt2 | GPT2 | seq_cls_dataset |
 | ALBERT | pellm.albert | Albert | seq_cls_dataset |
 | BART | pellm.bart | Bart | seq_cls_dataset |

From c53749bfaa29384fc63a31a11f83253125d3dce3 Mon Sep 17 00:00:00 2001
From: mgqa34
Date: Wed, 6 Mar 2024 15:49:38 +0800
Subject: [PATCH 4/4] update deployment desc of llm-2.0

Signed-off-by: mgqa34
---
 README.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/README.md b/README.md
index 02d1f5b..cfd98ae 100644
--- a/README.md
+++ b/README.md
@@ -17,7 +17,7 @@ FATE-LLM is a framework to support federated learning for large language models(
 
 ### Standalone deployment
 Please refer to [FATE-Standalone deployment](https://github.com/FederatedAI/FATE#standalone-deployment).
-* To deploy FATE-LLM v2.0, deploy FATE-Standalone with version >= 2.1, then copy directory `python/fate_llm` to `{fate_install}/fate/python/fate_llm`
+* To deploy FATE-LLM v2.0, deploy FATE-Standalone with version >= 2.1, then make a new directory `{fate_install}/fate_llm` and clone the code into it, install the python requirements, and add `{fate_install}/fate_llm/python` to `PYTHONPATH`
 * To deploy FATE-LLM v1.x, deploy FATE-Standalone with 1.11.3 <= version < 2.0, then copy directory `python/fate_llm` to `{fate_install}/fate/python/fate_llm`
 
 ### Cluster deployment
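
---

The clone-based v2.0 standalone deployment flow introduced by PATCH 4/4 can be sketched as the following shell steps. This is only a sketch: the repository URL, the `requirements.txt` location, and the `FATE_INSTALL` variable name are illustrative assumptions, not taken from the patch, which names only the target directory and the `PYTHONPATH` entry.

```shell
# Sketch of the FATE-LLM v2.0 standalone deployment described in PATCH 4/4.
# Assumes a FATE-Standalone (>= 2.1) install rooted at $FATE_INSTALL.
FATE_INSTALL="${FATE_INSTALL:-$(mktemp -d)/fate}"

# 1. Make a new directory for the FATE-LLM code.
mkdir -p "${FATE_INSTALL}/fate_llm"

# 2. Clone the code into it (network step, repository URL assumed):
#    git clone https://github.com/FederatedAI/FATE-LLM.git "${FATE_INSTALL}/fate_llm"

# 3. Install the python requirements (requirements.txt path assumed):
#    pip install -r "${FATE_INSTALL}/fate_llm/python/requirements.txt"

# 4. Add the package directory to PYTHONPATH so `fate_llm` is importable.
export PYTHONPATH="${FATE_INSTALL}/fate_llm/python${PYTHONPATH:+:${PYTHONPATH}}"
echo "${PYTHONPATH}"
```

Note this differs from the v1.x flow, which copies `python/fate_llm` directly into the FATE python tree instead of extending `PYTHONPATH`.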