support Codeqwen-7b-chat model (#718)
Showing 9 changed files with 121 additions and 3 deletions.
examples/pytorch/llm/scripts/codeqwen1half_7b_chat/lora/infer.sh (11 additions, 0 deletions)

```shell
# Experimental environment: 3090
CUDA_VISIBLE_DEVICES=0 \
swift infer \
    --ckpt_dir "output/codeqwen1half-7b-chat/vx-xxx/checkpoint-xxx" \
    --load_dataset_config true \
    --max_new_tokens 2048 \
    --temperature 0.1 \
    --top_p 0.7 \
    --repetition_penalty 1. \
    --do_sample true \
    --merge_lora false \
```
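The script decodes with `--temperature 0.1` and `--top_p 0.7`. As a rough illustration of what those two knobs do together (a standalone sketch, not ms-swift's actual decoding code, which runs inside the model's `generate` call), temperature plus nucleus (top-p) sampling can be written as:

```python
import math
import random

def sample_top_p(logits, temperature=0.1, top_p=0.7, seed=0):
    """Minimal temperature + top-p sampling over a logit vector."""
    # Temperature scaling: low values sharpen the distribution.
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    # Keep the smallest set of top tokens whose cumulative mass >= top_p.
    order = sorted(range(len(probs)), key=lambda i: -probs[i])
    kept, mass = [], 0.0
    for i in order:
        kept.append(i)
        mass += probs[i]
        if mass >= top_p:
            break
    # Renormalise over the kept set and sample.
    weights = [probs[i] / mass for i in kept]
    return random.Random(seed).choices(kept, weights=weights, k=1)[0]

# With temperature 0.1 the distribution is so sharp that the top
# token alone exceeds the 0.7 nucleus, so it is chosen every time.
token = sample_top_p([2.0, 1.0, 0.5, -1.0])
```

At temperature 0.1 sampling is nearly greedy; `top_p 0.7` mainly trims low-probability tails when the model is less certain.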
examples/pytorch/llm/scripts/codeqwen1half_7b_chat/lora/sft.sh (31 additions, 0 deletions)

```shell
# Experimental environment: 3090,A10,V100...
# 20GB GPU memory
CUDA_VISIBLE_DEVICES=0 \
swift sft \
    --model_type codeqwen1half-7b-chat \
    --model_revision master \
    --sft_type lora \
    --tuner_backend peft \
    --dtype AUTO \
    --output_dir output \
    --ddp_backend nccl \
    --dataset leetcode-python-en \
    --train_dataset_sample -1 \
    --num_train_epochs 3 \
    --max_length 2048 \
    --check_dataset_strategy warning \
    --lora_rank 8 \
    --lora_alpha 32 \
    --lora_dropout_p 0.05 \
    --lora_target_modules DEFAULT \
    --gradient_checkpointing true \
    --batch_size 1 \
    --weight_decay 0.1 \
    --learning_rate 1e-4 \
    --gradient_accumulation_steps 16 \
    --max_grad_norm 0.5 \
    --warmup_ratio 0.03 \
    --eval_steps 100 \
    --save_steps 100 \
    --save_total_limit 2 \
    --logging_steps 10 \
```
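With `--batch_size 1` and `--gradient_accumulation_steps 16`, gradients from 16 forward passes are accumulated before each optimizer update, which is how the script keeps memory near 20 GB while still training with an effective batch of 16. A quick sanity check of the step arithmetic (the dataset size below is a made-up placeholder, not the actual size of leetcode-python-en):

```python
batch_size = 1
gradient_accumulation_steps = 16
# Samples contributing to each optimizer update.
effective_batch = batch_size * gradient_accumulation_steps

# Hypothetical dataset size, only to illustrate the arithmetic.
num_samples = 2048
num_train_epochs = 3
steps_per_epoch = num_samples // effective_batch
total_optimizer_steps = steps_per_epoch * num_train_epochs
```

Raising `gradient_accumulation_steps` trades wall-clock time for memory: activations for only one sample are live at a time, but each update still averages over 16.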
examples/pytorch/llm/scripts/codeqwen1half_7b_chat_awq/lora/infer.sh (13 additions, 0 deletions)

```shell
# Experiment env: A10, RTX3090/4090, A100
CUDA_VISIBLE_DEVICES=0 \
swift infer \
    --ckpt_dir "output/codeqwen1half-7b-chat-awq/vx-xxx/checkpoint-xxx" \
    --load_dataset_config true \
    --use_flash_attn false \
    --max_new_tokens 2048 \
    --temperature 0.1 \
    --top_p 0.7 \
    --repetition_penalty 1. \
    --do_sample true \
    --stream false \
    --merge_lora false \
```
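The `-awq` variant loads 4-bit AWQ-quantized base weights, which is the main reason it fits on the single-GPU setups listed in the script's comment. A back-of-the-envelope comparison of weight memory only (the parameter count is approximate, and real usage adds activations, KV cache, and quantization scales):

```python
params = 7.25e9  # CodeQwen1.5-7B parameter count, approximate

# fp16/bf16: 2 bytes per weight.
fp16_gb = params * 2 / 1024**3
# AWQ int4: 4 bits = 0.5 bytes per weight, ignoring scale/zero-point overhead.
awq4_gb = params * 0.5 / 1024**3
```

Roughly 13.5 GB of weights in fp16 shrink to about 3.4 GB in 4-bit, leaving headroom for LoRA adapters and the KV cache on a 24 GB card.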
examples/pytorch/llm/scripts/codeqwen1half_7b_chat_awq/lora/sft.sh (27 additions, 0 deletions)

```shell
# Experiment env: A10, RTX3090/4090, A100
CUDA_VISIBLE_DEVICES=0 \
swift sft \
    --model_type codeqwen1half-7b-chat-awq \
    --dataset leetcode-python-en \
    --train_dataset_mix_ratio 3 \
    --batch_size 4 \
    --max_length 1024 \
    --use_loss_scale true \
    --gradient_accumulation_steps 2 \
    --learning_rate 5e-5 \
    --use_flash_attn true \
    --eval_steps 2000 \
    --save_steps 2000 \
    --train_dataset_sample -1 \
    --num_train_epochs 3 \
    --check_dataset_strategy none \
    --gradient_checkpointing true \
    --weight_decay 0.1 \
    --max_grad_norm 1.0 \
    --warmup_ratio 0.03 \
    --save_total_limit 2 \
    --logging_steps 10 \
    --sft_type lora \
    --lora_target_modules ALL \
    --lora_rank 8 \
    --lora_alpha 32
```
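Both sft scripts use `--lora_rank 8` with `--lora_alpha 32`, i.e. a LoRA scaling factor of alpha/rank = 4, and each adapted linear layer of shape (d_out, d_in) adds only rank * (d_in + d_out) trainable parameters. A sketch with a hypothetical 4096-dim projection (the actual hidden sizes of CodeQwen1.5 are not stated here):

```python
rank, alpha = 8, 32
scaling = alpha / rank  # multiplier applied to the low-rank update B @ A

d_in = d_out = 4096  # hypothetical square projection for illustration
base_params = d_in * d_out
# A has shape (rank, d_in), B has shape (d_out, rank).
lora_params = rank * (d_in + d_out)
fraction = lora_params / base_params  # trainable share of this layer
```

For this layer the adapter is under 0.4% of the frozen weight count, which is why LoRA training fits where full fine-tuning would not.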