diff --git a/CODE_OF_CONDUCT.md b/CODE_OF_CONDUCT.md deleted file mode 100644 index 08b500a2..00000000 --- a/CODE_OF_CONDUCT.md +++ /dev/null @@ -1,80 +0,0 @@ -# Code of Conduct - -## Our Pledge - -In the interest of fostering an open and welcoming environment, we as -contributors and maintainers pledge to make participation in our project and -our community a harassment-free experience for everyone, regardless of age, body -size, disability, ethnicity, sex characteristics, gender identity and expression, -level of experience, education, socio-economic status, nationality, personal -appearance, race, religion, or sexual identity and orientation. - -## Our Standards - -Examples of behavior that contributes to creating a positive environment -include: - -* Using welcoming and inclusive language -* Being respectful of differing viewpoints and experiences -* Gracefully accepting constructive criticism -* Focusing on what is best for the community -* Showing empathy towards other community members - -Examples of unacceptable behavior by participants include: - -* The use of sexualized language or imagery and unwelcome sexual attention or - advances -* Trolling, insulting/derogatory comments, and personal or political attacks -* Public or private harassment -* Publishing others' private information, such as a physical or electronic - address, without explicit permission -* Other conduct which could reasonably be considered inappropriate in a - professional setting - -## Our Responsibilities - -Project maintainers are responsible for clarifying the standards of acceptable -behavior and are expected to take appropriate and fair corrective action in -response to any instances of unacceptable behavior. - -Project maintainers have the right and responsibility to remove, edit, or -reject comments, commits, code, wiki edits, issues, and other contributions -that are not aligned to this Code of Conduct, or to ban temporarily or -permanently any contributor for other behaviors that they deem inappropriate, -threatening, offensive, or harmful. - -## Scope - -This Code of Conduct applies within all project spaces, and it also applies when -an individual is representing the project or its community in public spaces. -Examples of representing a project or community include using an official -project e-mail address, posting via an official social media account, or acting -as an appointed representative at an online or offline event. Representation of -a project may be further defined and clarified by project maintainers. - -This Code of Conduct also applies outside the project spaces when there is a -reasonable belief that an individual's behavior may have a negative impact on -the project or its community. - -## Enforcement - -Instances of abusive, harassing, or otherwise unacceptable behavior may be -reported by contacting the project team at . All -complaints will be reviewed and investigated and will result in a response that -is deemed necessary and appropriate to the circumstances. The project team is -obligated to maintain confidentiality with regard to the reporter of an incident. -Further details of specific enforcement policies may be posted separately. - -Project maintainers who do not follow or enforce the Code of Conduct in good -faith may face temporary or permanent repercussions as determined by other -members of the project's leadership. 
- -## Attribution - -This Code of Conduct is adapted from the [Contributor Covenant][homepage], version 1.4, -available at https://www.contributor-covenant.org/version/1/4/code-of-conduct.html - -[homepage]: https://www.contributor-covenant.org - -For answers to common questions about this code of conduct, see -https://www.contributor-covenant.org/faq diff --git a/CONTRIBUTING.md b/CONTRIBUTING.md deleted file mode 100644 index 0fc2e450..00000000 --- a/CONTRIBUTING.md +++ /dev/null @@ -1,59 +0,0 @@ -# Contributing to llama-recipes -We want to make contributing to this project as easy and transparent as -possible. - -## Pull Requests -We actively welcome your pull requests. - -1. Fork the repo and create your branch from `main`. -2. If you've added code that should be tested, add tests. -3. If you've changed APIs, update the documentation. -4. Ensure the test suite passes. -5. Make sure your code lints. -6. If you haven't already, complete the Contributor License Agreement ("CLA"). - -## Contributor License Agreement ("CLA") -In order to accept your pull request, we need you to submit a CLA. You only need -to do this once to work on any of Facebook's open source projects. - -Complete your CLA here: - -## Issues -We use GitHub issues to track public bugs. Please ensure your description is -clear and has sufficient instructions to be able to reproduce the issue. - -Facebook has a [bounty program](https://www.facebook.com/whitehat/) for the safe -disclosure of security bugs. In those cases, please go through the process -outlined on that page and do not file a public issue. - -## License -By contributing to llama-recipes, you agree that your contributions will be licensed -under the LICENSE file in the root directory of this source tree. - -## Tests -Llama-recipes currently comes with a basic set of unit tests (covering the parts of the main training script and training loop) but we strive to increase our test coverage in the future in order to mitigate silent errors. -When submitting a new feature PR please make sure to cover the newly added code with a unit test. -Run the tests locally to ensure the new feature does not break an old one. -We use **pytest** for our unit tests and to run them locally you need to install llama-recipes with optional [tests] dependencies enabled: -``` -pip install --extra-index-url https://download.pytorch.org/whl/test/cu118 llama-recipes[tests] -``` -For development and contributing to llama-recipes please install from source with all optional dependencies: -``` -pip install -U pip setuptools -pip install --extra-index-url https://download.pytorch.org/whl/test/cu118 -e .[tests,auditnlg,vllm] -``` -The unit tests can be found in the [tests](./tests/) folder and you can run them from the main directory using: -``` -python -m pytest tests/ -``` -To run all tests of a single file you can give the filename directly: -``` -python -m pytest tests/test_finetuning.py -``` -To run a specific test you can filter for its name with -``` -python -m pytest tests/test_finetuning.py -k test_finetuning_peft -``` -To add a new test simply create a new test file under the tests folder (filename has to start with `test_`). -Group tests spanning the same feature in the same file and create a subfolder if the tests are very extensive. 
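To make the convention above concrete, a minimal new test module might look like the sketch below. The file name, the test names, and the `repeat_tokens` helper are hypothetical stand-ins; a real PR would import and exercise the newly added llama-recipes code instead.

```python
# tests/test_example_feature.py
# Hypothetical example: test files must start with `test_`, and tests covering
# the same feature should be grouped in one file.
import pytest


def repeat_tokens(tokens, times):
    """Stand-in for the feature under test; a real test would import it."""
    return list(tokens) * times


def test_repeat_tokens_handles_empty_input():
    assert repeat_tokens([], times=3) == []


@pytest.mark.parametrize("times", [1, 2, 5])
def test_repeat_tokens_scales_length(times):
    tokens = ["a", "b"]
    assert len(repeat_tokens(tokens, times)) == times * len(tokens)
```

A file like this is collected automatically by `python -m pytest tests/` and can be filtered with `-k`, exactly as in the commands above.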
\ No newline at end of file diff --git a/LICENSE b/LICENSE deleted file mode 100644 index bbe189a3..00000000 --- a/LICENSE +++ /dev/null @@ -1,125 +0,0 @@ -LLAMA 2 COMMUNITY LICENSE AGREEMENT -Llama 2 Version Release Date: July 18, 2023 - -"Agreement" means the terms and conditions for use, reproduction, distribution and -modification of the Llama Materials set forth herein. - -"Documentation" means the specifications, manuals and documentation -accompanying Llama 2 distributed by Meta at ai.meta.com/resources/models-and- -libraries/llama-downloads/. - -"Licensee" or "you" means you, or your employer or any other person or entity (if -you are entering into this Agreement on such person or entity's behalf), of the age -required under applicable laws, rules or regulations to provide legal consent and that -has legal authority to bind your employer or such other person or entity if you are -entering in this Agreement on their behalf. - -"Llama 2" means the foundational large language models and software and -algorithms, including machine-learning model code, trained model weights, -inference-enabling code, training-enabling code, fine-tuning enabling code and other -elements of the foregoing distributed by Meta at ai.meta.com/resources/models-and- -libraries/llama-downloads/. - -"Llama Materials" means, collectively, Meta's proprietary Llama 2 and -Documentation (and any portion thereof) made available under this Agreement. - -"Meta" or "we" means Meta Platforms Ireland Limited (if you are located in or, if you -are an entity, your principal place of business is in the EEA or Switzerland) and Meta -Platforms, Inc. (if you are located outside of the EEA or Switzerland). - -By clicking "I Accept" below or by using or distributing any portion or element of the -Llama Materials, you agree to be bound by this Agreement. - -1. License Rights and Redistribution. - - a. Grant of Rights. You are granted a non-exclusive, worldwide, non- -transferable and royalty-free limited license under Meta's intellectual property or -other rights owned by Meta embodied in the Llama Materials to use, reproduce, -distribute, copy, create derivative works of, and make modifications to the Llama -Materials. - - b. Redistribution and Use. - - i. If you distribute or make the Llama Materials, or any derivative works -thereof, available to a third party, you shall provide a copy of this Agreement to such -third party. - ii. If you receive Llama Materials, or any derivative works thereof, from -a Licensee as part of an integrated end user product, then Section 2 of this -Agreement will not apply to you. - - iii. You must retain in all copies of the Llama Materials that you -distribute the following attribution notice within a "Notice" text file distributed as a -part of such copies: "Llama 2 is licensed under the LLAMA 2 Community License, -Copyright (c) Meta Platforms, Inc. All Rights Reserved." - - iv. Your use of the Llama Materials must comply with applicable laws -and regulations (including trade compliance laws and regulations) and adhere to the -Acceptable Use Policy for the Llama Materials (available at -https://ai.meta.com/llama/use-policy), which is hereby incorporated by reference into -this Agreement. - - v. You will not use the Llama Materials or any output or results of the -Llama Materials to improve any other large language model (excluding Llama 2 or -derivative works thereof). - -2. Additional Commercial Terms. 
If, on the Llama 2 version release date, the -monthly active users of the products or services made available by or for Licensee, -or Licensee's affiliates, is greater than 700 million monthly active users in the -preceding calendar month, you must request a license from Meta, which Meta may -grant to you in its sole discretion, and you are not authorized to exercise any of the -rights under this Agreement unless or until Meta otherwise expressly grants you -such rights. - -3. Disclaimer of Warranty. UNLESS REQUIRED BY APPLICABLE LAW, THE -LLAMA MATERIALS AND ANY OUTPUT AND RESULTS THEREFROM ARE -PROVIDED ON AN "AS IS" BASIS, WITHOUT WARRANTIES OF ANY KIND, -EITHER EXPRESS OR IMPLIED, INCLUDING, WITHOUT LIMITATION, ANY -WARRANTIES OF TITLE, NON-INFRINGEMENT, MERCHANTABILITY, OR -FITNESS FOR A PARTICULAR PURPOSE. YOU ARE SOLELY RESPONSIBLE -FOR DETERMINING THE APPROPRIATENESS OF USING OR REDISTRIBUTING -THE LLAMA MATERIALS AND ASSUME ANY RISKS ASSOCIATED WITH YOUR -USE OF THE LLAMA MATERIALS AND ANY OUTPUT AND RESULTS. - -4. Limitation of Liability. IN NO EVENT WILL META OR ITS AFFILIATES BE -LIABLE UNDER ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, TORT, -NEGLIGENCE, PRODUCTS LIABILITY, OR OTHERWISE, ARISING OUT OF THIS -AGREEMENT, FOR ANY LOST PROFITS OR ANY INDIRECT, SPECIAL, -CONSEQUENTIAL, INCIDENTAL, EXEMPLARY OR PUNITIVE DAMAGES, EVEN -IF META OR ITS AFFILIATES HAVE BEEN ADVISED OF THE POSSIBILITY OF -ANY OF THE FOREGOING. - -5. Intellectual Property. - - a. No trademark licenses are granted under this Agreement, and in -connection with the Llama Materials, neither Meta nor Licensee may use any name -or mark owned by or associated with the other or any of its affiliates, except as -required for reasonable and customary use in describing and redistributing the -Llama Materials. - - b. Subject to Meta's ownership of Llama Materials and derivatives made by or -for Meta, with respect to any derivative works and modifications of the Llama -Materials that are made by you, as between you and Meta, you are and will be the -owner of such derivative works and modifications. - - c. If you institute litigation or other proceedings against Meta or any entity -(including a cross-claim or counterclaim in a lawsuit) alleging that the Llama -Materials or Llama 2 outputs or results, or any portion of any of the foregoing, -constitutes infringement of intellectual property or other rights owned or licensable -by you, then any licenses granted to you under this Agreement shall terminate as of -the date such litigation or claim is filed or instituted. You will indemnify and hold -harmless Meta from and against any claim by any third party arising out of or related -to your use or distribution of the Llama Materials. - -6. Term and Termination. The term of this Agreement will commence upon your -acceptance of this Agreement or access to the Llama Materials and will continue in -full force and effect until terminated in accordance with the terms and conditions -herein. Meta may terminate this Agreement if you are in breach of any term or -condition of this Agreement. Upon termination of this Agreement, you shall delete -and cease use of the Llama Materials. Sections 3, 4 and 7 shall survive the -termination of this Agreement. - -7. Governing Law and Jurisdiction. This Agreement will be governed and -construed under the laws of the State of California without regard to choice of law -principles, and the UN Convention on Contracts for the International Sale of Goods -does not apply to this Agreement. 
The courts of California shall have exclusive -jurisdiction of any dispute arising out of this Agreement. diff --git a/README.md b/README.md index 1edbd792..11195570 100644 --- a/README.md +++ b/README.md @@ -1,24 +1,36 @@ -# SLAM-LLM: **S**peech, **L**anguage, **A**udio, **M**usic Processing with Large Language Model - -# News -- [Update Mar. 13, 2024] Please join [slack](https://join.slack.com/t/slam-llm/shared_invite/zt-2cxmm7fue-tEKmZcL1hB8s2R2GQdTTiA). We will sync our updates here. - +
+

+ SLAM-LLM +

+

+ SLAM-LLM is a deep learning toolkit that allows researchers and developers to train custom multimodal large language models (MLLMs), focusing on Speech, Language, Audio, Music processing. We provide detailed recipes for training and high-performance checkpoints for inference.
+

+

+ SLAM-LLM Logo +

+

+

+ version + version + python + mit +
# Table of Contents
-1. [Setup](#setup)
-2. [Fine-tuning](#fine-tuning)
- - [Single GPU](#single-gpu)
- - [Multi GPU One Node](#multiple-gpus-one-node)
- - [Multi GPU Multi Node](#multi-gpu-multi-node)
-3. [Inference](#inference)
- - [Batch Inference](#batch-inference)
- - [Real-time Inference](#real-time-inference)
-4. [License and Acceptable Use Policy](#license)
-5. [Citation](#citation)
+1. [News](#news)
+2. [Installation](#installation)
+3. [Usage](#usage)
+ - [List of Recipes](#list-of-recipes)
+ - [Configuration Priority](#configuration-priority)
+4. [Features](#features)
+5. [Acknowledge](#acknowledge)
-# Setup
+# News
+- [Update Apr. 28, 2024] Recipes for automated audio captioning (AAC) with SOTA performance have been supported.
+- [Update Mar. 31, 2024] Recipes for automatic speech recognition (ASR) with SOTA performance have been supported.
-## Installation
+# Installation
```bash
git clone https://github.com/huggingface/transformers.git
cd transformers
@@ -36,265 +48,36 @@ cd SLAM-LLM
pip install -e .
```
-**For more in depth information checkout the following:**
-
-* [Single GPU Fine-tuning](./docs/single_gpu.md)
-* [Multi-GPU Fine-tuning](./docs/multi_gpu.md)
-* [LLM Fine-tuning](./docs/LLM_finetuning.md)
-* [Adding custom datasets](./docs/Dataset.md)
-* [Inference](./docs/inference.md)
-* [FAQs](./docs/FAQ.md)
-
-# Fine-tuning
-
-We take Automatic Speech Recognition (ASR) with Large Language Models (LLM) as an example to demonstrate the fine-tuning process. The same process can be applied to other tasks in [example](./examples)(TODO) and [scripts](./scripts) folder.
-
-## Single and Multi GPU Finetune
-
-If you want to dive right into single or multi GPU fine-tuning, run the examples below on a single GPU like A10, T4, V100, A100 etc.
-All the parameters in the examples and recipes below need to be further tuned to have desired results based on the model, method, data and task at hand.
-
-### Single GPU:
-
-```bash
-export CUDA_VISIBLE_DEVICES=0
-export TOKENIZERS_PARALLELISM=false
-export OMP_NUM_THREADS=1
-cd /root/SLAM-LLM
-
-speech_encoder_path=/nfs/maziyang.mzy/models/Whisper/large-v2.pt
-llm_path=/nfs/maziyang.mzy/models/vicuna-7b-v1.5
-output_dir=/nfs/maziyang.mzy/exps/finetune-asr-whisper-largev2-vicuna-7b-v1.5-linear-ds5-proj2048-stlr-lora
-
-python src/slam-llm/pipeline/finetune.py \
---config-path "scripts/conf" \
---config-name "asr_vicuna_lora.yaml" \
-hydra.run.dir=$output_dir \
-++model_config.llm_name="vicuna-7b-v1.5" \
-++model_config.llm_path=$llm_path \
-++model_config.llm_dim=4096 \
-++model_config.encoder_name=whisper \
-++model_config.encoder_ds_rate=2 \
-++model_config.encoder_path=$speech_encoder_path \
-++model_config.encoder_dim=1280 \
-++model_config.encoder_projector=linear \
-++model_config.encoder_projector_ds_rate=5 \
-++dataset_config.dataset=speech_dataset \
-++dataset_config.prompt="Transcribe speech to text. 
" \ -++dataset_config.train_data_path=/nfs/maziyang.mzy/data/librispeech/librispeech_train_960h.jsonl \ -++dataset_config.val_data_path=/nfs/maziyang.mzy/data/librispeech/librispeech_dev_other_filtered.jsonl \ -++dataset_config.input_type=mel \ -++train_config.model_name=asr \ -++train_config.freeze_encoder=true \ -++train_config.freeze_llm=false \ -++train_config.use_peft=true \ -++train_config.peft_config.peft_method=lora \ -++train_config.batching_strategy=custom \ -++train_config.warmup_steps=1000 \ -++train_config.total_steps=100000 \ -++train_config.lr=1e-4 \ -++train_config.validation_interval=1000 \ -++train_config.batch_size_training=4 \ -++train_config.val_batch_size=4 \ -++train_config.num_workers_dataloader=4 \ -++train_config.output_dir=$output_dir \ -++log_config.log_file=/$output_dir/train.log \ -++log_config.use_wandb=true \ -++log_config.wandb_dir=$output_dir \ -++log_config.wandb_entity_name=zym22 \ -++log_config.wandb_project_name=slam-llm \ -++log_config.wandb_exp_name=${0##*/%.*} \ -++log_config.log_interval 5 \ -++metric=acc \ -# ++model_config.encoder_projector=q-former \ -# ++dataset_config.fix_length_audio=64 \ +For some examples, you may need to use `fairseq`, the command line is as follows: ``` - -Here we make use of Parameter Efficient Methods (PEFT) as described in the next section. To run the command above make sure to pass the `peft_method` arg which can be set to `lora`, `llama_adapter` or `prefix`. - -**Note** if you are running on a machine with multiple GPUs please make sure to only make one of them visible using `export CUDA_VISIBLE_DEVICES=GPU:id` - - -### Multiple GPUs One Node with DDP: - -```bash -export CUDA_VISIBLE_DEVICES=0,1,2,3 -export TOKENIZERS_PARALLELISM=false -export OMP_NUM_THREADS=1 -cd /root/SLAM-LLM - -speech_encoder_path=/nfs/maziyang.mzy/models/Whisper/large-v2.pt -llm_path=/nfs/maziyang.mzy/models/vicuna-7b-v1.5 -output_dir=/nfs/maziyang.mzy/exps/finetune-asr-whisper-largev2-vicuna-7b-v1.5-linear-ds5-proj2048-stlr-lora - -torchrun \ ---nnodes 1 \ ---nproc_per_node 4 \ -src/llama_recipes/pipeline/finetune.py \ ---config-path "scripts/conf" \ ---config-name "asr_vicuna_lora.yaml" \ -hydra.run.dir=$output_dir \ -++model_config.llm_name="vicuna-7b-v1.5" \ -++model_config.llm_path=$llm_path \ -++model_config.llm_dim=4096 \ -++model_config.encoder_name=whisper \ -++model_config.encoder_ds_rate=2 \ -++model_config.encoder_path=$speech_encoder_path \ -++model_config.encoder_dim=1280 \ -++model_config.encoder_projector=linear \ -++model_config.encoder_projector_ds_rate=5 \ -++dataset_config.dataset=speech_dataset \ -++dataset_config.prompt="Transcribe speech to text. 
" \ -++dataset_config.train_data_path=/nfs/maziyang.mzy/data/librispeech/librispeech_train_960h.jsonl \ -++dataset_config.val_data_path=/nfs/maziyang.mzy/data/librispeech/librispeech_dev_other.jsonl \ -++dataset_config.input_type=mel \ -++train_config.model_name=asr \ -++train_config.enable_fsdp=false \ -++train_config.enable_ddp=true \ -++train_config.use_fp16=true \ -++train_config.freeze_encoder=true \ -++train_config.freeze_llm=false \ -++train_config.use_peft=true \ -++train_config.peft_config.peft_method=lora \ -++train_config.batching_strategy=custom \ -++train_config.warmup_steps=1000 \ -++train_config.total_steps=100000 \ -++train_config.lr=1e-4 \ -++train_config.validation_interval=1000 \ -++train_config.batch_size_training=4 \ -++train_config.val_batch_size=4 \ -++train_config.num_workers_dataloader=4 \ -++train_config.output_dir=$output_dir \ -++log_config.log_file=/$output_dir/train.log \ -++log_config.use_wandb=true \ -++log_config.wandb_dir=$output_dir \ -++log_config.wandb_entity_name=zym22 \ -++log_config.wandb_project_name=slam-llm \ -++log_config.wandb_exp_name=${0##*/%.*} \ -++log_config.log_interval 5 \ -++metric=acc \ -# ++model_config.encoder_projector=q-former \ -# ++dataset_config.fix_length_audio=64 \ +git clone https://github.com/pytorch/fairseq +cd fairseq +pip install --editable ./ ``` -If you want to run with FSDP, you can set `++train_config.enable_fsdp=true` and `++train_config.enable_ddp=false`. - -### Flash Attention and Xformer Memory Efficient Kernels - -Setting `use_fast_kernels` will enable using of Flash Attention or Xformer memory-efficient kernels based on the hardware being used. This would speed up the fine-tuning job. This has been enabled in `optimum` library from HuggingFace as a one-liner API, please read more [here](https://pytorch.org/blog/out-of-the-box-acceleration/). - -### Fine-tuning using FSDP on 70B Model - -If you are interested in running full parameter fine-tuning on the 70B model, you can enable `low_cpu_fsdp` mode as the following command. This option will load model on rank0 only before moving model to devices to construct FSDP. This can dramatically save cpu memory when loading large models like 70B (on a 8-gpu node, this reduces cpu memory from 2+T to 280G for 70B model). This has been tested with `BF16` on 16xA100, 80GB GPUs. - -### Multi GPU Multi Node: - -```bash - -sbatch multi_node.slurm -# Change the num nodes and GPU per nodes in the script before running. +# Usage +## List of Recipes +We provide reference implementations of various LLM-based speech, audio, and music tasks: +- **Speech Task** + - [Automatic Speech Recognition (ASR)](examples/asr_librispeech/README.md) + - [Text-to-Speech (TTS)](examples/vallex/README.md) +- **Audio Task** + - [Automated Audio Captioning (AAC)](examples/aac_audiocaps/README.md) + +## Configuration Priority +We provide hierarchical configuration inheritance relationships as follows: ``` -You can read more about our fine-tuning strategies [here](./docs/LLM_finetuning.md). - -# Inference - -Once you have fine-tuned the model(for example, whisper + vicuna + linear + lora), you can use the following command to run inference on the fine-tuned model. 
- -## Batch Inference - -```bash -export CUDA_VISIBLE_DEVICES=0 -export TOKENIZERS_PARALLELISM=false -export OMP_NUM_THREADS=1 -cd /root/SLAM-LLM - -speech_encoder_path=/nfs/maziyang.mzy/models/Whisper/large-v2.pt -llm_path=/nfs/maziyang.mzy/models/vicuna-7b-v1.5 -output_dir=/nfs/maziyang.mzy/exps/finetune-asr-whisper-largev2-vicuna-7b-v1.5-linear-ds5-proj2048-stlr-lora -ckpt_path=$output_dir/asr/2 -decode_log=$ckpt_path/decode_log_test_clean_beam4 - -python src/llama_recipes/pipeline/inference_batch.py \ ---config-path "scripts/conf" \ ---config-name "asr_vicuna_lora.yaml" \ -hydra.run.dir=$ckpt_path \ -++model_config.llm_name="vicuna-7b-v1.5" \ -++model_config.llm_path=$llm_path \ -++model_config.llm_dim=4096 \ -++model_config.encoder_name=whisper \ -++model_config.encoder_ds_rate=2 \ -++model_config.encoder_path=$speech_encoder_path \ -++model_config.encoder_dim=1280 \ -++model_config.encoder_projector=linear \ -++model_config.encoder_projector_ds_rate=5 \ -++dataset_config.dataset=speech_dataset \ -++dataset_config.prompt="Transcribe speech to text. " \ -++dataset_config.val_data_path=/nfs/maziyang.mzy/data/librispeech/librispeech_test_clean.jsonl \ -++dataset_config.input_type=mel \ -++dataset_config.inference_mode=true \ -++train_config.model_name=asr \ -++train_config.freeze_encoder=true \ -++train_config.freeze_llm=false \ -++train_config.use_peft=true \ -++train_config.peft_config.peft_method=lora \ -++train_config.batching_strategy=custom \ -++train_config.num_epochs=1 \ -++train_config.val_batch_size=4 \ -++train_config.num_workers_dataloader=4 \ -++train_config.output_dir=$output_dir \ -++ckpt_path=$ckpt_path/model.pt \ -++peft_ckpt=$ckpt_path \ -++decode_log=$decode_log \ -# ++model_config.encoder_projector=q-former \ -# ++dataset_config.fix_length_audio=64 \ +command-line (shell file) > Hydra configuration (yaml file) > dataclass configuration (Python file) ``` -## Real-time Inference - -```bash -export CUDA_VISIBLE_DEVICES=0 -export TOKENIZERS_PARALLELISM=false -export OMP_NUM_THREADS=1 -cd /root/SLAM-LLM - -speech_encoder_path=/nfs/maziyang.mzy/models/Whisper/large-v2.pt -llm_path=/nfs/maziyang.mzy/models/vicuna-7b-v1.5 -output_dir=/nfs/maziyang.mzy/exps/finetune-asr-whisper-largev2-vicuna-7b-v1.5-linear-ds5-proj2048-stlr-lora -ckpt_path=$output_dir/asr/2 - -python src/llama_recipes/pipeline/inference.py \ ---config-path "scripts/conf" \ ---config-name "asr_vicuna_lora.yaml" \ -++model_config.llm_name="vicuna-7b-v1.5" \ -++model_config.llm_path=$llm_path \ -++model_config.llm_dim=4096 \ -++model_config.encoder_name=whisper \ -++model_config.encoder_ds_rate=2 \ -++model_config.encoder_path=$speech_encoder_path \ -++model_config.encoder_dim=1280 \ -++model_config.encoder_projector=linear \ -++model_config.encoder_projector_ds_rate=5 \ -++train_config.freeze_encoder=true \ -++train_config.freeze_llm=false \ -++train_config.use_peft=true \ -++train_config.peft_config.peft_method=lora \ -++ckpt_path=$ckpt_path/model.pt \ -++peft_ckpt=$ckpt_path \ -++decode_log=$decode_log \ -# ++model_config.encoder_projector=q-former \ -# ++dataset_config.fix_length_audio=64 \ -``` - -# License -See the License file [here](LICENSE) and Acceptable Use Policy [here](USE_POLICY.md) - -# Citation - -``` -@article{ma2024embarrassingly, - title={An Embarrassingly Simple Approach for LLM with Strong ASR Capacity}, - author={Ma, Ziyang and Yang, Guanrou and Yang, Yifan and Gao, Zhifu and Wang, Jiaming and Du, Zhihao and Yu, Fan and Chen, Qian and Zheng, Siqi and Zhang, Shiliang and others}, - 
journal={arXiv preprint arXiv:2402.08846}, - year={2024} -} -``` +# Features +- Easily extend to new models and tasks. +- Detailed recipes for training and high-performance checkpoints for inference. +- Mixed precision training which trains faster with less GPU memory on NVIDIA tensor cores. +- Multi-GPU training with data and model parallel, supporting [DDP](https://pytorch.org/tutorials/intermediate/ddp_tutorial.html), [FSDP](https://pytorch.org/tutorials/intermediate/FSDP_tutorial.html) and [deepspeed](https://github.com/microsoft/DeepSpeed) (still need to be improved). +- Flexible configuration based on [Hydra](https://github.com/facebookresearch/hydra) and [dataclass](https://docs.python.org/3/library/dataclasses.html) allowing a combination of code, command-line and file based configuration. + +# Acknowledge +- We borrow code from [Llama-Recipes](https://github.com/meta-llama/llama-recipes) for the training process. +- We borrow code from [Fairseq](https://github.com/facebookresearch/fairseq) for deepspeed configuration. +- We thank the contributors for providing diverse recipes. \ No newline at end of file diff --git a/UPDATES.md b/UPDATES.md deleted file mode 100644 index ffffa733..00000000 --- a/UPDATES.md +++ /dev/null @@ -1,19 +0,0 @@ -## System Prompt Update - -### Observed Issue -We received feedback from the community on our prompt template and we are providing an update to reduce the false refusal rates seen. False refusals occur when the model incorrectly refuses to answer a question that it should, for example due to overly broad instructions to be cautious in how it provides responses. - -### Updated approach -Based on evaluation and analysis, we recommend the removal of the system prompt as the default setting. Pull request [#626](https://github.com/facebookresearch/llama/pull/626) removes the system prompt as the default option, but still provides an example to help enable experimentation for those using it. - -## Token Sanitization Update - -### Observed Issue -The PyTorch scripts currently provided for tokenization and model inference allow for direct prompt injection via string concatenation. Prompt injections allow for the addition of special system and instruction prompt strings from user-provided prompts. - -As noted in the documentation, these strings are required to use the fine-tuned chat models. However, prompt injections have also been used for manipulating or abusing models by bypassing their safeguards, allowing for the creation of content or behaviors otherwise outside the bounds of acceptable use. - -### Updated approach -We recommend sanitizing [these strings](https://github.com/facebookresearch/llama#fine-tuned-chat-models) from any user provided prompts. Sanitization of user prompts mitigates malicious or accidental abuse of these strings. The provided scripts have been updated to do this. - -Note: even with this update safety classifiers should still be applied to catch unsafe behaviors or content produced by the model. An [example](https://github.com/facebookresearch/llama-recipes/blob/main/examples/inference.py) of how to deploy such a classifier can be found in the llama-recipes repository. \ No newline at end of file diff --git a/USE_POLICY.md b/USE_POLICY.md deleted file mode 100644 index 4299e1d1..00000000 --- a/USE_POLICY.md +++ /dev/null @@ -1,49 +0,0 @@ -# Llama 2 Acceptable Use Policy - -Meta is committed to promoting safe and fair use of its tools and features, including Llama 2. 
If you access or use Llama 2, you agree to this Acceptable Use Policy (“Policy”). The most recent copy of this policy can be found at [ai.meta.com/llama/use-policy](http://ai.meta.com/llama/use-policy). - -## Prohibited Uses -We want everyone to use Llama 2 safely and responsibly. You agree you will not use, or allow others to use, Llama 2 to: - -1. Violate the law or others’ rights, including to: - 1. Engage in, promote, generate, contribute to, encourage, plan, incite, or further illegal or unlawful activity or content, such as: - 1. Violence or terrorism - 2. Exploitation or harm to children, including the solicitation, creation, acquisition, or dissemination of child exploitative content or failure to report Child Sexual Abuse Material - 3. Human trafficking, exploitation, and sexual violence - 4. The illegal distribution of information or materials to minors, including obscene materials, or failure to employ legally required age-gating in connection with such information or materials. - 5. Sexual solicitation - 6. Any other criminal activity - 2. Engage in, promote, incite, or facilitate the harassment, abuse, threatening, or bullying of individuals or groups of individuals - 3. Engage in, promote, incite, or facilitate discrimination or other unlawful or harmful conduct in the provision of employment, employment benefits, credit, housing, other economic benefits, or other essential goods and services - 4. Engage in the unauthorized or unlicensed practice of any profession including, but not limited to, financial, legal, medical/health, or related professional practices - 5. Collect, process, disclose, generate, or infer health, demographic, or other sensitive personal or private information about individuals without rights and consents required by applicable laws - 6. Engage in or facilitate any action or generate any content that infringes, misappropriates, or otherwise violates any third-party rights, including the outputs or results of any products or services using the Llama 2 Materials - 7. Create, generate, or facilitate the creation of malicious code, malware, computer viruses or do anything else that could disable, overburden, interfere with or impair the proper working, integrity, operation or appearance of a website or computer system - - - -2. Engage in, promote, incite, facilitate, or assist in the planning or development of activities that present a risk of death or bodily harm to individuals, including use of Llama 2 related to the following: - 1. Military, warfare, nuclear industries or applications, espionage, use for materials or activities that are subject to the International Traffic Arms Regulations (ITAR) maintained by the United States Department of State - 2. Guns and illegal weapons (including weapon development) - 3. Illegal drugs and regulated/controlled substances - 4. Operation of critical infrastructure, transportation technologies, or heavy machinery - 5. Self-harm or harm to others, including suicide, cutting, and eating disorders - 6. Any content intended to incite or promote violence, abuse, or any infliction of bodily harm to an individual - - - -3. Intentionally deceive or mislead others, including use of Llama 2 related to the following: - 1. Generating, promoting, or furthering fraud or the creation or promotion of disinformation - 2. Generating, promoting, or furthering defamatory content, including the creation of defamatory statements, images, or other content - 3. Generating, promoting, or further distributing spam - 4. 
Impersonating another individual without consent, authorization, or legal right - 5. Representing that the use of Llama 2 or outputs are human-generated - 6. Generating or facilitating false online engagement, including fake reviews and other means of fake online engagement -4. Fail to appropriately disclose to end users any known dangers of your AI system - -Please report any violation of this Policy, software “bug,” or other problems that could lead to a violation of this Policy through one of the following means: - -* Reporting issues with the model: [github.com/facebookresearch/llama](http://github.com/facebookresearch/llama) -* Reporting risky content generated by the model: [developers.facebook.com/llama_output_feedback](http://developers.facebook.com/llama_output_feedback) -* Reporting bugs and security concerns: [facebook.com/whitehat/info](http://facebook.com/whitehat/info) -* Reporting violations of the Acceptable Use Policy or unlicensed uses of Llama: [LlamaUseReport@meta.com](mailto:LlamaUseReport@meta.com) diff --git a/docs/logo.jpg b/docs/logo.jpg new file mode 100644 index 00000000..0e31204c Binary files /dev/null and b/docs/logo.jpg differ diff --git a/examples/aac_audiocaps/README.md b/examples/aac_audiocaps/README.md index 4eeaddcd..07a760b0 100644 --- a/examples/aac_audiocaps/README.md +++ b/examples/aac_audiocaps/README.md @@ -1,7 +1,7 @@ # AAC_Audiocaps ## Performance and checkpoints -We use [EAT](https://github.com/cwx-worst-one/EAT) and [BEATs](https://github.com/microsoft/unilm/tree/master/beats) as the main audio encoder for SLAM-AAC. Be sure to set up the corresponding environments based on the instructions provided in each repository. Here are checkpoints and performance for training only the linear layer and training the linear layer with LLM tuning via LoRA. +We use [EAT](https://github.com/cwx-worst-one/EAT) as the audio encoder in this repo. Be sure to set up the corresponding environments based on the instructions provided in each repository. Here are checkpoints and performance for training only the linear layer and training the linear layer with LLM tuning via LoRA. 
Audio Encoder | Projector | LLM | PEFT | METEOR | CIDEr | SPICE | SPIDEr |---|---|---|---|---|---|---|---| [EAT-base (fine-tuned)](https://drive.google.com/file/d/1aCYiQmoZv_Gh1FxnR-CCWpNAp6DIJzn6/view?usp=sharing) | [Linear](https://drive.google.com/file/d/1xyhgx8cUKSIKpYgPlEWjHL-jLgSnhfGJ/view?usp=sharing)(~16.26M) | [vicuna-7b-v1.5](https://huggingface.co/lmsys/vicuna-7b-v1.5) | x | 0.2508 | 0.7532 | **0.1853** |0.4692 @@ -11,8 +11,8 @@ Audio Encoder | Projector | LLM | PEFT | METEOR | CIDEr | SPICE | SPIDEr ## Data preparation Prepare your `jsonl` data in the following format: ```json -{"key": "Y7fmOlUlwoNg_1", "prompt": "", "source": "/root/data/AudioCaps/waveforms/test/Y7fmOlUlwoNg.wav", "target": "Constant rattling noise and sharp vibrations", "target_len": 6, "source_len": 6, "text-type": "Transcribe", "audio_language": "english", "text_language": "english", "task-type": ""} -{"key": "Y6BJ455B1aAs_1", "prompt": "", "source": "/root/data/AudioCaps/waveforms/test/Y6BJ455B1aAs.wav", "target": "A rocket flies by followed by a loud explosion and fire crackling as a truck engine runs idle", "target_len": 18, "source_len": 18, "text-type": "Transcribe", "audio_language": "english", "text_language": "english", "task-type": ""} +{"key": "Y7fmOlUlwoNg_1", "source": "/root/data/AudioCaps/waveforms/test/Y7fmOlUlwoNg.wav", "target": "Constant rattling noise and sharp vibrations"} +{"key": "Y6BJ455B1aAs_1", "source": "/root/data/AudioCaps/waveforms/test/Y6BJ455B1aAs.wav", "target": "A rocket flies by followed by a loud explosion and fire crackling as a truck engine runs idle"} ``` Ensure your data aligns with this structure for consistent results. @@ -29,4 +29,4 @@ To perform inference with trained models, you could use this command: ```bash bash scripts/inference_eat_audiocaps.sh ``` -Ensure your environment is set up and data paths are correct for accurate results. \ No newline at end of file +Ensure your environment is set up and data paths are correct to reproduce results. \ No newline at end of file diff --git a/examples/aac_audiocaps/conf/prompt.yaml b/examples/aac_audiocaps/conf/prompt.yaml index 23be6aeb..c874afe6 100644 --- a/examples/aac_audiocaps/conf/prompt.yaml +++ b/examples/aac_audiocaps/conf/prompt.yaml @@ -1,4 +1,3 @@ dataset_config: # we put prompt here, because the hydra override in shell script only support a small subset of chars - # prompt: "Describe the audio you hear. Output the audio caption directly without redundant content. Ensure that the output is not duplicated." prompt: "Describe the audio you hear. Output the audio caption directly without redundant content. Ensure that the output is not duplicated." diff --git a/examples/asr_librispeech/README.md b/examples/asr_librispeech/README.md index a5a14118..c0890817 100644 --- a/examples/asr_librispeech/README.md +++ b/examples/asr_librispeech/README.md @@ -34,3 +34,26 @@ Whisper takes mel as input. Pay attention to the key `dataset_config.mel_size` f bash finetune_wavlm_large_linear_vicuna_7b.sh ``` WavLM takes raw wavform as input. Pay attention to the key `dataset_config.normalize` and `model_config.normalize` for different version of the SSL models for different SSL models are different in these keys. + +**Note**: +- if you are running on a machine with multiple GPUs please make sure to only make one of them visible using `export CUDA_VISIBLE_DEVICES=GPU:id` +- If you want to run with FSDP, you can set `++train_config.enable_fsdp=true` and `++train_config.enable_ddp=false`. 
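As an illustration of the FSDP/DDP note above, the toggle is just a pair of Hydra overrides appended to the launch command. The sketch below assumes a 4-GPU single-node run and reuses the script and config paths from the torchrun commands found elsewhere in this repository; treat them as placeholders and keep the model/dataset overrides from the recipe you are actually running.

```bash
# Sketch only: switch the distributed backend of a recipe from DDP to FSDP.
export CUDA_VISIBLE_DEVICES=0,1,2,3

torchrun \
    --nnodes 1 \
    --nproc_per_node 4 \
    src/llama_recipes/pipeline/finetune.py \
    --config-path "scripts/conf" \
    --config-name "asr_vicuna_lora.yaml" \
    ++train_config.enable_fsdp=true \
    ++train_config.enable_ddp=false
    # ...plus the model_config/dataset_config/train_config overrides from your recipe
```

Only one of the two flags should be enabled at a time: the recipes default to DDP, and setting `enable_fsdp=true` with `enable_ddp=false` shards model states across the visible GPUs instead.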
+ +### Flash Attention and Xformer Memory Efficient Kernels + +Setting `use_fast_kernels` will enable using of Flash Attention or Xformer memory-efficient kernels based on the hardware being used. This would speed up the fine-tuning job. This has been enabled in `optimum` library from HuggingFace as a one-liner API, please read more [here](https://pytorch.org/blog/out-of-the-box-acceleration/). + +### Fine-tuning using FSDP on 70B Model + +If you are interested in running full parameter fine-tuning on the 70B model, you can enable `low_cpu_fsdp` mode as the following command. This option will load model on rank0 only before moving model to devices to construct FSDP. This can dramatically save cpu memory when loading large models like 70B (on a 8-gpu node, this reduces cpu memory from 2+T to 280G for 70B model). This has been tested with `BF16` on 16xA100, 80GB GPUs. + +## Citation +You can refer to the paper for more results. +``` +@article{ma2024embarrassingly, + title={An Embarrassingly Simple Approach for LLM with Strong ASR Capacity}, + author={Ma, Ziyang and Yang, Guanrou and Yang, Yifan and Gao, Zhifu and Wang, Jiaming and Du, Zhihao and Yu, Fan and Chen, Qian and Zheng, Siqi and Zhang, Shiliang and others}, + journal={arXiv preprint arXiv:2402.08846}, + year={2024} +} +``` \ No newline at end of file diff --git a/scripts/finetune_aac_llama.sh b/scripts/finetune_aac_llama.sh deleted file mode 100644 index fc85a941..00000000 --- a/scripts/finetune_aac_llama.sh +++ /dev/null @@ -1,103 +0,0 @@ -#!/bin/bash -# export PYTHONPATH=/root/whisper:$PYTHONPATH -export PYTHONPATH=/root/fairseq:$PYTHONPATH -export CUDA_VISIBLE_DEVICES=0 -# export CUDA_LAUNCH_BLOCKING=1 -export OMP_NUM_THREADS=1 - -# debug setting for multiple gpus -# export NCCL_DEBUG=INFO -# export NCCL_DEBUG_SUBSYS=ALL -# export TORCH_DISTRIBUTED_DEBUG=INFO - -cd /root/SLAM-LLM - -# speech_encoder_path=/nfs/zhifu.gzf/ckpt/Whisper/large-v2.pt -# speech_encoder_path=/nfs/maziyang.mzy/models/Whisper/large-v2-qwen.pt -audio_encoder_path=/nfs/maziyang.mzy/models/BEATs/BEATs_iter3_plus_AS2M.pt - -llm_path=/nfs/zhifu.gzf/ckpt/Llama-2-7b-hf -# llm_path=/nfs/maziyang.mzy/models/vicuna-13b-v1.5/vicuna-13b-v1.5 - -output_dir=/nfs/maziyang.mzy/exps/debug - -# -m debugpy --listen 5678 --wait-for-client -if [[ $CUDA_VISIBLE_DEVICES != *","* ]]; then -python -m debugpy --listen 5678 --wait-for-client src/llama_recipes/pipeline/finetune.py \ ---model_name aac \ ---freeze_encoder \ ---freeze_llm \ ---llm_name llama-2-7b-hf \ ---llm_path $llm_path \ ---llm_dim 4096 \ ---encoder_name beats \ ---encoder_ds_rate 2 \ ---encoder_path $audio_encoder_path \ ---encoder_dim 768 \ ---encoder_projector linear \ ---encoder_projector_ds_rate 5 \ ---dataset audio_dataset \ ---audio_dataset.train_data_path /nfs/maziyang.mzy/data/librispeech/librispeech_train_960h.jsonl \ ---audio_dataset.val_data_path /nfs/maziyang.mzy/data/librispeech/librispeech_dev_other_filtered.jsonl \ ---batching_strategy custom \ ---num_epochs 100 \ ---batch_size_training 4 \ ---val_batch_size 4 \ ---num_workers_dataloader 1 \ ---lr 1e-4 \ ---output_dir $output_dir \ ---metric acc \ -# --log_file $output_dir/test.log \ -# --use_wandb \ -# --wandb_dir $output_dir \ -# --wandb_entity_name zym22 \ -# --wandb_project_name slam-llm \ -# --wandb_exp_name test \ -# --log_interval 5 \ -# --ckpt_path "/nfs/maziyang.mzy/exps/llama-2-hf-finetune-asr-ds5-proj2048-lr1e-5-whisper-lora-prompt/asr/5/model.pt" \ -# --peft_ckpt 
"/nfs/maziyang.mzy/exps/llama-2-hf-finetune-asr-ds5-proj2048-lr1e-5-whisper-lora-prompt/asr/5" \ -# --use_peft --peft_method lora \ - -else -torchrun \ ---nnodes 1 \ ---nproc_per_node 2 \ -src/llama_recipes/pipeline/finetune.py \ ---model_name aac \ ---freeze_encoder \ ---freeze_llm \ ---enable_fsdp \ ---llm_name llama-2-7b-hf \ ---llm_path $llm_path \ ---llm_dim 4096 \ ---encoder_name beats \ ---encoder_ds_rate 2 \ ---encoder_path $audio_encoder_path \ ---encoder_dim 768 \ ---encoder_projector linear \ ---encoder_projector_ds_rate 5 \ ---dataset audio_dataset \ ---audio_dataset.train_data_path /nfs/maziyang.mzy/data/librispeech/librispeech_train_960h.jsonl \ ---audio_dataset.val_data_path /nfs/maziyang.mzy/data/librispeech/librispeech_dev_other_filtered.jsonl \ ---batching_strategy custom \ ---num_epochs 100 \ ---batch_size_training 4 \ ---val_batch_size 4 \ ---num_workers_dataloader 4 \ ---lr 1e-4 \ ---output_dir $output_dir \ ---metric acc \ ---log_file /$output_dir/train.log \ ---use_wandb \ ---wandb_dir $output_dir \ ---wandb_entity_name zym22 \ ---wandb_project_name slam-llm \ ---wandb_exp_name test \ ---log_interval 5 \ -# --peft_ckpt "/nfs/maziyang.mzy/exps/llama-2-hf-finetune-asr-ds5-proj2048-lr1e-5-whisper-prompt-padding30-20231228/asr/4" \ -# --ckpt_path "/nfs/maziyang.mzy/exps/llama-2-hf-finetune-asr-ds5-proj2048-lr1e-5-whisper-prompt-padding30-20231228/asr/4/model.pt" \ -# --use_peft --peft_method lora \ -fi - -# {"key": "1001-134707-0000_ASR", "prompt": "", "source": "/cpfs01/shared/Group-speech/beinian.lzr/data/open_data/librispeech_audio/audio/se_librispeech_1001-134707-0000.wav", "target": "1 little recks the laborer. How near his work is holding him to God, The loving laborer through space and time, after all, not to create, only or found only.", "target_len": 157, "source_len": 1581, "text-type": "Transcribe", "audio_language": "en", "text_language": "en", "task-type": ""} -# {"key": "1688-142285-0005", "prompt": "", "source": "/nfs/beinian.lzr/workspace/datasets/data/16k/opendata/librispeech/test_other/wav/1688-142285-0005.wav", "target": "YOU WHO WERE ALWAYS ACCUSING PEOPLE OF BEING SHOPPY AT HELSTONE", "target_len": 11, "source_len": 220, "text-type": "Transcribe", "audio_language": "en", "text_language": "en", "task-type": ""} \ No newline at end of file diff --git a/scripts/finetune_asr_llama.sh b/scripts/finetune_asr_llama.sh deleted file mode 100644 index b8452cbd..00000000 --- a/scripts/finetune_asr_llama.sh +++ /dev/null @@ -1,104 +0,0 @@ -#!/bin/bash -# export PYTHONPATH=/root/whisper:$PYTHONPATH -export PYTHONPATH=/root/fairseq:$PYTHONPATH -export CUDA_VISIBLE_DEVICES=0,1,2,3 -# export CUDA_LAUNCH_BLOCKING=1 -export OMP_NUM_THREADS=1 - -# debug setting for multiple gpus -# export NCCL_DEBUG=INFO -# export NCCL_DEBUG_SUBSYS=ALL -# export TORCH_DISTRIBUTED_DEBUG=INFO - -cd /root/SLAM-LLM - -speech_encoder_path=/nfs/zhifu.gzf/ckpt/Whisper/large-v2.pt -# speech_encoder_path=/nfs/maziyang.mzy/models/Whisper/large-v2-qwen.pt - -# llm_path=/nfs/zhifu.gzf/ckpt/Llama-2-7b-hf -llm_path=/nfs/maziyang.mzy/models/Llama-2-7b-chat-hf - -output_dir=/nfs/maziyang.mzy/exps/llama-2-chat-hf-finetune-asr-ds5-proj2048-lr1e-4-whisper-prompt-padding30-20240111 - -# -m debugpy --listen 5678 --wait-for-client -if [[ $CUDA_VISIBLE_DEVICES != *","* ]]; then -python -m debugpy --listen 5678 --wait-for-client src/llama_recipes/pipeline/finetune.py \ ---model_name asr \ ---freeze_encoder \ ---freeze_llm \ ---use_fp16 \ ---llm_name llama-2-7b-hf \ ---llm_path $llm_path \ ---llm_dim 4096 \ 
---encoder_name whisper \ ---encoder_ds_rate 2 \ ---encoder_path $speech_encoder_path \ ---encoder_dim 1280 \ ---encoder_projector linear \ ---encoder_projector_ds_rate 5 \ ---dataset speech_dataset \ ---speech_dataset.train_data_path /nfs/maziyang.mzy/data/librispeech/librispeech_train_960h.jsonl \ ---speech_dataset.val_data_path /nfs/maziyang.mzy/data/librispeech/librispeech_dev_other_filtered.jsonl \ ---batching_strategy custom \ ---num_epochs 100 \ ---batch_size_training 4 \ ---val_batch_size 4 \ ---num_workers_dataloader 4 \ ---lr 1e-4 \ ---output_dir $output_dir \ ---metric acc \ -# --log_file $output_dir/test.log \ -# --use_wandb \ -# --wandb_dir $output_dir \ -# --wandb_entity_name zym22 \ -# --wandb_project_name slam-llm \ -# --wandb_exp_name test \ -# --log_interval 5 \ -# --ckpt_path "/nfs/maziyang.mzy/exps/llama-2-hf-finetune-asr-ds5-proj2048-lr1e-5-whisper-lora-prompt/asr/5/model.pt" \ -# --peft_ckpt "/nfs/maziyang.mzy/exps/llama-2-hf-finetune-asr-ds5-proj2048-lr1e-5-whisper-lora-prompt/asr/5" \ -# --use_peft --peft_method lora \ - -else -torchrun \ ---nnodes 1 \ ---nproc_per_node 4 \ -src/llama_recipes/pipeline/finetune.py \ ---model_name asr \ ---freeze_encoder \ ---freeze_llm \ ---enable_fsdp \ ---use_fp16 \ ---llm_name llama-2-7b-chat-hf \ ---llm_path $llm_path \ ---llm_dim 4096 \ ---encoder_name whisper \ ---encoder_ds_rate 2 \ ---encoder_path $speech_encoder_path \ ---encoder_dim 1280 \ ---encoder_projector linear \ ---encoder_projector_ds_rate 5 \ ---dataset speech_dataset \ ---speech_dataset.train_data_path /nfs/maziyang.mzy/data/librispeech/librispeech_train_960h.jsonl \ ---speech_dataset.val_data_path /nfs/maziyang.mzy/data/librispeech/librispeech_dev_other_filtered.jsonl \ ---batching_strategy custom \ ---num_epochs 100 \ ---batch_size_training 6 \ ---val_batch_size 6 \ ---num_workers_dataloader 4 \ ---lr 1e-4 \ ---output_dir $output_dir \ ---metric acc \ ---log_file /$output_dir/train.log \ ---use_wandb \ ---wandb_dir $output_dir \ ---wandb_entity_name zym22 \ ---wandb_project_name slam-llm \ ---wandb_exp_name test \ ---log_interval 5 \ -# --peft_ckpt "/nfs/maziyang.mzy/exps/llama-2-hf-finetune-asr-ds5-proj2048-lr1e-5-whisper-prompt-padding30-20231228/asr/4" \ -# --ckpt_path "/nfs/maziyang.mzy/exps/llama-2-hf-finetune-asr-ds5-proj2048-lr1e-5-whisper-prompt-padding30-20231228/asr/4/model.pt" \ -# --use_peft --peft_method lora \ -fi - -# {"key": "1001-134707-0000_ASR", "prompt": "", "source": "/cpfs01/shared/Group-speech/beinian.lzr/data/open_data/librispeech_audio/audio/se_librispeech_1001-134707-0000.wav", "target": "1 little recks the laborer. 
How near his work is holding him to God, The loving laborer through space and time, after all, not to create, only or found only.", "target_len": 157, "source_len": 1581, "text-type": "Transcribe", "audio_language": "en", "text_language": "en", "task-type": ""} -# {"key": "1688-142285-0005", "prompt": "", "source": "/nfs/beinian.lzr/workspace/datasets/data/16k/opendata/librispeech/test_other/wav/1688-142285-0005.wav", "target": "YOU WHO WERE ALWAYS ACCUSING PEOPLE OF BEING SHOPPY AT HELSTONE", "target_len": 11, "source_len": 220, "text-type": "Transcribe", "audio_language": "en", "text_language": "en", "task-type": ""} \ No newline at end of file diff --git a/scripts/finetune_asr_tinyllama.sh b/scripts/finetune_asr_tinyllama.sh deleted file mode 100644 index a38a6243..00000000 --- a/scripts/finetune_asr_tinyllama.sh +++ /dev/null @@ -1,100 +0,0 @@ -#!/bin/bash -# export PYTHONPATH=/root/whisper:$PYTHONPATH -export PYTHONPATH=/root/fairseq:$PYTHONPATH -export CUDA_VISIBLE_DEVICES=4,5,6,7 -# export CUDA_LAUNCH_BLOCKING=1 -export OMP_NUM_THREADS=1 - -# debug setting for multiple gpus -# export NCCL_DEBUG=INFO -# export NCCL_DEBUG_SUBSYS=ALL -# export TORCH_DISTRIBUTED_DEBUG=INFO - -cd /root/SLAM-LLM - -speech_encoder_path=/nfs/zhifu.gzf/ckpt/Whisper/large-v2.pt -# speech_encoder_path=/nfs/maziyang.mzy/models/Whisper/large-v2-qwen.pt - -llm_path=/nfs/maziyang.mzy/models/TinyLlama-1.1B-Chat-v0.4 - -output_dir=/nfs/maziyang.mzy/exps/TinyLlama-1.1B-Chat-v0.4-finetune-asr-ds5-proj2048-lr1e-4-finetune-whisper-large-v2-prompt-padding30-20240115 - -# -m debugpy --listen 5678 --wait-for-client -if [[ $CUDA_VISIBLE_DEVICES != *","* ]]; then -python -m debugpy --listen 5678 --wait-for-client src/llama_recipes/pipeline/finetune.py \ ---model_name asr \ ---freeze_encoder \ ---freeze_llm \ ---llm_name vicuna-13b-v1.5 \ ---llm_path $llm_path \ ---llm_dim 5120 \ ---encoder_name whisper \ ---encoder_ds_rate 2 \ ---encoder_path $speech_encoder_path \ ---encoder_dim 1280 \ ---encoder_projector linear \ ---encoder_projector_ds_rate 5 \ ---dataset speech_dataset \ ---speech_dataset.train_data_path /nfs/maziyang.mzy/data/librispeech/librispeech_train_960h.jsonl \ ---speech_dataset.val_data_path /nfs/maziyang.mzy/data/librispeech/librispeech_dev_other_filtered.jsonl \ ---batching_strategy custom \ ---num_epochs 100 \ ---batch_size_training 4 \ ---val_batch_size 4 \ ---num_workers_dataloader 4 \ ---lr 1e-4 \ ---output_dir $output_dir \ ---metric acc \ -# --log_file $output_dir/test.log \ -# --use_wandb \ -# --wandb_dir $output_dir \ -# --wandb_entity_name zym22 \ -# --wandb_project_name slam-llm \ -# --wandb_exp_name test \ -# --log_interval 5 \ -# --ckpt_path "/nfs/maziyang.mzy/exps/llama-2-hf-finetune-asr-ds5-proj2048-lr1e-5-whisper-lora-prompt/asr/5/model.pt" \ -# --peft_ckpt "/nfs/maziyang.mzy/exps/llama-2-hf-finetune-asr-ds5-proj2048-lr1e-5-whisper-lora-prompt/asr/5" \ -# --use_peft --peft_method lora \ - -else -torchrun \ ---nnodes 1 \ ---nproc_per_node 4 \ ---master_port=29501 \ -src/llama_recipes/pipeline/finetune.py \ ---model_name asr \ ---freeze_llm \ ---use_fp16 \ ---enable_fsdp \ ---llm_name tinyllama-1.1b-chat-v0.4 \ ---llm_path $llm_path \ ---llm_dim 2048 \ ---encoder_name whisper \ ---encoder_ds_rate 2 \ ---encoder_path $speech_encoder_path \ ---encoder_dim 1280 \ ---encoder_projector linear \ ---encoder_projector_ds_rate 5 \ ---dataset speech_dataset \ ---speech_dataset.train_data_path /nfs/maziyang.mzy/data/librispeech/librispeech_train_960h.jsonl \ ---speech_dataset.val_data_path 
/nfs/maziyang.mzy/data/librispeech/librispeech_dev_other_filtered.jsonl \ ---batching_strategy custom \ ---num_epochs 100 \ ---batch_size_training 4 \ ---val_batch_size 4 \ ---num_workers_dataloader 4 \ ---lr 1e-4 \ ---output_dir $output_dir \ ---metric acc \ ---log_file /$output_dir/train.log \ ---use_wandb \ ---wandb_dir $output_dir \ ---wandb_entity_name zym22 \ ---wandb_project_name slam-llm \ ---wandb_exp_name test \ ---log_interval 5 \ -# --peft_ckpt "/nfs/maziyang.mzy/exps/llama-2-hf-finetune-asr-ds5-proj2048-lr1e-5-whisper-prompt-padding30-20231228/asr/4" \ -# --ckpt_path "/nfs/maziyang.mzy/exps/llama-2-hf-finetune-asr-ds5-proj2048-lr1e-5-whisper-prompt-padding30-20231228/asr/4/model.pt" \ -# --use_peft --peft_method lora \ -# --freeze_encoder \ -fi \ No newline at end of file diff --git a/scripts/finetune_asr_vicuna.sh b/scripts/finetune_asr_vicuna.sh deleted file mode 100644 index 839ed6fe..00000000 --- a/scripts/finetune_asr_vicuna.sh +++ /dev/null @@ -1,132 +0,0 @@ -#!/bin/bash -# export PYTHONPATH=/root/whisper:$PYTHONPATH -export PYTHONPATH=/root/fairseq:$PYTHONPATH -export CUDA_VISIBLE_DEVICES=0,1 -export TOKENIZERS_PARALLELISM=false -# export CUDA_LAUNCH_BLOCKING=1 -export OMP_NUM_THREADS=1 - -# debug setting for multiple gpus -# export NCCL_DEBUG=INFO -# export NCCL_DEBUG_SUBSYS=ALL -# export TORCH_DISTRIBUTED_DEBUG=INFO - -cd /root/SLAM-LLM - -# speech_encoder_path=/nfs/maziyang.mzy/models/Whisper/tiny.pt -# speech_encoder_path=/nfs/maziyang.mzy/models/Whisper/base.pt -# speech_encoder_path=//nfs/maziyang.mzy/models/Whisper/small.pt -# speech_encoder_path=/nfs/maziyang.mzy/models/Whisper/medium.pt -speech_encoder_path=/nfs/maziyang.mzy/models/Whisper/large-v2.pt -# speech_encoder_path=/nfs/maziyang.mzy/models/Whisper/large-v2-qwen.pt - -# llm_path=/nfs/maziyang.mzy/models/TinyLlama-1.1B-intermediate-step-1431k-3T -# llm_path=/nfs/maziyang.mzy/models/TinyLlama-1.1B-Chat-v0.4 -# llm_path=/nfs/zhifu.gzf/ckpt/Llama-2-7b-hf -# llm_path=/nfs/maziyang.mzy/models/Llama-2-7b-chat-hf -llm_path=/nfs/maziyang.mzy/models/vicuna-7b-v1.5 -# llm_path=/nfs/maziyang.mzy/models/vicuna-13b-v1.5 - -output_dir=/nfs/maziyang.mzy/exps/vicuna-7b-v1.5-finetune-asr-qformer64-steplrwarmupkeep1e-4-whisper-largev2-prompt-padding30-20240125-test - -# -m debugpy --listen 5678 --wait-for-client -if [[ $CUDA_VISIBLE_DEVICES != *","* ]]; then -python -m debugpy --listen 5678 --wait-for-client src/llama_recipes/pipeline/finetune.py \ ---config-path "/root/SLAM-LLM/scripts/conf" \ ---config-name "asr_vicuna_lora.yaml" \ -hydra.run.dir=$output_dir \ -++model_config.llm_name="vicuna-7b-v1.5" \ -++model_config.llm_path=$llm_path \ -++model_config.llm_dim=4096 \ -++model_config.encoder_name=whisper \ -++model_config.encoder_ds_rate=2 \ -++model_config.encoder_path=$speech_encoder_path \ -++model_config.encoder_dim=1280 \ -++model_config.encoder_projector=q-former \ -++dataset_config.fix_length_audio=64 \ -++dataset_config.dataset=speech_dataset \ -++dataset_config.train_data_path=/nfs/maziyang.mzy/data/librispeech/librispeech_train_960h.jsonl \ -++dataset_config.val_data_path=/nfs/maziyang.mzy/data/librispeech/librispeech_dev_other_filtered.jsonl \ -++train_config.model_name=asr \ -++train_config.freeze_encoder=true \ -++train_config.freeze_llm=true \ -++train_config.batching_strategy=custom \ -++train_config.warmup_steps=1000 \ -++train_config.total_steps=100000 \ -++train_config.lr=1e-4 \ -++train_config.validation_interval=1000 \ -++train_config.batch_size_training=4 \ -++train_config.val_batch_size=4 \ 
-++train_config.num_workers_dataloader=4 \ -++train_config.output_dir=$output_dir \ -++metric=acc \ -# ++model_config.encoder_projector=linear \ -# ++model_config.encoder_projector_ds_rate=5 \ -# ++train_config.use_peft=true \ -# ++train_config.peft_config.peft_method=lora \ -#++log_config.log_file=/$output_dir/train.log \ -#++log_config.use_wandb=true \ -#++log_config.wandb_dir=$output_dir \ -#++log_config.wandb_entity_name=zym22 \ -#++log_config.wandb_project_name=slam-llm \ -#++log_config.wandb_exp_name=${0##*/%.*} \ -#++log_config.log_interval 5 \ -# --ckpt_path "/nfs/maziyang.mzy/exps/llama-2-hf-finetune-asr-ds5-proj2048-lr1e-5-whisper-lora-prompt/asr/5/model.pt" \ -# --peft_ckpt "/nfs/maziyang.mzy/exps/llama-2-hf-finetune-asr-ds5-proj2048-lr1e-5-whisper-lora-prompt/asr/5" \ - - -else -torchrun \ ---nnodes 1 \ ---nproc_per_node 2 \ -src/llama_recipes/pipeline/finetune.py \ ---config-path "/root/SLAM-LLM/scripts/conf" \ ---config-name "asr_vicuna_lora.yaml" \ -hydra.run.dir=$output_dir \ -++model_config.llm_name="vicuna-7b-v1.5" \ -++model_config.llm_path=$llm_path \ -++model_config.llm_dim=4096 \ -++model_config.encoder_name=whisper \ -++model_config.encoder_ds_rate=2 \ -++model_config.encoder_path=$speech_encoder_path \ -++model_config.encoder_dim=1280 \ -++model_config.encoder_projector=q-former \ -++dataset_config.fix_length_audio=64 \ -++dataset_config.dataset=speech_dataset \ -++dataset_config.train_data_path=/nfs/maziyang.mzy/data/librispeech/librispeech_train_960h.jsonl \ -++dataset_config.val_data_path=/nfs/maziyang.mzy/data/librispeech/librispeech_dev_other_filtered.jsonl \ -++dataset_config.input_type=raw \ -++train_config.model_name=asr \ -++train_config.freeze_encoder=true \ -++train_config.freeze_llm=true \ -++train_config.batching_strategy=custom \ -++train_config.warmup_steps=1000 \ -++train_config.total_steps=100000 \ -++train_config.lr=1e-4 \ -++train_config.validation_interval=1000 \ -++train_config.batch_size_training=4 \ -++train_config.val_batch_size=4 \ -++train_config.num_workers_dataloader=4 \ -++train_config.output_dir=$output_dir \ -++train_config.enable_fsdp=false \ -++train_config.enable_ddp=true \ -++train_config.use_fp16=true \ -++metric=acc \ -# ++log_config.log_file=/$output_dir/train.log \ -# ++log_config.use_wandb=true \ -# ++log_config.wandb_dir=$output_dir \ -# ++log_config.wandb_entity_name=zym22 \ -# ++log_config.wandb_project_name=slam-llm \ -# ++log_config.wandb_exp_name=${0##*/%.*} \ -# ++log_config.log_interval=5 \ -# ++model_config.encoder_projector=linear \ -# ++model_config.encoder_projector_ds_rate=5 \ -# ++train_config.use_peft=true \ -# ++train_config.peft_config.peft_method=lora \ -# --peft_ckpt "/nfs/maziyang.mzy/exps/llama-2-hf-finetune-asr-ds5-proj2048-lr1e-5-whisper-prompt-padding30-20231228/asr/4" \ -# --ckpt_path "/nfs/maziyang.mzy/exps/llama-2-hf-finetune-asr-ds5-proj2048-lr1e-5-whisper-prompt-padding30-20231228/asr/4/model.pt" \ -# --master_port=29501 \ -fi - -# {"key": "1001-134707-0000_ASR", "prompt": "", "source": "/cpfs01/shared/Group-speech/beinian.lzr/data/open_data/librispeech_audio/audio/se_librispeech_1001-134707-0000.wav", "target": "1 little recks the laborer. 
How near his work is holding him to God, The loving laborer through space and time, after all, not to create, only or found only.", "target_len": 157, "source_len": 1581, "text-type": "Transcribe", "audio_language": "en", "text_language": "en", "task-type": ""} -# {"key": "1688-142285-0005", "prompt": "", "source": "/nfs/beinian.lzr/workspace/datasets/data/16k/opendata/librispeech/test_other/wav/1688-142285-0005.wav", "target": "YOU WHO WERE ALWAYS ACCUSING PEOPLE OF BEING SHOPPY AT HELSTONE", "target_len": 11, "source_len": 220, "text-type": "Transcribe", "audio_language": "en", "text_language": "en", "task-type": ""} \ No newline at end of file diff --git a/scripts/finetune_avsr.sh b/scripts/finetune_avsr.sh deleted file mode 100644 index 278fe777..00000000 --- a/scripts/finetune_avsr.sh +++ /dev/null @@ -1,107 +0,0 @@ -#!/bin/bash -# export PYTHONPATH=/root/whisper:$PYTHONPATH -export PYTHONPATH=/root/fairseq:$PYTHONPATH -export CUDA_VISIBLE_DEVICES=0,1,2,3 -# export CUDA_LAUNCH_BLOCKING=1 -export OMP_NUM_THREADS=1 - -# debug setting for multiple gpus -# export NCCL_DEBUG=INFO -# export NCCL_DEBUG_SUBSYS=ALL -# export TORCH_DISTRIBUTED_DEBUG=INFO - -cd /root/SLAM-LLM - -# speech_encoder_path= TODO! - - -llm_path=/nfs/maziyang.mzy/models/vicuna-7b-v1.5 -# llm_path=/nfs/maziyang.mzy/models/vicuna-13b-v1.5 - -output_dir=/nfs/yangguanrou.ygr/vicuna-13b-v1.5-finetune-avsr-20230115 - -# -m debugpy --listen 5678 --wait-for-client -if [[ $CUDA_VISIBLE_DEVICES != *","* ]]; then -python src/llama_recipes/pipeline/finetune.py \ ---model_name avsr \ ---freeze_encoder \ ---freeze_llm \ ---llm_name vicuna-13b-v1.5 \ ---llm_path $llm_path \ ---llm_dim 4096 \ ---encoder_name moco_wav2vec2 \ ---encoder_ds_rate 2 \ ---encoder_dim 512 \ ---encoder_projector linear \ ---encoder_projector_ds_rate 5 \ ---dataset avsr_dataset \ ---avsr_dataset.file src/llama_recipes/datasets/avsr_dataset.py:get_audio_dataset \ ---batching_strategy custom \ ---num_epochs 20 \ ---batch_size_training 6 \ ---val_batch_size 2 \ ---num_workers_dataloader 2 \ ---lr 1e-4 \ ---output_dir $output_dir \ ---metric acc \ ---log_file "/root/SLAM-LLM/log/second_try.log" \ ---use_wandb \ ---wandb_dir $output_dir \ ---wandb_entity_name yanghaha \ ---wandb_project_name slam-llm \ ---wandb_exp_name avsr \ ---log_interval 5 \ - -else -torchrun \ ---nnodes 1 \ ---nproc_per_node 4 \ -src/llama_recipes/pipeline/finetune.py \ ---model_name avsr \ ---freeze_encoder \ ---freeze_llm \ ---use_fp16 \ ---enable_fsdp \ ---llm_name vicuna-13b-v1.5 \ ---llm_path $llm_path \ ---llm_dim 4096 \ ---encoder_name moco_wav2vec2 \ ---encoder_ds_rate 2 \ ---encoder_dim 512 \ ---encoder_projector linear \ ---encoder_projector_ds_rate 5 \ ---dataset avsr_dataset \ ---avsr_dataset.file src/llama_recipes/datasets/avsr_dataset.py:get_audio_dataset \ ---batching_strategy custom \ ---num_epochs 20 \ ---batch_size_training 2 \ ---val_batch_size 2 \ ---num_workers_dataloader 2 \ ---lr 1e-4 \ ---output_dir $output_dir \ ---metric acc \ ---log_file "/root/SLAM-LLM/log/second_try.log" \ ---use_wandb \ ---wandb_dir $output_dir \ ---wandb_entity_name yanghaha \ ---wandb_project_name slam-llm \ ---wandb_exp_name avsr \ ---log_interval 5 \ -# --peft_ckpt "/nfs/maziyang.mzy/exps/llama-2-hf-finetune-asr-ds5-proj2048-lr1e-5-whisper-prompt-padding30-20231228/asr/4" \ -# --ckpt_path "/nfs/maziyang.mzy/exps/llama-2-hf-finetune-asr-ds5-proj2048-lr1e-5-whisper-prompt-padding30-20231228/asr/4/model.pt" \ -# --use_peft --peft_method lora \ -# --master_port=29501 \ -fi - -# {"key": 
"1001-134707-0000_ASR", "prompt": "", "source": "/cpfs01/shared/Group-speech/beinian.lzr/data/open_data/librispeech_audio/audio/se_librispeech_1001-134707-0000.wav", "target": "1 little recks the laborer. How near his work is holding him to God, The loving laborer through space and time, after all, not to create, only or found only.", "target_len": 157, "source_len": 1581, "text-type": "Transcribe", "audio_language": "en", "text_language": "en", "task-type": ""} -# {"key": "1688-142285-0005", "prompt": "", "source": "/nfs/beinian.lzr/workspace/datasets/data/16k/opendata/librispeech/test_other/wav/1688-142285-0005.wav", "target": "YOU WHO WERE ALWAYS ACCUSING PEOPLE OF BEING SHOPPY AT HELSTONE", "target_len": 11, "source_len": 220, "text-type": "Transcribe", "audio_language": "en", "text_language": "en", "task-type": ""} - - - -# 没用 encoder_ds_rate - -# 1.15 - -# 7b batch size 开到2 ok的 - -# 6 2 0 可以 \ No newline at end of file diff --git a/scripts/finetune_avsr_debug.sh b/scripts/finetune_avsr_debug.sh deleted file mode 100644 index ca9f8780..00000000 --- a/scripts/finetune_avsr_debug.sh +++ /dev/null @@ -1,104 +0,0 @@ -#!/bin/bash -# export PYTHONPATH=/root/whisper:$PYTHONPATH -export PYTHONPATH=/root/fairseq:$PYTHONPATH -export CUDA_VISIBLE_DEVICES=0 -# export CUDA_LAUNCH_BLOCKING=1 -export OMP_NUM_THREADS=1 - -# debug setting for multiple gpus -# export NCCL_DEBUG=INFO -# export NCCL_DEBUG_SUBSYS=ALL -# export TORCH_DISTRIBUTED_DEBUG=INFO - -cd /root/SLAM-LLM - -speech_encoder_path=/nfs/zhifu.gzf/ckpt/Whisper/large-v2.pt -# speech_encoder_path=/nfs/maziyang.mzy/models/Whisper/large-v2-qwen.pt - -llm_path=/nfs/maziyang.mzy/models/vicuna-7b-v1.5 -# llm_path=/nfs/maziyang.mzy/models/vicuna-13b-v1.5 - -output_dir=/nfs/maziyang.mzy/exps/vicuna-7b-v1.5-finetune-asr-ds5-proj2048-lr1e-4-whisper-prompt-paddingr-20240112 - -# -m debugpy --listen 5678 --wait-for-client -if [[ $CUDA_VISIBLE_DEVICES != *","* ]]; then -python -m debugpy --listen 5678 --wait-for-client src/llama_recipes/pipeline/finetune.py \ ---model_name asr \ ---freeze_encoder \ ---freeze_llm \ ---llm_name vicuna-13b-v1.5 \ ---llm_path $llm_path \ ---llm_dim 4096 \ ---encoder_name whisper \ ---encoder_ds_rate 2 \ ---encoder_path $speech_encoder_path \ ---encoder_dim 1280 \ ---encoder_projector linear \ ---encoder_projector_ds_rate 5 \ ---dataset speech_dataset \ ---speech_dataset.train_data_path /nfs/maziyang.mzy/data/librispeech/librispeech_train_960h.jsonl \ ---speech_dataset.val_data_path /nfs/maziyang.mzy/data/librispeech/librispeech_dev_other_filtered.jsonl \ ---batching_strategy custom \ ---num_epochs 100 \ ---batch_size_training 4 \ ---val_batch_size 4 \ ---num_workers_dataloader 4 \ ---lr 1e-4 \ ---output_dir $output_dir \ ---metric acc \ -# --log_file $output_dir/test.log \ -# --use_wandb \ -# --wandb_dir $output_dir \ -# --wandb_entity_name zym22 \ -# --wandb_project_name slam-llm \ -# --wandb_exp_name test \ -# --log_interval 5 \ -# --ckpt_path "/nfs/maziyang.mzy/exps/llama-2-hf-finetune-asr-ds5-proj2048-lr1e-5-whisper-lora-prompt/asr/5/model.pt" \ -# --peft_ckpt "/nfs/maziyang.mzy/exps/llama-2-hf-finetune-asr-ds5-proj2048-lr1e-5-whisper-lora-prompt/asr/5" \ -# --use_peft --peft_method lora \ - -else -torchrun \ ---nnodes 1 \ ---nproc_per_node 4 \ -src/llama_recipes/pipeline/finetune.py \ ---model_name asr \ ---freeze_encoder \ ---freeze_llm \ ---use_fp16 \ ---enable_fsdp \ ---llm_name vicuna-7b-v1.5 \ ---llm_path $llm_path \ ---llm_dim 4096 \ ---encoder_name whisper \ ---encoder_ds_rate 2 \ ---encoder_path 
$speech_encoder_path \ ---encoder_dim 1280 \ ---encoder_projector linear \ ---encoder_projector_ds_rate 5 \ ---dataset speech_dataset \ ---speech_dataset.train_data_path /nfs/maziyang.mzy/data/librispeech/librispeech_train_960h.jsonl \ ---speech_dataset.val_data_path /nfs/maziyang.mzy/data/librispeech/librispeech_dev_other_filtered.jsonl \ ---batching_strategy custom \ ---num_epochs 100 \ ---batch_size_training 6 \ ---val_batch_size 6 \ ---num_workers_dataloader 4 \ ---lr 1e-4 \ ---output_dir $output_dir \ ---metric acc \ ---log_file /$output_dir/train.log \ ---use_wandb \ ---wandb_dir $output_dir \ ---wandb_entity_name zym22 \ ---wandb_project_name slam-llm \ ---wandb_exp_name test \ ---log_interval 5 \ -# --peft_ckpt "/nfs/maziyang.mzy/exps/llama-2-hf-finetune-asr-ds5-proj2048-lr1e-5-whisper-prompt-padding30-20231228/asr/4" \ -# --ckpt_path "/nfs/maziyang.mzy/exps/llama-2-hf-finetune-asr-ds5-proj2048-lr1e-5-whisper-prompt-padding30-20231228/asr/4/model.pt" \ -# --use_peft --peft_method lora \ -# --master_port=29501 \ -fi - -# {"key": "1001-134707-0000_ASR", "prompt": "", "source": "/cpfs01/shared/Group-speech/beinian.lzr/data/open_data/librispeech_audio/audio/se_librispeech_1001-134707-0000.wav", "target": "1 little recks the laborer. How near his work is holding him to God, The loving laborer through space and time, after all, not to create, only or found only.", "target_len": 157, "source_len": 1581, "text-type": "Transcribe", "audio_language": "en", "text_language": "en", "task-type": ""} -# {"key": "1688-142285-0005", "prompt": "", "source": "/nfs/beinian.lzr/workspace/datasets/data/16k/opendata/librispeech/test_other/wav/1688-142285-0005.wav", "target": "YOU WHO WERE ALWAYS ACCUSING PEOPLE OF BEING SHOPPY AT HELSTONE", "target_len": 11, "source_len": 220, "text-type": "Transcribe", "audio_language": "en", "text_language": "en", "task-type": ""} \ No newline at end of file diff --git a/scripts/finetune_avsr_vicuna_debug_0113.sh b/scripts/finetune_avsr_vicuna_debug_0113.sh deleted file mode 100644 index ab135502..00000000 --- a/scripts/finetune_avsr_vicuna_debug_0113.sh +++ /dev/null @@ -1,53 +0,0 @@ -#!/bin/bash -# export PYTHONPATH=/root/whisper:$PYTHONPATH -export PYTHONPATH=/root/fairseq:$PYTHONPATH -export CUDA_VISIBLE_DEVICES=1 -# export CUDA_LAUNCH_BLOCKING=1 -export OMP_NUM_THREADS=1 - -# debug setting for multiple gpus -# export NCCL_DEBUG=INFO -# export NCCL_DEBUG_SUBSYS=ALL -# export TORCH_DISTRIBUTED_DEBUG=INFO - -cd /root/SLAM-LLM - -# speech_encoder_path= TODO! 
- - -llm_path=/nfs/maziyang.mzy/models/vicuna-7b-v1.5 -# llm_path=/nfs/maziyang.mzy/models/vicuna-13b-v1.5 - -output_dir=/nfs/yangguanrou.ygr/vicuna-7b-v1.5-finetune-avsr - -# -m debugpy --listen 5678 --wait-for-client -if [[ $CUDA_VISIBLE_DEVICES != *","* ]]; then -python -m debugpy --listen 5679 --wait-for-client src/llama_recipes/pipeline/finetune.py \ ---model_name avsr \ ---freeze_encoder \ ---freeze_llm \ ---llm_name vicuna-13b-v1.5 \ ---llm_path $llm_path \ ---llm_dim 4096 \ ---encoder_name moco_wav2vec2 \ ---encoder_ds_rate 2 \ ---encoder_dim 512 \ ---encoder_projector linear \ ---encoder_projector_ds_rate 5 \ ---dataset avsr_dataset \ ---avsr_dataset.file src/llama_recipes/datasets/avsr_dataset.py:get_audio_dataset \ ---batching_strategy custom \ ---num_epochs 1 \ ---batch_size_training 2 \ ---num_workers_dataloader 2 \ ---lr 1e-4 \ ---output_dir $output_dir \ ---metric acc \ ---log_file "/root/SLAM-LLM/log/first_try.log" \ - - -# --avsr_dataset.file src/llama_recipes/datasets/avsr_dataset.py:get_audio_dataset \ - - -# --encoder_path $speech_encoder_path \ #TODO! -# --encoder_dim 1280 \ #TODO! \ No newline at end of file diff --git a/scripts/finetune_echat.sh b/scripts/finetune_echat.sh deleted file mode 100644 index 866df6b2..00000000 --- a/scripts/finetune_echat.sh +++ /dev/null @@ -1,102 +0,0 @@ -#!/bin/bash -#export PYTHONPATH=/root/whisper:$PYTHONPATH -export CUDA_VISIBLE_DEVICES=0,1,2,3 -export CUDA_LAUNCH_BLOCKING=1 -# export OMP_NUM_THREADS=1 -# export PYTORCH_CUDA_ALLOC_CONF=max_split_size_mb:128 - -# debug setting for multiple gpus -# export NCCL_DEBUG=INFO -# export NCCL_DEBUG_SUBSYS=ALL -# export TORCH_DISTRIBUTED_DEBUG=INFO - -cd /root/SLAM-LLM - -speech_encoder_path=/nfs/zhifu.gzf/ckpt/Whisper/large-v2.pt -# speech_encoder_path=/nfs/maziyang.mzy/models/Whisper/large-v2-qwen.pt -llm_path=/nfs/zhifu.gzf/ckpt/Llama-2-7b-hf -output_dir=/nfs/maziyang.mzy/exps/llama-2-hf-finetune-echat-ds5-proj2048-debug - -# -m debugpy --listen 5678 --wait-for-client -if [[ $CUDA_VISIBLE_DEVICES != *","* ]]; then -python -m debugpy --listen 5678 --wait-for-client src/llama_recipes/pipeline/finetune.py \ ---model_name echat \ ---freeze_encoder \ ---freeze_llm \ ---use_fp16 \ ---llm_name llama-2-7b-hf \ ---llm_path $llm_path \ ---encoder_name whisper \ ---encoder_ds_rate 2 \ ---encoder_path $speech_encoder_path \ ---encoder_projector linear \ ---encoder_projector_ds_rate 5 \ ---dataset custom_dataset \ ---custom_dataset.file src/llama_recipes/datasets/echat_dataset.py:get_audio_dataset \ ---custom_dataset.data_path /nfs/zhifu.gzf/data/IEMOCAP_full_release/datalist.jsonl \ ---batching_strategy custom \ ---custom_dataset.max_words 1024 \ ---num_epochs 100 \ ---batch_size_training 2 \ ---val_batch_size 2 \ ---output_dir $output_dir \ ---run_test_during_validation \ ---run_test_during_validation_file /nfs/zhifu.gzf/data/IEMOCAP_full_release/Session5/sentences/wav/Ses05M_impro04/Ses05M_impro04_M040.wav \ -# --ckpt_path "/nfs/maziyang.mzy/models/llama-2-hf-finetune/echat/1/model.pt" \ -# --peft_ckpt "/nfs/maziyang.mzy/models/llama-2-hf-finetune/echat/1" -# --use_peft --peft_method lora \ - -# train -# {"trans": "Well, do you have your passport?\n", -# "emotion": "xxx", -# "wav": "/nfs/zhifu.gzf/data/IEMOCAP_full_release/Session1/sentences/wav/Ses01M_impro01/Ses01M_impro01_F009.wav"} -# {"trans": "No, I don't have a passport.\n", -# "emotion": "neu", -# "wav": "/nfs/zhifu.gzf/data/IEMOCAP_full_release/Session1/sentences/wav/Ses01M_impro01/Ses01M_impro01_M010.wav"} - -# val -# {"trans": "Yeah, well 
thanks for your help.\n", -# "emotion": "ang", -# "wav": "/nfs/zhifu.gzf/data/IEMOCAP_full_release/Session5/sentences/wav/Ses05M_impro04/Ses05M_impro04_M040.wav"} -# {"trans": "I'm sorry. Good luck, man.\n", -# "emotion": "xxx", -# "wav": "/nfs/zhifu.gzf/data/IEMOCAP_full_release/Session5/sentences/wav/Ses05M_impro04/Ses05M_impro04_F038.wav"} - -else -torchrun \ ---nnodes 1 \ ---nproc_per_node 4 \ -src/llama_recipes/pipeline/finetune.py \ ---model_name echat \ ---freeze_encoder \ ---use_fp16 \ ---use_peft --peft_method lora \ ---enable_fsdp \ ---llm_name llama-2-7b-hf \ ---llm_path $llm_path \ ---encoder_name whisper \ ---encoder_ds_rate 2 \ ---encoder_path $speech_encoder_path \ ---encoder_projector linear \ ---encoder_projector_ds_rate 5 \ ---dataset custom_dataset \ ---custom_dataset.file src/llama_recipes/datasets/echat_dataset.py:get_audio_dataset \ ---custom_dataset.data_path /nfs/zhifu.gzf/data/IEMOCAP_full_release/datalist.jsonl \ ---batching_strategy custom \ ---num_epochs 100 \ ---batch_size_training 8 \ ---val_batch_size 8 \ ---output_dir $output_dir \ ---run_test_during_validation \ ---run_test_during_validation_file /nfs/zhifu.gzf/data/IEMOCAP_full_release/Session1/sentences/wav/Ses01M_impro01/Ses01M_impro01_F009.wav \ ---run_test_during_validation_prompt """ -Please provide an emotional response based on the emotional speech you hear. -Remember to format your answer as follows: <|EMOTION|><|REPLY|>. -<|EMOTION|> is a standalone adjective. -<|REPLY|> is a reply based on the speech. -""" \ ---metric acc \ -# --ckpt_path "/nfs/maziyang.mzy/models/llama-2-hf-finetune/echat/1/model.pt" \ -# --peft_ckpt "/nfs/maziyang.mzy/models/llama-2-hf-finetune/echat/1" -# --freeze_llm \ -fi diff --git a/scripts/finetune_mls_aya.sh deleted file mode 100755 index 8dc81e80..00000000 --- a/scripts/finetune_mls_aya.sh +++ /dev/null @@ -1,113 +0,0 @@ -#!/bin/bash -# export PYTHONPATH=/root/whisper:$PYTHONPATH -# export PYTHONPATH=/root/fairseq:$PYTHONPATH -export CUDA_VISIBLE_DEVICES=0,1 -export TOKENIZERS_PARALLELISM=false -# export CUDA_LAUNCH_BLOCKING=1 -export OMP_NUM_THREADS=1 - -# debug setting for multiple gpus -# export NCCL_DEBUG=INFO -# export NCCL_DEBUG_SUBSYS=ALL -# export TORCH_DISTRIBUTED_DEBUG=INFO - -code_dir=/work/SLAM-LLM -cd $code_dir - -# speech_encoder_path=/nfs/maziyang.mzy/models/Whisper/tiny.pt -# speech_encoder_path=/nfs/maziyang.mzy/models/Whisper/base.pt -# speech_encoder_path=//nfs/maziyang.mzy/models/Whisper/small.pt -# speech_encoder_path=/nfs/maziyang.mzy/models/Whisper/medium.pt -speech_encoder_path=/host/model_ckpt/whisper/large-v3.pt -# speech_encoder_path=/nfs/maziyang.mzy/models/Whisper/large-v2-qwen.pt - -# llm_path=/nfs/maziyang.mzy/models/TinyLlama-1.1B-intermediate-step-1431k-3T -# llm_path=/nfs/maziyang.mzy/models/TinyLlama-1.1B-Chat-v0.4 -# llm_path=/nfs/zhifu.gzf/ckpt/Llama-2-7b-hf -# llm_path=/nfs/maziyang.mzy/models/Llama-2-7b-chat-hf -llm_path=/host/model_ckpt/CohereForAI/aya-101 -# llm_path=/nfs/maziyang.mzy/models/vicuna-13b-v1.5 - -output_dir=/work/exps/aya-finetune-asr-linear-steplrwarmupkeep1e-4-whisper-largev3-$(date +"%Y%m%d")-test - -hydra_args=" -hydra.run.dir=$output_dir \ -++model_config.llm_name="aya-101" \ -++model_config.llm_path=$llm_path \ -++model_config.llm_dim=4096 \ -++model_config.encoder_name=whisper \ -++model_config.encoder_ds_rate=2 \ -++model_config.encoder_path=$speech_encoder_path \ -++model_config.encoder_dim=1280 \ -++model_config.encoder_projector=linear \ 
-++dataset_config.fix_length_audio=64 \ -++dataset_config.dataset=speech_dataset \ -++dataset_config.train_data_path=data/mls/polish_train.jsonl \ -++dataset_config.val_data_path=data/mls/polish_dev.jsonl \ -++dataset_config.mel_size=128 \ -++train_config.model_name=asr \ -++train_config.freeze_encoder=true \ -++train_config.freeze_llm=true \ -++train_config.batching_strategy=custom \ -++train_config.warmup_steps=1000 \ -++train_config.total_steps=100000 \ -++train_config.lr=1e-4 \ -++train_config.validation_interval=1000 \ -++train_config.output_dir=$output_dir \ -++train_config.batch_size_training=4 \ -++train_config.val_batch_size=4 \ -++train_config.num_workers_dataloader=4 \ -++metric=acc \ -" -# ++log_config.log_file=/$output_dir/train.log \ -# ++log_config.use_wandb=true \ -# ++log_config.wandb_dir=$output_dir \ -# ++log_config.wandb_entity_name=zym22 \ -# ++log_config.wandb_project_name=slam-llm \ -# ++log_config.wandb_exp_name=${0##*/%.*} \ -# ++log_config.log_interval=5 \ -# ++model_config.encoder_projector=linear \ -# ++model_config.encoder_projector_ds_rate=5 \ -# ++train_config.use_peft=true \ -# ++train_config.peft_config.peft_method=lora \ -# --peft_ckpt "/nfs/maziyang.mzy/exps/llama-2-hf-finetune-asr-ds5-proj2048-lr1e-5-whisper-prompt-padding30-20231228/asr/4" \ -# --ckpt_path "/nfs/maziyang.mzy/exps/llama-2-hf-finetune-asr-ds5-proj2048-lr1e-5-whisper-prompt-padding30-20231228/asr/4/model.pt" \ -# --master_port=29501 \ -# -m debugpy --listen 5678 --wait-for-client -if [[ $CUDA_VISIBLE_DEVICES != *","* ]]; then -python -m debugpy --listen 5678 --wait-for-client src/llama_recipes/pipeline/finetune.py \ ---config-path "${code_dir}/scripts/conf" \ ---config-name "asr_vicuna_lora.yaml" \ -$hydra_args -# ++model_config.encoder_projector=linear \ -# ++model_config.encoder_projector_ds_rate=5 \ -# ++train_config.use_peft=true \ -# ++train_config.peft_config.peft_method=lora \ -#++log_config.log_file=/$output_dir/train.log \ -#++log_config.use_wandb=true \ -#++log_config.wandb_dir=$output_dir \ -#++log_config.wandb_entity_name=zym22 \ -#++log_config.wandb_project_name=slam-llm \ -#++log_config.wandb_exp_name=${0##*/%.*} \ -#++log_config.log_interval 5 \ -# --ckpt_path "/nfs/maziyang.mzy/exps/llama-2-hf-finetune-asr-ds5-proj2048-lr1e-5-whisper-lora-prompt/asr/5/model.pt" \ -# --peft_ckpt "/nfs/maziyang.mzy/exps/llama-2-hf-finetune-asr-ds5-proj2048-lr1e-5-whisper-lora-prompt/asr/5" \ - - -else -torchrun \ ---nnodes 1 \ ---nproc_per_node 2 \ -src/llama_recipes/pipeline/finetune.py \ ---config-path "${code_dir}/scripts/conf" \ ---config-name "asr_vicuna_lora.yaml" \ -++train_config.enable_fsdp=false \ -++train_config.enable_ddp=true \ -++train_config.use_fp16=true \ -$hydra_args -fi - -# {"key": "1001-134707-0000_ASR", "prompt": "", "source": "/cpfs01/shared/Group-speech/beinian.lzr/data/open_data/librispeech_audio/audio/se_librispeech_1001-134707-0000.wav", "target": "1 little recks the laborer. 
How near his work is holding him to God, The loving laborer through space and time, after all, not to create, only or found only.", "target_len": 157, "source_len": 1581, "text-type": "Transcribe", "audio_language": "en", "text_language": "en", "task-type": ""} -# {"key": "1688-142285-0005", "prompt": "", "source": "/nfs/beinian.lzr/workspace/datasets/data/16k/opendata/librispeech/test_other/wav/1688-142285-0005.wav", "target": "YOU WHO WERE ALWAYS ACCUSING PEOPLE OF BEING SHOPPY AT HELSTONE", "target_len": 11, "source_len": 220, "text-type": "Transcribe", "audio_language": "en", "text_language": "en", "task-type": ""} diff --git a/scripts/finetune_mls_llama.sh b/scripts/finetune_mls_llama.sh deleted file mode 100755 index db6a77ac..00000000 --- a/scripts/finetune_mls_llama.sh +++ /dev/null @@ -1,90 +0,0 @@ -#!/bin/bash -# export PYTHONPATH=/root/whisper:$PYTHONPATH -export PYTHONPATH=/root/fairseq:$PYTHONPATH -export CUDA_VISIBLE_DEVICES=2,3 -export TOKENIZERS_PARALLELISM=false -# export CUDA_LAUNCH_BLOCKING=1 -export OMP_NUM_THREADS=1 - -# debug setting for multiple gpus -# export NCCL_DEBUG=INFO -# export NCCL_DEBUG_SUBSYS=ALL -# export TORCH_DISTRIBUTED_DEBUG=INFO - -code_dir=/work/SLAM-LLM -cd $code_dir - -speech_encoder_path=/cxgroup/model/whisper/large-v3.pt - -llm_path=/cxgroup/model/Llama-2-7b-chat-hf -# llm_path=/nfs/maziyang.mzy/models/vicuna-13b-v1.5 - -output_dir=/work/exps/Llama-2-7b-chat-finetune-asr-linear-lora-32-steplrwarmupkeep1e-4-whisper-largev3-$(date +"%Y%m%d")-test - -hydra_args=" -hydra.run.dir=$output_dir \ -++model_config.llm_name="llama-2-7b-chat-hf" \ -++model_config.llm_path=$llm_path \ -++model_config.llm_dim=4096 \ -++model_config.encoder_name=whisper \ -++model_config.encoder_ds_rate=2 \ -++model_config.encoder_path=$speech_encoder_path \ -++model_config.encoder_dim=1280 \ -++model_config.encoder_projector=linear \ -++dataset_config.dataset=speech_dataset \ -++dataset_config.train_data_path=data/mls/polish_train.jsonl \ -++dataset_config.val_data_path=data/mls/polish_dev.jsonl \ -++dataset_config.input_type=mel \ -++dataset_config.mel_size=128 \ -++train_config.use_peft=true \ -++train_config.peft_config.r=32 \ -++train_config.model_name=asr \ -++train_config.num_epochs=12 \ -++train_config.freeze_encoder=true \ -++train_config.freeze_llm=false \ -++train_config.batching_strategy=custom \ -++train_config.warmup_steps=1000 \ -++train_config.total_steps=100000 \ -++train_config.lr=1e-4 \ -++train_config.validation_interval=1000 \ -++train_config.batch_size_training=4 \ -++train_config.val_batch_size=4 \ -++train_config.num_workers_dataloader=4 \ -++train_config.output_dir=$output_dir \ -++metric=acc \ -" -# ++model_config.encoder_projector=linear \ -# ++model_config.encoder_projector_ds_rate=5 \ -# ++train_config.peft_config.peft_method=lora \ -# --peft_ckpt "/nfs/maziyang.mzy/exps/llama-2-hf-finetune-asr-ds5-proj2048-lr1e-5-whisper-prompt-padding30-20231228/asr/4" \ -# --ckpt_path "/nfs/maziyang.mzy/exps/llama-2-hf-finetune-asr-ds5-proj2048-lr1e-5-whisper-prompt-padding30-20231228/asr/4/model.pt" \ -#++log_config.log_file=/$output_dir/train.log \ -#++log_config.use_wandb=true \ -#++log_config.wandb_dir=$output_dir \ -#++log_config.wandb_entity_name=zym22 \ -#++log_config.wandb_project_name=slam-llm \ -#++log_config.wandb_exp_name=${0##*/%.*} \ -#++log_config.log_interval 5 \ - -# -m debugpy --listen 5678 --wait-for-client -if [[ $CUDA_VISIBLE_DEVICES != *","* ]]; then - python -m debugpy --listen 5678 --wait-for-client src/llama_recipes/pipeline/finetune.py 
\ - --config-path "/root/SLAM-LLM/scripts/conf" \ - --config-name "asr_vicuna_lora.yaml" \ - $hydra_args -else - torchrun \ - --nnodes 1 \ - --nproc_per_node 2 \ - --master_port=29501 \ - src/llama_recipes/pipeline/finetune.py \ - --config-path "${code_dir}/scripts/conf" \ - --config-name "asr_vicuna_lora.yaml" \ - ++train_config.enable_fsdp=false \ - ++train_config.enable_ddp=true \ - ++train_config.use_fp16=false \ - $hydra_args -fi - -# {"key": "1001-134707-0000_ASR", "prompt": "", "source": "/cpfs01/shared/Group-speech/beinian.lzr/data/open_data/librispeech_audio/audio/se_librispeech_1001-134707-0000.wav", "target": "1 little recks the laborer. How near his work is holding him to God, The loving laborer through space and time, after all, not to create, only or found only.", "target_len": 157, "source_len": 1581, "text-type": "Transcribe", "audio_language": "en", "text_language": "en", "task-type": ""} -# {"key": "1688-142285-0005", "prompt": "", "source": "/nfs/beinian.lzr/workspace/datasets/data/16k/opendata/librispeech/test_other/wav/1688-142285-0005.wav", "target": "YOU WHO WERE ALWAYS ACCUSING PEOPLE OF BEING SHOPPY AT HELSTONE", "target_len": 11, "source_len": 220, "text-type": "Transcribe", "audio_language": "en", "text_language": "en", "task-type": ""} diff --git a/scripts/finetune_mls_vicuna.sh b/scripts/finetune_mls_vicuna.sh deleted file mode 100755 index 24c81acc..00000000 --- a/scripts/finetune_mls_vicuna.sh +++ /dev/null @@ -1,90 +0,0 @@ -#!/bin/bash -# export PYTHONPATH=/root/whisper:$PYTHONPATH -export PYTHONPATH=/root/fairseq:$PYTHONPATH -export CUDA_VISIBLE_DEVICES=2,3 -export TOKENIZERS_PARALLELISM=false -# export CUDA_LAUNCH_BLOCKING=1 -export OMP_NUM_THREADS=1 - -# debug setting for multiple gpus -# export NCCL_DEBUG=INFO -# export NCCL_DEBUG_SUBSYS=ALL -# export TORCH_DISTRIBUTED_DEBUG=INFO - -code_dir=/work/SLAM-LLM -cd $code_dir - -speech_encoder_path=/cxgroup/model/whisper/large-v3.pt - -llm_path=/cxgroup/model/vicuna-7b-v1.5 -# llm_path=/nfs/maziyang.mzy/models/vicuna-13b-v1.5 - -output_dir=/work/exps/vicuna-7b-v1.5-finetune-asr-linear-lora-32-steplrwarmupkeep1e-4-whisper-largev3-$(date +"%Y%m%d")-test - -hydra_args=" -hydra.run.dir=$output_dir \ -++model_config.llm_name="vicuna-7b-v1.5" \ -++model_config.llm_path=$llm_path \ -++model_config.llm_dim=4096 \ -++model_config.encoder_name=whisper \ -++model_config.encoder_ds_rate=2 \ -++model_config.encoder_path=$speech_encoder_path \ -++model_config.encoder_dim=1280 \ -++model_config.encoder_projector=linear \ -++dataset_config.dataset=speech_dataset \ -++dataset_config.train_data_path=data/mls/polish_train.jsonl \ -++dataset_config.val_data_path=data/mls/polish_dev.jsonl \ -++dataset_config.input_type=mel \ -++dataset_config.mel_size=128 \ -++train_config.use_peft=true \ -++train_config.peft_config.r=32 \ -++train_config.model_name=asr \ -++train_config.num_epochs=12 \ -++train_config.freeze_encoder=true \ -++train_config.freeze_llm=false \ -++train_config.batching_strategy=custom \ -++train_config.warmup_steps=1000 \ -++train_config.total_steps=100000 \ -++train_config.lr=1e-4 \ -++train_config.validation_interval=1000 \ -++train_config.batch_size_training=4 \ -++train_config.val_batch_size=4 \ -++train_config.num_workers_dataloader=4 \ -++train_config.output_dir=$output_dir \ -++metric=acc \ -" -# ++model_config.encoder_projector=linear \ -# ++model_config.encoder_projector_ds_rate=5 \ -# ++train_config.peft_config.peft_method=lora \ -# --peft_ckpt 
"/nfs/maziyang.mzy/exps/llama-2-hf-finetune-asr-ds5-proj2048-lr1e-5-whisper-prompt-padding30-20231228/asr/4" \ -# --ckpt_path "/nfs/maziyang.mzy/exps/llama-2-hf-finetune-asr-ds5-proj2048-lr1e-5-whisper-prompt-padding30-20231228/asr/4/model.pt" \ -#++log_config.log_file=/$output_dir/train.log \ -#++log_config.use_wandb=true \ -#++log_config.wandb_dir=$output_dir \ -#++log_config.wandb_entity_name=zym22 \ -#++log_config.wandb_project_name=slam-llm \ -#++log_config.wandb_exp_name=${0##*/%.*} \ -#++log_config.log_interval 5 \ - -# -m debugpy --listen 5678 --wait-for-client -if [[ $CUDA_VISIBLE_DEVICES != *","* ]]; then - python -m debugpy --listen 5678 --wait-for-client src/llama_recipes/pipeline/finetune.py \ - --config-path "/root/SLAM-LLM/scripts/conf" \ - --config-name "asr_vicuna_lora.yaml" \ - $hydra_args -else - torchrun \ - --nnodes 1 \ - --nproc_per_node 2 \ - --master_port=29501 \ - src/llama_recipes/pipeline/finetune.py \ - --config-path "${code_dir}/scripts/conf" \ - --config-name "asr_vicuna_lora.yaml" \ - ++train_config.enable_fsdp=false \ - ++train_config.enable_ddp=true \ - ++train_config.use_fp16=false \ - $hydra_args -fi - -# {"key": "1001-134707-0000_ASR", "prompt": "", "source": "/cpfs01/shared/Group-speech/beinian.lzr/data/open_data/librispeech_audio/audio/se_librispeech_1001-134707-0000.wav", "target": "1 little recks the laborer. How near his work is holding him to God, The loving laborer through space and time, after all, not to create, only or found only.", "target_len": 157, "source_len": 1581, "text-type": "Transcribe", "audio_language": "en", "text_language": "en", "task-type": ""} -# {"key": "1688-142285-0005", "prompt": "", "source": "/nfs/beinian.lzr/workspace/datasets/data/16k/opendata/librispeech/test_other/wav/1688-142285-0005.wav", "target": "YOU WHO WERE ALWAYS ACCUSING PEOPLE OF BEING SHOPPY AT HELSTONE", "target_len": 11, "source_len": 220, "text-type": "Transcribe", "audio_language": "en", "text_language": "en", "task-type": ""} diff --git a/scripts/inference_asr.sh b/scripts/inference_asr.sh deleted file mode 100644 index 35ae0d43..00000000 --- a/scripts/inference_asr.sh +++ /dev/null @@ -1,48 +0,0 @@ -#!/bin/bash -#export PYTHONPATH=/root/whisper:$PYTHONPATH -export CUDA_VISIBLE_DEVICES=0 -export TOKENIZERS_PARALLELISM=false -# export CUDA_LAUNCH_BLOCKING=1 - -cd /root/SLAM-LLM - -# speech_encoder_path=/nfs/maziyang.mzy/models/Whisper/tiny.pt -# speech_encoder_path=/nfs/maziyang.mzy/models/Whisper/base.pt -# speech_encoder_path=/nfs/maziyang.mzy/models/Whisper/small.pt -# speech_encoder_path=/nfs/maziyang.mzy/models/Whisper/medium.pt -speech_encoder_path=/nfs/maziyang.mzy/models/Whisper/large-v2.pt -# speech_encoder_path=/nfs/maziyang.mzy/models/Whisper/large-v2-qwen.pt - -# llm_path=/nfs/maziyang.mzy/models/TinyLlama-1.1B-intermediate-step-1431k-3T -# llm_path=/nfs/maziyang.mzy/models/TinyLlama-1.1B-Chat-v0.4 -# llm_path=/nfs/zhifu.gzf/ckpt/Llama-2-7b-hf -# llm_path=/nfs/maziyang.mzy/models/Llama-2-7b-chat-hf -llm_path=/nfs/maziyang.mzy/models/vicuna-7b-v1.5 - -output_dir=/nfs/maziyang.mzy/exps/vicuna-7b-v1.5-finetune-asr-qformer64-steplrwarmupkeep1e-4-whisper-largev2-promptshort-lowergt-padding30-20240126 -ckpt_path=$output_dir/asr/2 -# peft_ckpt=/nfs/maziyang.mzy/exps/llama-2-hf-finetune-asr-ds5-proj2048-lr1e-4-whisper-lora-prompt-paddinglr-20240102-renew5/asr/1 - -# -m debugpy --listen 5678 --wait-for-client -python -m debugpy --listen 5678 --wait-for-client src/llama_recipes/pipeline/inference.py \ ---config-path "/root/SLAM-LLM/scripts/conf" \ 
---config-name "asr_vicuna_lora.yaml" \ -++model_config.llm_name="vicuna-7b-v1.5" \ -++model_config.llm_path=$llm_path \ -++model_config.llm_dim=4096 \ -++model_config.encoder_name=whisper \ -++model_config.encoder_ds_rate=2 \ -++model_config.encoder_path=$speech_encoder_path \ -++model_config.encoder_dim=1280 \ -++model_config.encoder_projector=q-former \ -++dataset_config.fix_length_audio=64 \ -++ckpt_path=$ckpt_path/model.pt \ -++wav_path="/cpfs01/shared/Group-speech/beinian.lzr/data/open_data/librispeech_audio/audio/se_librispeech_1001-134707-0032.wav" \ -++prompt="Transcribe speech to text. Output the transcription directly without redundant content. Ensure that the output is not duplicated. " \ -++train_config.model_name=asr \ -++train_config.freeze_encoder=true \ -++train_config.freeze_llm=true \ -# ++model_config.encoder_projector=linear \ -# ++model_config.encoder_projector_ds_rate=5 \ -# --peft_ckpt $peft_ckpt \ -# --use_peft --peft_method lora \ \ No newline at end of file diff --git a/scripts/inference_asr_batch.sh b/scripts/inference_asr_batch.sh deleted file mode 100755 index c2e8b41a..00000000 --- a/scripts/inference_asr_batch.sh +++ /dev/null @@ -1,65 +0,0 @@ -#!/bin/bash -#export PYTHONPATH=/root/whisper:$PYTHONPATH -export CUDA_VISIBLE_DEVICES=0 -export TOKENIZERS_PARALLELISM=false -# export CUDA_LAUNCH_BLOCKING=1 - -cd /root/SLAM-LLM - -# speech_encoder_path=/nfs/maziyang.mzy/models/Whisper/tiny.pt -# speech_encoder_path=/nfs/maziyang.mzy/models/Whisper/base.pt -# speech_encoder_path=/nfs/maziyang.mzy/models/Whisper/small.pt -# speech_encoder_path=/nfs/maziyang.mzy/models/Whisper/medium.pt -# speech_encoder_path=/nfs/maziyang.mzy/models/Whisper/large-v2.pt -# speech_encoder_path=/nfs/maziyang.mzy/models/Whisper/large-v2-qwen.pt -# speech_encoder_path=/nfs/maziyang.mzy/models/wavlm/WavLM-Base.pt -speech_encoder_path=/nfs/maziyang.mzy/models/wavlm/WavLM-Large.pt - -# llm_path=/nfs/maziyang.mzy/models/TinyLlama-1.1B-intermediate-step-1431k-3T -# llm_path=/nfs/maziyang.mzy/models/TinyLlama-1.1B-Chat-v0.4 -# llm_path=/nfs/maziyang.mzy/models/phi-2 -# llm_path=/nfs/zhifu.gzf/ckpt/Llama-2-7b-hf -# llm_path=/nfs/maziyang.mzy/models/Llama-2-7b-chat-hf -llm_path=/nfs/maziyang.mzy/models/vicuna-7b-v1.5 -# llm_path=/nfs/maziyang.mzy/models/vicuna-13b-v1.5 - -output_dir=/nfs/maziyang.mzy/exps/vicuna-7b-v1.5-finetune-asr-ds5-proj2048-steplrwarmup1e-4keep-WavLM-Large-promptshort-lowergt-20240127 -ckpt_path=$output_dir/asr/3 -# peft_ckpt=/nfs/maziyang.mzy/exps/llama-2-hf-finetune-asr-ds5-proj2048-lr1e-4-whisper-lora-prompt-paddinglr-20240102/asr/4 -val_data_path=/nfs/maziyang.mzy/data/librispeech/librispeech_test_clean_filtered.jsonl -decode_log=$ckpt_path/decode_log_test_clean_beam4_repetition_penalty1 - -# -m debugpy --listen 5678 --wait-for-client -python src/llama_recipes/pipeline/inference_batch.py \ ---config-path "/root/SLAM-LLM/scripts/conf" \ ---config-name "asr_vicuna_lora.yaml" \ -hydra.run.dir=$ckpt_path \ -++model_config.llm_name="vicuna-7b-v1.5" \ -++model_config.llm_path=$llm_path \ -++model_config.llm_dim=4096 \ -++model_config.encoder_name=wavlm \ -++dataset_config.normalize=true \ -++model_config.normalize=true \ -++model_config.encoder_path=$speech_encoder_path \ -++model_config.encoder_dim=1024 \ -++model_config.encoder_projector=linear \ -++model_config.encoder_projector_ds_rate=5 \ -++dataset_config.dataset=speech_dataset \ -++dataset_config.prompt="Transcribe speech to text. 
" \ -++dataset_config.val_data_path=$val_data_path \ -++dataset_config.input_type=raw \ -++dataset_config.inference_mode=true \ -++train_config.model_name=asr \ -++train_config.batching_strategy=custom \ -++train_config.num_epochs=1 \ -++train_config.val_batch_size=4 \ -++train_config.num_workers_dataloader=4 \ -++train_config.output_dir=$output_dir \ -++ckpt_path=$ckpt_path/model.pt \ -++decode_log=$decode_log \ -++train_config.freeze_encoder=true \ -++train_config.freeze_llm=true \ -# ++model_config.encoder_projector=q-former \ -# ++dataset_config.fix_length_audio=64 \ -# --peft_ckpt $peft_ckpt \ -# --use_peft --peft_method lora \ \ No newline at end of file diff --git a/scripts/inference_asr_batch_2.sh b/scripts/inference_asr_batch_2.sh deleted file mode 100755 index e17fe097..00000000 --- a/scripts/inference_asr_batch_2.sh +++ /dev/null @@ -1,67 +0,0 @@ -#!/bin/bash -#export PYTHONPATH=/root/whisper:$PYTHONPATH -export CUDA_VISIBLE_DEVICES=0 -export TOKENIZERS_PARALLELISM=false -# export CUDA_LAUNCH_BLOCKING=1 - -code_dir=/work/SLAM-LLM -cd $code_dir - -# speech_encoder_path=/nfs/maziyang.mzy/models/Whisper/tiny.pt -# speech_encoder_path=/nfs/maziyang.mzy/models/Whisper/base.pt -# speech_encoder_path=/nfs/maziyang.mzy/models/Whisper/small.pt -# speech_encoder_path=/nfs/maziyang.mzy/models/Whisper/medium.pt -# speech_encoder_path=/nfs/maziyang.mzy/models/Whisper/large-v2.pt -# speech_encoder_path=/nfs/maziyang.mzy/models/Whisper/large-v2-qwen.pt -# speech_encoder_path=/nfs/maziyang.mzy/models/wavlm/WavLM-Base.pt -speech_encoder_path=/host/model_ckpt/whisper/large-v3.pt - -# llm_path=/nfs/maziyang.mzy/models/TinyLlama-1.1B-intermediate-step-1431k-3T -# llm_path=/nfs/maziyang.mzy/models/TinyLlama-1.1B-Chat-v0.4 -# llm_path=/nfs/maziyang.mzy/models/phi-2 -# llm_path=/nfs/zhifu.gzf/ckpt/Llama-2-7b-hf -# llm_path=/nfs/maziyang.mzy/models/Llama-2-7b-chat-hf -llm_path=/host/model_ckpt/vicuna-7b-v1.5 -# llm_path=/nfs/maziyang.mzy/models/vicuna-13b-v1.5 - -output_dir=/work/exps/vicuna-7b-v1.5-finetune-asr-linear-steplrwarmupkeep1e-4-whisper-largev3-20240301-test -ckpt_path=$output_dir/asr/4 -# peft_ckpt=/nfs/maziyang.mzy/exps/llama-2-hf-finetune-asr-ds5-proj2048-lr1e-4-whisper-lora-prompt-paddinglr-20240102/asr/4 -val_data_path=data/mls/polish_tem.jsonl -decode_log=$ckpt_path/decode_log_test_clean_beam4_repetition_penalty1 - -# -m debugpy --listen 5678 --wait-for-client -python src/llama_recipes/pipeline/inference_batch.py \ - --config-path "${code_dir}/scripts/conf" \ - --config-name "asr_vicuna_lora.yaml" \ - hydra.run.dir=$ckpt_path \ - ++model_config.llm_name="vicuna-7b-v1.5" \ - ++model_config.llm_path=$llm_path \ - ++model_config.llm_dim=4096 \ - ++model_config.encoder_name=whisper \ - ++model_config.encoder_path=$speech_encoder_path \ - ++model_config.encoder_dim=1280 \ - ++model_config.encoder_projector=linear \ - ++model_config.encoder_ds_rate=2 \ - ++dataset_config.dataset=speech_dataset \ - ++dataset_config.fix_length_audio=64 \ - ++dataset_config.val_data_path=$val_data_path \ - ++dataset_config.input_type=mel \ - ++dataset_config.mel_size=128 \ - ++dataset_config.inference_mode=true \ - ++train_config.model_name=asr \ - ++train_config.batching_strategy=custom \ - ++train_config.num_epochs=1 \ - ++train_config.val_batch_size=4 \ - ++train_config.num_workers_dataloader=4 \ - ++train_config.output_dir=$output_dir \ - ++decode_log=$decode_log \ - ++ckpt_path=$ckpt_path/model.pt \ - ++train_config.freeze_encoder=true \ - ++train_config.freeze_llm=true \ - # 
++dataset_config.normalize=true \ - # ++model_config.encoder_projector=q-former \ - # ++dataset_config.fix_length_audio=64 \ - # --peft_ckpt $peft_ckpt \ - # ++ckpt_path=$ckpt_path/model.pt \ - # --use_peft --peft_method lora \ \ No newline at end of file diff --git a/scripts/inference_echat.sh deleted file mode 100644 index f34726d2..00000000 --- a/scripts/inference_echat.sh +++ /dev/null @@ -1,42 +0,0 @@ -#!/bin/bash -#export PYTHONPATH=/root/whisper:$PYTHONPATH -export CUDA_VISIBLE_DEVICES=1 -export CUDA_LAUNCH_BLOCKING=1 - -cd /root/SLAM-LLM - -# speech_encoder_path=/nfs/zhifu.gzf/ckpt/Whisper/base.pt -speech_encoder_path=/nfs/maziyang.mzy/models/Whisper/large-v2-qwen.pt -llm_path=/nfs/zhifu.gzf/ckpt/Llama-2-7b-hf -output_dir=/nfs/maziyang.mzy/models/llama-2-hf-finetune - -# -m debugpy --listen 5678 --wait-for-client -#python -m debugpy --listen 5678 --wait-for-client src/llama_recipes/pipeline/finetune.py \ -python src/llama_recipes/pipeline/inference.py \ ---model_name echat \ ---freeze_llm \ ---use_fp16 \ ---quantization \ ---llm_name llama-2-7b-hf \ ---llm_path $llm_path \ ---encoder_name whisper \ ---encoder_path $speech_encoder_path \ ---encoder_projector linear \ ---dataset custom_dataset \ ---custom_dataset.file src/llama_recipes/datasets/speech_text_dataset.py:get_audio_dataset \ ---custom_dataset.data_path /nfs/zhifu.gzf/data/IEMOCAP_full_release/datalist.jsonl \ ---batching_strategy custom \ ---custom_dataset.max_words 1024 \ ---num_epochs 1 \ ---batch_size_training 2 \ ---output_dir $output_dir \ ---ckpt_path "/nfs/maziyang.mzy/models/llama-2-hf-finetune/echat/1/model.pt" \ ---wav_path "/nfs/zhifu.gzf/data/IEMOCAP_full_release/Session5/sentences/wav/Ses05M_impro04/Ses05M_impro04_F035.wav" \ ---prompt """ - Please provide an emotional response based on the emotional speech you hear. - Remember to format your answer as follows: <|EMOTION|><|REPLY|>. - <|EMOTION|> is a standalone adjective. - <|REPLY|> is a reply based on the speech. - """ \ -# --peft_ckpt "/nfs/maziyang.mzy/models/llama-2-hf-finetune/echat/1" -# --use_peft --peft_method lora \ \ No newline at end of file
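The deleted fine-tuning and inference scripts above all read their training and validation data from JSONL manifests in the format shown in their trailing comments (fields: `key`, `prompt`, `source`, `target`, `target_len`, `source_len`, `text-type`, `audio_language`, `text_language`, `task-type`). The snippet below is only a rough sketch of how such a manifest line could be produced, not code from the repository: the helper name `make_manifest_entry` is invented for illustration, and because the scripts do not document how `target_len` and `source_len` were computed, both are passed in by the caller.

```python
# Hypothetical helper (not from the repository): writes one manifest line in the
# JSONL format shown in the deleted scripts' comments.
import json


def make_manifest_entry(key: str, wav_path: str, transcript: str,
                        target_len: int, source_len: int, prompt: str = "") -> dict:
    # Field names mirror the commented examples; target_len and source_len are
    # supplied explicitly because their original derivation is not documented.
    return {
        "key": key,
        "prompt": prompt,
        "source": wav_path,
        "target": transcript,
        "target_len": target_len,
        "source_len": source_len,
        "text-type": "Transcribe",
        "audio_language": "en",
        "text_language": "en",
        "task-type": "",
    }


if __name__ == "__main__":
    # Values copied from one of the commented examples above.
    entry = make_manifest_entry(
        key="1688-142285-0005",
        wav_path="/nfs/beinian.lzr/workspace/datasets/data/16k/opendata/librispeech/test_other/wav/1688-142285-0005.wav",
        transcript="YOU WHO WERE ALWAYS ACCUSING PEOPLE OF BEING SHOPPY AT HELSTONE",
        target_len=11,
        source_len=220,
    )
    # Append one JSON object per line to the manifest consumed via
    # dataset_config.train_data_path / val_data_path.
    with open("manifest.jsonl", "a", encoding="utf-8") as f:
        f.write(json.dumps(entry, ensure_ascii=False) + "\n")
```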