
Commit

[doc] use full path for run_qa.py (#34914)
use full path for run_qa.py
faaany authored Nov 26, 2024
1 parent 6bc0c21 · commit 784d220
Showing 3 changed files with 5 additions and 6 deletions.
5 changes: 2 additions & 3 deletions docs/source/en/perf_infer_cpu.md
@@ -42,7 +42,6 @@ Enable BetterTransformer with the [`PreTrainedModel.to_bettertransformer`] method
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained("bigcode/starcoder")
model.to_bettertransformer()
```

## TorchScript
@@ -54,7 +53,7 @@ For a gentle introduction to TorchScript, see the [Introduction to PyTorch Torch
With the [`Trainer`] class, you can enable JIT mode for CPU inference by setting the `--jit_mode_eval` flag:

```bash
-python run_qa.py \
+python examples/pytorch/question-answering/run_qa.py \
--model_name_or_path csarron/bert-base-uncased-squad-v1 \
--dataset_name squad \
--do_eval \
@@ -86,7 +85,7 @@ pip install intel_extension_for_pytorch
Set the `--use_ipex` and `--jit_mode_eval` flags in the [`Trainer`] class to enable JIT mode with the graph optimizations:

```bash
-python run_qa.py \
+python examples/pytorch/question-answering/run_qa.py \
--model_name_or_path csarron/bert-base-uncased-squad-v1 \
--dataset_name squad \
--do_eval \
2 changes: 1 addition & 1 deletion docs/source/en/perf_train_cpu.md
@@ -51,7 +51,7 @@ To enable auto mixed precision with IPEX in Trainer, users should add `use_ipex`
Take an example of the use cases on [Transformers question-answering](https://github.com/huggingface/transformers/tree/main/examples/pytorch/question-answering)

- Training with IPEX using BF16 auto mixed precision on CPU:
-<pre> python run_qa.py \
+<pre> python examples/pytorch/question-answering/run_qa.py \
--model_name_or_path google-bert/bert-base-uncased \
--dataset_name squad \
--do_train \
4 changes: 2 additions & 2 deletions docs/source/en/perf_train_cpu_many.md
@@ -75,7 +75,7 @@ The following command enables training with 2 processes on one Xeon node, with o
export CCL_WORKER_COUNT=1
export MASTER_ADDR=127.0.0.1
mpirun -n 2 -genv OMP_NUM_THREADS=23 \
-python3 run_qa.py \
+python3 examples/pytorch/question-answering/run_qa.py \
--model_name_or_path google-bert/bert-large-uncased \
--dataset_name squad \
--do_train \
@@ -104,7 +104,7 @@ Now, run the following command in node0 and **4DDP** will be enabled in node0 an
export MASTER_ADDR=xxx.xxx.xxx.xxx #node0 ip
mpirun -f hostfile -n 4 -ppn 2 \
-genv OMP_NUM_THREADS=23 \
-python3 run_qa.py \
+python3 examples/pytorch/question-answering/run_qa.py \
--model_name_or_path google-bert/bert-large-uncased \
--dataset_name squad \
--do_train \
