add japanese documentation (#26138)
* update

* update

* Update docs/source/ja/autoclass_tutorial.md

Co-authored-by: Steven Liu <[email protected]>

* add language codes to workflows/build_pr_documentation.yml

* Create preprocessing.md

* added training.md

* Create Model_sharing.md

* add quicktour.md

* new

* ll

* Create benchmark.md

* Create Tensorflow_model

* add

* add community.md

* add create_a_model

* create custom_model.md

* create_custom_tools.md

* create fast_tokenizers.md

* create

* add

* Update docs/source/ja/_toctree.yml

Co-authored-by: Steven Liu <[email protected]>

* md

* add

* commit

* add

* h

* Update docs/source/ja/peft.md

Co-authored-by: Younes Belkada <[email protected]>

* Update docs/source/ja/_toctree.yml

Co-authored-by: Younes Belkada <[email protected]>

* Update docs/source/ja/_toctree.yml

Co-authored-by: Younes Belkada <[email protected]>

* Suggested Update

* add perf_train_gpu_one.md

* added perf based MD files

* Modify toctree.yml and add translations to md files

* Add `serialization.md` and edit `_toctree.yml`

* add task summary and tasks explained

* Add and Modify files starting from T

* Add testing.md

* Create main_classes files

* delete main_classes folder

* Add toctree.yml

* Update llm_tutorial.md

* Update docs/source/ja/_toctree.yml

Co-authored-by: Steven Liu <[email protected]>

* Update misspelled filenames

* Update docs/source/ja/_toctree.yml

Co-authored-by: Steven Liu <[email protected]>

* Update docs/source/ja/_toctree.yml

* Update docs/source/ja/_toctree.yml

* misspelled file names improvements

* Update _toctree.yml

* close tip block

* close another tip block

* Update docs/source/ja/quicktour.md

Co-authored-by: Steven Liu <[email protected]>

* Update docs/source/ja/pipeline_tutorial.md

Co-authored-by: Steven Liu <[email protected]>

* Update docs/source/ja/pipeline_tutorial.md

Co-authored-by: Steven Liu <[email protected]>

* Update docs/source/ja/preprocessing.md

Co-authored-by: Steven Liu <[email protected]>

* Update docs/source/ja/peft.md

Co-authored-by: Steven Liu <[email protected]>

* Update docs/source/ja/add_new_model.md

Co-authored-by: Steven Liu <[email protected]>

* Update docs/source/ja/testing.md

Co-authored-by: Steven Liu <[email protected]>

* Update docs/source/ja/task_summary.md

Co-authored-by: Steven Liu <[email protected]>

* Update docs/source/ja/tasks_explained.md

Co-authored-by: Steven Liu <[email protected]>

* Update glossary.md

* Update docs/source/ja/transformers_agents.md

Co-authored-by: Steven Liu <[email protected]>

* Update docs/source/ja/llm_tutorial.md

Co-authored-by: Steven Liu <[email protected]>

* Update docs/source/ja/create_a_model.md

Co-authored-by: Steven Liu <[email protected]>

* Update docs/source/ja/torchscript.md

Co-authored-by: Steven Liu <[email protected]>

* Update docs/source/ja/benchmarks.md

Co-authored-by: Steven Liu <[email protected]>

* Update docs/source/ja/troubleshooting.md

Co-authored-by: Steven Liu <[email protected]>

* Update docs/source/ja/troubleshooting.md

Co-authored-by: Steven Liu <[email protected]>

* Update docs/source/ja/troubleshooting.md

Co-authored-by: Steven Liu <[email protected]>

* Update docs/source/ja/add_new_model.md

Co-authored-by: Steven Liu <[email protected]>

* Update perf_torch_compile.md

* Update Year to default in en documentation

* Final Update

---------

Co-authored-by: Steven Liu <[email protected]>
Co-authored-by: Younes Belkada <[email protected]>
3 people authored Oct 11, 2023
1 parent 797a1ba commit 9b7668c
Showing 59 changed files with 14,440 additions and 10 deletions.
2 changes: 1 addition & 1 deletion .github/workflows/build_documentation.yml
@@ -15,7 +15,7 @@ jobs:
commit_sha: ${{ github.sha }}
package: transformers
notebook_folder: transformers_doc
languages: de en es fr it ko pt zh
languages: de en es fr it ko pt zh ja
secrets:
token: ${{ secrets.HUGGINGFACE_PUSH }}
hf_token: ${{ secrets.HF_DOC_BUILD_PUSH }}
2 changes: 1 addition & 1 deletion .github/workflows/build_pr_documentation.yml
@@ -14,4 +14,4 @@ jobs:
commit_sha: ${{ github.event.pull_request.head.sha }}
pr_number: ${{ github.event.number }}
package: transformers
languages: de en es fr it ko pt zh
languages: de en es fr it ko pt zh ja
8 changes: 5 additions & 3 deletions docs/source/en/testing.md
@@ -57,8 +57,6 @@ RUN_SLOW=1 pytest examples/





### Choosing which tests to run

This document goes into many details of how tests can be run. If after reading everything, you need even more details
@@ -184,6 +182,7 @@ pytest -k "test and ada" tests/test_optimization.py
### Run `accelerate` tests

Sometimes you need to run `accelerate` tests on your models. For that, just add `-m accelerate_tests` to your command; for example, to run these tests on `OPT`:

```bash
RUN_SLOW=1 pytest -m accelerate_tests tests/models/opt/test_modeling_opt.py
```
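
For context, here is a minimal sketch of a test that such a marker-based run would pick up — the `accelerate_tests` marker name comes from the command above, while the test name and body are hypothetical placeholders:

```python
import pytest


@pytest.mark.accelerate_tests  # selected by `pytest -m accelerate_tests`
def test_opt_accelerate_placeholder():
    # Hypothetical body; a real test would exercise OPT with accelerate,
    # e.g. loading the model with device_map="auto" and checking outputs.
    assert True
```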
@@ -514,13 +513,15 @@ n_gpu = get_gpu_count() # works with torch and tf
### Testing with a specific PyTorch backend or device

To run the test suite on a specific torch device, add `TRANSFORMERS_TEST_DEVICE="$device"`, where `$device` is the target backend. For example, to test on CPU only:

```bash
TRANSFORMERS_TEST_DEVICE="cpu" pytest tests/utils/test_logging.py
```

This variable is useful for testing custom or less common PyTorch backends such as `mps`. It can also be used to achieve the same effect as `CUDA_VISIBLE_DEVICES` by targeting specific GPUs or testing in CPU-only mode.
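
As a rough illustration, here is a minimal sketch — not the library's actual implementation — of how a test helper could resolve the device from that variable; the CUDA/CPU fallback is an assumption:

```python
import os

import torch

# Sketch: use the device named in TRANSFORMERS_TEST_DEVICE if set, otherwise
# fall back to CUDA when available and CPU when not (fallback is assumed).
device_name = os.environ.get(
    "TRANSFORMERS_TEST_DEVICE",
    "cuda" if torch.cuda.is_available() else "cpu",
)
device = torch.device(device_name)
```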

Certain devices will require an additional import after importing `torch` for the first time. This can be specified using the environment variable `TRANSFORMERS_TEST_BACKEND`:

```bash
TRANSFORMERS_TEST_BACKEND="torch_npu" pytest tests/utils/test_logging.py
```
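
Conceptually, that extra import amounts to something like the following sketch; the exact mechanism inside the test suite is an assumption here:

```python
import importlib
import os

import torch  # the backend module must be imported after torch

# Sketch: if a backend package such as `torch_npu` is named in the
# environment, import it so its devices become visible to the tests.
backend = os.environ.get("TRANSFORMERS_TEST_BACKEND")
if backend:
    importlib.import_module(backend)
```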
@@ -879,7 +880,8 @@ or the `xfail` way:
def test_feature_x():
```

- Here is how to skip a test based on some internal check inside the test:

Here's how to skip a test based on internal checks within the test:

```python
def test_feature_x():
```
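
To make that concrete, here is a minimal sketch of such an in-test skip — the `has_something()` helper and the skip reason are hypothetical placeholders, not part of the library:

```python
import pytest


def has_something():
    # Hypothetical capability check, e.g. probing for an optional dependency.
    return False


def test_feature_x():
    if not has_something():
        pytest.skip("unsupported configuration")
    # The rest of the test only runs when the check above passes.
```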
132 changes: 127 additions & 5 deletions docs/source/ja/_toctree.yml
@@ -1,14 +1,136 @@
- sections:
- local: index
title: 🤗 Transformers
- local: quicktour
title: クイックツアー
- local: installation
title: インストール
title: はじめに
title: Get started
- sections:
- local: pipeline_tutorial
title: パイプラインを使用して推論を実行する
- local: autoclass_tutorial
title: AutoClass を使用して移植可能なコードを作成する
- local: preprocessing
title: データの前処理
- local: training
title: 事前トレーニングされたモデルを微調整する
- local: run_scripts
title: スクリプトを使用してトレーニングする
- local: accelerate
title: 🤗 Accelerate を用いた分散学習
title: チュートリアル
title: 🤗 Accelerate を使用して分散トレーニングをセットアップする
- local: peft
title: 🤗 PEFT を使用してアダプターをロードしてトレーニングする
- local: model_sharing
title: モデルを共有する
- local: transformers_agents
title: エージェント
- local: llm_tutorial
title: LLM を使用した生成
title: Tutorials
- sections:
- local: generation_strategies
title: 生成戦略をカスタマイズする
title: Generation
- sections:
- local: fast_tokenizers
title: 🤗 トークナイザーの高速トークナイザーを使用する
- local: multilingual
title: 多言語モデルで推論を実行する
- local: create_a_model
title: モデル固有の API を使用する
- local: custom_models
title: カスタムモデルを共有する
- local: chat_templating
title: チャットモデルのテンプレート
- local: serialization
title: ONNX へのエクスポート
- local: tflite
title: TFLite へのエクスポート
- local: torchscript
title: トーチスクリプトへのエクスポート
- local: benchmarks
title: ベンチマーク
- local: community
title: コミュニティリソース
- local: custom_tools
title: カスタムツールとプロンプト
- local: troubleshooting
title: トラブルシューティング
title: 開発者ガイド
- sections:
- local: performance
title: 概要
- sections:
- local: perf_train_gpu_one
title: 単一の GPU で効率的にトレーニングするための方法とツール
- local: perf_train_gpu_many
title: 複数の GPU と並列処理
- local: perf_train_cpu
title: CPU での効率的なトレーニング
- local: perf_train_cpu_many
title: 分散CPUトレーニング
- local: perf_train_tpu
title: TPU に関するトレーニング
- local: perf_train_tpu_tf
title: TensorFlow を使用した TPU のトレーニング
- local: perf_train_special
title: 特殊なハードウェアに関するトレーニング
- local: perf_hardware
title: トレーニング用のカスタム ハードウェア
- local: hpo_train
title: Trainer API を使用したハイパーパラメータ検索
title: 効率的なトレーニングテクニック
- sections:
- local: multilingual
title: 推論のための多言語モデル
- local: perf_infer_cpu
title: CPUでの推論
- local: perf_infer_gpu_one
title: 1 つの GPU での推論
- local: perf_infer_gpu_many
title: 多くの GPU での推論
- local: perf_infer_special
title: 特殊なハードウェアでの推論
title: 推論の最適化
- local: big_models
title: 大きなモデルのインスタンス化
- local: tf_xla
title: TensorFlowモデルのXLA統合
- local: perf_torch_compile
title: torch.compile()を使用した推論の最適化
title: パフォーマンスとスケーラビリティ
- sections:
- local: add_new_model
title: 🤗 Transformersにモデルを追加する方法
- local: add_tensorflow_model
title: 🤗 TransformersモデルをTensorFlowに変換する方法
- local: testing
title: テスト
- local: pr_checks
title: プルリクエストのチェック
title: 貢献する
- sections:
- local: philosophy
title: フィロソフィー
- local: glossary
title: 用語集
- local: task_summary
title: 🤗 Transformersの機能
- local: tasks_explained
title: 🤗 Transformersがタスクを解決する方法
- local: model_summary
title: Transformerモデルファミリー
- local: tokenizer_summary
title: トークナイザーの概要
- local: attention
title: 注意機構
- local: pad_truncation
title: パディングと切り詰め
- local: bertology
title: BERTology
- local: perplexity
title: 固定長モデルのパープレキシティ
- local: pipeline_webserver
title: Webサーバー推論用パイプライン
- local: model_memory_anatomy
title: モデルトレーニングの解剖学
title: コンセプチュアルガイド
