Pin numpy v1 for onnxruntime (#1921)
* fix offline ci

* pin numpy v1 for now

* pin numpy 1 in exporters as well

* pin numpy v1 everywhere for transformers
IlyasMoutawwakil authored Jun 27, 2024
1 parent 2db03d4 commit 291f535
Showing 2 changed files with 25 additions and 21 deletions.
42 changes: 23 additions & 19 deletions .github/workflows/test_offline.yml

@@ -2,9 +2,9 @@ name: Offline usage / Python - Test

 on:
   push:
-    branches: [ main ]
+    branches: [main]
   pull_request:
-    branches: [ main ]
+    branches: [main]

 concurrency:
   group: ${{ github.workflow }}-${{ github.head_ref || github.run_id }}
@@ -15,29 +15,33 @@ jobs:
     strategy:
       fail-fast: false
       matrix:
-        python-version: [3.9]
+        python-version: [3.8, 3.9]
         os: [ubuntu-20.04]

     runs-on: ${{ matrix.os }}
     steps:
-      - uses: actions/checkout@v2
-      - name: Setup Python ${{ matrix.python-version }}
-        uses: actions/setup-python@v2
-        with:
-          python-version: ${{ matrix.python-version }}
-      - name: Install dependencies for pytorch export
-        run: |
-          pip install .[tests,exporters,onnxruntime]
-      - name: Test with unittest
-        run: |
-          HF_HOME=/tmp/ huggingface-cli download hf-internal-testing/tiny-random-gpt2
-          HF_HOME=/tmp/ HF_HUB_OFFLINE=1 optimum-cli export onnx --model hf-internal-testing/tiny-random-gpt2 gpt2_onnx --task text-generation
-          huggingface-cli download hf-internal-testing/tiny-random-gpt2
-          HF_HUB_OFFLINE=1 optimum-cli export onnx --model hf-internal-testing/tiny-random-gpt2 gpt2_onnx --task text-generation
-          pytest tests/onnxruntime/test_modeling.py -k "test_load_model_from_hub and not from_hub_onnx" -s -vvvvv
-          HF_HUB_OFFLINE=1 pytest tests/onnxruntime/test_modeling.py -k "test_load_model_from_hub and not from_hub_onnx" -s -vvvvv
+      - name: Checkout code
+        uses: actions/checkout@v4
+
+      - name: Setup Python ${{ matrix.python-version }}
+        uses: actions/setup-python@v5
+        with:
+          python-version: ${{ matrix.python-version }}
+
+      - name: Install dependencies for pytorch export
+        run: |
+          pip install .[tests,exporters,onnxruntime]
+
+      - name: Test with pytest
+        run: |
+          HF_HOME=/tmp/ huggingface-cli download hf-internal-testing/tiny-random-gpt2
+          HF_HOME=/tmp/ HF_HUB_OFFLINE=1 optimum-cli export onnx --model hf-internal-testing/tiny-random-gpt2 gpt2_onnx --task text-generation
+          huggingface-cli download hf-internal-testing/tiny-random-gpt2
+          HF_HUB_OFFLINE=1 optimum-cli export onnx --model hf-internal-testing/tiny-random-gpt2 gpt2_onnx --task text-generation
+          pytest tests/onnxruntime/test_modeling.py -k "test_load_model_from_hub and not from_hub_onnx" -s -vvvvv
+          HF_HUB_OFFLINE=1 pytest tests/onnxruntime/test_modeling.py -k "test_load_model_from_hub and not from_hub_onnx" -s -vvvvv
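The reworked test step follows a download-then-go-offline pattern: each artifact is first fetched while the network is reachable, then the same operation is repeated with HF_HUB_OFFLINE=1 so it must be served entirely from the local cache. A minimal sketch of that pattern — the `fetch` helper below is a hypothetical stand-in for illustration, not Hugging Face Hub's actual implementation:

```python
import os
import shutil
import tempfile

def fetch(repo_id: str, cache_dir: str, offline: bool) -> str:
    """Return a local path for repo_id, touching the 'network' only when online.

    Illustrative stand-in for a hub download helper: in offline mode only the
    cache may be used, and a cache miss is an error rather than a network call.
    """
    local = os.path.join(cache_dir, repo_id.replace("/", "--"))
    if os.path.exists(local):
        return local  # cache hit: works both online and offline
    if offline:
        raise FileNotFoundError(f"{repo_id} is not cached and offline mode is on")
    os.makedirs(local)  # simulate a network download populating the cache
    return local

cache = tempfile.mkdtemp()
# First pass online (like `huggingface-cli download ...`): populates the cache.
fetch("hf-internal-testing/tiny-random-gpt2", cache, offline=False)
# Second pass offline (like `HF_HUB_OFFLINE=1 ...`): must succeed from cache alone.
path = fetch("hf-internal-testing/tiny-random-gpt2", cache, offline=True)
print(os.path.basename(path))
shutil.rmtree(cache)
```

Running each command twice, once per mode, is what lets a single CI job verify both that downloads work and that offline resolution never silently falls back to the network.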
4 changes: 2 additions & 2 deletions setup.py

@@ -18,7 +18,7 @@
     "transformers[sentencepiece]>=4.26.0,<4.42.0",
     "torch>=1.11",
     "packaging",
-    "numpy",
+    "numpy<2.0",  # transformers requires numpy<2.0 https://github.com/huggingface/transformers/pull/31569
     "huggingface_hub>=0.8.0",
     "datasets",
 ]
@@ -79,10 +79,10 @@
     "openvino": "optimum-intel[openvino]>=1.16.0",
     "nncf": "optimum-intel[nncf]>=1.16.0",
     "neural-compressor": "optimum-intel[neural-compressor]>=1.16.0",
-    "graphcore": "optimum-graphcore",
     "habana": ["optimum-habana", "transformers >= 4.38.0, < 4.39.0"],
     "neuron": ["optimum-neuron[neuron]>=0.0.20", "transformers >= 4.36.2, < 4.42.0"],
     "neuronx": ["optimum-neuron[neuronx]>=0.0.20", "transformers >= 4.36.2, < 4.42.0"],
+    "graphcore": "optimum-graphcore",
     "furiosa": "optimum-furiosa",
     "amd": "optimum-amd",
     "dev": TESTS_REQUIRE + QUALITY_REQUIRE,
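The `numpy<2.0` pin works because version specifiers compare release segments numerically, not lexically. A rough illustration of which versions such a pin admits — a simplified tuple comparison, not pip's full PEP 440 specifier logic:

```python
def satisfies_lt2(version: str) -> bool:
    """Very rough sketch of a '<2.0' specifier: compare the release tuple.

    Real resolvers (pip, packaging.specifiers) implement full PEP 440;
    this only handles plain dotted release strings like '1.26.4'.
    """
    release = tuple(int(part) for part in version.split("."))
    return release < (2, 0)

# numpy 1.x releases satisfy the pin; the 2.x line does not.
for version, expected in [
    ("1.24.4", True),
    ("1.26.4", True),
    ("2.0.0", False),
    ("2.1.0", False),
]:
    assert satisfies_lt2(version) is expected
print("pin check ok")
```

In the actual setup.py, the string `"numpy<2.0"` is handed to pip, whose resolver performs this comparison when choosing an installable release.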
