mirror of
https://github.com/vale981/ray
synced 2025-03-04 17:41:43 -05:00
[AIR][CI] Speed up HF CI by ~20% (#28208)
Speeds up HuggingFaceTrainer/Predictor tests in CI by around 20% by switching to a different GPT model. This is the same model the Hugging Face team uses for their own CI.

Signed-off-by: Antoni Baum <antoni.baum@protonmail.com>
This commit is contained in:
parent
ac6d63e397
commit
48898aa03d
3 changed files with 6 additions and 6 deletions
File diff suppressed because one or more lines are too long
@@ -23,8 +23,8 @@ prompts = pd.DataFrame(
 # We are only testing Casual Language Modeling here

-model_checkpoint = "sshleifer/tiny-gpt2"
-tokenizer_checkpoint = "sgugger/gpt2-like-tokenizer"
+model_checkpoint = "hf-internal-testing/tiny-random-gpt2"
+tokenizer_checkpoint = "hf-internal-testing/tiny-random-gpt2"


 @pytest.fixture
@@ -35,8 +35,8 @@ prompts = pd.DataFrame(
 # We are only testing Casual Language Modelling here

-model_checkpoint = "sshleifer/tiny-gpt2"
-tokenizer_checkpoint = "sgugger/gpt2-like-tokenizer"
+model_checkpoint = "hf-internal-testing/tiny-random-gpt2"
+tokenizer_checkpoint = "hf-internal-testing/tiny-random-gpt2"


 @pytest.fixture
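The change swaps both the model and tokenizer checkpoints to `hf-internal-testing/tiny-random-gpt2`, a randomly initialized GPT-2 with a tiny config, so tests exercise the full load/train/predict path without downloading or running a real model. A minimal sketch of loading that checkpoint (assuming `transformers` and `torch` are installed and the Hugging Face Hub is reachable; this is illustrative, not the test code from the diff):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Tiny, randomly initialized GPT-2 used purely for fast CI runs.
checkpoint = "hf-internal-testing/tiny-random-gpt2"

tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForCausalLM.from_pretrained(checkpoint)

# A forward pass works exactly like a full-size GPT-2, just much faster.
inputs = tokenizer("hello world", return_tensors="pt")
outputs = model(**inputs)
```

Because the weights are random, the outputs are meaningless; the point is only that the API surface and tensor shapes match the real model.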