[AIR][CI] Speed up HF CI by ~20% (#28208)

Speeds up the HuggingFaceTrainer/Predictor tests in CI by around 20% by switching to a different tiny GPT model. This is the same model the Hugging Face team uses for its own CI.
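The change amounts to pointing both the model and tokenizer checkpoints at the same tiny, randomly initialized GPT-2 test model. A minimal sketch of the swapped-in configuration (the loading calls are shown commented out, since they assume `transformers` is installed and the Hugging Face Hub is reachable):

```python
# Both checkpoints now point at the same tiny randomly-initialized GPT-2
# used by the Hugging Face team in its own CI.
model_checkpoint = "hf-internal-testing/tiny-random-gpt2"
tokenizer_checkpoint = "hf-internal-testing/tiny-random-gpt2"

# Loading would then look like (requires `transformers` and Hub access):
# from transformers import AutoModelForCausalLM, AutoTokenizer
# tokenizer = AutoTokenizer.from_pretrained(tokenizer_checkpoint)
# model = AutoModelForCausalLM.from_pretrained(model_checkpoint)
```

Because the model is tiny and random, it exercises the same code paths as a real GPT-2 while downloading and running much faster.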

Signed-off-by: Antoni Baum <antoni.baum@protonmail.com>
Antoni Baum 2022-09-01 19:18:10 +02:00 committed by GitHub
parent ac6d63e397
commit 48898aa03d
3 changed files with 6 additions and 6 deletions

File diff suppressed because one or more lines are too long


@@ -23,8 +23,8 @@ prompts = pd.DataFrame(
 # We are only testing Causal Language Modeling here
-model_checkpoint = "sshleifer/tiny-gpt2"
-tokenizer_checkpoint = "sgugger/gpt2-like-tokenizer"
+model_checkpoint = "hf-internal-testing/tiny-random-gpt2"
+tokenizer_checkpoint = "hf-internal-testing/tiny-random-gpt2"
 @pytest.fixture


@@ -35,8 +35,8 @@ prompts = pd.DataFrame(
 # We are only testing Causal Language Modelling here
-model_checkpoint = "sshleifer/tiny-gpt2"
-tokenizer_checkpoint = "sgugger/gpt2-like-tokenizer"
+model_checkpoint = "hf-internal-testing/tiny-random-gpt2"
+tokenizer_checkpoint = "hf-internal-testing/tiny-random-gpt2"
 @pytest.fixture