When I'm downloading the weights, the cell keeps running and doesn't stop. I need to fine-tune the Mistral-Small-3.1-24B-Instruct-2503 model.

from transformers import AutoTokenizer, MistralForCausalLM
import torch

model_id = "mistralai/Mistral-Small-3.1-24B-Instruct-2503"

tokenizer = AutoTokenizer.from_pretrained(
    model_id,
    trust_remote_code=True,
    cache_dir="/content/huggingface_cache",
)
tokenizer.pad_token = tokenizer.eos_token
tokenizer.padding_side = "right"

model = MistralForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,
    device_map="auto",
    cache_dir="/content/huggingface_cache",
    low_cpu_mem_usage=True,
    offload_folder="offload",
)

I have tried both Older_version = transformers==4.49.0 and Current_version = transformers==4.52.0.dev0, but neither solved the problem.
Please help! Thanks.

Hmm… Gated model issue?

Yes, I already have access to this model. But the cell keeps running; is there any other way or option?

Actually, the model weights are downloaded, but at the end the cell keeps running.

Hmm… “Cell” probably refers to some kind of notebook environment.

There may occasionally be cache-related issues in from_pretrained. In that case, rebuilding the virtual environment (or the download cache) may resolve the issue…
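If you suspect a corrupted cache, one blunt but sometimes effective option is to delete the cache directory and let from_pretrained re-download from scratch. A minimal sketch, assuming the /content/huggingface_cache path from the snippet above (adjust for your environment; the helper name is mine):

```python
import shutil
from pathlib import Path

def clear_cache(cache_dir: str) -> bool:
    """Delete a (possibly corrupted) Hugging Face download cache.

    Returns True if the directory existed and was removed, False otherwise.
    """
    path = Path(cache_dir)
    if path.is_dir():
        shutil.rmtree(path)
        return True
    return False

# Example: clear_cache("/content/huggingface_cache")
```

Note this forces a full re-download of the ~24B-parameter weights, so only do it if you have the bandwidth and disk space for another pass.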

Additionally, to determine whether this is an issue specific to the Mistral model (repository), testing with a smaller model should help isolate the problem:

model_id = "HuggingFaceTB/SmolLM2-135M-Instruct"