runtime error

Exit code: 1. Reason:

config.json: 100%|██████████| 1.65k/1.65k [00:00<00:00, 13.8MB/s]
configuration_llada.py: 100%|██████████| 12.0k/12.0k [00:00<00:00, 53.5MB/s]

A new version of the following files was downloaded from https://huggingface.co/mlx-community/LLaDA-8B-Instruct-mlx-4bit:
- configuration_llada.py
Make sure to double-check they do not contain any added malicious code. To avoid downloading new versions of the code file, you can pin a revision.

modeling_llada.py: 100%|██████████| 60.7k/60.7k [00:00<00:00, 228MB/s]

A new version of the following files was downloaded from https://huggingface.co/mlx-community/LLaDA-8B-Instruct-mlx-4bit:
- modeling_llada.py
Make sure to double-check they do not contain any added malicious code. To avoid downloading new versions of the code file, you can pin a revision.

Traceback (most recent call last):
  File "/home/user/app/app.py", line 14, in <module>
    model = AutoModel.from_pretrained('mlx-community/LLaDA-8B-Instruct-mlx-4bit', trust_remote_code=True,
  File "/usr/local/lib/python3.10/site-packages/transformers/models/auto/auto_factory.py", line 593, in from_pretrained
    return model_class.from_pretrained(
  File "/usr/local/lib/python3.10/site-packages/transformers/modeling_utils.py", line 315, in _wrapper
    return func(*args, **kwargs)
  File "/usr/local/lib/python3.10/site-packages/transformers/modeling_utils.py", line 4803, in from_pretrained
    if pre_quantized and not AutoHfQuantizer.supports_quant_method(config.quantization_config):
  File "/usr/local/lib/python3.10/site-packages/transformers/quantizers/auto.py", line 237, in supports_quant_method
    raise ValueError(
ValueError: The model's quantization config from the arguments has no `quant_method` attribute. Make sure that the model has been correctly quantized
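The warnings above suggest pinning a revision so that transformers does not auto-download newer versions of the repo's custom code files (configuration_llada.py, modeling_llada.py) on every start. A minimal sketch of that, assuming the standard `revision` parameter of `from_pretrained`; the helper name is ours and the revision value passed in would be a real commit hash copied from the model repo's history on the Hub:

```python
def load_pinned_model(repo_id: str, revision: str):
    """Load a remote-code model pinned to a specific commit.

    Pinning `revision` prevents transformers from silently fetching newer
    versions of the repo's custom code files each time the app restarts.
    """
    from transformers import AutoModel  # lazy import; requires transformers installed

    return AutoModel.from_pretrained(
        repo_id,
        trust_remote_code=True,
        revision=revision,  # pass a specific commit hash, not a moving branch
    )
```

Note this only addresses the re-download warning; the `ValueError` about the missing `quant_method` attribute is a separate failure when loading this checkpoint through transformers.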
