I downloaded deepseek-r1-qwen-distill-7B from Hugging Face and then converted it with mlx_lm.convert, using 8-bit quantization and a 16-bit float dtype. Is there a way to load this converted model from my SSD rather than downloading the model again from Hugging Face / mlx-community?
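
For context, the conversion step was roughly the following sketch using the mlx_lm Python API (the paths are placeholders for the folders on my SSD, and the exact keyword names may differ slightly between mlx_lm versions; the equivalent mlx_lm.convert CLI flags would do the same thing):

```python
# Rough sketch of the conversion step (paths are placeholders for local
# folders on my SSD; keyword names are assumed and may vary by mlx_lm version).
from mlx_lm import convert

convert(
    hf_path="/ssd/models/deepseek-r1-qwen-distill-7B",        # local HF snapshot
    mlx_path="/ssd/models/deepseek-r1-qwen-distill-7B-mlx-8bit",  # converted output
    quantize=True,      # enable quantization
    q_bits=8,           # 8-bit quantized weights
    dtype="float16",    # 16-bit float dtype for unquantized tensors
)
```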