Hi, I'm a big fan of candle, so I implemented the LLMs below in my [repo](https://github.com/ITHwang/llm-serving-wasm) forked from candle:

- Qwen2-Instruct models, including quantized versions
- Qwen2.5-Instruct models, including quantized versions

Furthermore, I published a [Qwen2.5 Instruct demo](https://huggingface.co/spaces/ITHwangg/candle-qwen25-wasm-demo) in my Hugging Face Space. May I add these Qwen models to `candle-examples` and `candle-wasm-examples`?