
llama-burn + wgpu broken? #64

@VisualEhrmanntraut

Description


Chip: Apple M3
OS: macOS 15.5 Beta (24F5042g)

Running the chat example panics while processing the prompt:

Loading record...
Loaded in 7s
Processing prompt: How many helicopters can a human eat in one sitting?

thread 'main' panicked at /Users/visual/.cargo/registry/src/index.crates.io-1949cf8c6b5b557f/burn-jit-0.16.1/src/ops/float_ops.rs:366:89:
called `Result::unwrap()` on an `Err` value: CubeCountTooLarge
stack backtrace:
   0:        0x104f69890 - <std::sys::backtrace::BacktraceLock::print::DisplayBacktrace as core::fmt::Display>::fmt::hfc1aec1722525aab
   1:        0x104f852d4 - core::fmt::write::heccbd137bf61b5da
   2:        0x104f66c90 - std::io::Write::write_fmt::haca8a96eb7cdccc9
   3:        0x104f69744 - std::sys::backtrace::BacktraceLock::print::he7bfdc0a6bceeb9f
   4:        0x104f6a8b8 - std::panicking::default_hook::{{closure}}::hcec2db2a6f3d4c4d
   5:        0x104f6a708 - std::panicking::default_hook::hf79c7d86116dc29f
   6:        0x104f6b398 - std::panicking::rust_panic_with_hook::h0e60ca225d10023a
   7:        0x104f6afc4 - std::panicking::begin_panic_handler::{{closure}}::h40e919c7e63ca7fd
   8:        0x104f69d48 - std::sys::backtrace::__rust_end_short_backtrace::ha5f082a0dfde6853
   9:        0x104f6ac6c - __rustc[8a6480b3f5e8ef7f]::rust_begin_unwind
  10:        0x104fc6c50 - core::panicking::panic_fmt::h6a23240c892a221a
  11:        0x104fc6f88 - core::result::unwrap_failed::h800e5ed148294dcd
  12:        0x1045549a4 - burn_jit::ops::float_ops::<impl burn_tensor::tensor::ops::tensor::FloatTensorOps<burn_jit::backend::JitBackend<R,F,I,BT>> for burn_jit::backend::JitBackend<R,F,I,BT>>::float_sum_dim::h6ff01a8fc1056efd
  13:        0x10475c9d4 - <burn_fusion::ops::float::<impl burn_tensor::tensor::ops::tensor::FloatTensorOps<burn_fusion::backend::Fusion<B>> for burn_fusion::backend::Fusion<B>>::float_sum_dim::SumDimOps<B> as burn_fusion::stream::execution::base::Operation<<B as burn_fusion::backend::FusionBackend>::FusionRuntime>>::execute::h9f9dd9e2b4175fcd
  14:        0x1045454b8 - <burn_fusion::stream::multi::Segment<R> as burn_fusion::stream::execution::processor::StreamSegment<<R as burn_fusion::backend::FusionRuntime>::Optimization>>::execute::h464d38ab66f48306
  15:        0x104744820 - burn_fusion::stream::execution::processor::Processor<O>::process::hda54a04651839289
  16:        0x104534bc0 - burn_fusion::stream::multi::MultiStream<R>::register::hd1dc0c5ae88111af
  17:        0x10453a8d4 - <burn_fusion::client::mutex::MutexFusionClient<R> as burn_fusion::client::base::FusionClient<R>>::register::h79dba9616058be5a
  18:        0x10473bf98 - burn_fusion::ops::float::<impl burn_tensor::tensor::ops::tensor::FloatTensorOps<burn_fusion::backend::Fusion<B>> for burn_fusion::backend::Fusion<B>>::float_sum_dim::hdba5107b19985a07
  19:        0x10474ddc8 - burn_tensor::tensor::api::numeric::<impl burn_tensor::tensor::api::base::Tensor<B,_,K>>::sum_dim::hd59b4a39b225258c
  20:        0x10466d4a4 - burn_core::nn::rope_encoding::RotaryEncoding<B>::apply::h9fee501d36b2d090
  21:        0x104647a04 - llama_burn::transformer::Transformer<B>::forward::h53069aa98c209cb2
  22:        0x1045fd56c - llama_burn::llama::Llama<B,T>::generate::h222f047d67bd5a9e
  23:        0x10466e348 - chat::chat::h912c70a8ba9cfe6a
  24:        0x10466eaf0 - chat::main::h642e6a68cbc5f844
  25:        0x1046955ac - std::sys::backtrace::__rust_begin_short_backtrace::h765eb1ef42b9dd7d
  26:        0x1047b3918 - std::rt::lang_start::{{closure}}::h82e09bbef9382677
  27:        0x104f60ec8 - std::rt::lang_start_internal::hef6fb50dbdfc3bf8
  28:        0x1046717b0 - _main
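The panic comes from `float_sum_dim` failing with `CubeCountTooLarge`, i.e. the reduction kernel asks for more workgroups along one dispatch dimension than the backend allows (wgpu's default `max_compute_workgroups_per_dimension` is 65 535). A minimal sketch of that arithmetic, with hypothetical element and workgroup counts chosen only to illustrate the overflow (they are not taken from this run):

```rust
// wgpu's default Limits::max_compute_workgroups_per_dimension.
const MAX_WORKGROUPS_PER_DIM: u32 = 65_535;

/// Workgroups needed if the launch maps one thread per element
/// along a single dispatch dimension (rounding up).
fn workgroups_needed(num_elems: u32, workgroup_size: u32) -> u32 {
    num_elems.div_ceil(workgroup_size)
}

fn main() {
    // Hypothetical large intermediate produced by sum_dim over a long
    // sequence: 16 777 216 elements with 256 threads per workgroup.
    let wg = workgroups_needed(16_777_216, 256);
    println!("workgroups needed: {wg}");
    // 65_536 > 65_535: one past the limit is enough to trigger
    // an error like CubeCountTooLarge when the count isn't split
    // across multiple dispatch dimensions.
    assert!(wg > MAX_WORKGROUPS_PER_DIM);
}
```

If this is what is happening, the fix belongs in the kernel-launch code (splitting the count across the x/y/z dispatch dimensions), not in llama-burn itself.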

Metadata


    Labels

    bug (Something isn't working)
