Description
Hi. I recently tried to run your project using enroot, since I am not on Ubuntu 20.
After running this command:
./devel/lib/neural_mapping/neural_mapping_node train src/M2Mapping/config/replica/replica.yaml src/M2Mapping/data/Replica/room2
I encountered this error message:
TORCH_VERSION: 2.4.1
Using path derived from config file: "src/M2Mapping"
output_path: "src/M2Mapping/output/2025-03-29-20-29-32_room2_replica.yaml"
data_path: "src/M2Mapping/data/Replica/room2"
print_files: src/M2Mapping/output/2025-03-29-20-29-32_room2_replica.yaml/config/scene/replica.yaml
%YAML:1.0
rosrun neural_mapping neural_mapping_node src/RIM2/neural_mapping/config/neural_rgbd/neural_rgbd.yaml src/RIM2/data/neural_rgbd_data/kitchen_kitti_format
base_config: "../base.yaml"
iter_step: 20000
trace_iter: 50 # sphere tracing iteration times
preload: 1 # 0: disable; 1: enable # accelerate the loading process but will cost more memory
llff: 0 # 0: disable; 1: enable; every 8 frame will be used for evaluation
cull_mesh: 1
prob_map_en: 0 # 0: disable; 1: enable; Whether the map is a probabilistic map
dataset_type
Replica = 0,
R3live = 1,
NeuralRGBD = 2,
KITTI = 3,
FastLivo = 4,
dataset_type: 0
dir_embedding_degree: 0
map:
map_origin: !!opencv-matrix
rows: 1
cols: 3
dt: d
data: [ 0, 0, 0 ]
map_size: 14
min_range: 0.0
max_range: 100
ds_pt_num: 10000 # downsampled point number
max_pt_num: 1000000
leaf_sizes: 0.05
fill_level: 1
bce_sigma: 0.02
sphere_trace_thr: 1e-3
outlier_removal_interval: 2000
outlier_remove: 0 # unnecessary for static scenes
outlier_dist: 0.05
visualization
vis_attribute: 2 # 0: disable to save storage; 1: normal; 2: color
vis_resolution: 0.1 # better no more than leaf_sizes or will miss faces
export_resolution: 0.01
fps: 30
print_files end
print_files: src/M2Mapping/output/2025-03-29-20-29-32_room2_replica.yaml/config/base.yaml
%YAML:1.0
debug: 0 # 0: disable; 1: enable
device_param: 1 # 0: cpu; 1: gpu
tcnn encoder params
n_levels: 16
n_features_per_level: 2
log2_hashmap_size: 19
tcnn decoder params
hidden_dim: 64
geo_feat_dim: 14 # geo_feat_dim + k_strc_dim <= 16 / 8 / 4 / 2 or tcnn decoder will become cutlass and crash
geo_num_layer: 3
color_hidden_dim: 64
color_num_layer: 3
rim params
trunc_sdf: 1
surface_sample_num: 3
free_sample_num: 3
color_batch_pt_num: 256000 # color render pt batch size
lr: 5e-3
lr_end: 1e-4
sdf_batch_ray_num: 2048
color_batch_pt_num: 25600
sdf_weight: 1.0
eikonal_weight: 1e-1 # it will greatly affect structure
curvate_weight: 5e-4 # should be the same loss level to eikonal loss
dist_weight: 1e-1
rgb_weight: 10.0
visualization
vis_frame_step: 10
export_interval: 1000 # every export_interval frames, the test will be exported
export_colmap_format: 0 # 0: disable; 1: for 3dgs; 2: for nerfstudio colmap
export_train_pcl: 0 # 0: disable; 1: enable
print_files end
print_files: src/M2Mapping/config/replica/replica.yaml
%YAML:1.0
rosrun neural_mapping neural_mapping_node src/RIM2/neural_mapping/config/neural_rgbd/neural_rgbd.yaml src/RIM2/data/neural_rgbd_data/kitchen_kitti_format
base_config: "../base.yaml"
iter_step: 20000
trace_iter: 50 # sphere tracing iteration times
preload: 1 # 0: disable; 1: enable # accelerate the loading process but will cost more memory
llff: 0 # 0: disable; 1: enable; every 8 frame will be used for evaluation
cull_mesh: 1
prob_map_en: 0 # 0: disable; 1: enable; Whether the map is a probabilistic map
dataset_type
Replica = 0,
R3live = 1,
NeuralRGBD = 2,
KITTI = 3,
FastLivo = 4,
dataset_type: 0
dir_embedding_degree: 0
map:
map_origin: !!opencv-matrix
rows: 1
cols: 3
dt: d
data: [ 0, 0, 0 ]
map_size: 14
min_range: 0.0
max_range: 100
ds_pt_num: 10000 # downsampled point number
max_pt_num: 1000000
leaf_sizes: 0.05
fill_level: 1
bce_sigma: 0.02
sphere_trace_thr: 1e-3
outlier_removal_interval: 2000
outlier_remove: 0 # unnecessary for static scenes
outlier_dist: 0.05
visualization
vis_attribute: 2 # 0: disable to save storage; 1: normal; 2: color
vis_resolution: 0.1 # better no more than leaf_sizes or will miss faces
export_resolution: 0.01
fps: 30
print_files end
Load 2000 data.
2000
Error: [enforce fail at alloc_cpu.cpp:117] err == 0. DefaultCPUAllocator: can't allocate memory: you tried to allocate 19584000000 bytes. Error code 12 (Cannot allocate memory)
frame #0: c10::ThrowEnforceNotMet(char const*, int, char const*, std::__cxx11::basic_string<char, std::char_traits, std::allocator > const&, void const*) + 0x68 (0x7e57984e2618 in /root/libtorch/lib/libc10.so)
frame #1: c10::alloc_cpu(unsigned long) + 0x39f (0x7e57984d038f in /root/libtorch/lib/libc10.so)
frame #2: + 0x2e3f7 (0x7e579848f3f7 in /root/libtorch/lib/libc10.so)
frame #3: + 0x191c9dc (0x7e5780f1c9dc in /root/libtorch/lib/libtorch_cpu.so)
frame #4: at::detail::empty_generic(c10::ArrayRef, c10::Allocator*, c10::DispatchKeySet, c10::ScalarType, std::optionalc10::MemoryFormat) + 0x28 (0x7e5780f16598 in /root/libtorch/lib/libtorch_cpu.so)
frame #5: at::detail::empty_cpu(c10::ArrayRef, c10::ScalarType, bool, std::optionalc10::MemoryFormat) + 0x57 (0x7e5780f16617 in /root/libtorch/lib/libtorch_cpu.so)
frame #6: at::detail::empty_cpu(c10::ArrayRef, std::optionalc10::ScalarType, std::optionalc10::Layout, std::optionalc10::Device, std::optional, std::optionalc10::MemoryFormat) + 0x49 (0x7e5780f16689 in /root/libtorch/lib/libtorch_cpu.so)
frame #7: at::native::empty_cpu(c10::ArrayRef, std::optionalc10::ScalarType, std::optionalc10::Layout, std::optionalc10::Device, std::optional, std::optionalc10::MemoryFormat) + 0x34 (0x7e578173d404 in /root/libtorch/lib/libtorch_cpu.so)
frame #8: + 0x2eafc00 (0x7e57824afc00 in /root/libtorch/lib/libtorch_cpu.so)
frame #9: + 0x2eafc7f (0x7e57824afc7f in /root/libtorch/lib/libtorch_cpu.so)
frame #10: at::_ops::empty_memory_format::redispatch(c10::DispatchKeySet, c10::ArrayRefc10::SymInt, std::optionalc10::ScalarType, std::optionalc10::Layout, std::optionalc10::Device, std::optional, std::optionalc10::MemoryFormat) + 0xfb (0x7e578205df7b in /root/libtorch/lib/libtorch_cpu.so)
frame #11: + 0x2e73283 (0x7e5782473283 in /root/libtorch/lib/libtorch_cpu.so)
frame #12: at::_ops::empty_memory_format::call(c10::ArrayRefc10::SymInt, std::optionalc10::ScalarType, std::optionalc10::Layout, std::optionalc10::Device, std::optional, std::optionalc10::MemoryFormat) + 0x1b3 (0x7e57820ad383 in /root/libtorch/lib/libtorch_cpu.so)
frame #13: at::native::zeros_symint(c10::ArrayRefc10::SymInt, std::optionalc10::ScalarType, std::optionalc10::Layout, std::optionalc10::Device, std::optional) + 0x17d (0x7e57817458bd in /root/libtorch/lib/libtorch_cpu.so)
frame #14: + 0x3083ea9 (0x7e5782683ea9 in /root/libtorch/lib/libtorch_cpu.so)
frame #15: at::_ops::zeros::redispatch(c10::DispatchKeySet, c10::ArrayRefc10::SymInt, std::optionalc10::ScalarType, std::optionalc10::Layout, std::optionalc10::Device, std::optional) + 0xf6 (0x7e5781cb5936 in /root/libtorch/lib/libtorch_cpu.so)
frame #16: + 0x2e70cc9 (0x7e5782470cc9 in /root/libtorch/lib/libtorch_cpu.so)
frame #17: at::_ops::zeros::call(c10::ArrayRefc10::SymInt, std::optionalc10::ScalarType, std::optionalc10::Layout, std::optionalc10::Device, std::optional) + 0x1b2 (0x7e5781d16f32 in /root/libtorch/lib/libtorch_cpu.so)
frame #18: torch::zeros(c10::ArrayRef, c10::TensorOptions) + 0x1ad (0x7e579862795d in /root/m2mapping_ws/devel/lib/libneural_mapping_lib.so)
frame #19: dataparser::DataParser::load_colors(std::__cxx11::basic_string<char, std::char_traits, std::allocator > const&, std::__cxx11::basic_string<char, std::char_traits, std::allocator > const&, bool, bool const&) + 0x3d3 (0x7e571d2e68b3 in /root/m2mapping_ws/devel/lib/libdata_parser.so)
frame #20: dataparser::Replica::load_data() + 0x17f (0x7e577f5e431f in /root/m2mapping_ws/devel/lib/libdata_loader.so)
frame #21: dataparser::Replica::Replica(std::__cxx11::basic_string<char, std::char_traits, std::allocator > const&, c10::Device const&, bool const&, float const&) + 0x83c (0x7e577f5e551c in /root/m2mapping_ws/devel/lib/libdata_loader.so)
frame #22: dataloader::DataLoader::DataLoader(std::__cxx11::basic_string<char, std::char_traits, std::allocator > const&, int const&, c10::Device const&, bool const&, float const&, sensor::Sensors const&) + 0x638 (0x7e577f5d4248 in /root/m2mapping_ws/devel/lib/libdata_loader.so)
frame #23: NeuralSLAM::NeuralSLAM(int const&, std::filesystem::__cxx11::path const&, std::filesystem::__cxx11::path const&) + 0x24c (0x7e579861005c in /root/m2mapping_ws/devel/lib/libneural_mapping_lib.so)
frame #24: main + 0x680 (0x5d41ac6a5eb0 in ./devel/lib/neural_mapping/neural_mapping_node)
frame #25: __libc_start_main + 0xf3 (0x7e571d450083 in /lib/x86_64-linux-gnu/libc.so.6)
frame #26: _start + 0x2e (0x5d41ac6a622e in ./devel/lib/neural_mapping/neural_mapping_node)
double free or corruption (fasttop)
Segmentation fault (core dumped)
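For what it's worth, the failed allocation size seems to match preloading all 2000 RGB frames into CPU memory as float32 tensors, assuming the standard Replica frame resolution of 1200x680 (that resolution is my assumption; I have not verified what `load_colors` actually allocates):

```python
# Sanity check: does the failed 19,584,000,000-byte allocation match
# preloading every RGB frame as float32?
frames = 2000                            # "Load 2000 data." in the log
width, height, channels = 1200, 680, 3   # assumed Replica RGB resolution
bytes_per_value = 4                      # float32

total = frames * width * height * channels * bytes_per_value
print(total)             # 19584000000, exactly the size in the error
print(total / 1024**3)   # roughly 18.2 GiB of CPU RAM for colors alone
```

So it looks like the machine simply does not have ~19 GB of free RAM for the preload step.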
What is the solution? Thanks in advance.
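Based on the `preload` comment in replica.yaml ("accelerate the loading process but will cost more memory"), I am guessing that disabling preloading in the scene config might avoid the large CPU allocation, at the cost of slower loading. This is an untested assumption on my part:

```yaml
# src/M2Mapping/config/replica/replica.yaml (guessed workaround, untested)
preload: 0 # 0: disable; 1: enable # accelerate the loading process but will cost more memory
```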