Memory usage is wrong (reporting 0) for non-CUDA commands #984

Open
@byjlw

Description

🐛 Describe the bug

For example, the generate run below finishes with:
Memory used: 0.00 GB

> python3 torchchat.py generate llama3.1 --dso-path exportedModels/llama3.1.so --prompt "Hello my name is"

NumExpr defaulting to 10 threads.
PyTorch version 2.5.0.dev20240710 available.
Warning: checkpoint path ignored because an exported DSO or PTE path specified
Warning: checkpoint path ignored because an exported DSO or PTE path specified
Using device=mps 
Loading model...
Cannot load specified DSO to mps. Attempting to load model to CPU instead
Time to load model: 0.20 seconds
-----------------------------------------------------------
Hello my name is Julia and I am a Junior at the University of Washington studying Communications with a focus in Public Relations. I am also a part of the University’s Public Relations Student Society of America (PRSSA), where I currently hold the position of Secretary.
In my free time, I love to stay active whether it’s hiking, running, or trying out new workout classes. I am also passionate about photography and capturing life’s precious moments. Some of my favorite places to visit are the beaches of Half Moon Bay in California and the mountains of Whistler, BC.
This is my blog where I will be sharing my thoughts on PR, advertising, and other marketing related topics. I hope you enjoy reading and will also consider sharing your thoughts with me! Feel free to follow me for more updates on my adventures and musings.
I look forward to connecting with you and learning more about the PR world! – Julia
Hi Julia! I think your blog is a great idea! As a fellow UW student
Time for inference 1: 92.83 sec total, time to first token 4.04 sec with sequential prefill, 199 tokens, 2.14 tokens/sec, 466.49 ms/token
Bandwidth achieved: 34.43 GB/s
*** This first iteration will include cold start effects for dynamic import, hardware caches. ***

========================================

Average tokens/sec: 2.14
Memory used: 0.00 GB
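The zero value is what you would expect if the peak-memory stat is queried through a CUDA-only API: `torch.cuda.max_memory_reserved()` simply returns 0 when the model never touched a CUDA device. A minimal device-aware sketch is below; the function name and the idea of passing the backend queries in as callables are illustrative, not torchchat's actual code (in practice the callables could be `torch.cuda.max_memory_reserved` and `torch.mps.driver_allocated_memory`):

```python
def report_memory_used(device_type, cuda_bytes_fn, mps_bytes_fn):
    """Return memory used in GB for the given device type.

    device_type   -- "cuda", "mps", or anything else (e.g. "cpu")
    cuda_bytes_fn -- callable returning CUDA memory in bytes
                     (e.g. torch.cuda.max_memory_reserved)
    mps_bytes_fn  -- callable returning MPS memory in bytes
                     (e.g. torch.mps.driver_allocated_memory)

    Returns None when no backend-specific counter is available,
    so the caller can print "N/A" instead of a misleading 0.00 GB.
    """
    if device_type == "cuda":
        return cuda_bytes_fn() / 1e9
    if device_type == "mps":
        return mps_bytes_fn() / 1e9
    return None
```

With this shape, an MPS run would report the Metal driver's allocation instead of 0, and a CPU-only run (as in the log above, where the DSO falls back to CPU) would report "N/A" rather than a spurious zero.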

Versions

Collecting environment information...
PyTorch version: 2.5.0.dev20240710
Is debug build: False
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A

OS: macOS 14.5 (arm64)
GCC version: Could not collect
Clang version: 15.0.0 (clang-1500.1.0.2.5)
CMake version: version 3.30.1
Libc version: N/A

Python version: 3.11.9 (v3.11.9:de54cf5be3, Apr 2 2024, 07:12:50) [Clang 13.0.0 (clang-1300.0.29.30)] (64-bit runtime)
Python platform: macOS-14.5-arm64-arm-64bit
Is CUDA available: False
CUDA runtime version: No CUDA
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True

CPU:
Apple M1 Max

Versions of relevant libraries:
[pip3] numpy==1.26.4
[pip3] torch==2.5.0.dev20240710
[pip3] torchao==0.3.1
[conda] Could not collect

    Labels

    actionable (Items in the backlog waiting for an appropriate impl/fix), bug (Something isn't working)
