
Conversation

@itscharanteja

./scripts/inference.sh ${CONFIG} ${CHECKPOINT} ${VIDEO_NAME} ${OUTPUT_DIR} --pose_track
CONFIG - configs/halpe_26/resnet/256x192_res50_lr1e-3_1x.yaml (for example)
CHECKPOINT - pretrained_models/halpe26_fast_res50_256x192.pth (download it from the MODEL_ZOO and place it in "pretrained_models/")
VIDEO_NAME - /Users/charan/Downloads/videoplayback.mp4 (path to the input video)
OUTPUT_DIR - path to the output directory

This is the command to run 2D keypoint detection; the trailing --pose_track flag is optional and enables pose tracking across frames.
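
For example, filling in the values listed above (the output directory name here is just a placeholder, and I'm assuming the trailing --pose_track flag is passed through exactly as shown):

./scripts/inference.sh \
    configs/halpe_26/resnet/256x192_res50_lr1e-3_1x.yaml \
    pretrained_models/halpe26_fast_res50_256x192.pth \
    /Users/charan/Downloads/videoplayback.mp4 \
    output/ \
    --pose_track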

@geokal commented Jul 28, 2025

Hi, I made a working version that uses PyTorch MPS (Apple Metal Performance Shaders) for acceleration wherever possible. I used "configs/coco/resnet/256x192_res50_lr1e-3_1x.yaml" and "pretrained_models/fast_res50_256x192.pth"; you just pass an extra "--device mps". The input video was video.mp4. Full command:

python scripts/demo_inference.py --cfg configs/coco/resnet/256x192_res50_lr1e-3_1x.yaml --checkpoint pretrained_models/fast_res50_256x192.pth --video /Users/georgek/Downloads/video_resized.mp4 --sp --device mps --detector yolo --outdir examples/res/ --save_video --vis_fast --debug --profile

It ran on a MacBook Air M1 (16 GB); total power draw peaked at about 30 W, with CPU usage near 90% and GPU usage at 100% most of the time.
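
For readability, here is the same command in multi-line form. As I read the flags (the --device mps option comes from the commenter's modified version; the rest are standard demo_inference.py options, so check the script's argument parser if unsure): --sp runs everything in a single process (needed on macOS), --detector yolo selects the person detector, --outdir sets the output directory, --save_video writes the rendered video, --vis_fast uses the faster renderer, and --debug/--profile print extra diagnostics and timing.

python scripts/demo_inference.py \
    --cfg configs/coco/resnet/256x192_res50_lr1e-3_1x.yaml \
    --checkpoint pretrained_models/fast_res50_256x192.pth \
    --video /Users/georgek/Downloads/video_resized.mp4 \
    --sp \
    --device mps \
    --detector yolo \
    --outdir examples/res/ \
    --save_video --vis_fast \
    --debug --profile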
