
Commit d6ef72d

Leooo-Huang and claude committed
feat: add banner, architecture diagram, MkDocs site, and SOTA auto-update
Visual:
- SVG banner with dark gradient and 5 modality icons
- Mermaid architecture diagram showing repo structure and automation
- Website badge linking to GitHub Pages

GitHub Pages (MkDocs Material):
- Dark slate theme with searchable dataset catalog
- All 53 dataset cards browsable by modality
- Tabbed "Which Dataset?" quick guide on homepage
- Auto-deploys on push to main

SOTA Auto-Update Pipeline:
- Weekly scrape of Papers with Code API (4 task categories)
- Saves to data/sota-snapshot.json
- Auto-commits via github-actions[bot]

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
1 parent 297a215 · commit d6ef72d

9 files changed

Lines changed: 557 additions & 0 deletions

.github/workflows/mkdocs.yml

Lines changed: 29 additions & 0 deletions
@@ -0,0 +1,29 @@
name: Deploy MkDocs to GitHub Pages

on:
  push:
    branches:
      - main

permissions:
  contents: write
  pages: write
  id-token: write

jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout repository
        uses: actions/checkout@v4

      - name: Set up Python
        uses: actions/setup-python@v5
        with:
          python-version: "3.12"

      - name: Install dependencies
        run: pip install -r requirements-docs.txt

      - name: Deploy to GitHub Pages
        run: mkdocs gh-deploy --force

.github/workflows/sota-update.yml

Lines changed: 39 additions & 0 deletions
@@ -0,0 +1,39 @@
name: SOTA Snapshot Update

on:
  schedule:
    # Every Wednesday at 06:00 UTC
    - cron: "0 6 * * 3"
  workflow_dispatch:

permissions:
  contents: write

jobs:
  update-sota:
    runs-on: ubuntu-latest

    steps:
      - name: Checkout repository
        uses: actions/checkout@v4

      - name: Set up Python 3.12
        uses: actions/setup-python@v5
        with:
          python-version: "3.12"

      - name: Install dependencies
        run: pip install requests

      - name: Run SOTA updater
        run: python tools/sota_updater.py

      - name: Commit updated snapshot
        run: |
          git config user.name "github-actions[bot]"
          git config user.email "41898282+github-actions[bot]@users.noreply.github.com"
          git add data/sota-snapshot.json
          # Only commit if there are actual changes
          git diff --cached --quiet && echo "No changes to commit" || \
            git commit -m "chore: update SOTA snapshot $(date -u +%Y-%m-%d)"
          git push
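The "Run SOTA updater" step calls tools/sota_updater.py, which is not included in this commit view. Below is a minimal sketch of what such a script might look like, assuming it queries the public Papers with Code API (https://paperswithcode.com/api/v1/tasks/<slug>/) for the four task slugs seeded in data/sota-snapshot.json; the endpoint, response fields, and the empty top_results are assumptions rather than the repository's actual implementation.

```python
#!/usr/bin/env python3
"""Hypothetical sketch of tools/sota_updater.py.

Refreshes data/sota-snapshot.json from the Papers with Code API; the real
script in the repository may use different endpoints and fields.
"""
import json
from datetime import datetime, timezone
from pathlib import Path

import requests

API = "https://paperswithcode.com/api/v1"
TASKS = {
    "action-recognition-in-videos": "Action Recognition in Videos",
    "skeleton-based-action-recognition": "Skeleton-Based Action Recognition",
    "activity-recognition": "Activity Recognition",
    "human-pose-estimation": "Human Pose Estimation",
}
OUTPUT = Path("data/sota-snapshot.json")


def fetch_task(slug: str) -> dict:
    """Fetch task metadata; fall back to an empty record if the API is unreachable."""
    try:
        resp = requests.get(f"{API}/tasks/{slug}/", timeout=30)
        resp.raise_for_status()
        payload = resp.json()
    except requests.RequestException:
        payload = {}
    return {
        "task_name": TASKS[slug],
        "description": payload.get("description", ""),
        # The real updater presumably also pulls leaderboard entries here.
        "top_results": [],
    }


def main() -> None:
    snapshot = {
        "updated_at": datetime.now(timezone.utc).strftime("%Y-%m-%dT%H:%M:%SZ"),
        "tasks": {slug: fetch_task(slug) for slug in TASKS},
    }
    OUTPUT.parent.mkdir(parents=True, exist_ok=True)
    OUTPUT.write_text(json.dumps(snapshot, indent=2) + "\n", encoding="utf-8")


if __name__ == "__main__":
    main()
```

Run locally with `pip install requests` followed by `python tools/sota_updater.py`, mirroring the two workflow steps above; the commit step then only fires when the snapshot actually changed.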

README.md

Lines changed: 40 additions & 0 deletions
@@ -1,15 +1,23 @@
 # Awesome Human Activity Recognition [![Awesome](https://awesome.re/badge.svg)](https://awesome.re)

+<p align="center">
+  <a href="https://github.com/Leo-Cyberautonomy/awesome-human-activity-recognition">
+    <img src="assets/banner.svg" alt="Awesome Human Activity Recognition" width="600">
+  </a>
+</p>
+
 > A curated, researcher-driven guide to **Human Activity Recognition** — 53 datasets, key frameworks, pretrained models, tutorials, and benchmark tools across vision, wearable, skeleton, and multimodal modalities.

 [![License: CC BY 4.0](https://img.shields.io/badge/License-CC_BY_4.0-lightgrey.svg)](https://creativecommons.org/licenses/by/4.0/)
 [![PRs Welcome](https://img.shields.io/badge/PRs-welcome-brightgreen.svg)](https://github.com/Leo-Cyberautonomy/awesome-human-activity-recognition/pulls)
 [![Last Updated](https://img.shields.io/badge/Updated-March_2026-blue.svg)](#)
+[![Website](https://img.shields.io/badge/Website-GitHub_Pages-blue.svg)](https://leo-cyberautonomy.github.io/awesome-human-activity-recognition/)

 **[中文](i18n/README.zh.md)** | [Deutsch](i18n/README.de.md) | [Español](i18n/README.es.md) | [Français](i18n/README.fr.md) | [日本語](i18n/README.ja.md) | [한국어](i18n/README.ko.md) | [Português](i18n/README.pt.md) | [Русский](i18n/README.ru.md)

 ## Contents

+- [Repository Architecture](#repository-architecture)
 - [Which Dataset Should I Use](#which-dataset-should-i-use)
 - [Datasets](#datasets)
 - [Frameworks and Libraries](#frameworks-and-libraries)
@@ -20,6 +28,38 @@
 - [Tools and Utilities](#tools-and-utilities)
 - [Related Awesome Lists](#related-awesome-lists)

+## Repository Architecture
+
+```mermaid
+graph LR
+    subgraph Datasets["53 Datasets"]
+        V["Vision (14)"]
+        S["Skeleton (7)"]
+        W["Wearable (13)"]
+        M["Multimodal (7)"]
+        E["Emerging (12)"]
+    end
+
+    subgraph Ecosystem
+        F["Frameworks & Libraries"]
+        P["Pretrained Models"]
+        T["Tutorials & Courses"]
+    end
+
+    subgraph Automation
+        LC["Link Check\n(weekly)"]
+        SU["SOTA Update\n(weekly)"]
+        CB["Catalog Build\n(on push)"]
+    end
+
+    Datasets --> F
+    Datasets --> P
+    F --> T
+    SU -->|updates| Datasets
+    LC -->|validates| Datasets
+    CB -->|exports| JSON["catalog.json\ncatalog.csv"]
+```
+
 ## Which Dataset Should I Use

 > Pick your modality and task, then follow the recommendation to the right section.

assets/banner.svg

Lines changed: 140 additions & 0 deletions

data/sota-snapshot.json

Lines changed: 25 additions & 0 deletions
@@ -0,0 +1,25 @@
{
  "updated_at": "2026-03-19T00:00:00Z",
  "tasks": {
    "action-recognition-in-videos": {
      "task_name": "Action Recognition in Videos",
      "description": "",
      "top_results": []
    },
    "skeleton-based-action-recognition": {
      "task_name": "Skeleton-Based Action Recognition",
      "description": "",
      "top_results": []
    },
    "activity-recognition": {
      "task_name": "Activity Recognition",
      "description": "",
      "top_results": []
    },
    "human-pose-estimation": {
      "task_name": "Human Pose Estimation",
      "description": "",
      "top_results": []
    }
  }
}
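The seed snapshot above ships with empty description and top_results fields; the weekly workflow overwrites it after the first scheduled run. As an illustration of how a consumer might use the file, here is a short, hypothetical snippet (not a file in this commit) that loads the snapshot and flags it when it is more than two weeks old:

```python
import json
from datetime import datetime, timedelta, timezone
from pathlib import Path

# Hypothetical consumer of the weekly snapshot; not part of this commit.
snapshot = json.loads(Path("data/sota-snapshot.json").read_text(encoding="utf-8"))

# Normalize the trailing "Z" so fromisoformat() accepts it on older Pythons.
updated = datetime.fromisoformat(snapshot["updated_at"].replace("Z", "+00:00"))
stale = datetime.now(timezone.utc) - updated > timedelta(days=14)

for slug, task in snapshot["tasks"].items():
    status = " (snapshot older than two weeks)" if stale else ""
    print(f"{task['task_name']}: {len(task['top_results'])} tracked results{status}")
```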

docs/index.md

Lines changed: 65 additions & 0 deletions
@@ -0,0 +1,65 @@
# Awesome Human Activity Recognition

> A curated, researcher-driven guide to **Human Activity Recognition** -- 53 datasets, key frameworks, pretrained models, tutorials, and benchmark tools across vision, wearable, skeleton, and multimodal modalities.

[![License: CC BY 4.0](https://img.shields.io/badge/License-CC_BY_4.0-lightgrey.svg)](https://creativecommons.org/licenses/by/4.0/)
[![PRs Welcome](https://img.shields.io/badge/PRs-welcome-brightgreen.svg)](https://github.com/Leo-Cyberautonomy/awesome-human-activity-recognition/pulls)

## Quick Stats

| Modality | Datasets | Highlights |
|----------|----------|------------|
| Vision (RGB/Depth) | 14 | Kinetics-700, UCF-101, ActivityNet, AVA |
| Skeleton & MoCap | 7 | NTU RGB+D 60/120, AMASS, Human3.6M |
| Wearable Sensors | 13 | UCI-HAR, PAMAP2, CAPTURE-24 (3883 hrs) |
| Multimodal & Egocentric | 7 | Ego4D (3.3k hrs), EPIC-Kitchens-100 |
| Emerging & Frontier | 12 | HumanML3D, Motion-X++, Ego-Exo4D |

## Which Dataset Should I Use?

!!! tip "Pick your modality and task, then follow the recommendation."

=== "Video Classification"

    Start with **[Kinetics-700](../datasets/vision/kinetics-700.md)** for pretraining, evaluate on **[UCF-101](../datasets/vision/ucf101.md)** or **[HMDB-51](../datasets/vision/hmdb51.md)** for comparison with prior work. Browse all [Vision datasets](../datasets/vision/kinetics-700.md).

=== "Temporal Action Detection"

    **[ActivityNet](../datasets/vision/activitynet.md)** for proposals, **[AVA](../datasets/vision/ava.md)** for spatio-temporal, **[MultiTHUMOS](../datasets/vision/multithumos.md)** for dense multi-label.

=== "Skeleton / MoCap"

    **[NTU RGB+D 120](../datasets/vision/ntu-rgbd-120.md)** is the de facto standard. For text-motion alignment, use **[BABEL](../datasets/skeleton/babel.md)** or **[HumanML3D](../datasets/emerging/humanml3d.md)**.

=== "Wearable Sensors"

    **[UCI-HAR](../datasets/wearable/uci-har.md)** for baselines, **[PAMAP2](../datasets/wearable/pamap2.md)** for multi-sensor, **[CAPTURE-24](../datasets/wearable/capture24.md)** for real-world scale (151 subjects, 3883 hours).

=== "Egocentric / Multimodal"

    **[Ego4D](../datasets/multimodal/ego4d.md)** for scale (3.3k hours), **[EPIC-Kitchens-100](../datasets/multimodal/epic-kitchens-100.md)** for kitchen actions, **[Ego-Exo4D](../datasets/emerging/ego-exo4d.md)** for cross-view.

=== "Text-to-Motion Generation"

    **[HumanML3D](../datasets/emerging/humanml3d.md)** for single-person, **[InterHuman](../datasets/emerging/interhuman.md)** for two-person, **[Motion-X++](../datasets/emerging/motionx-plus.md)** for whole-body with face and hands.

## Explore

- **[Datasets](../datasets/vision/kinetics-700.md)** -- Browse all 53 dataset cards organized by modality
- **[Taxonomy](taxonomy.md)** -- Multi-dimensional classification of HAR approaches
- **[Surveys](surveys.md)** -- Curated survey papers across all modalities
- **[Benchmarking](benchmarking.md)** -- Compare datasets and methods
- **[Roadmap](roadmap.md)** -- What is coming next
- **[Contributing](../CONTRIBUTING.md)** -- How to add datasets or improve the list

## Citation

```bibtex
@misc{awesome_har_2025,
  title = {Awesome Human Activity Recognition: A Curated List},
  author = {Wenxuan Huang},
  year = {2025},
  url = {https://github.com/Leo-Cyberautonomy/awesome-human-activity-recognition},
  note = {GitHub repository}
}
```
