Replies: 2 comments 1 reply
-
Hi @FFiot,
-
You are welcome to submit a PR to add this method into DeepXDE. |
-
On the benchmark dataset Burgers.npz from DeepXDE, our new architecture matches the accuracy of a conventional baseline network with 33,665 trainable parameters while using only 737 trainable parameters (a 97.8% reduction). This substantially lowers the cost of training and inference without sacrificing accuracy, making the model well suited to resource-constrained scenarios.
Key Highlights:
- Ultra-Lightweight Design: 737 parameters vs. 33,665 in the baseline model
- Dynamic Topology: adaptive, focus-driven computation replaces static linear layers
- Hardware-Friendly: 5-8× higher computational density for edge deployment
Project Repository: FFnormal on Gitee
Documentation: README (English)
Feedback and technical discussions are welcome! 🚀