Recently, we have found that the native form of fractional-order gradient descent (FGD) is more prone to gradient explosion than its integer-order counterpart. Although the Adam optimizer was used in our experiments, FGD converged more slowly than the integer-order method because gradient explosion occurred in some iterations. We therefore need to modify FGD for artificial neural networks (ANNs), or improve it by building on existing integer-order methods; we already have some solutions, which will be presented in our subsequent research. This paper is a preliminary exploration of how FGD can be applied efficiently within ANNs using autograd technology. It is not yet SOTA: because FGD is susceptible to gradient explosion, in terms of convergence speed it can only guarantee the same linear-order time complexity as optimizers of the same type.
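As a concrete illustration, the sketch below shows one way to compute a fractional-order gradient step on top of PyTorch autograd. It is a minimal sketch, not this repository's IFOGD method: the Caputo-style approximation of the fractional gradient, the order `alpha`, the `eps` regularizer, and the gradient-clipping safeguard against explosion are all assumptions chosen for illustration.

```python
import math
import torch


def fgd_step(params, prev_params, loss, alpha=0.9, lr=1e-3, clip=1.0, eps=1e-8):
    """One illustrative FGD-style update (assumed, not the repository's rule):
    g_frac = g * |theta - theta_prev|**(1 - alpha) / Gamma(2 - alpha)."""
    # Integer-order gradients obtained from autograd.
    grads = torch.autograd.grad(loss, params)
    gamma = math.gamma(2.0 - alpha)  # Caputo normalisation factor Gamma(2 - alpha)
    with torch.no_grad():
        for p, p_prev, g in zip(params, prev_params, grads):
            # One common Caputo-type approximation of the fractional gradient.
            frac_g = g * (p - p_prev).abs().add_(eps).pow_(1.0 - alpha) / gamma
            # Crude safeguard against the gradient explosion discussed above.
            frac_g.clamp_(-clip, clip)
            p_prev.copy_(p)      # remember current weights as the next lower terminal
            p.sub_(lr * frac_g)  # fractional-order descent step


# Usage sketch on a small MLP (model, data, and sizes are illustrative).
model = torch.nn.Sequential(torch.nn.Linear(4, 16), torch.nn.ReLU(), torch.nn.Linear(16, 1))
params = list(model.parameters())
prev_params = [p.detach().clone() for p in params]
x, y = torch.randn(32, 4), torch.randn(32, 1)
for _ in range(100):
    loss = torch.nn.functional.mse_loss(model(x), y)
    fgd_step(params, prev_params, loss)
```

The clipping step stands in for the more principled fixes mentioned above; without some safeguard, the |theta - theta_prev| factor can blow up the effective step size in early iterations.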