zhouxiaojunlove/IFOGD-MLP

Recently, we have found that the native fractional-order gradient descent (FGD) method is more prone to gradient explosion than its integer-order counterpart. Although the Adam optimizer was used in the experiments, FGD converged more slowly than the integer-order method because gradient explosion occurred in some iterations. FGD therefore needs to be modified for use in artificial neural networks (ANNs), or improved by building on existing integer-order methods; we already have some solutions, which will be presented in subsequent research. This paper is primarily a preliminary exploration of applying FGD efficiently within ANNs based on Autograd technology. It is not yet state of the art: in terms of convergence speed, its susceptibility to gradient explosion means it can only guarantee time complexity of the same linear order as comparable optimizers.
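To make the Autograd-based workflow concrete, the sketch below shows one way a fractional-order update could be applied to the autograd-computed gradients of a small PyTorch MLP. It uses a Caputo-style update rule that is common in the FGD literature, not necessarily the exact rule implemented in this repository, and adds simple gradient clipping as one crude guard against the explosion described above. The function name `fgd_step` and all hyperparameter values are illustrative assumptions.

```python
# Minimal sketch, assuming a Caputo-style fractional update (not necessarily
# the rule used in this repository) and illustrative hyperparameters.
import math
import torch
import torch.nn as nn
import torch.nn.functional as F

def fgd_step(params, prev_params, lr=1e-3, alpha=0.9, clip=1.0, eps=1e-8):
    """One fractional-order gradient step of order alpha in (0, 1)."""
    gamma = math.gamma(2.0 - alpha)
    with torch.no_grad():
        for p, p_prev in zip(params, prev_params):
            if p.grad is None:
                continue
            g = p.grad.clamp(-clip, clip)   # crude guard against gradient explosion
            # |theta_k - theta_{k-1}|^{1 - alpha} / Gamma(2 - alpha); eps keeps the
            # factor nonzero on the very first step, when p == p_prev
            frac = (p - p_prev).abs().add(eps).pow(1.0 - alpha) / gamma
            p_prev.copy_(p)                 # remember theta_k for the next iteration
            p.sub_(lr * g * frac)           # theta_{k+1} = theta_k - lr * g * frac

# Usage with a small MLP whose gradients come from autograd:
model = nn.Sequential(nn.Linear(10, 32), nn.ReLU(), nn.Linear(32, 1))
prev = [p.detach().clone() for p in model.parameters()]
x, y = torch.randn(64, 10), torch.randn(64, 1)
loss = F.mse_loss(model(x), y)
model.zero_grad()
loss.backward()
fgd_step(list(model.parameters()), prev, lr=1e-2, alpha=0.9)
```

The clipping step is only one possible mitigation; the improved schemes mentioned above will be described in subsequent research.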
