chanocy/mlbasic

Some exercises on optimization methods for machine learning

  • p1: basic gradient descent
  • p2: vanilla stochastic gradient descent (SGD)
  • p3: minibatch SGD
  • p4 momentum SGD: minibatch SGD with momentum
  • p4 momentum: plain SGD with momentum
  • p5: Nesterov accelerated gradient
  • p6: Adagrad
  • p7: Adadelta
  • p8: Adam
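The update rules for p1–p5 can be sketched in a few lines. Below is a minimal illustration on a toy quadratic loss f(x) = x², with illustrative hyperparameters not taken from the repository's scripts; p2 and p3 use the same update as p1 but estimate the gradient from one random sample or a small batch rather than the full dataset.

```python
# Toy quadratic loss f(x) = x**2, so grad f(x) = 2x; the minimum is at 0.
def grad(x):
    return 2.0 * x

lr = 0.1       # learning rate (illustrative)
gamma = 0.9    # momentum coefficient (illustrative)

x_gd = x_mom = x_nag = 5.0   # same starting point for all three methods
v_mom = v_nag = 0.0          # momentum "velocity" terms

for _ in range(100):
    # p1: plain gradient descent
    x_gd -= lr * grad(x_gd)

    # p4: momentum -- accumulate an exponentially decaying velocity
    v_mom = gamma * v_mom + lr * grad(x_mom)
    x_mom -= v_mom

    # p5: Nesterov -- take the gradient at the "look-ahead" point
    v_nag = gamma * v_nag + lr * grad(x_nag - gamma * v_nag)
    x_nag -= v_nag

print(x_gd, x_mom, x_nag)  # all three approach the minimum at 0
```

For p2/p3 one would replace `grad` with a stochastic estimate computed from a randomly sampled (mini)batch of the training data.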

References

p1 reference: https://zhuanlan.zhihu.com/p/27297638

p2~pn reference: http://ruder.io/optimizing-gradient-descent/index.html

Per-exercise references

p5 reference: http://cs231n.github.io/neural-networks-3/

p6 reference: https://zhuanlan.zhihu.com/p/22252270

p7 reference: https://arxiv.org/abs/1212.5701 (the original Adadelta paper)

p8 reference: http://www.ijiandao.com/2b/baijia/63540.html
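The adaptive methods covered by p6–p8 all rescale the step per parameter using an accumulated squared-gradient statistic. As a minimal sketch (not the repository's code), here is the Adam update (p8) on the same toy quadratic loss f(x) = x², using the common default hyperparameters from the Adam paper:

```python
import math

def grad(x):
    return 2.0 * x  # gradient of the toy loss f(x) = x**2

x = 5.0
m, v = 0.0, 0.0                                # first/second moment estimates
lr, beta1, beta2, eps = 0.1, 0.9, 0.999, 1e-8  # common Adam defaults

for t in range(1, 501):                        # t starts at 1 for bias correction
    g = grad(x)
    m = beta1 * m + (1 - beta1) * g        # decaying mean of gradients
    v = beta2 * v + (1 - beta2) * g * g    # decaying mean of squared gradients
    m_hat = m / (1 - beta1 ** t)           # bias-corrected moment estimates
    v_hat = v / (1 - beta2 ** t)
    x -= lr * m_hat / (math.sqrt(v_hat) + eps)

print(x)  # converges toward the minimum at 0
```

Adagrad (p6) instead keeps the raw running sum of squared gradients, and Adadelta (p7) replaces the fixed learning rate with a decaying estimate of past update magnitudes.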

About

A detailed walkthrough of the optimization methods commonly used in deep learning
