dlpy.model.NatGradSolver(approximation_type=1, learning_rate=0.001, learning_rate_policy='fixed', gamma=0.1, step_size=10, power=0.75, use_locking=True, clip_grad_max=None, clip_grad_min=None, steps=None, fcmp_learning_rate=None, lr_scheduler=None)

Bases: dlpy.model.Solver

Natural gradient solver object.
Parameters:

- **approximation_type** (*int, optional*): Specifies the approximate natural gradient type.
- **learning_rate** (*double, optional*): Specifies the learning rate for the deep learning algorithm.
- **learning_rate_policy** (*string, optional*): Specifies the learning rate policy. Valid values: FIXED, STEP, POLY, INV, MULTISTEP. Default: FIXED.
- **gamma** (*double, optional*): Specifies the gamma for the learning rate policy.
- **step_size** (*int, optional*): Specifies the step size when the learning rate policy is set to STEP.
- **power** (*double, optional*): Specifies the power for the learning rate policy.
- **use_locking** (*bool, optional*): When set to False, gradient updates are performed asynchronously with multiple threads.
- **clip_grad_max** (*double, optional*): Specifies the maximum gradient value; all gradients greater than this value are set to it.
- **clip_grad_min** (*double, optional*): Specifies the minimum gradient value; all gradients less than this value are set to it.
- **steps** (*list-of-ints, optional*): Specifies a list of epoch counts; when the current epoch matches one of them, the learning rate is multiplied by gamma. Applies only to the MULTISTEP learning rate policy.
- **fcmp_learning_rate** (*string, optional*): Specifies the FCMP learning rate function.
- **lr_scheduler** (*FCMPLR object, optional*): Specifies a learning rate policy defined by an FCMPLR object.

Returns: NatGradSolver
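A minimal construction sketch, assuming a working DLPy install; the `Optimizer` and `Model.fit` wiring follows DLPy's usual pattern, and `model` and `train_tbl` are placeholder objects, not part of this page:

```python
from dlpy.model import NatGradSolver, Optimizer

# Natural gradient solver with the default fixed learning rate policy.
solver = NatGradSolver(approximation_type=1, learning_rate=0.001)

# STEP-policy variant: the learning rate is multiplied by gamma
# every step_size epochs.
step_solver = NatGradSolver(learning_rate=0.01,
                            learning_rate_policy='step',
                            gamma=0.1,
                            step_size=10)

# A solver is attached to an Optimizer, which Model.fit consumes.
# model and train_tbl are assumed to exist already.
optimizer = Optimizer(algorithm=step_solver, mini_batch_size=128,
                      max_epochs=20)
# model.fit(data=train_tbl, optimizer=optimizer)
```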
__init__(approximation_type=1, learning_rate=0.001, learning_rate_policy='fixed', gamma=0.1, step_size=10, power=0.75, use_locking=True, clip_grad_max=None, clip_grad_min=None, steps=None, fcmp_learning_rate=None, lr_scheduler=None)

Initialize self. See help(type(self)) for accurate signature.
Methods
| __init__([approximation_type, …]) | Initialize self. |
| add_parameter(key, value) | Adds a parameter to the parameter list of a solver. |
| clear() | Remove all items from the solver's parameter list. |
| get(k[,d]) | Return the value for k if k is in the parameter list, else d (d defaults to None). |
| items() | Return a view of the solver's (key, value) pairs. |
| keys() | Return a view of the solver's keys. |
| pop(k[,d]) | Remove the specified key and return the corresponding value; if the key is not found, d is returned if given, otherwise a KeyError is raised. |
| popitem() | Remove and return some (key, value) pair as a 2-tuple; raise a KeyError if the parameter list is empty. |
| set_method(method) | Sets the solver method in the parameters list. |
| setdefault(k[,d]) | If k is in the parameter list, return its value; otherwise insert k with the value d and return d (d defaults to None). |
| update([E, ]**F) | If E is present and has a .keys() method: for k in E: D[k] = E[k]. If E is present and lacks a .keys() method: for (k, v) in E: D[k] = v. In either case, this is followed by: for k, v in F.items(): D[k] = v. |
| values() | Return a view of the solver's values. |
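Because the solver is dict-like, the inherited mapping methods above read and write the same parameter list that `add_parameter` does. A short sketch of that behavior; the `'myextraoption'` key is purely illustrative, not a documented CAS parameter:

```python
from dlpy.model import NatGradSolver

solver = NatGradSolver(learning_rate=0.001)

# add_parameter stores an extra key/value pair in the solver's
# parameter list ('myextraoption' is an illustrative key).
solver.add_parameter('myextraoption', 42)

# The inherited dict-style accessors operate on the same mapping.
print(solver.get('myextraoption'))   # -> 42
print(list(solver.keys()))           # every parameter currently set
```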