Per-parameter Learning Rate (PPLR) Implementation, Tensor Decomposition Models, and Tomography Updates #213
cedriclim1 wants to merge 30 commits into electronmicroscopy/quantem:dev from …
Conversation
Commits (messages truncated):

- …eeds to be overloaded is set_optimizer for PPLR cases
- …to do the matching in set_optimizer instead of parsing in optimizer_params maybe?
- …to check: look at object_models.py and see how the optimizer matching should be handled. It seems like set_optimizers doesn't really do what it's supposed to do.
- …probably have to do TV loss computation within the model?
- …es. Also overloaded reconnecting optimizers
- …well. Only thing to ask Corneel about is multiscale res, since this adds a significant amount of compute. Should I be doing variable num_samples_per_ray?
- …t DDP, clean up KPlanes, fix up object_models.py since it's insanely cluttered now
- …ositionModel ABC; make sure to have a property for which kind of tensor decomposition method is being used. SO3Params are moved to a different file; thinking of making a kplanes_utils.py. Starting reorganization of object_models.py to have ObjectINR and ObjectTensorDecomp
- …p-level Tomography
- …of parameters now that helps with type-setting. The main reason for having model_base.py as it is right now is if we ever wanted to go do TensoRF or something, just to validate
- …pe hinting and make PPLR more extensible
- …rams. reconnect_optimizer_to_parameters and optimizer_params changes
@cedriclim1 the default VS Code setting has been removed, so please pull from …
overall looks good! I made the necessary few changes to ptycho so it works with this, and overall I'm happy with it. Hopefully we're one step closer to not having to mess with OptimizerMixin ever again...

As to /core/ml organization, yes cnn.py, cnn_dense.py, inr.py, etc. should all be in /ml/models. I don't think it should be that bad to move them, actually? An automatic refactor should handle everything in quantem fine, and they'll be accessible at the same level of the namespace (we still want to be able to `from quantem.core.ml import CNN2d`), so it shouldn't affect the tutorials either, really. If you move them (and whatever else you think), we can just re-run all the tutorials and tests to make sure nothing breaks.
| "yticks" | ||
| ], | ||
| "basedpyright.analysis.typeCheckingMode": "standard", | ||
| "python.REPL.enableREPLSmartSend": false, |
```python
import torch
from numpy.typing import NDArray

from quantem.tomography.tomography_context import ReconstructionContext
```
If we're going to have the ReconstructionContext used in our Constraints ABC, then it shouldn't live in tomography_context.
It's definitely not going to be fully backwards compatible, but hopefully things should run without errors, and we'll only have problems with linting?
```python
class PPLR(ABC):
    """
    Abstract base class for models that require multi-scale parameter optimization.
```
is "multi-scale" the right way of describing this?
```python
class TensorDecompositionModel(nn.Module, ABC):
    """
    Base class for factored tensor-decomposition models.

    Subclasses must set ``td_type`` as a normal attribute in ``__init__``.
    """

    td_type: str


class PlanarDecompositionModel(TensorDecompositionModel, PPLR):
    """
    Planar factored-grid models: K-Planes, K-Planes-TILTED, HexPlane, tri-planes.

    Subclasses must set ``grids``, ``tilted``, and ``resolution`` as normal
    attributes in ``__init__``.
    """

    grids: nn.ParameterList
    tilted: bool
    resolution: list[int]
```
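To make the subclass contract concrete, here is a hypothetical minimal subclass (the name `ExampleTriPlane`, the `n_features` argument, the initialization scale, and the `get_params` signature are illustrative assumptions, not this PR's actual `KPlanes`):

```python
import torch
from torch import nn


class ExampleTriPlane(PlanarDecompositionModel):
    """Hypothetical subclass showing the attributes the base classes expect."""

    def __init__(self, resolution: list[int], n_features: int = 32):
        super().__init__()
        self.td_type = "tri-plane"  # required by TensorDecompositionModel
        self.tilted = False         # axis-aligned planes, no learned rotations
        self.resolution = resolution
        # One 2D feature grid per axis-aligned plane: (x, y), (x, z), (y, z).
        self.grids = nn.ParameterList(
            nn.Parameter(0.1 * torch.randn(1, n_features, resolution[i], resolution[j]))
            for i, j in ((0, 1), (0, 2), (1, 2))
        )

    def get_params(self, lr: float) -> list[dict]:
        # PPLR hook (assumed contract; see model_base.py): one group per lr.
        return [{"params": self.grids.parameters(), "lr": lr}]
```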
what's the motivation for having the TensorDecompositionModel base class, shouldn't there just be a td_type in PlanarDecompositionModel? Unless there's going to be other, non-planar decomp models...
```python
SO(3) rotation bank using R9+SVD parameterization.
Each rotation is stored as an unconstrained 3x3 matrix M,
projected to SO(3) via SVD+(M) = U diag(1, 1, det(U V^T)) V^T.
```
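For context, this is presumably the symmetric orthogonalization of Levinson et al. 2020 ("An Analysis of SVD for Deep Rotation Estimation"). A standalone sketch of the projection the docstring describes (the function name is mine, not the repo's):

```python
import torch


def svd_plus(m: torch.Tensor) -> torch.Tensor:
    """Project an unconstrained (..., 3, 3) tensor onto SO(3).

    SVD+(M) = U diag(1, 1, det(U V^T)) V^T: the closest rotation to M in
    Frobenius norm, with the last singular direction flipped when needed so
    the result is a proper rotation (det = +1) rather than a reflection.
    """
    u, _, vt = torch.linalg.svd(m)
    s = torch.ones(*m.shape[:-2], 3, dtype=m.dtype, device=m.device)
    s[..., -1] = torch.linalg.det(u @ vt)
    return u @ (s.unsqueeze(-1) * vt)  # equals u @ diag(s) @ vt


# e.g. a bank of 16 rotations from unconstrained parameters:
# rotations = svd_plus(torch.randn(16, 3, 3, requires_grad=True))
```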
I would add a link to the paper in the docstring
okay the fact that your formatter is changing this file is a little suspicious, are you using a weird ruff setup or anything?
cite the relevant papers please for the models
```python
elif not isinstance(self._model, torch.nn.parallel.DistributedDataParallel):
    self.distribute_model(self._model)
    self.reconnect_optimizer_to_parameters()
```

```python
def _get_plane_tv_loss(self) -> torch.Tensor:
```
some explanation for what this is would be helpful. I guess also why you use this and the normal INR-style volume TV.
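For context (not this PR's code): K-Planes regularizes each 2D feature plane with a total-variation penalty, which is much cheaper than TV over the composed 3D volume used on the INR path. A sketch of what such a loss typically computes:

```python
import torch
from torch import nn


def plane_tv_loss(grids: nn.ParameterList) -> torch.Tensor:
    """Mean squared difference between neighboring cells of each (1, C, H, W) plane."""
    terms = []
    for g in grids:
        terms.append((g[..., 1:, :] - g[..., :-1, :]).pow(2).mean())  # along H
        terms.append((g[..., :, 1:] - g[..., :, :-1]).pow(2).mean())  # along W
    return torch.stack(terms).mean()
```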
```python
)


def _unwrap(model: nn.Module | nn.parallel.DistributedDataParallel) -> PlanarDecompositionModel:
```
I'm seeing linter errors for this function and in a couple other places. Not a huge deal but wanted to let you know
just did a pretty quick skim over this and it looks good. Main thing is that, since this will be new to most folks, include some resources and more explanation for what's going on, i.e. reference the relevant papers and add more code comments or docstrings.
What does this PR do?
First pass implementation of `KPlanes` and TILTED (https://brentyi.github.io/tilted/, https://sarafridov.github.io/K-Planes/). These tensor decomposition models require multi-parameter optimization, which is not handled in `OptimizerMixin`. The solution @arthurmccray and I came up with is a per-parameter learning rate (`PPLR`) abstract class, which models with multiple parameter groups inherit from. This is implemented in `core/ml/models/model_base.py`. The most important part of this class is the `get_params` method, which is used for parsing the trainable parameters (see `ObjectTensorDecomp`'s `get_optimization_parameters`).
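To make the contract concrete, an illustrative sketch of such an ABC (the group format follows torch's per-parameter options; the exact signature and the attributes in the docstring example are hypothetical, not the actual `model_base.py` definition):

```python
from abc import ABC, abstractmethod


class PPLR(ABC):
    """Models exposing several parameter groups, each with its own learning rate."""

    @abstractmethod
    def get_params(self, lr: float) -> list[dict]:
        """Return torch-style per-parameter groups, e.g.

        [{"params": self.grids.parameters(), "lr": lr},
         {"params": self.rotations.parameters(), "lr": 0.1 * lr}]
        """
```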
The KPlanes and TILTED implementations are in `core/ml/models`, in `kplanes.py`, with `KPlanes`, `KPlanesTILTED`, and `CPTilted`. Note `CPTilted` is used for the two-phase warmup that the paper recommends, pretraining the SO(3) rotations of the planes in a lower representation space. `so3params.py` also has both quaternion and R9+SVD implementations, which can be swapped via a parameter in `KPlanesTILTED`.
In `tomography/object_models.py` there is now a new object that handles the new tensor decomposition methods, `ObjectTensorDecomp`, which inherits from `ObjectINR`; the following overloads are needed:

- `optimizer_params` property: the setter is changed, since we're now building a dictionary of parameter groups for our optimizers, per https://docs.pytorch.org/docs/stable/optim.html#per-parameter-options (see the sketch below).
- `reconnect_optimizer_to_parameters`: this one is the most annoying to implement, since our parameters are now a dictionary.
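For reference, the linked per-parameter options format looks like this (the toy `grids`/`rotations` modules here are purely illustrative):

```python
import torch
from torch import nn

# Toy module standing in for a tensor-decomposition model.
model = nn.ModuleDict({
    "grids": nn.Linear(4, 4),
    "rotations": nn.Linear(3, 3),
})

# Each dict is a parameter group with its own hyperparameters; keys not set
# in a group fall back to the top-level defaults (here lr=1e-3).
optimizer = torch.optim.Adam(
    [
        {"params": model["grids"].parameters(), "lr": 1e-2},  # feature planes
        {"params": model["rotations"].parameters()},          # default lr
    ],
    lr=1e-3,
)
```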
There are a few optional overrides that I've also included preemptively, in case I want to add additional soft/hard constraints to these methods; as you can see, both `apply_soft_constraints` and `apply_hard_constraints` are still there.
Key change to `OptimizerMixin`: I have changed `set_optimizer` to handle a dictionary of parameters. I think this is better than my original idea of just implementing this in `object_models.py`, since you would otherwise have to overload that function again. I think this is fine and still works for single-optimizer stuff.
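A sketch of the shape of that change (Adam as a stand-in optimizer, method and attribute names assumed, not the actual `OptimizerMixin` code):

```python
import torch


def set_optimizer(self, params, lr: float = 1e-3) -> None:
    """Accept either a flat iterable of tensors (existing single-lr path)
    or a list of per-parameter group dicts produced by PPLR.get_params."""
    groups = list(params)
    if not (groups and isinstance(groups[0], dict)):
        # Legacy path: wrap everything into one group at the default lr.
        groups = [{"params": groups, "lr": lr}]
    # torch optimizers accept a list of group dicts directly.
    self._optimizer = torch.optim.Adam(groups, lr=lr)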
There are also small changes to different aspects of the Tomography module, mostly contained in `object_models.py`. Listing out the changes and rationale in the tomography files:

- `tomography.py`: I've disabled `torch.bfloat16` autocasting, since `F.grid_sample` raises autograd-not-implemented errors under it. I've also put the TILTED tensor-decomp pretraining check here, which watches the convergence of the SO(3) rotations. I don't like this being here, but I'm not quite sure where else to put it.
- `tomography_base.py`: type-hinting change for setting up distributed.
- `tomography_opt.py`: parsing is already handled in `object_models.py`.
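On the `grid_sample` point, a narrower alternative to disabling `bfloat16` autocast globally would be to exempt just that call (illustrative only; assumes a CUDA autocast region and hypothetical tensor names):

```python
import torch
import torch.nn.functional as F


def sample_plane(plane: torch.Tensor, coords: torch.Tensor) -> torch.Tensor:
    # plane: (N, C, H, W) feature grid; coords: (N, H_out, W_out, 2) in [-1, 1].
    with torch.autocast(device_type="cuda", enabled=False):
        return F.grid_sample(plane.float(), coords.float(), align_corners=True)
```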
What the reviewer should do

Attached is a notebook that tests tensor decomposition methods in tomography: 0415_PPLR_testing.ipynb. The dataset here is from `quantem-tutorials`,
so just point this notebook at the same dataset's directory. The runtime can be reduced by increasing the batch size, but the learning rates have to be scaled accordingly (see the sketch below). Setting the batch size to 4096 seems to be fine, so feel free to change it and scale the learning rates by 2.

- Review `optimizer_mixin.py` and `object_models.py`.
- Give thoughts on how `core/ml` is looking. Where do we draw the line of what lives in models (I think everything model-related should live in there tbh: INR, CNN, etc...)?
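A hypothetical illustration of that linear scaling, applied across every per-parameter group (the base batch size of 2048 is an assumption inferred from the factor of 2 above):

```python
import torch

param = torch.nn.Parameter(torch.zeros(3))
optimizer = torch.optim.Adam([{"params": [param], "lr": 1e-3}])

# Doubling the batch size (e.g. 2048 -> 4096) doubles each group's lr.
base_batch_size, batch_size = 2048, 4096
scale = batch_size / base_batch_size
for group in optimizer.param_groups:
    group["lr"] *= scale
```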