Conversation

@jscanvic (Collaborator) commented Oct 4, 2025

A proof of concept towards #496

Usage

Denoising

python -m deepinv train --config ./config_denoising.yaml

Super-resolution

python -m deepinv train --config ./config_sr.yaml
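The config files themselves are not shown in this description; a minimal sketch of what config_sr.yaml might look like, based only on the top-level `dataset:` key visible in the diff (all other keys and values are assumptions for illustration):

```yaml
# Hypothetical structure for config_sr.yaml -- only the `dataset` key is
# confirmed by the diff; everything else here is illustrative.
dataset:
  name: div2k
  root: ./data
trainer:
  epochs: 10
```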

Example

$ python -m deepinv train --config ./config_sr.yaml
Selected GPU 0 with 7007.5 MiB free memory
The model has 8557635 trainable parameters
Train epoch 1/10: 100%|███████████████████████████████████████████████████████████████████| 25/25 [00:04<00:00,  5.20it/s, TotalLoss=0.703, PSNR=7.03]
Train epoch 2/10: 100%|██████████████████████████████████████████████████████████████████| 25/25 [00:04<00:00,  5.27it/s, TotalLoss=0.0568, PSNR=13.1]
Train epoch 3/10: 100%|██████████████████████████████████████████████████████████████████| 25/25 [00:04<00:00,  5.31it/s, TotalLoss=0.0126, PSNR=19.4]
Train epoch 4/10: 100%|██████████████████████████████████████████████████████████████████| 25/25 [00:04<00:00,  5.45it/s, TotalLoss=0.0104, PSNR=20.3]
Train epoch 5/10: 100%|█████████████████████████████████████████████████████████████████| 25/25 [00:04<00:00,  5.39it/s, TotalLoss=0.00843, PSNR=21.2]
Train epoch 6/10: 100%|██████████████████████████████████████████████████████████████████| 25/25 [00:04<00:00,  5.31it/s, TotalLoss=0.0068, PSNR=22.2]
Train epoch 7/10: 100%|█████████████████████████████████████████████████████████████████| 25/25 [00:04<00:00,  5.36it/s, TotalLoss=0.00605, PSNR=22.8]
Train epoch 8/10: 100%|██████████████████████████████████████████████████████████████████| 25/25 [00:04<00:00,  5.34it/s, TotalLoss=0.0058, PSNR=22.9]
Train epoch 9/10: 100%|█████████████████████████████████████████████████████████████████| 25/25 [00:04<00:00,  5.32it/s, TotalLoss=0.00499, PSNR=23.6]
Train epoch 10/10: 100%|████████████████████████████████████████████████████████████████| 25/25 [00:04<00:00,  5.32it/s, TotalLoss=0.00435, PSNR=24.3]

@Andrewwango (Collaborator)

Super cool, thanks @jscanvic!

@Andrewwango (Collaborator) left a comment

Some initial comments, this is super cool, thanks Jérémy!

@@ -0,0 +1,23 @@
dataset:
Do you think we should provide example/template configs somewhere in the repo? Or perhaps even in the examples: each example involving training could then be run either in notebook mode or with a provided config?

import argparse


class CommandLineTrainer:
For future extensibility, could we generalise this to something like a CLIParser that handles all the config ingestion, and then have CommandLineTrainer as a subclass containing only the core Trainer orchestration?
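The split suggested above could look something like the following sketch. The class and method names are illustrative, not the PR's actual code, and the Trainer wiring is elided:

```python
import argparse


class CLIParser:
    """Shared argument parsing and config ingestion for all CLI commands."""

    def build_parser(self) -> argparse.ArgumentParser:
        parser = argparse.ArgumentParser(prog="deepinv")
        parser.add_argument("command", choices=["train"])
        parser.add_argument("--config", required=True, help="path to a YAML config")
        return parser

    def parse(self, argv=None) -> argparse.Namespace:
        return self.build_parser().parse_args(argv)

    def load_config(self, path):
        # Lazy import so the parser itself carries no hard PyYAML dependency.
        import yaml

        with open(path) as f:
            return yaml.safe_load(f)


class CommandLineTrainer(CLIParser):
    """Only the Trainer orchestration lives here; parsing stays in the base class."""

    def run(self, argv=None):
        args = self.parse(argv)
        config = self.load_config(args.config)
        # ... build the dataset, model, and dinv.training.Trainer from `config` ...
        return config
```

A future CommandLineOptim (or similar) would then subclass CLIParser the same way and reuse the config ingestion unchanged.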

num_workers=2,
)
trainer = dinv.training.Trainer(
epochs=10,
What do you think of a one-to-one mapping between config params and trainer options, so that this call can be reduced to just **parsed_kwargs?
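The idea above can be sketched generically: if the config's trainer section mirrors the Trainer's keyword arguments one-to-one, the orchestrator no longer enumerates options by hand. The key names and helper below are illustrative, not part of the PR:

```python
# Hypothetical trainer section of a parsed config, mirroring the trainer's
# keyword arguments one-to-one (key names are illustrative).
config = {
    "trainer": {
        "epochs": 10,
        "learning_rate": 1e-3,
    }
}


def make_trainer(trainer_cls, trainer_config, **fixed):
    """Forward the config section directly as keyword arguments.

    `fixed` holds arguments that never come from the config (model, physics, ...).
    """
    return trainer_cls(**fixed, **trainer_config)


# With dinv this would read roughly:
#   trainer = make_trainer(dinv.training.Trainer, config["trainer"], model=model, physics=physics)
```

The trade-off is that config keys become coupled to the Trainer signature, so renaming a Trainer argument silently becomes a breaking config change.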

import argparse


class CommandLineTrainer:
Can we pytest this?
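One way to pytest the CLI without a GPU or real data is to exercise only the argument parsing. The `parse_cli` helper below is a stand-in for the parser built in deepinv/__main__.py, not its actual code:

```python
import argparse

import pytest


def parse_cli(argv):
    # Stand-in for the parser built in deepinv/__main__.py (illustrative).
    parser = argparse.ArgumentParser(prog="deepinv")
    parser.add_argument("command", choices=["train"])
    parser.add_argument("--config", required=True)
    return parser.parse_args(argv)


def test_train_command_parses():
    args = parse_cli(["train", "--config", "config_sr.yaml"])
    assert args.command == "train"
    assert args.config == "config_sr.yaml"


def test_missing_config_is_rejected():
    # argparse reports missing required arguments via SystemExit.
    with pytest.raises(SystemExit):
        parse_cli(["train"])
```

A full end-to-end test of `python -m deepinv train` would additionally need a tiny config pointing at a toy dataset, which could live under the test fixtures.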

args: argparse.Namespace = parser.parse_args()

if args.command == "train":
trainer = CommandLineTrainer()
Maybe we could have a cli submodule containing all the orchestrators (e.g. CommandLineTrainer, CommandLineOptim, etc.) to keep this __main__ clean. That would also keep this argparser clean, since the individual files in cli could contain the task-specific args.
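The layout suggested above might dispatch like the following sketch, where each orchestrator registers its own arguments on a subparser. The module layout and names are hypothetical, not the PR's code:

```python
# deepinv/__main__.py -- kept minimal: only top-level dispatch lives here.
# Each orchestrator (CommandLineTrainer, CommandLineOptim, ...) would live in
# a `deepinv.cli` submodule and register its own task-specific arguments.
import argparse


def build_parser(orchestrators):
    parser = argparse.ArgumentParser(prog="deepinv")
    subparsers = parser.add_subparsers(dest="command", required=True)
    for name, orch in orchestrators.items():
        sub = subparsers.add_parser(name)
        orch.add_arguments(sub)  # task-specific args stay with the task
    return parser


class TrainOrchestrator:
    """Stand-in for a deepinv.cli.CommandLineTrainer (illustrative)."""

    @staticmethod
    def add_arguments(parser):
        parser.add_argument("--config", required=True)

    @staticmethod
    def run(args):
        return f"training with {args.config}"


def main(argv=None):
    orchestrators = {"train": TrainOrchestrator}
    args = build_parser(orchestrators).parse_args(argv)
    return orchestrators[args.command].run(args)
```

Adding a new command then means adding one file in cli and one entry to the `orchestrators` mapping, with no changes to the shared argparser.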

@codecov bot commented Oct 4, 2025

Codecov Report

❌ Patch coverage is 0% with 141 lines in your changes missing coverage. Please review.
✅ Project coverage is 83.21%. Comparing base (8967143) to head (7071191).
✅ All tests successful. No failed tests found.

Files with missing lines Patch % Lines
deepinv/__main__.py 0.00% 141 Missing ⚠️
Additional details and impacted files
@@            Coverage Diff             @@
##             main     #821      +/-   ##
==========================================
- Coverage   83.79%   83.21%   -0.58%     
==========================================
  Files         204      205       +1     
  Lines       20503    20643     +140     
  Branches     2807     2840      +33     
==========================================
- Hits        17180    17179       -1     
- Misses       2410     2550     +140     
- Partials      913      914       +1     

