Improve unit tests #985

Merged
mfeurer merged 12 commits into openml/openml-python:develop from openml/openml-python:randomize_test_order on Nov 3, 2020
Conversation

@mfeurer (Collaborator) commented Oct 30, 2020

  • Increase the wait time when uploading a run to 600s. On CI it is better to wait a very long time than to have the test fail.
  • Reduce the number of warnings to produce less output online.
  • Remove the deprecated format argument to OpenMLDataset.
  • Add load balancing for parallel unit tests.
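
The first bullet boils down to a poll-until-visible loop with a long deadline. Below is a minimal sketch of that pattern; `wait_until_visible`, its parameters, and the fake server check are hypothetical illustrations, not the actual openml-python API:

```python
import time

def wait_until_visible(check, timeout=600.0, interval=10.0):
    """Poll check() until it returns a non-None result or timeout expires.

    On slow CI servers it is better to keep polling for a long time
    (up to 600s here, matching the PR) than to fail the test early.
    """
    deadline = time.monotonic() + timeout
    while True:
        result = check()
        if result is not None:
            return result
        if time.monotonic() >= deadline:
            raise TimeoutError(f"upload not processed within {timeout}s")
        time.sleep(interval)

# Demo: a fake server that only "sees" the run on the third poll.
calls = {"n": 0}
def fake_server_check():
    calls["n"] += 1
    return "run-123" if calls["n"] >= 3 else None

print(wait_until_visible(fake_server_check, timeout=5.0, interval=0.01))  # prints run-123
```

The design choice is that a timeout raises loudly instead of letting the test fail later with a confusing "run not found" error.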

@mfeurer mfeurer changed the title Randomize test order Improve unit tests Oct 30, 2020
@mfeurer mfeurer force-pushed the randomize_test_order branch from ccec20d to 60bd37a on November 2, 2020 09:16
@codecov-io commented Nov 2, 2020

Codecov Report

Merging #985 into develop will increase coverage by 0.07%.
The diff coverage is 83.33%.


@@             Coverage Diff             @@
##           develop     #985      +/-   ##
===========================================
+ Coverage    87.85%   87.93%   +0.07%     
===========================================
  Files           36       36              
  Lines         4555     4551       -4     
===========================================
  Hits          4002     4002              
+ Misses         553      549       -4     
Impacted Files                            Coverage Δ
openml/flows/flow.py                      92.71% <0.00%> (ø)
openml/datasets/dataset.py                83.33% <100.00%> (+0.55%) ⬆️
openml/extensions/sklearn/extension.py    91.08% <100.00%> (ø)
openml/study/functions.py                 88.23% <100.00%> (ø)
openml/runs/functions.py                  83.58% <0.00%> (+0.25%) ⬆️

Continue to review full report at Codecov.

Legend: Δ = absolute <relative> (impact), ø = not affected, ? = missing data
Powered by Codecov. Last update 63ec0ae...39e2f07. Read the comment docs.

@PGijsbers (Collaborator) left a comment


Overall looks good, just would like some minor clarifications.

n_missing_vals = self.TEST_SERVER_TASK_SIMPLE[1]
n_test_obs = self.TEST_SERVER_TASK_SIMPLE[2]
self._run_and_upload_classification(lr, task_id, n_missing_vals, n_test_obs, "62501")
self.assertLessEqual(warn_mock.call_count, 3)
@PGijsbers (Collaborator):

Please explain why we expect three warnings.

@mfeurer (Collaborator, Author):

Honestly, I don't understand the 3 myself and would have expected 1. I will simply increase the max_iter so this test passes.

@mfeurer (Collaborator, Author):

Ah, now I understand. We call run_model_on_task once, and then call a helper function, _rerun_model_and_compare_predictions, twice. Anyway, I think increasing the number of iterations is a bit cleaner.
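
The surprise in this thread is that a patched warning handler accumulates call_count across every model fit, not per test call site: one fit via run_model_on_task plus two fits via the rerun helper gives three recorded warnings. A minimal sketch, using a hypothetical fit_model stand-in rather than the actual openml-python test code:

```python
import warnings
from unittest import mock

def fit_model(max_iter):
    # Hypothetical stand-in for a scikit-learn fit that warns on
    # non-convergence when the iteration budget is too small.
    if max_iter < 100:
        warnings.warn("did not converge", UserWarning)

with mock.patch("warnings.warn") as warn_mock:
    fit_model(max_iter=10)   # run_model_on_task
    fit_model(max_iter=10)   # _rerun_model_and_compare_predictions, 1st call
    fit_model(max_iter=10)   # _rerun_model_and_compare_predictions, 2nd call

# One warning per fit, three fits -> three recorded calls.
print(warn_mock.call_count)  # prints 3

# Raising max_iter removes the warning entirely, which is why the PR
# bumps the iteration count instead of asserting on a magic number.
with mock.patch("warnings.warn") as warn_mock:
    fit_model(max_iter=1000)
print(warn_mock.call_count)  # prints 0
```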

tests/test_runs/test_run_functions.py (resolved)
tests/test_runs/test_run_functions.py (resolved)
@PGijsbers (Collaborator) left a comment

Looks good overall, just expecting some minor explanations.

tests/test_datasets/test_dataset_functions.py (resolved)
@mfeurer mfeurer requested a review from PGijsbers November 2, 2020 19:03
@PGijsbers (Collaborator) left a comment

LGTM 👍

@mfeurer mfeurer merged commit a629562 into develop Nov 3, 2020
@mfeurer mfeurer deleted the randomize_test_order branch November 3, 2020 07:35
github-actions bot pushed a commit that referenced this pull request Nov 3, 2020

