hyperparameter-optimization
Here are 236 public repositories matching this topic...
Hi,
I'm new to TPOT. I understand that the score function can take strings, but I got the following error when using TPOTClassifier:
ValueError                                Traceback (most recent call last)
in
----> 1 tpot.score(X_test, y_test)
~/miniconda3/envs/ml
Can Grid Search solve CASH problems with NNI? It seems that it is usually used for hyperparameter optimization. Have you finished any revision of Grid Search for solving CASH problems?
For background on CASH problems, refer to: microsoft/nni#1178
Multilabel task
I want to know whether auto-sklearn supports multilabel tasks. Thank you!
A curated list of automated machine learning papers, articles, tutorials, slides and projects
Updated Jan 27, 2020
As a user/researcher I would like to be able to easily find information on what kind of algorithm stands behind Optuna, so that I can make an informed decision about using it in my work. Currently the main page says only that it does "Bayesian optimization".
As a user I would like to see detailed documentation about the algorithm behind the Study.optimize method; when I check the documentation page for
According to issue #204, the number of layers could be tuned with a for-loop:
model = Sequential()
num_layers = {{choice([1, 2, 3])}}  # stands in for the original "<result of randint>" placeholder
for _ in range(num_layers):
    model.add(Dense({{choice([np.power(2, 5), np.power(2, 9), np.power(2, 11)])}}))
    model.add(Activation({{choice(['tanh', 'relu', 'sigmoid'])}}))
    model.add(Dropout({{uniform(0, 1)}}))
But the layers added in the for-
README.md typo
Pedantic, I know, but README.md has a typo in the Categorical Ensembling section: the first sentence says "markets" where it should say "models".
Sklearn has a TON of estimators, metrics, and CV iterators that could trivially be added to the xcessiv.presets package. I'm a bit too focused on other issues to add them all myself.
Anyone who wants to help add to the list can easily do so.
Adding preset estimators/metrics/CVs is very easy. There's literally no need to understand how the rest of Xcessiv works; just take a look and copy the pattern.
- Unable to supply validation_data to a KerasCVExperiment via model_extra_params["fit"]
- This is because HyperparameterHunter automatically sets validation_data to be the OOF data produced by the cross-validation scheme
- I can imagine this would be unexpected behavior, so I'd love to hear any thoughts on how to clear this up

Note
- This issue (along with several others) was originally
A list of high-quality (newest) AutoML works and lightweight models including 1.) Neural Architecture Search, 2.) Lightweight Structures, 3.) Model Compression, Quantization and Acceleration, 4.) Hyperparameter Optimization, 5.) Automated Feature Engineering.
Updated Jan 24, 2020
Currently the hparams are logged as text in TensorBoard. Could we change this to use the add_hparams() function on SummaryWriter? This would allow some additional nice views of the hyperparameters, such as the parallel-coordinates view.
I think this would be a relatively easy change, and I can help out with it if needed.
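For reference, a minimal sketch of the kind of call this refers to; the hyperparameter names and metric values here are made up for illustration:

from torch.utils.tensorboard import SummaryWriter

writer = SummaryWriter()
# Logging hyperparameters together with final metrics lets TensorBoard's
# HPARAMS tab render table and parallel-coordinates views across runs.
writer.add_hparams(
    {"lr": 0.01, "batch_size": 64},  # hparam_dict
    {"hparam/accuracy": 0.91},       # metric_dict
)
writer.close()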
Tuning hyperparams fast with Hyperband
Updated Jan 9, 2020 - Python
If enter_data() is called with the same train_path twice in a row and the data itself hasn't changed, a new Dataset does not need to be created.
We should add a column which stores some kind of hash of the actual data. When a Dataset would be created, if the metadata and data hash are exactly the same as those of an existing Dataset, nothing should be added to the ModelHub database and the existing
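A minimal sketch of the deduplication idea; hash_file and the data_hash column are hypothetical names, not part of ATM's current schema:

import hashlib

def hash_file(path):
    # Return a SHA-256 digest of the raw bytes of the data file.
    sha = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            sha.update(chunk)
    return sha.hexdigest()

# On enter_data(train_path): if an existing Dataset row has the same
# metadata and the same data_hash, reuse it instead of inserting a new row.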
Expected Results
A list of the costs of all configs, or of each run if there is more than one run per config.
Actual Results
I don't know where to find this.
Versions
0.11.1
Experimental Global Optimization Algorithm
Updated Jan 22, 2020 - Python
Hyperparameter optimization for PyTorch.
Updated Jan 27, 2020 - Python
I can't see anywhere in the documentation exactly what min and max budget mean. At first I saw examples that made it look like they are the number of epochs, but given that the default values are 0.01 and 1, respectively, that doesn't make much sense.
Are they arbitrary, so that it is simply the ratio of the two that determines the step sizes, e.g. during successive halving?
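For what it's worth, one reading (an assumption based on Hyperband-style successive halving, not taken from the docs) is that the budget units are arbitrary, and only the min/max ratio together with eta determines the rungs:

import math

eta, min_budget, max_budget = 3, 0.01, 1.0
# Number of successive-halving rungs between min and max budget.
s_max = int(math.log(max_budget / min_budget) / math.log(eta))
# Geometric ladder of budgets, from cheapest to full budget.
budgets = [max_budget * eta ** (-s) for s in range(s_max, -1, -1)]
print(budgets)  # [0.0123..., 0.037..., 0.111..., 0.333..., 1.0]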
Python code for bayesian optimization using Gaussian processes
Updated Jan 17, 2020 - Jupyter Notebook
A curated list of awesome Distributed Deep Learning resources.
Updated Jan 27, 2020
When running the firststeps example I get several deprecation warnings. The environment is a fresh virtual conda environment on Win 10 x64 with the following package versions: packages.txt
First, the TensorFlow warnings:
WARNING:tensorflow:From [envpat
Hyperparameter optimization that enables researchers to experiment, visualize, and scale quickly.
Updated Jan 24, 2020 - JavaScript
Hello all,
I started writing an SQLite DB implementation for Orion. It is currently in a working state, but I need to add tests, improve comments, etc. Once I have done that, I will open a pull request.
This is not an issue report so much as an attempt to start a conversation around developing new DB implementations, and around the design decisions that have been made that make it harder to implement a db c
Ability to add arguments to Quantized distributions so that they produce not only perfect integers but also values quantized to an interval, such as interval=0.1, for more precision in the quantization, and perhaps also a fixed phase at which to anchor the quantization, such as anchor=0.05.
So a Quantized(Uniform(0, 1), interval=0.1, anchor=0.05) would generate something like [0.05, 0.15, 0.25,
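A quick sketch of the proposed semantics; this Quantized behavior is pseudocode for the feature request, not an existing API:

import random

def quantize(x, interval=0.1, anchor=0.05):
    # Snap x onto the grid anchor + k * interval for integer k.
    return anchor + interval * round((x - anchor) / interval)

samples = sorted({quantize(random.uniform(0, 1)) for _ in range(1000)})
# -> values on the grid 0.05, 0.15, 0.25, ... (the top edge may round up to 1.05)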
Hi,
Thanks for sharing the code.
Is there any simple "getting started" document on setting up something like a basic EI or KG with your tool?
1- I have seen the main.py in the "example" folder; it is good, but it is for a very complete case, not for a simple sampling mode.
2- I have tried the MOE getting-started document (available in the "doc" folder), but the function names and procedures have changed
opt.ea.lambda [numeric(1)]
For opt.ea = "ea": the number of children generated in each generation. Default is 1.
Should this be an integer(1)?
DeepArchitect: Automatically Designing and Training Deep Architectures
Updated Jan 6, 2020 - Python
We should focus on the new documentation of BTB, mainly on the following points:
- Update the Tuners section, including how to contribute a new tuner.
- Update the Hyperparams section.
- Create a new section explaining how to create and use the Tunable.
- Create a new README where BTBSession is used, which I believe should be the main focus of the library at the moment (a sketch follows below).
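As a starting point, something along these lines could anchor that README; treat the tunable-spec format and the exact signatures as assumptions to be checked against the current BTB docs:

from btb.session import BTBSession

# Hypothetical scorer: train a model with the proposed hyperparameters
# and return a score to maximize.
def scorer(tunable_name, proposal):
    ...

tunables = {
    'random_forest': {
        'n_estimators': {'type': 'int', 'default': 10, 'range': [10, 500]},
    },
}

session = BTBSession(tunables, scorer)
best = session.run(20)  # run 20 tuning iterations and return the best proposal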
The documentation doesn't appear to document how to use .options(...) in any of the following.
Less important, but we should also probably introduce actor.method.options(...).remote()
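For context, a sketch of the pattern being referred to, based on Ray's remote-call API; the specific option values are made up:

import ray

ray.init()

@ray.remote
class Counter:
    def __init__(self):
        self.n = 0

    def incr(self, k=1):
        self.n += k
        return self.n

# .options(...) overrides remote settings at actor-creation or call time.
counter = Counter.options(num_cpus=1).remote()       # actor-level override
ref = counter.incr.options(num_returns=1).remote(2)  # per-call override
print(ray.get(ref))  # 2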