The Wayback Machine - https://web.archive.org/web/20200127131532/https://github.com/topics/hyperparameter-optimization

hyperparameter-optimization

Here are 236 public repositories matching this topic...

twolodzko commented Jan 15, 2020

As a user/researcher, I would like to be able to easily find information on what kind of algorithm is behind optuna, so that I can make an informed decision about using it in my work. Currently, the main page says only that it does "Bayesian optimization".

As a user, I would like to see detailed documentation about the algorithm behind the Study.optimize method when I check the documentation page for…

loublock
loublock commented Aug 9, 2019

According to issue #204, the number of layers could be tuned with a for loop:

model = ...
num_layers = <result of randint>
for _ in range(num_layers):
        model.add(Dense({{choice([np.power(2,5),np.power(2,9),np.power(2,11)])}}))
        model.add(Activation({{choice(['tanh','relu', 'sigmoid'])}}))
        model.add(Dropout({{uniform(0, 1)}}))

But the layers added in the for-loop…
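For reference, the loop itself can be sketched with stdlib Python only (no Keras or hyperas; `build_model` and the spec tuples are hypothetical stand-ins for `model.add(...)`): each iteration samples its own layer width, activation, and dropout.

```python
import random

# Hypothetical stand-in for the Keras snippet above: instead of model.add(...),
# collect one (units, activation, dropout) spec per layer.
def build_model(num_layers, rng):
    layers = []
    for _ in range(num_layers):
        layers.append((
            rng.choice([2 ** 5, 2 ** 9, 2 ** 11]),   # width choices from the snippet
            rng.choice(["tanh", "relu", "sigmoid"]),
            rng.uniform(0, 1),                        # dropout rate
        ))
    return layers

rng = random.Random(0)
num_layers = rng.randint(1, 4)  # stands in for <result of randint>
model = build_model(num_layers, rng)
```

Because the sampling happens inside the loop body, each layer gets independent values; template-expanded frameworks may instead resolve each `{{...}}` expression once, which is what issues like this one tend to probe.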

xcessiv
reiinakano commented May 30, 2017

Sklearn has a TON of estimators, metrics, and CV iterators that could trivially be added to the xcessiv.presets package. I'm a bit too focused on other issues to add them all myself.

Anyone who can help add to the list can easily do so.

Adding preset estimators/metrics/CV iterators is very easy. There's literally no need to understand how the rest of Xcessiv works; just take a look and copy the pattern…

hyperparameter_hunter
HunterMcGushion commented Feb 19, 2019
  • Unable to supply validation_data to a Keras CVExperiment via model_extra_params["fit"]
  • This is because HyperparameterHunter automatically sets validation_data to be the OOF data produced by the cross validation scheme
  • I can imagine this would be unexpected behavior, so I’d love to hear any thoughts on how to clear this up

Note

  • This issue (along with several others) was originally…
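For context, the OOF (out-of-fold) data that ends up as validation_data comes from the cross-validation split itself; a minimal stdlib sketch of k-fold index generation (the function name is hypothetical, not HyperparameterHunter's API):

```python
def kfold_indices(n_samples, n_splits):
    """Yield (train_idx, oof_idx) pairs; each sample is out-of-fold exactly once."""
    fold_sizes = [n_samples // n_splits + (1 if i < n_samples % n_splits else 0)
                  for i in range(n_splits)]
    start = 0
    for size in fold_sizes:
        oof = list(range(start, start + size))
        train = [i for i in range(n_samples) if i < start or i >= start + size]
        yield train, oof
        start += size

folds = list(kfold_indices(10, 3))
```

Since every sample is held out in exactly one fold, a framework that wires the OOF slice into validation_data leaves no slot for a user-supplied validation set, which is the conflict described above.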
bcyphers commented Jan 31, 2018

If enter_data() is called with the same train_path twice in a row and the data itself hasn't changed, a new Dataset does not need to be created.

We should add a column that stores some kind of hash of the actual data. When a Dataset is about to be created, if the metadata and data hash exactly match an existing Dataset, nothing should be added to the ModelHub database and the existing…
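A minimal sketch of the proposed dedup check, using stdlib hashing (the `registry` dict and function names are hypothetical stand-ins for the ModelHub database):

```python
import hashlib
import os
import tempfile

def data_hash(path, chunk_size=65536):
    """SHA-256 of the file's raw bytes, read in chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

def register_dataset(registry, train_path, metadata):
    """Return the existing dataset id when (metadata, data hash) already match."""
    key = (tuple(sorted(metadata.items())), data_hash(train_path))
    if key not in registry:
        registry[key] = len(registry)  # stand-in for inserting a new Dataset row
    return registry[key]

# Demo: the same file registered twice yields the same dataset id.
registry = {}
with tempfile.NamedTemporaryFile("wb", suffix=".csv", delete=False) as f:
    f.write(b"a,b\n1,2\n")
    path = f.name
first = register_dataset(registry, path, {"name": "demo"})
second = register_dataset(registry, path, {"name": "demo"})
os.unlink(path)
```

Hashing the raw bytes catches the "same path, unchanged data" case cheaply; a changed file produces a new hash and therefore a new Dataset.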

SMAC3
Nicholas-Mitchell commented Dec 17, 2019

I can't see anywhere in the documentation exactly what min and max budget mean. At first I saw examples that made it look like they are the number of epochs, but given the default values are 0.01 and 1, respectively, that doesn't make much sense.

Are the values arbitrary, with only the ratio of the two determining the step sizes, e.g. during successive halving?
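One common reading, sketched here under the Hyperband/successive-halving convention (an assumption, not necessarily SMAC3's exact semantics): budgets are in arbitrary units, and only the max/min ratio together with the halving factor eta determines the rung schedule.

```python
import math

def budget_schedule(min_budget, max_budget, eta=3):
    """Geometric budgets up to max_budget: max_budget * eta**(-k) for k = s, ..., 0."""
    s = int(math.floor(math.log(max_budget / min_budget, eta)))
    return [max_budget * eta ** (k - s) for k in range(s + 1)]

# With the defaults from the question (0.01 and 1), the units cancel out:
budgets = budget_schedule(0.01, 1.0, eta=3)
```

Under this reading, 0.01 and 1 could mean "1% of the data" up to "all of the data", or 1 epoch up to 100 epochs; the schedule is identical either way.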

angeloskath commented Sep 6, 2018

Hello all,

I started writing an sqlite db implementation for Orion. It is currently in a working state, but I need to add tests, improve comments, etc. Once that is done, I will open a pull request.

This is not an issue report; it is more an attempt to start a conversation about developing new db implementations, and about the design decisions that have been made that make it harder to implement a db c…
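As a rough illustration of the kind of minimal document store such a backend has to provide (the class and method names are hypothetical, not Orion's actual DB abstraction):

```python
import json
import sqlite3

class SqliteDocStore:
    """Tiny document store: one table per collection, JSON-encoded documents."""

    def __init__(self, path=":memory:"):
        self.conn = sqlite3.connect(path)

    def write(self, collection, doc):
        # NOTE: the collection name is interpolated directly into SQL here;
        # real code must validate it against an allow-list.
        self.conn.execute(f"CREATE TABLE IF NOT EXISTS {collection} (doc TEXT)")
        self.conn.execute(
            f"INSERT INTO {collection} (doc) VALUES (?)", (json.dumps(doc),))
        self.conn.commit()

    def read(self, collection):
        rows = self.conn.execute(f"SELECT doc FROM {collection}").fetchall()
        return [json.loads(r[0]) for r in rows]

store = SqliteDocStore()
store.write("trials", {"id": 1, "loss": 0.3})
docs = store.read("trials")
```

The design questions the comment alludes to show up even at this size: schema-per-collection versus one generic table, JSON blobs versus typed columns, and where transactional boundaries sit.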

Neuraxle
guillaume-chevalier commented Jan 4, 2020

Ability to add arguments to Quantized distributions so they are not restricted to perfect integers: an interval such as interval=0.1 would give more precision to the quantization, and perhaps a fixed phase could set where the grid starts, such as anchor=0.05.

So a Quantized(Uniform(0, 1), interval=0.1, anchor=0.05) would generate something like `[0.05, 0.15, 0.25, …, 0.95]`.
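The proposed behaviour can be sketched as snapping a uniform sample to an anchored grid (stdlib only; `quantized_uniform` is a hypothetical stand-in, not Neuraxle's Quantized class):

```python
import random

def quantized_uniform(low, high, interval=0.1, anchor=0.05, rng=random):
    """Sample uniformly, then snap to the grid anchor + k*interval, clipped to [low, high]."""
    x = rng.uniform(low, high)
    q = anchor + round((x - anchor) / interval) * interval
    return min(max(q, low), high)

rng = random.Random(0)
samples = [round(quantized_uniform(0, 1, rng=rng), 2) for _ in range(1000)]
```

With interval=0.1 and anchor=0.05, every sample lands on one of the ten grid points 0.05, 0.15, ..., 0.95, matching the example above.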

v-pourahmadi commented Jun 2, 2019

Hi,

Thanks for sharing the code.

Is there any simple "getting started" document on setting up something like a basic EI or KG with your tool?
1- I have seen the main.py in the "example" folder; it is good, but it is for a very complete case, not for a simple sampling mode.
2- I have tried the MOE getting-started document (available in the "doc" folder), but as the function names and procedures have chan…
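Since the question is about a basic EI setup, the acquisition function itself is compact. A stdlib sketch of the standard Expected Improvement formula for minimization under a Gaussian posterior (this is the textbook formula, not MOE's actual API):

```python
import math

def normal_pdf(z):
    return math.exp(-0.5 * z * z) / math.sqrt(2 * math.pi)

def normal_cdf(z):
    return 0.5 * (1 + math.erf(z / math.sqrt(2)))

def expected_improvement(mu, sigma, f_best):
    """EI at a point whose objective is modelled as N(mu, sigma^2); minimization."""
    if sigma <= 0:
        return max(f_best - mu, 0.0)  # no uncertainty: improvement is deterministic
    z = (f_best - mu) / sigma
    return (f_best - mu) * normal_cdf(z) + sigma * normal_pdf(z)

# A point predicted slightly worse than the incumbent still has positive EI
# because of its uncertainty.
ei = expected_improvement(mu=0.5, sigma=0.2, f_best=0.4)
```

A simple "sampling mode" loop then just evaluates EI over candidate points and picks the maximizer; KG is more involved because it values improvement of the posterior minimum rather than of a single sample.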

pvk-developer commented Jan 22, 2020

We should focus on the new documentation of BTB, mainly on the following points:

  • Update the Tuners section, including how to contribute a new tuner.
  • Update the Hyperparams section.
  • Create a new section explaining how to create and use the Tunable.
  • Create a new README where BTBSession is used, which I believe should be the main focus of the library at the moment.

