Develop (#215)
* update develop (#206)

* Update CONTRIBUTING.md

* Develop (#187)

* merge (#128)

* 0.2.6 (#126)

* 0.2.5 setup.py (#111)

* [WIP] Attempts to Fix Memory Error (#112)

* Add Website Badge in README.md, apply timeout in search function in search.py

* Add timeout in maximize_acq function in search.py

* Update unit test to allow timeout to raise TimeoutError

* Add unit test for timeout resume

* Remove TimeoutError from expectation

* Check Timeout exception in search() in search.py

* 0.2.5 setup.py (#110)

* Prevent gpu memory copy to main process after train() finished

* Cast loss from tensor to float

* Add pass() in MockProcess

* [MRG] Search Space limited to avoid out of memory (#121)

* limited the search space

* limited the search space

* reduce search space

* test added

* [MRG] PyTorch mp (#124)

* Change multiprocessing to torch.multiprocessing

* Replace multiprocessing.Pool with torch.multiprocessing.Pool in tests

* 0.2.6 (#125)

* new release

* auto deploy

* auto deploy of docs

* fix the docs auto deploy

* Create CNAME

* deploy docs fixed

* update

* bug fix (#127)

* setup.py

* rm print

* Issue #37 and Issue #79 Save Keras model/AutoKeras model (#122)

* Issue #37 Export Keras model

* Issue #79 Save autokeras model

* Issue #37 and Issue #79 Fixed comments

* Issue #37 and Issue #79

* Issue #37 and Issue #79

* Issue #37 and Issue #79 Fixed pytests

* Issue #37 and Issue #79

* quick fix test

* Progbar (#143)

* contributing guide

* Add Progress Bar

* Update utils.py

* Update search.py

* update constant (#145)

* [WIP] Issue #158 ImageRegressor (#159)

* Develop (#146)

* update constant (#145)

* Update setup.py (#147)

* Update setup.py

* Update setup.py

* Update setup.py (#155)

* requirements

* Issue #158 Export ImageRegressor model

* Memory (#161)

* aa

* limit memory

* refactor to_real_layer to member functions

* bug fix (#166)

* doc string changed for augment (#170)

Added proper documentation for the 'augment' argument of the ImageSupervised class. It defaults to 'None'; however, when it is 'None', Constant.DATA_AUGMENTATION (which is 'True') is used instead. This was misleading when trying things out.

* Update constant.py

* bug fix (#177)

* memory limit dynamically (#180)

* memory limit dynamically

* test

* test fixed

* [MRG]Dcgan (#175)

* Add Website Badge in README.md, apply timeout in search function in search.py

* Add timeout in maximize_acq function in search.py

* Update unit test to allow timeout to raise TimeoutError

* Add unit test for timeout resume

* Remove TimeoutError from expectation

* Check Timeout exception in search() in search.py

* finish workable version of gan

* add unit test and small refactoring

* add unsupervised super class

* Fix the issue of test_dcgan running too long; add a default parameter to unsupervised::generate(input_sample=None)

* remove examples/gan.py from repo

* add missing import

* correct model_trainer signature

* fixed the bug in the return value of train_model()

* Update setup.py

* [WIP]Update CONTRIBUTING.md (#190)

* Update CONTRIBUTING.md

* Update CONTRIBUTING.md

* Update mkdocs.yml

* code_reuse_example

* Update CONTRIBUTING.md

* bug_fix (#208)

* bug_fix (#214) resolves #212
haifeng-jin authored Sep 25, 2018
1 parent ba1a0a3 commit 38ec427
Showing 1 changed file with 10 additions and 11 deletions.
21 changes: 10 additions & 11 deletions autokeras/bayesian.py
@@ -59,7 +59,6 @@ def edit_distance(x, y, kernel_lambda):
 class IncrementalGaussianProcess:
     def __init__(self, kernel_lambda):
         self.alpha = 1e-10
-        self._k_matrix = None
         self._distance_matrix = None
         self._x = None
         self._y = None
@@ -92,16 +91,16 @@ def incremental_fit(self, train_x, train_y):
         up_k = np.concatenate((self._distance_matrix, up_right_k), axis=1)
         down_k = np.concatenate((down_left_k, down_right_k), axis=1)
         self._distance_matrix = np.concatenate((up_k, down_k), axis=0)
-        self._distance_matrix = bourgain_embedding_matrix(self._distance_matrix)
-        self._k_matrix = 1.0 / np.exp(self._distance_matrix)
-        diagonal = np.diag_indices_from(self._k_matrix)
+        distort_matrix = bourgain_embedding_matrix(self._distance_matrix)
+        k_matrix = 1.0 / np.exp(np.power(distort_matrix, 2))
+        diagonal = np.diag_indices_from(k_matrix)
         diagonal = (diagonal[0][-len(train_x):], diagonal[1][-len(train_x):])
-        self._k_matrix[diagonal] += self.alpha
+        k_matrix[diagonal] += self.alpha

         self._x = np.concatenate((self._x, train_x), axis=0)
         self._y = np.concatenate((self._y, train_y), axis=0)

-        self._l_matrix = cholesky(self._k_matrix, lower=True)  # Line 2
+        self._l_matrix = cholesky(k_matrix, lower=True)  # Line 2

         self._alpha_vector = cho_solve((self._l_matrix, True), self._y)  # Line 3

@@ -118,19 +117,19 @@ def first_fit(self, train_x, train_y):
         self._y = np.copy(train_y)

         self._distance_matrix = self.edit_distance_matrix(self.kernel_lambda, self._x)
-        self._distance_matrix = bourgain_embedding_matrix(self._distance_matrix)
-        self._k_matrix = 1.0 / np.exp(self._distance_matrix)
-        self._k_matrix[np.diag_indices_from(self._k_matrix)] += self.alpha
+        distort_matrix = bourgain_embedding_matrix(self._distance_matrix)
+        k_matrix = 1.0 / np.exp(np.power(distort_matrix, 2))
+        k_matrix[np.diag_indices_from(k_matrix)] += self.alpha

-        self._l_matrix = cholesky(self._k_matrix, lower=True)  # Line 2
+        self._l_matrix = cholesky(k_matrix, lower=True)  # Line 2

         self._alpha_vector = cho_solve((self._l_matrix, True), self._y)  # Line 3

         self._first_fitted = True
         return self

     def predict(self, train_x):
-        k_trans = 1.0 / np.exp(self.edit_distance_matrix(self.kernel_lambda, train_x, self._x))
+        k_trans = 1.0 / np.exp(np.power(self.edit_distance_matrix(self.kernel_lambda, train_x, self._x), 2))
         y_mean = k_trans.dot(self._alpha_vector)  # Line 4 (y_mean = f_star)

         # compute inverse K_inv of K based on its Cholesky
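
The substance of the diff is twofold: the kernel matrix is no longer cached on the instance (self._k_matrix is removed in favor of a local k_matrix), and the kernel now decays with the square of the Bourgain-embedded edit distance, 1.0 / np.exp(d**2) = exp(-d**2), instead of exp(-d). Below is a minimal, self-contained sketch (not AutoKeras code) of the new construction; the toy distance matrix and targets are hypothetical stand-ins for what edit_distance_matrix and bourgain_embedding_matrix would produce in autokeras/bayesian.py.

```python
# Sketch of the kernel construction after this commit, under assumed toy inputs.
import numpy as np
from scipy.linalg import cholesky, cho_solve

alpha = 1e-10  # diagonal jitter, as in IncrementalGaussianProcess.alpha

# Hypothetical Bourgain-embedded edit distances between three architectures.
distort_matrix = np.array([[0.0, 1.0, 2.0],
                           [1.0, 0.0, 1.5],
                           [2.0, 1.5, 0.0]])

k_before = 1.0 / np.exp(distort_matrix)               # old kernel: exp(-d)
k_matrix = 1.0 / np.exp(np.power(distort_matrix, 2))  # new kernel: exp(-d**2)
print(k_before[0], k_matrix[0])  # compare how the two kernels decay with distance

# Mirror first_fit's "Line 2" / "Line 3": jitter the diagonal, factor, solve.
k_matrix[np.diag_indices_from(k_matrix)] += alpha
l_matrix = cholesky(k_matrix, lower=True)              # Line 2
train_y = np.array([0.3, 0.1, 0.4])                    # hypothetical targets
alpha_vector = cho_solve((l_matrix, True), train_y)    # Line 3
print(alpha_vector)
```

For embedded distances above 1 the squared form decays faster, while for distances below 1 it keeps the correlation higher, so similar architectures stay strongly correlated and dissimilar ones are separated more sharply.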
