Commit

Develop (#171)
* merge (#128)

* 0.2.6 (#126)

* 0.2.5 setup.py (#111)

* [WIP] Attempts to Fix Memory Error (#112)

* Add Website Badge in README.md, apply timeout in search function in search.py

* Add timeout in maximize_acq function in search.py

* Update unit test to allow timeout to raise TimeoutError

* Add unit test for timeout resume

* Remove TimeoutError from expectation

* Check Timeout exception in search() in search.py

* 0.2.5 setup.py (#110)

* Prevent gpu memory copy to main process after train() finished

* Cast loss from tensor to float

* Add pass() in MockProcess

* [MRG] Search Space limited to avoid out of memory (#121)

* limited the search space

* limited the search space

* reduce search space

* test added

* [MRG] Pytorch mp (#124)

* Change multiprocessing to torch.multiprocessing

* Replace multiprocessing.Pool with torch.multiprocessing.Pool in tests

* 0.2.6 (#125)

* new release

* auto deploy

* auto deploy of docs

* fix the docs auto deploy

* Create CNAME

* deploy docs fixed

* update

* bug fix (#127)

* setup.py

* rm print

* Issue#37 and Issue #79 Save keras model/autokeras model (#122)

* Issue #37 Export Keras model

* Issue #79 Save autokeras model

* Issue #37 and Issue#79 Fixed comments

* Issue #37 and Issue #79

* Issue #37 and Issue #79

* Issue #37 and Issue #79 Fixed pytests

* Issue #37 and Issue #79

* quick fix test

* Progbar (#143)

* contribute guide

* Add Progress Bar

* Update utils.py

* Update search.py

* update constant (#145)

* [WIP] Issue #158 Imageregressor (#159)

* Develop (#146)

* Update setup.py (#147)

* Update setup.py

* Update setup.py

* Update setup.py (#155)

* requirements

* Issue #158 Export ImageRegressor model

* Memory (#161)

* aa

* limit memory

* refactor to_real_layer to member functions

* bug fix (#166)

* doc string changed for augment (#170)

I added proper documentation for the 'augment' arg of class ImageSupervised. It is None by default; however, when it is None, it uses Constant.DATA_AUGMENTATION, which is True. This was misleading when trying things out (see the sketch after this list).

* Update constant.py
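The fallback described above, as a minimal sketch. The resolve_augment helper is hypothetical (this commit only touches the docstring and constant.py), but Constant.DATA_AUGMENTATION is real:

    from autokeras.constant import Constant

    def resolve_augment(augment):
        # Hypothetical helper: augment=None means "use the project-wide default",
        # which is Constant.DATA_AUGMENTATION (True unless overridden).
        return Constant.DATA_AUGMENTATION if augment is None else augment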
haifeng-jin authored Sep 3, 2018
1 parent 5e397db commit 504d63d
Showing 11 changed files with 152 additions and 98 deletions.
3 changes: 1 addition & 2 deletions autokeras/constant.py
@@ -10,8 +10,7 @@ class Constant:
KERNEL_LAMBDA = 0.1
T_MIN = 0.0001
N_NEIGHBOURS = 8
MAX_MODEL_WIDTH = 1024
MAX_MODEL_DEPTH = 10
MAX_MODEL_SIZE = (1 << 17)

# Model Defaults

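The removed width/depth caps are replaced by one budget on total weight count; MAX_MODEL_SIZE = (1 << 17) is 131072 weights. A hedged sketch of the predicate callers can build from it (fits_budget is hypothetical; the real check appears in net_transformer.py below):

    from autokeras.constant import Constant

    def fits_budget(graph):
        # graph.size() sums per-layer weight counts (added in graph.py below).
        return graph.size() <= Constant.MAX_MODEL_SIZE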
7 changes: 5 additions & 2 deletions autokeras/graph.py
@@ -8,7 +8,7 @@
from autokeras.constant import Constant
from autokeras.layer_transformer import wider_bn, wider_next_conv, wider_next_dense, wider_pre_dense, wider_pre_conv, \
deeper_conv_block, dense_to_deeper_block, add_noise
from autokeras.layers import StubConcatenate, StubAdd, StubConv, is_layer, layer_width, to_real_layer, \
from autokeras.layers import StubConcatenate, StubAdd, StubConv, is_layer, layer_width, \
to_real_keras_layer, set_torch_weight_to_stub, set_stub_weight_to_torch, set_stub_weight_to_keras, \
set_keras_weight_to_stub, StubBatchNormalization, StubReLU

@@ -571,14 +571,17 @@ def wide_layer_ids(self):
def skip_connection_layer_ids(self):
return self._conv_layer_ids_in_order()[:-1]

def size(self):
return sum(list(map(lambda x: x.size(), self.layer_list)))


class TorchModel(torch.nn.Module):
def __init__(self, graph):
super(TorchModel, self).__init__()
self.graph = graph
self.layers = []
for layer in graph.layer_list:
self.layers.append(to_real_layer(layer))
self.layers.append(layer.to_real_layer())
if graph.weighted:
for index, layer in enumerate(self.layers):
set_stub_weight_to_torch(self.graph.layer_list[index], layer)
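A toy illustration of the two changes in this file, using a hypothetical ToyDense stand-in: Graph.size() sums the per-layer size() values, and TorchModel now asks each stub layer to realize itself rather than calling a module-level dispatcher:

    import torch

    class ToyDense:
        # Hypothetical stand-in for StubDense: it knows its own weight count
        # and how to build its torch counterpart.
        def __init__(self, input_units, units):
            self.input_units, self.units = input_units, units

        def size(self):
            return self.input_units * self.units + self.units

        def to_real_layer(self):
            return torch.nn.Linear(self.input_units, self.units)

    layer_list = [ToyDense(128, 64), ToyDense(64, 10)]
    total_size = sum(layer.size() for layer in layer_list)         # Graph.size()
    real_layers = [layer.to_real_layer() for layer in layer_list]  # TorchModel.__init__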
24 changes: 15 additions & 9 deletions autokeras/image_supervised.py
@@ -8,7 +8,6 @@
import numpy as np
from scipy import ndimage
import torch
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

from autokeras.loss_function import classification_loss, regression_loss
@@ -122,7 +121,8 @@ class ImageSupervised(Supervised):
searcher: An instance of BayesianSearcher. It searches different
neural architecture to find the best model.
searcher_args: A dictionary containing the parameters for the searcher's __init__ function.
augment: A boolean value indicating whether the data needs augmentation.
augment: A boolean value indicating whether the data needs augmentation. If not defined, it
will use the value of Constant.DATA_AUGMENTATION, which is True by default.
"""

def __init__(self, verbose=False, path=None, resume=False, searcher_args=None, augment=None):
@@ -136,7 +136,8 @@ def __init__(self, verbose=False, path=None, resume=False, searcher_args=None, augment=None):
path: A string. The path to a directory, where the intermediate results are saved.
resume: A boolean. If True, the classifier will continue to previous work saved in path.
Otherwise, the classifier will start a new search.
augment: A boolean value indicating whether the data needs augmentation.
augment: A boolean value indicating whether the data needs augmentation. If not defined, it
will use the value of Constant.DATA_AUGMENTATION, which is True by default.
"""
super().__init__(verbose)
@@ -284,7 +285,7 @@ def inverse_transform_y(self, output):
def evaluate(self, x_test, y_test):
"""Return the accuracy score between predict value and `y_test`."""
y_predict = self.predict(x_test)
return accuracy_score(y_test, y_predict)
return self.metric().compute(y_test, y_predict)

def save_searcher(self, searcher):
pickle.dump(searcher, open(os.path.join(self.path, 'searcher'), 'wb'))
@@ -329,8 +330,11 @@ def export_keras_model(self, model_file_name):

def export_autokeras_model(self, model_file_name):
""" Creates and Exports the AutoKeras model to the given filename. """
portable_model = PortableImageSupervised(graph=self.load_searcher().load_best_model(), \
y_encoder=self.y_encoder, data_transformer=self.data_transformer)
portable_model = PortableImageSupervised(graph=self.load_searcher().load_best_model(),
y_encoder=self.y_encoder,
data_transformer=self.data_transformer,
metric=self.metric,
inverse_transform_y_method=self.inverse_transform_y)
pickle_to_file(portable_model, model_file_name)


@@ -378,14 +382,16 @@ def inverse_transform_y(self, output):


class PortableImageSupervised(PortableClass):
def __init__(self, graph, data_transformer, y_encoder):
def __init__(self, graph, data_transformer, y_encoder, metric, inverse_transform_y_method):
"""Initialize the instance.
Args:
graph: The graph form of the learned model
"""
super().__init__(graph)
self.data_transformer = data_transformer
self.y_encoder = y_encoder
self.metric = metric
self.inverse_transform_y_method = inverse_transform_y_method

def predict(self, x_test):
"""Return predict results for the testing data.
@@ -410,9 +416,9 @@ def predict(self, x_test):
return self.inverse_transform_y(output)

def inverse_transform_y(self, output):
return self.y_encoder.inverse_transform(output)
return self.inverse_transform_y_method(output)

def evaluate(self, x_test, y_test):
"""Return the accuracy score between predict value and `y_test`."""
y_predict = self.predict(x_test)
return accuracy_score(y_test, y_predict)
return self.metric().compute(y_test, y_predict)
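A hedged usage sketch of the export path this diff extends; the data names and file name are placeholders, and the fit arguments follow the 0.2.x API as I understand it. The point of the change: the pickled PortableImageSupervised now carries metric and inverse_transform_y_method, so its evaluate() agrees with the original model instead of hard-coding accuracy_score:

    import pickle
    from autokeras.image_supervised import ImageClassifier

    clf = ImageClassifier(verbose=True)
    clf.fit(x_train, y_train, time_limit=60 * 60)
    clf.export_autokeras_model('portable_model.pkl')

    # Later, possibly on another machine:
    with open('portable_model.pkl', 'rb') as f:
        portable = pickle.load(f)
    print(portable.evaluate(x_test, y_test))  # delegates to self.metric().compute(...)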
74 changes: 46 additions & 28 deletions autokeras/layers.py
@@ -30,10 +30,16 @@ def export_weights_keras(self, keras_layer):
def get_weights(self):
return self.weights

def size(self):
return 0

@property
def output_shape(self):
return self.input.shape

def to_real_layer(self):
pass


class StubWeightBiasLayer(StubLayer):
def import_weights(self, torch_layer):
@@ -68,6 +74,12 @@ def export_weights(self, torch_layer):
torch_layer.running_mean = torch.Tensor(self.weights[2])
torch_layer.running_var = torch.Tensor(self.weights[3])

def size(self):
return self.num_features * 4

def to_real_layer(self):
return torch.nn.BatchNorm2d(self.num_features)


class StubDense(StubWeightBiasLayer):
def __init__(self, input_units, units, input_node=None, output_node=None):
@@ -85,6 +97,12 @@ def import_weights_keras(self, keras_layer):
def export_weights_keras(self, keras_layer):
keras_layer.set_weights((self.weights[0].T, self.weights[1]))

def size(self):
return self.input_units * self.units + self.units

def to_real_layer(self):
return torch.nn.Linear(self.input_units, self.units)


class StubConv(StubWeightBiasLayer):
def __init__(self, input_channel, filters, kernel_size, input_node=None, output_node=None):
@@ -105,6 +123,15 @@ def import_weights_keras(self, keras_layer):
def export_weights_keras(self, keras_layer):
keras_layer.set_weights((self.weights[0].T, self.weights[1]))

def size(self):
return self.filters * self.kernel_size * self.kernel_size + self.filters

def to_real_layer(self):
return torch.nn.Conv2d(self.input_channel,
self.filters,
self.kernel_size,
padding=int(self.kernel_size / 2))


class StubAggregateLayer(StubLayer):
def __init__(self, input_nodes=None, output_node=None):
@@ -122,12 +149,18 @@ def output_shape(self):
ret = self.input[0].shape[:-1] + (ret,)
return ret

def to_real_layer(self):
return TorchConcatenate()


class StubAdd(StubAggregateLayer):
@property
def output_shape(self):
return self.input[0].shape

def to_real_layer(self):
return TorchAdd()


class StubFlatten(StubLayer):
@property
@@ -137,13 +170,18 @@ def output_shape(self):
ret *= dim
return ret,

def to_real_layer(self):
return TorchFlatten()


class StubReLU(StubLayer):
pass
def to_real_layer(self):
return torch.nn.ReLU()


class StubSoftmax(StubLayer):
pass
def to_real_layer(self):
return torch.nn.LogSoftmax(dim=1)


class StubPooling(StubLayer):
@@ -159,6 +197,9 @@ def output_shape(self):
ret = ret + (self.input.shape[-1],)
return ret

def to_real_layer(self):
return torch.nn.MaxPool2d(2)


class StubGlobalPooling(StubLayer):
def __init__(self, func, input_node=None, output_node=None):
@@ -171,6 +212,9 @@ def __init__(self, rate, input_node=None, output_node=None):
super().__init__(input_node, output_node)
self.rate = rate

def to_real_layer(self):
return torch.nn.Dropout2d(self.rate)


class StubInput(StubLayer):
def __init__(self, input_node=None, output_node=None):
@@ -239,32 +283,6 @@ def keras_dropout(layer, rate):
return layers.Dropout(rate)


def to_real_layer(layer):
if is_layer(layer, 'Dense'):
return torch.nn.Linear(layer.input_units, layer.units)
if is_layer(layer, 'Conv'):
return torch.nn.Conv2d(layer.input_channel,
layer.filters,
layer.kernel_size,
padding=int(layer.kernel_size / 2))
if is_layer(layer, 'Pooling'):
return torch.nn.MaxPool2d(2)
if is_layer(layer, 'BatchNormalization'):
return torch.nn.BatchNorm2d(layer.num_features)
if is_layer(layer, 'Concatenate'):
return TorchConcatenate()
if is_layer(layer, 'Add'):
return TorchAdd()
if is_layer(layer, 'Dropout'):
return torch.nn.Dropout2d(layer.rate)
if is_layer(layer, 'ReLU'):
return torch.nn.ReLU()
if is_layer(layer, 'Softmax'):
return torch.nn.LogSoftmax(dim=1)
if is_layer(layer, 'Flatten'):
return TorchFlatten()


def to_real_keras_layer(layer):
if is_layer(layer, 'Dense'):
return layers.Dense(layer.units, input_shape=(layer.input_units,))
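The pattern applied throughout this file, in miniature: a type-dispatching function is replaced by polymorphism, so adding a new stub layer no longer means editing a central if-chain. A sketch with hypothetical class names:

    import torch

    # Before: one module-level dispatcher.
    #     def to_real_layer(layer):
    #         if is_layer(layer, 'ReLU'):
    #             return torch.nn.ReLU()
    #         ...
    # After: each stub realizes itself.
    class SketchStubReLU:
        def to_real_layer(self):
            return torch.nn.ReLU()

    class SketchStubSoftmax:
        def to_real_layer(self):
            # LogSoftmax matches the dispatcher's original choice for 'Softmax'.
            return torch.nn.LogSoftmax(dim=1)

    # Call sites shrink from to_real_layer(layer) to layer.to_real_layer().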
32 changes: 14 additions & 18 deletions autokeras/net_transformer.py
@@ -1,22 +1,15 @@
from copy import deepcopy
from operator import itemgetter
from random import randint, randrange, sample
from random import randrange, sample

from autokeras.graph import NetworkDescriptor

from autokeras.constant import Constant
from autokeras.layers import is_layer, layer_width
from autokeras.layers import is_layer


def to_wider_graph(graph):
weighted_layer_ids = graph.wide_layer_ids()
weighted_layer_ids = list(filter(lambda x: layer_width(graph.layer_list[x]) * 2 <= Constant.MAX_MODEL_WIDTH,
weighted_layer_ids))

if len(weighted_layer_ids) == 0:
return None
# n_wider_layer = randint(1, len(weighted_layer_ids))
# wider_layers = sample(weighted_layer_ids, n_wider_layer)
wider_layers = sample(weighted_layer_ids, 1)

for layer_id in wider_layers:
@@ -61,12 +54,8 @@ def to_skip_connection_graph(graph):

def to_deeper_graph(graph):
weighted_layer_ids = graph.deep_layer_ids()
if len(weighted_layer_ids) >= Constant.MAX_MODEL_DEPTH:
return None

deeper_layer_ids = sample(weighted_layer_ids, 1)
# n_deeper_layer = randint(1, len(weighted_layer_ids))
# deeper_layer_ids = sample(weighted_layer_ids, n_deeper_layer)

for layer_id in deeper_layer_ids:
layer = graph.layer_list[layer_id]
@@ -87,15 +76,22 @@ def legal_graph(graph):

def transform(graph):
graphs = []
for i in range(Constant.N_NEIGHBOURS):
for i in range(Constant.N_NEIGHBOURS * 2):
a = randrange(3)
temp_graph = None
if a == 0:
graphs.append(to_deeper_graph(deepcopy(graph)))
temp_graph = to_deeper_graph(deepcopy(graph))
elif a == 1:
graphs.append(to_wider_graph(deepcopy(graph)))
temp_graph = to_wider_graph(deepcopy(graph))
elif a == 2:
graphs.append(to_skip_connection_graph(deepcopy(graph)))
graphs = list(filter(lambda x: x, graphs))
temp_graph = to_skip_connection_graph(deepcopy(graph))

if temp_graph is not None and temp_graph.size() <= Constant.MAX_MODEL_SIZE:
graphs.append(temp_graph)

if len(graphs) >= Constant.N_NEIGHBOURS:
break

return list(filter(lambda x: legal_graph(x), graphs))


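Putting the new loop together, a sketch that paraphrases the diff above: sample up to 2 * N_NEIGHBOURS random mutations, keep only those whose size fits the budget, and stop early once N_NEIGHBOURS candidates survive (transform_sketch is a hypothetical name; the imported functions are real):

    from copy import deepcopy
    from random import randrange

    from autokeras.constant import Constant
    from autokeras.net_transformer import (legal_graph, to_deeper_graph,
                                           to_skip_connection_graph, to_wider_graph)

    def transform_sketch(graph):
        mutations = (to_deeper_graph, to_wider_graph, to_skip_connection_graph)
        graphs = []
        for _ in range(Constant.N_NEIGHBOURS * 2):
            temp_graph = mutations[randrange(3)](deepcopy(graph))
            # A mutation may return None when no legal change exists.
            if temp_graph is not None and temp_graph.size() <= Constant.MAX_MODEL_SIZE:
                graphs.append(temp_graph)
            if len(graphs) >= Constant.N_NEIGHBOURS:
                break
        return [g for g in graphs if legal_graph(g)]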
18 changes: 13 additions & 5 deletions autokeras/preprocessor.py
@@ -71,23 +71,31 @@ def transform_train(self, data, targets=None, batch_size=None):
common_list = [Normalize(torch.Tensor(self.mean), torch.Tensor(self.std))]
compose_list = augment_list + common_list

return self._transform(batch_size, compose_list, data, targets)
dataset = self._transform(compose_list, data, targets)

if batch_size is None:
batch_size = Constant.MAX_BATCH_SIZE
batch_size = min(len(data), batch_size)

return DataLoader(dataset, batch_size=batch_size, shuffle=True)

def transform_test(self, data, targets=None, batch_size=None):
common_list = [Normalize(torch.Tensor(self.mean), torch.Tensor(self.std))]
compose_list = common_list

return self._transform(batch_size, compose_list, data, targets)
dataset = self._transform(compose_list, data, targets)

def _transform(self, batch_size, compose_list, data, targets):
if batch_size is None:
batch_size = Constant.MAX_BATCH_SIZE
batch_size = min(len(data), batch_size)

return DataLoader(dataset, batch_size=batch_size, shuffle=False)

def _transform(self, compose_list, data, targets):
data = data / self.max_val
data = torch.Tensor(data.transpose(0, 3, 1, 2))
data_transforms = Compose(compose_list)
dataset = MultiTransformDataset(data, targets, data_transforms)
return DataLoader(dataset, batch_size=batch_size, shuffle=True)
return MultiTransformDataset(data, targets, data_transforms)


class MultiTransformDataset(Dataset):
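The effect of this refactor, sketched with a hypothetical make_loader helper: _transform now returns just the dataset, and each caller decides batching and shuffling, so test batches keep a deterministic order (the old shared helper always shuffled):

    from torch.utils.data import DataLoader

    from autokeras.constant import Constant

    def make_loader(dataset, data, batch_size, shuffle):
        # Replicates the defaulting shown above: fall back to MAX_BATCH_SIZE,
        # then never exceed the dataset length.
        if batch_size is None:
            batch_size = Constant.MAX_BATCH_SIZE
        batch_size = min(len(data), batch_size)
        return DataLoader(dataset, batch_size=batch_size, shuffle=shuffle)

    # transform_train -> make_loader(..., shuffle=True)
    # transform_test  -> make_loader(..., shuffle=False)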
6 changes: 3 additions & 3 deletions autokeras/search.py
@@ -198,7 +198,7 @@ def search(self, train_data, test_data, timeout=60 * 60 * 24):
print('|' + line + '|')
print('+' + '-' * len(line) + '+')
for i in range(len(new_graph.operation_history)):
if i == len(new_graph.operation_history)//2:
if i == len(new_graph.operation_history) // 2:
r = [new_father_id, new_graph.operation_history[i]]
else:
r = [' ', new_graph.operation_history[i]]
@@ -272,14 +272,14 @@ def train(args):
model = graph.produce_model()
# if path is not None:
# plot_model(model, to_file=path, show_shapes=True)
loss, mertic_value = ModelTrainer(model,
loss, metric_value = ModelTrainer(model,
train_data,
test_data,
metric,
loss,
verbose).train_model(**trainer_args)
model.set_weight_to_graph()
return mertic_value, loss, model.graph
return metric_value, loss, model.graph


def same_graph(des1, des2):