Hi there, I'm trying to train a simple 3-layer feed-forward network for MNIST with full quantization. However, during training the loss is nan in every round. Here is my model, and my train, test, and main look like this:
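For reference, a minimal sketch of this kind of fully quantized setup might look like the following (the model, loader, and hyperparameters here are illustrative assumptions using Brevitas layers and torchvision's MNIST, not the original poster's code):

```python
# Illustrative sketch, not the code from the original post.
import torch
import torch.nn as nn
import brevitas.nn as qnn  # Brevitas quantized layers
from torchvision import datasets, transforms


class SimpleNet(nn.Module):
    """Hypothetical fully quantized 3-layer MNIST classifier."""

    def __init__(self, n_hidden=192):
        super().__init__()
        self.fc1 = qnn.QuantLinear(28 * 28, n_hidden, bias=True)
        self.act1 = qnn.QuantReLU()
        self.fc2 = qnn.QuantLinear(n_hidden, n_hidden, bias=True)
        self.act2 = qnn.QuantReLU()
        self.fc3 = qnn.QuantLinear(n_hidden, 10, bias=True)

    def forward(self, x):
        x = x.view(x.size(0), -1)  # flatten 28x28 images to 784 features
        x = self.act1(self.fc1(x))
        x = self.act2(self.fc2(x))
        return self.fc3(x)


def train(model, loader, optimizer, criterion):
    model.train()
    for images, labels in loader:
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        # Fail fast on the symptom reported above instead of training on.
        if torch.isnan(loss):
            raise RuntimeError("loss became nan")
        loss.backward()
        optimizer.step()


if __name__ == "__main__":
    dataset = datasets.MNIST(
        "data", train=True, download=True, transform=transforms.ToTensor()
    )
    loader = torch.utils.data.DataLoader(dataset, batch_size=64, shuffle=True)
    model = SimpleNet()
    optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
    train(model, loader, optimizer, nn.CrossEntropyLoss())
```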
-
Hi @j2kun,

```python
import torch
import torch.nn as nn
import brevitas.nn as qnn


class PrunedSimpleNet(nn.Module):
    def __init__(self, n_hidden=192):
        super().__init__()
        # Only the middle layer is quantized; fc1 and fc3 stay in float.
        self.fc1 = nn.Linear(28 * 28, n_hidden, bias=True)
        self.q1 = nn.ReLU()
        self.fc2 = qnn.QuantLinear(n_hidden, n_hidden, bias=True)
        self.q2 = nn.ReLU()
        self.fc3 = nn.Linear(n_hidden, 10, bias=True)
        # Re-initialize the quantized layer's weights uniformly in [-1, 1].
        for m in self.modules():
            if isinstance(m, qnn.QuantLinear):
                nn.init.uniform_(m.weight.data, -1, 1)

    def forward(self, x):
        x = self.q1(self.fc1(x))
        x = self.q2(self.fc2(x))
        x = self.fc3(x)
        return x
```
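The uniform_(-1, 1) re-initialization above presumably keeps the quantized weights in a well-behaved range for the quantizer's scale. If the loss still turns into nan, two standard PyTorch debugging steps (not from this thread) can localize where it first appears:

```python
import torch

# 1. Make autograd raise at the backward op that produces nan/inf,
#    instead of silently propagating it into the loss.
torch.autograd.set_detect_anomaly(True)

# 2. Check activations in the forward pass (hypothetical helper):
def nan_hook(module, inputs, output):
    if isinstance(output, torch.Tensor) and torch.isnan(output).any():
        raise RuntimeError(f"nan after {module.__class__.__name__}")

model = PrunedSimpleNet()
for m in model.modules():
    if not list(m.children()):  # attach to leaf modules only
        m.register_forward_hook(nan_hook)
```

Common culprits for nan losses in quantized training include a too-large learning rate and unnormalized inputs, so those are worth ruling out first.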