
Submission for #95 #142

Open. Wants to merge 1 commit into master.
Conversation


@jeromepl jeromepl commented Jan 6, 2019

Submission for #95

@reproducibility-org reproducibility-org added the checks-complete Submission criteria checks complete label Jan 7, 2019
@koustuvsinha koustuvsinha added reviewer-assigned Reviewer has been assigned and removed reviewer-assigned Reviewer has been assigned labels Feb 1, 2019
@reproducibility-org (Collaborator)

Hi, please find below a review submitted by one of the reviewers:

Score: 5
Reviewer 3 comment : PROBLEM STATEMENT
The problem is very clearly presented in a self-contained fashion.

CODE
The whole code is presented in a Google Colab notebook, which allows for an easy understanding of the code. On the other hand, the notebook does not contain any of the graphs presented in the report. This may be related to the limited computational resources available in free Colab notebooks.

COMMUNICATION WITH THE ORIGINAL AUTHORS
There is no reference to communication with authors.

HYPERPARAMETER SEARCH
There is no explicit mention of any work on trying to obtain better hyperparameters.

They do, however, introduce a study of the impact of adding biases in the normalization as well.

ABLATION STUDIES
No ablation studies were reported.

DISCUSSION ON RESULTS
The obtained results are only briefly discussed. Results match the ICLR submission except for the deepest network; as the authors themselves state, this could be related to the variance involved in running a single training of the model due to limited computational power.

However, I do not understand why Batch Norm is not included in Figure 3.

RECOMMENDATIONS FOR REPRODUCIBILITY
No recommendations were provided.

OVERALL ORGANIZATION AND CLARITY
The paper is clear to follow and understand. The authors made an effort to make it self-contained.

My main criticism is on Figure 1, which is actually taken from the ICLR submission. Firstly, citation should be added and, secondly, it is almost impossible to compare the relevant curves of Figure 1 with the ones obtained in Figure 2. Only those plots being reproduced should be included in a single Figure comparing the reproducibility and the original values. This central result is totally missing in the report, so I encourage the authors to create a graph with a fair comparison of the obtained results and the ones reported in the ICLR submission.

Confidence : 4

@reproducibility-org (Collaborator)

Hi, please find below a review submitted by one of the reviewers:

Score: 5
Reviewer 1 comment : Problem Statement

Equi-Normalization is well-justified in the Introduction of the reproduced paper.
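For context, the core rebalancing step that Equi-Normalization applies between two consecutive fully connected layers can be sketched as follows. This is a minimal NumPy illustration of the paper's idea of equalizing the incoming and outgoing weight norms of each hidden unit; the function name `enorm_step` and the exact formulation are my paraphrase, not code from the submission.

```python
import numpy as np

def enorm_step(W1, b1, W2):
    """One Equi-Normalization rebalancing pass for two consecutive
    fully connected layers with a ReLU in between.

    For each hidden unit i, the incoming row W1[i, :] (and bias b1[i])
    is scaled by d_i and the outgoing column W2[:, i] by 1/d_i, where
    d_i is chosen so the two L2 norms become equal. Because ReLU is
    positively homogeneous, relu(d * x) = d * relu(x) for d > 0, the
    network function W2 @ relu(W1 @ x + b1) is unchanged.
    """
    row_norms = np.linalg.norm(W1, axis=1)  # incoming norm per hidden unit
    col_norms = np.linalg.norm(W2, axis=0)  # outgoing norm per hidden unit
    d = np.sqrt(col_norms / row_norms)
    return W1 * d[:, None], b1 * d, W2 / d[None, :]
```

After one pass, each hidden unit's incoming row norm and outgoing column norm both equal the geometric mean of the original two norms, while the network's output on any input is preserved.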

Code

The code is open-source. All the code is in one Jupyter notebook. I cannot comment much on the codebase itself, but the structure and the comments look good.

Communication with the Original Author

The authors mention that they communicated with the original authors via OpenReview, where the exchange can be found.

Hyperparameter Search

No hyperparameter search was performed.

Ablation Study

An ablation study of adding bias to the neural network was performed. No other ablation studies are given beyond the ones already in the original paper (BN, ENorm, BN + ENorm).

Discussion on Results

The authors give an adequate comparison between published and reproduced results. There could be more speculation about the p = 19 case.

Recommendations for Reproducibility

There were no suggestions to the authors about possible improvements in terms of reproducibility.

Overall Organization and Clarity

Overall, the paper was written coherently.


Here are points that I was particularly impressed with:

  1. The authors found that the neural network did not include bias terms.

Here are some parts of the paper I wished for more information:

  1. No experiment was conducted on optimizers with momentum.
  2. No experiment was conducted with convolutional networks.

Here are some minor fixes I recommend:

  1. Add citation for the CIFAR-10 dataset.
  2. In 3rd paragraph of Section 2 (Methodology), “Googles Colaboratory” should be fixed to “Google Colaboratory.”
  3. In 4th paragraph of Section 2 (Methodology), “weight decay of 10e^{-3}” should be fixed to “10^{-3}” or “0.001”
  4. Specify in the caption of Table 1 that the learning rates have been modified for p = 15, 17, 19 in the BN + ENorm case.
  5. In Section 3, put the figure of reproduced results before the figure of results in the original paper.
  6. In 5th paragraph of 4.1 (Reproducibility), write full names for other normalization techniques and include citations. (GN, WN+BN, Path-SGD)

Thank you!

Score (1-10): 5
Confidence (1-5): 3

@reproducibility-org (Collaborator)

Hi, please find below a review submitted by one of the reviewers:

Score: 4
Reviewer 2 comment : Code is nicely commented and easy enough to read.
Confidence : 3

@reproducibility-org (Collaborator)

Reviewer 2 comment : In detail, it would have been nice to take advantage of the notebook format to show the reproduced results. In its present state, it is a bit hard to judge what the output would look like.

@reproducibility-org reproducibility-org added the review-complete Review is done by all reviewers label Mar 20, 2019