ResNet101+ASL in MS-COCO #90
I also had a hard time; mine reaches 83.1 mAP using ResNet101, and I followed the tricks in #30 (comment).
Would you like to provide a training log?
The file with mAP=83.1 was deleted, so I ran the experiment again with batch_size=6 per GPU on 48 1080Ti GPUs. The new result is mAP=82.51. I use MMClassification; the surviving fragments of the config file are:

```python
_base_ = [
work_dir = './work_dirs/resnet101_coco_baseline'
model = dict(
img_norm_cfg = dict(
crop_size = 576
train_pipeline = [
test_pipeline = [
data = dict(
# learning policy
# checkpoint saving
dist_params = dict(backend='nccl')
workflow = [('train', 1)]
```
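For readers outside the thread: ASL here is the Asymmetric Loss that this repo implements. A minimal NumPy sketch of the loss is below; the function name and signature are mine (not from the repo), and the defaults γ+=0, γ−=4, clip m=0.05 are the paper's recommended settings.

```python
import numpy as np

def asymmetric_loss(logits, targets, gamma_pos=0.0, gamma_neg=4.0, clip=0.05):
    """Illustrative Asymmetric Loss (ASL) for multi-label classification.

    Positives are weighted by (1 - p)^gamma_pos; negatives use a shifted
    probability p_m = max(p - clip, 0) raised to gamma_neg, which discards
    very easy negatives entirely.
    """
    p = 1.0 / (1.0 + np.exp(-logits))        # sigmoid probabilities
    p_neg = np.clip(p - clip, 0.0, 1.0)      # probability shifting for negatives
    eps = 1e-8
    loss_pos = targets * ((1.0 - p) ** gamma_pos) * np.log(p + eps)
    loss_neg = (1.0 - targets) * (p_neg ** gamma_neg) * np.log(1.0 - p_neg + eps)
    return -np.sum(loss_pos + loss_neg)
```

With the default clip, a confident correct prediction contributes almost nothing, while a confident mistake dominates the sum.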
The training log is:

```
2022-07-13 10:38:26,914 - mmcls - INFO - workflow: [('train', 1)], max: 80 epochs
```
The normalization is `mean=[123.675, 116.28, 103.53], std=[58.395, 57.12, 57.375], to_rgb=True`. The server where the previous training logs are located is currently inaccessible.
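Those mean/std values are the standard ImageNet statistics. As a sketch of what that `img_norm_cfg` does (the helper below is mine, assuming mmcls-style behavior: images arrive BGR, `to_rgb=True` swaps channels before `(x - mean) / std`):

```python
import numpy as np

# Statistics quoted in the comment above (standard ImageNet mean/std in RGB order).
MEAN = np.array([123.675, 116.28, 103.53])
STD = np.array([58.395, 57.12, 57.375])

def normalize(img_bgr):
    """Illustrative normalize step: BGR -> RGB, then per-channel (x - mean) / std."""
    img_rgb = img_bgr[..., ::-1].astype(np.float32)  # to_rgb=True channel swap
    return (img_rgb - MEAN) / STD
```

An image that equals the channel means everywhere normalizes to exactly zero, which is a quick sanity check that the channel order is right.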
@THUeeY

```
[07/13 17:40:44.800]: Epoch: [0/80][ 0/646] T 8.995 (8.995) DT 2.315 (2.315) S1 3.6 (3.6) SA 14.2 (14.2) LR 4.000e-06 Loss 56.353 (56.353) Mem 15486
```
Sure, I have updated my email address. |
This result is difficult to reproduce. I am not the only one having trouble; others have hit the same problem.
Would you share the configuration file and training log of ResNet101?
The result I report is mAP=82.9.
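For clarity on the metric being compared throughout this thread: mAP here is the mean over classes of per-class average precision on the COCO validation set. A minimal NumPy sketch (function names are mine, not from the repo):

```python
import numpy as np

def average_precision(scores, labels):
    """AP for one class: mean of precision evaluated at each positive's rank."""
    order = np.argsort(-scores)              # sort by descending score
    labels = labels[order]
    cum_tp = np.cumsum(labels)               # true positives seen so far
    precision = cum_tp / (np.arange(len(labels)) + 1)
    return precision[labels == 1].mean()

def mean_ap(score_mat, label_mat):
    """mAP: average the per-class AP over all classes (columns)."""
    return np.mean([average_precision(score_mat[:, c], label_mat[:, c])
                    for c in range(score_mat.shape[1])])
```

A perfect ranking (every positive scored above every negative, per class) yields mAP = 1.0.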