
Undefined functions appear when running train_CCAM_VOC12.py? #19

Open
liaochuanlin opened this issue Sep 1, 2023 · 21 comments

@liaochuanlin

File "/home/lcl/BECO/BECO-main/CCAM-master/WSSS/train_CCAM_VOC12.py", line 240, in
visualize_heatmap(args.tag, images.clone().detach(), ccam, 0, iteration)
NameError: name 'visualize_heatmap' is not defined

@Sierkinhane
Member

Hi, the visualize_heatmap() function is defined in WSSS/utils; please make sure to import it.
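A minimal sketch of the fix; the exact module path is an assumption based on the maintainer's hint, so adjust it to wherever visualize_heatmap actually lives under WSSS/utils:

```python
# At the top of train_CCAM_VOC12.py, next to the other imports
# (hypothetical import path -- adjust to your checkout):
from utils import visualize_heatmap
```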

@liaochuanlin
Author

Thank you for your reply, but another problem has come up:
Traceback (most recent call last):
File "/home/lcl/BECO/BECO-main/CCAM-master/WSSS/train_CCAM_VOC12.py", line 273, in
torch.save({'state_dict': model.module.state_dict() if (the_number_of_gpu > 1) else model.state_dict(),
File "/home/lcl/anaconda3/envs/BECO/lib/python3.10/site-packages/torch/serialization.py", line 440, in save
with _open_zipfile_writer(f) as opened_zipfile:
File "/home/lcl/anaconda3/envs/BECO/lib/python3.10/site-packages/torch/serialization.py", line 315, in _open_zipfile_writer
return container(name_or_buffer)
File "/home/lcl/anaconda3/envs/BECO/lib/python3.10/site-packages/torch/serialization.py", line 288, in __init__
super().__init__(torch._C.PyTorchFileWriter(str(name)))
RuntimeError: [enforce fail at inline_container.cc:365] . invalid file name: ./experiments/models/.pth

@Sierkinhane
Member

Hi, the program actually creates the folder experiments/models/ automatically. You can also create it manually.
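For reference, a small pre-save guard might look like the sketch below. The directory path comes from the traceback above; the tag value is purely illustrative. Note that the error string `invalid file name: ./experiments/models/.pth` shows an empty file stem, which suggests the tag was empty as well, so checking it is worthwhile:

```python
import os

# Ensure the checkpoint directory exists before torch.save() opens the file.
model_dir = './experiments/models'
os.makedirs(model_dir, exist_ok=True)

# The traceback shows './experiments/models/.pth', i.e. an empty file stem --
# guard against an empty tag (this tag value is illustrative only).
tag = 'CCAM_VOC12_MOCO'
assert tag, 'tag must be non-empty, otherwise the file name becomes ".pth"'

model_path = os.path.join(model_dir, f'{tag}.pth')
print(model_path)  # → ./experiments/models/CCAM_VOC12_MOCO.pth
```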

@liaochuanlin
Author

Traceback (most recent call last):
File "/home/lcl/BECO/BECO-main/CCAM-master/WSSS/inference_crf.py", line 98, in
cams = crf_inference_label(np.asarray(ori_image), cams, n_labels=keys.shape[0], t=args.crf_iteration)
File "/home/lcl/BECO/BECO-main/CCAM-master/WSSS/tools/ai/demo_utils.py", line 102, in crf_inference_label
d = dcrf.DenseCRF2D(w, h, n_labels)
AttributeError: module 'pydensecrf.densecrf' has no attribute 'DenseCRF2D'

@Sierkinhane
Member

Installing pydensecrf from source can address the issue.

@liaochuanlin
Author

I downloaded the whole folder and added it to WSSS, but the problem above still occurred. Is downloading the whole folder the wrong approach?

@Sierkinhane
Member

Install using python3 setup.py install.
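For reference, a from-source build usually looks like the steps below; the repository URL is the upstream pydensecrf project (an assumption on my part), and Cython is needed as a build dependency:

```shell
pip install cython                      # build dependency
git clone https://github.com/lucasb-eyer/pydensecrf.git
cd pydensecrf
python3 setup.py install
# sanity check: the DenseCRF2D class should now resolve
python3 -c "import pydensecrf.densecrf as dcrf; print(dcrf.DenseCRF2D)"
```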

@liaochuanlin
Author

When running the evaluation:
/home/lcl/anaconda3/envs/BECO/bin/python3.10 /home/lcl/BECO/BECO-main/CCAM-master/WSSS/evaluate_using_background_cues.py --domain train --with_bg_cues --data_dir /home/lcl/BECO/BECO-main/CCAM-master/WSSS/data/VOC2012 True --bg_dir /home/lcl/BECO/BECO-main/CCAM-master/WSSS/experiments/predictions/CCAM_VOC12_MOCO@train@scale=0.5,1.0,1.5,2.0@t=0.3@ccam_inference_crf=10 True
usage: evaluate_using_background_cues.py [-h]
[--experiment_name EXPERIMENT_NAME]
[--domain DOMAIN]
[--threshold THRESHOLD]
[--predict_dir PREDICT_DIR]
[--gt_dir GT_DIR] [--logfile LOGFILE]
[--comment COMMENT] [--mode MODE]
[--max_th MAX_TH] [--bg_dir BG_DIR]
[--with_bg_cues WITH_BG_CUES]
evaluate_using_background_cues.py: error: argument --with_bg_cues: expected one argument

@liaochuanlin
Author

/home/lcl/anaconda3/envs/BECO/bin/python3.10 /home/lcl/BECO/BECO-main/CCAM-master/WSSS/evaluate_using_background_cues.py --domain train --with_bg_cues True --bg_dir /home/lcl/BECO/BECO-main/CCAM-master/WSSS/experiments/predictions/CCAM_VOC12_MOCO@train@scale=0.5,1.0,1.5,2.0@t=0.3@ccam_inference_crf=10
Th=0.05, mIoU=3.599%, FP=0.0000, FN=1.0000
Th=0.10, mIoU=3.599%, FP=0.0000, FN=1.0000
Th=0.15, mIoU=3.599%, FP=0.0000, FN=1.0000
Th=0.20, mIoU=3.599%, FP=0.0000, FN=1.0000
Th=0.25, mIoU=3.599%, FP=0.0000, FN=1.0000
Th=0.30, mIoU=3.599%, FP=0.0000, FN=1.0000
Th=0.35, mIoU=3.599%, FP=0.0000, FN=1.0000
Th=0.40, mIoU=3.599%, FP=0.0000, FN=1.0000
Th=0.45, mIoU=3.599%, FP=0.0000, FN=1.0000
Th=0.50, mIoU=3.599%, FP=0.0000, FN=1.0000
Th=0.55, mIoU=3.599%, FP=0.0000, FN=1.0000
Th=0.60, mIoU=3.599%, FP=0.0000, FN=1.0000
Th=0.65, mIoU=3.599%, FP=0.0000, FN=1.0000
Th=0.70, mIoU=3.599%, FP=0.0000, FN=1.0000
Th=0.75, mIoU=3.599%, FP=0.0000, FN=1.0000
Best Th=0.05, mIoU=3.599%
The final result is very poor.

@liaochuanlin
Author

parser.add_argument('--experiment_name', default='CCAM_VOC12_MOCO@train@scale=0.5,1.0,1.5,2.0@t=0.3@ccam_inference_crf=10',
type=str)
parser.add_argument("--domain", default='train', type=str)
parser.add_argument("--threshold", default=None, type=float)

parser.add_argument("--predict_dir", default='', type=str)
parser.add_argument('--gt_dir', default='./data/VOC2012/SegmentationClass', type=str)

parser.add_argument('--logfile', default='', type=str)
parser.add_argument('--comment', default='', type=str)

parser.add_argument('--mode', default='npy', type=str) # png
parser.add_argument('--max_th', default=0.50, type=float)

parser.add_argument('--bg_dir', default='', type=str)
parser.add_argument('--with_bg_cues', default=False, type=bool)
Dear author, could you explain these parameters for me?
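For what it's worth, argparse's type=bool is a known pitfall: bool() applied to any non-empty string (including 'False') returns True, so --with_bg_cues False would still enable the flag; and since the option is typed, it always expects exactly one value, which is why --with_bg_cues followed directly by another option raised "expected one argument". A hedged sketch of the common str2bool workaround (not part of the original script):

```python
import argparse

# Pitfall: type=bool does NOT parse booleans -- bool('False') is True,
# because any non-empty string is truthy. An explicit converter avoids this:
def str2bool(v):
    if v.lower() in ('true', '1', 'yes'):
        return True
    if v.lower() in ('false', '0', 'no'):
        return False
    raise argparse.ArgumentTypeError(f'expected a boolean, got {v!r}')

parser = argparse.ArgumentParser()
parser.add_argument('--with_bg_cues', default=False, type=str2bool)
parser.add_argument('--bg_dir', default='', type=str)

args = parser.parse_args(['--with_bg_cues', 'False'])
print(args.with_bg_cues)  # → False (plain type=bool would have given True here)
```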

@liaochuanlin
Author

The README says: "Refine the background cues: you can use the extracted background cues as a pseudo supervision signal to train a saliency detector like PoolNet to further refine the background cues; we provide the code for background-cue refinement in the directory ./PoolNet. We also provide our refined background cues here." Does this step have to run?
Will skipping it lead to bad results?

@Sierkinhane
Member

Could you provide some predicted results of your method and the extracted background cues?

@liaochuanlin
Author

/home/lcl/anaconda3/envs/BECO/bin/python3.10 /home/lcl/BECO/BECO-main/CCAM-master/WSSS/train_CCAM_VOC12.py --tag CCAM_VOC12_MOCO --pretrained mocov2 --alpha 0.25
[i] CCAM_VOC12_MOCO

[i] mean values is [0.485, 0.456, 0.406]
[i] std values is [0.229, 0.224, 0.225]
[i] The number of class is 20
[i] train_transform is Compose(
Resize(size=(512, 512), interpolation=bilinear, max_size=None, antialias=warn)
RandomHorizontalFlip(p=0.5)
RandomCrop(size=(448, 448), padding=None)
ToTensor()
Normalize(mean=(0.485, 0.456, 0.406), std=(0.229, 0.224, 0.225))
)
[i] test_transform is Compose(
<tools.ai.augment_utils.Normalize_For_Segmentation object at 0x7fe2fb7bae90>
<tools.ai.augment_utils.Top_Left_Crop_For_Segmentation object at 0x7fe2fb7bb100>
<tools.ai.augment_utils.Transpose_For_Segmentation object at 0x7fe2fb7bb340>
)
[i] #train data
[i] #valid data

[i] log_iteration : 66
[i] val_iteration : 331
[i] max_iteration : 3,310
Loading unsupervised mocov2 pretrained parameters!

[i] Architecture is resnet50
[i] Total Params: 23.54M

[i] Epoch[0/10], iteration=100, learning_rate=0.0010, loss=1.1635, positive_loss=0.0400, negative_loss=0.9580, time=40sec
[i] Epoch[0/10], iteration=200, learning_rate=0.0009, loss=1.0110, positive_loss=0.0428, negative_loss=0.9016, time=38sec
[i] Epoch[0/10], iteration=300, learning_rate=0.0009, loss=0.9442, positive_loss=0.0486, negative_loss=0.7862, time=38sec
Is Negative: True
[i] save model
[i] Epoch[1/10], iteration=100, learning_rate=0.0009, loss=0.8853, positive_loss=0.0506, negative_loss=0.7656, time=51sec
[i] Epoch[1/10], iteration=200, learning_rate=0.0009, loss=0.8497, positive_loss=0.0600, negative_loss=0.6674, time=39sec
[i] Epoch[1/10], iteration=300, learning_rate=0.0008, loss=0.8274, positive_loss=0.0543, negative_loss=0.6699, time=39sec
[i] save model
[i] Epoch[2/10], iteration=100, learning_rate=0.0008, loss=0.7983, positive_loss=0.0608, negative_loss=0.6137, time=51sec
[i] Epoch[2/10], iteration=200, learning_rate=0.0008, loss=0.7801, positive_loss=0.0557, negative_loss=0.6619, time=39sec
[i] Epoch[2/10], iteration=300, learning_rate=0.0007, loss=0.7643, positive_loss=0.0592, negative_loss=0.6105, time=39sec
[i] save model
[i] Epoch[3/10], iteration=100, learning_rate=0.0007, loss=0.7495, positive_loss=0.0554, negative_loss=0.6402, time=51sec
[i] Epoch[3/10], iteration=200, learning_rate=0.0007, loss=0.7369, positive_loss=0.0654, negative_loss=0.5693, time=39sec
[i] Epoch[3/10], iteration=300, learning_rate=0.0006, loss=0.7246, positive_loss=0.0611, negative_loss=0.5761, time=39sec
[i] save model
[i] Epoch[4/10], iteration=100, learning_rate=0.0006, loss=0.7191, positive_loss=0.0576, negative_loss=0.6141, time=50sec
[i] Epoch[4/10], iteration=200, learning_rate=0.0006, loss=0.7047, positive_loss=0.0568, negative_loss=0.6006, time=38sec
[i] Epoch[4/10], iteration=300, learning_rate=0.0005, loss=0.7014, positive_loss=0.0627, negative_loss=0.5613, time=38sec
[i] save model
[i] Epoch[5/10], iteration=100, learning_rate=0.0005, loss=0.6958, positive_loss=0.0568, negative_loss=0.5652, time=51sec
[i] Epoch[5/10], iteration=200, learning_rate=0.0005, loss=0.6861, positive_loss=0.0660, negative_loss=0.5346, time=39sec
[i] Epoch[5/10], iteration=300, learning_rate=0.0004, loss=0.6819, positive_loss=0.0610, negative_loss=0.5482, time=39sec
[i] save model
[i] Epoch[6/10], iteration=100, learning_rate=0.0004, loss=0.6764, positive_loss=0.0598, negative_loss=0.5811, time=51sec
[i] Epoch[6/10], iteration=200, learning_rate=0.0004, loss=0.6716, positive_loss=0.0674, negative_loss=0.5235, time=38sec
[i] Epoch[6/10], iteration=300, learning_rate=0.0003, loss=0.6669, positive_loss=0.0519, negative_loss=0.5649, time=38sec
[i] save model
[i] Epoch[7/10], iteration=100, learning_rate=0.0003, loss=0.6641, positive_loss=0.0515, negative_loss=0.5690, time=50sec
[i] Epoch[7/10], iteration=200, learning_rate=0.0003, loss=0.6602, positive_loss=0.0650, negative_loss=0.4885, time=38sec
[i] Epoch[7/10], iteration=300, learning_rate=0.0002, loss=0.6572, positive_loss=0.0578, negative_loss=0.5534, time=38sec
[i] save model
[i] Epoch[8/10], iteration=100, learning_rate=0.0002, loss=0.6574, positive_loss=0.0664, negative_loss=0.5179, time=50sec
[i] Epoch[8/10], iteration=200, learning_rate=0.0002, loss=0.6533, positive_loss=0.0598, negative_loss=0.5154, time=39sec
[i] Epoch[8/10], iteration=300, learning_rate=0.0001, loss=0.6497, positive_loss=0.0643, negative_loss=0.4860, time=39sec
[i] save model
[i] Epoch[9/10], iteration=100, learning_rate=0.0001, loss=0.6512, positive_loss=0.0623, negative_loss=0.5379, time=50sec
[i] Epoch[9/10], iteration=200, learning_rate=0.0001, loss=0.6496, positive_loss=0.0590, negative_loss=0.5055, time=38sec
[i] Epoch[9/10], iteration=300, learning_rate=0.0000, loss=0.6486, positive_loss=0.0569, negative_loss=0.5523, time=38sec
[i] save model
CCAM_VOC12_MOCO

Process finished with exit code 0

@Sierkinhane
Member

I mean you can go to the directory ./experiments/images/ and check whether the predicted cues are correct.

@liaochuanlin
Author

[image]
Yes, the colormap .jpg pictures are generated in /home/lcl/BECO/BECO-main/CCAM-master/WSSS/experiments/images/CCAM_VOC12_MOCO/train/colormaps

@Sierkinhane
Member

That's good. How about the generated background cues? And your cams to be refined. Could you please provide some examples?

@liaochuanlin
Author

[image: 2007_000515]
Yes, the .png pictures are generated in /home/lcl/BECO/BECO-main/CCAM-master/WSSS/experiments/predictions/CCAM_VOC12_MOCO@train@scale=0.5,1.0,1.5,2.0@t=0.3@ccam_inference_crf=10

@liaochuanlin
Author

May I add your contact information?
I'm also a student.

@Sierkinhane
Member

python3 evaluate_using_background_cues.py --experiment_name your_experiment_name --domain train --data_dir path/to/your/data --with_bg_cues True --bg_dir path/to/your/background_cues
Here your_experiment_name means the directory storing your extracted cams, and bg_dir indicates the background cues.

@Sierkinhane
Member

If your extracted cams and background cues are correct, the evaluation results may be good.

@liaochuanlin
Author

liaochuanlin commented Sep 5, 2023 via email
