Is the backbone of CAM shared with CCAM? #3
The training of CCAM and CAM is separate, but both use the same architecture, e.g., ResNet50.
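In other words, something like the following (a minimal sketch, assuming both backbones are plain torchvision ResNet-50s; the actual repositories define their own wrappers):

import torch
from torchvision.models import resnet50

# Two independent backbones: one for the CAM classifier, one for CCAM.
# Same architecture, trained separately, no shared weights.
cam_backbone = resnet50(pretrained=True)   # trained with image-level labels for CAM
ccam_backbone = resnet50(pretrained=True)  # trained separately with the CCAM objective

# Same architecture, independent parameters.
assert type(cam_backbone) is type(ccam_backbone)
assert cam_backbone.conv1.weight is not ccam_backbone.conv1.weight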
Thanks!
Actually, we adopt a simple method to refine CAM: we put the extracted background cues in the background channel (0) instead of using a fixed background threshold, and then apply argmax to refine the CAM. For example, change the padded background threshold to the extracted background cues in https://github.com/OFRIN/PuzzleCAM/blob/659b4c1b464a86faf363314acd71a5ce9480ff9b/evaluate.py#L48.
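The idea in code (a minimal sketch, not the repository's implementation; `cams` is assumed to be a (C, H, W) array of foreground class activations and `sal` a saliency map in [0, 255]):

import numpy as np

def refine_with_background_cue(cams, sal):
    # Turn the saliency map into a background cue (high where nothing is salient).
    bg = 1.0 - sal.astype(np.float32) / 255.0
    # Put the background cue at channel 0 instead of padding a fixed threshold.
    cams_with_bg = np.concatenate((bg[np.newaxis, ...], cams), axis=0)
    # Argmax over channels: 0 = background, 1..C = foreground classes.
    return np.argmax(cams_with_bg, axis=0)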
Thanks for your reply, you really did great work!
Thanks! :)
Hello, have you worked out the final refinement code? Would it be convenient to share it?
|
|
If it is of any help, I managed to refine the CAMs by making the following changes to evaluate.py:

# Imports
...

# Arguments
parser.add_argument('--sal_dir', default=None, type=str)  # (<--- added)
parser.add_argument('--gt_dir', default='../VOCtrainval_11-May-2012/SegmentationClass', type=str)
...
args = parser.parse_args()
sal_dir = args.sal_dir  # (<--- added)
...

def compare(...):
    for idx in range(start, len(name_list), step):
        name = name_list[idx]
        npy_file = os.path.join(predict_folder, name + '.npy')
        label_file = os.path.join(gt_dir, name + '.png')
        sal_file = os.path.join(sal_dir, name + '.png') if sal_dir else None  # (<--- added)
        if os.path.exists(npy_file):
            ...
            if sal_file:  # (<--- added)
                sal = np.array(Image.open(sal_file)).astype(float)
                sal = 1 - sal / 255.
                cams = np.concatenate((sal[np.newaxis, ...], cams), axis=0)
            else:
                cams = np.pad(cams, ((1, 0), (0, 0), (0, 0)), mode='constant', constant_values=args.threshold)
            ...

And finally call it with:

python evaluate.py \
    --experiment_name ResNet50@Puzzle@optimal@train@scale=0.5,1.0,1.5,2.0 \
    --domain train \
    --gt_dir /datasets/voc/VOCdevkit/VOC2012/SegmentationClass \
    --sal_dir /saliency/sal_55_epoch9_moco
Thanks for sharing the code for refining CAMs! I noticed that when calling it, the experiment_name is 'ResNet50@Puzzle@optimal@train@scale=0.5,1.0,1.5,2.0'. It seems that you used PuzzleCAM to generate the CAMs instead of using CCAM, whose CAMs are in 'CCAM_VOC12_MOCO@train@scale=0.5,1.0,1.5,2.0'. Have you tried using those CAMs to refine the final labels?
Hi, he just used PuzzleCAM to generate the initial CAMs, and you can train C2AM to extract background cues to refine those initial CAMs; that is why the experiment names are different.
Thanks for the reply. So the proper way to use C2AM is to generate the initial CAMs with another method and then use the refined background cues (produced by a saliency detector like PoolNet trained on the background cues) to get the final CAMs? And are the npy files in 'CCAM_VOC12_MOCO@train@scale=0.5,1.0,1.5,2.0', which contain the CAMs generated during the training phase, just intermediate output?
Right! Details are provided in Sec. 3.5 of our paper.
Is the backbone of CAM and CCAM the same or a different one?
When I use CCAM to refine the CAM, should I train a new network to generate the CCAM separately?