From c496fb1cc02a4c70dbdca622dae0390eefef69d5 Mon Sep 17 00:00:00 2001
From: =?UTF-8?q?=E5=BC=A0=E6=97=AD=E8=BF=8E?= <47649909+zhangxuying1004@users.noreply.github.com>
Date: Fri, 14 Jul 2023 16:26:05 +0800
Subject: [PATCH] Update README.md

---
 README.md | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/README.md b/README.md
index f59da67..0482599 100644
--- a/README.md
+++ b/README.md
@@ -60,7 +60,7 @@ in orange. In the reference branch, the common representation of a specified obj
 features with the foreground map generated by a SOD network. In the
 segmentation branch, the visual features from the last three layers of the
 encoder are employed to represent the given image. Then, these two kinds of
 feature representations are fused and compared in the well-designed RMG
 module to generate a mask prior, which is used to enrich the visual feature
 among different scales to highlight the camouflaged targets in our
-RFE module. Finally, the enriched features are fed into the decoder to generate the final segmentation map. DSF:Dual-source Information Fusion, MSF: Multi-scale Feature Fusion, TM: Target Matching.
+RFE module. Finally, the enriched features are fed into the decoder to generate the final segmentation map. DSF: Dual-source Information Fusion, MSF: Multi-scale Feature Fusion, TM: Target Matching.

@@ -92,7 +92,7 @@ python train.py --model_name r2cnet --gpu_id 0
 ```
 
 ## Contact
-For technical questions, feel free to contact `zhangxuying1004@gmail.com` and `bowenyin@mail.nankai.edu.cn`.
+For technical questions, feel free to contact [zhangxuying1004@gmail.com]() and [bowenyin@mail.nankai.edu.cn]().
 
 ## Citation
 If our work is helpful to you or gives some inspiration to you, please star this project and cite our paper. Thank you!
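
For orientation, the caption text carried in the first hunk describes a pipeline of the form DSF (reference features weighted by a SOD foreground map) → RMG (reference and multi-scale image features compared to produce a mask prior) → RFE (features enriched by that prior) → decoder. A rough NumPy sketch of that data flow is below; all function names, the mean-pooling step, and the cosine-similarity comparison are assumptions for illustration, not the paper's actual implementation:

```python
import numpy as np

def dual_source_fusion(ref_feat, foreground_map):
    # DSF sketch (assumed): weight reference features (C, H, W) by the SOD
    # foreground map (H, W), then pool to a single reference vector (C,).
    weighted = ref_feat * foreground_map
    return weighted.mean(axis=(1, 2))

def reference_mask_prior(ref_vec, img_feats):
    # RMG sketch (assumed): compare the reference vector against each scale's
    # image features (C, H, W) via cosine similarity, yielding one (H, W)
    # mask prior per scale.
    priors = []
    for f in img_feats:
        norm = np.linalg.norm(f, axis=0) * np.linalg.norm(ref_vec) + 1e-8
        priors.append(np.tensordot(ref_vec, f, axes=([0], [0])) / norm)
    return priors

def enrich_features(img_feats, priors):
    # RFE sketch (assumed): amplify each scale's features where the mask
    # prior responds, highlighting the camouflaged target before decoding.
    return [f * (1.0 + p) for f, p in zip(img_feats, priors)]
```

This only mirrors the shapes and ordering of the described stages; the real RMG/RFE modules are learned networks, not fixed similarity and scaling operations.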