From f70606bae99940daa30dc9989491589200d8230c Mon Sep 17 00:00:00 2001
From: Linshan <52514670+Luffy03@users.noreply.github.com>
Date: Thu, 12 Sep 2024 10:44:00 +0800
Subject: [PATCH] Update README.md
---
README.md | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/README.md b/README.md
index 48df51b..39bed5b 100644
--- a/README.md
+++ b/README.md
@@ -7,7 +7,7 @@ Code for CVPR 2024 paper, [**"VoCo: A Simple-yet-Effective Volume Contrastive Le
Authors: Linshan Wu, Jiaxin Zhuang, and Hao Chen
-This work presents VoCo, a simple-yet-effective contrastive learning framework for pre-training large scale 3D medical images. Our **10k CT images pre-training** model are available. Our **100k CT images pre-training** models are comming soon!
+This work presents VoCo, a simple-yet-effective contrastive learning framework for pre-training on large-scale 3D medical images. Our **10k CT images pre-training** model is available. Our **160k CT images pre-training** models are coming soon!
## Abstract
Self-Supervised Learning (SSL) has demonstrated promising results in 3D medical image analysis. However, the lack of high-level semantics in pre-training still heavily hinders the performance of downstream tasks. We observe that 3D medical images contain relatively consistent contextual position information, i.e., consistent geometric relations between different organs, which suggests a potential way to learn consistent semantic representations during pre-training. In this paper, we propose a simple-yet-effective **Vo**lume **Co**ntrast (**VoCo**) framework to leverage these contextual position priors for pre-training. Specifically, we first generate a group of base crops from different regions while enforcing feature discrepancy among them, and employ them as class assignments for the different regions. Then, we randomly crop sub-volumes and predict which class each belongs to (i.e., which region it is located at) by contrasting its similarity to the different base crops, which can be seen as predicting the contextual positions of the sub-volumes. Through this pretext task, VoCo implicitly encodes the contextual position priors into model representations without the guidance of annotations, enabling us to effectively improve the performance of downstream tasks that require high-level semantics. Extensive experimental results on six downstream tasks demonstrate the superior effectiveness of VoCo.
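
A minimal PyTorch sketch of the pretext task described in the abstract: base-crop embeddings act as class prototypes, a sub-volume's position is predicted by contrasting its similarity to them, and a discrepancy term keeps the prototypes apart. The function name `voco_loss`, the tensor shapes, and the exact loss forms (MSE on similarities plus an off-diagonal penalty) are illustrative assumptions, not the paper's implementation.

```python
# Hypothetical sketch of a VoCo-style loss; names and loss forms are
# assumptions for illustration, not the authors' released code.
import torch
import torch.nn.functional as F

def voco_loss(base_feats, crop_feats, pos_labels):
    """base_feats: (K, D) embeddings of K non-overlapping base crops,
                   used as class assignments of the K regions (K >= 2).
    crop_feats:    (B, D) embeddings of B randomly cropped sub-volumes.
    pos_labels:    (B, K) overlap ratio of each sub-volume with each base
                   region, i.e. the contextual-position target."""
    base = F.normalize(base_feats, dim=-1)
    crops = F.normalize(crop_feats, dim=-1)

    # Predict each sub-volume's contextual position by contrasting its
    # similarity to the K base crops.
    sim = (crops @ base.t()).clamp(min=0)      # (B, K), in [0, 1]
    pred_loss = F.mse_loss(sim, pos_labels)

    # Enforce feature discrepancy among base crops so they act as
    # distinct class prototypes (push off-diagonal similarities to 0).
    base_sim = (base @ base.t()).abs()
    off_diag = base_sim - torch.diag_embed(torch.diagonal(base_sim))
    disc_loss = off_diag.sum() / (base.size(0) * (base.size(0) - 1))

    return pred_loss + disc_loss

if __name__ == "__main__":
    # Toy smoke test with random features and normalized overlap ratios.
    K, B, D = 8, 4, 128
    base, crops = torch.randn(K, D), torch.randn(B, D)
    labels = torch.rand(B, K)
    labels = labels / labels.sum(dim=1, keepdim=True)
    print(voco_loss(base, crops, labels).item())
```

In this reading, no annotations are needed: the position targets come for free from how the random sub-volume overlaps the base-crop grid, which is what lets the pretext task encode contextual position priors into the representation.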