Upload folder using huggingface_hub
This view is limited to 50 files because it contains too many changes. See the raw diff for the full change set.
- LICENSE +201 -0
- README.md +147 -3
- figs/framework_v1.png +3 -0
- figs/iterative_results.png +3 -0
- figs/iterative_results_supp.png +3 -0
- figs/qualitative_results.png +3 -0
- implementation/lits/segmamba_lits/best.pth.tar +3 -0
- implementation/lits/segmamba_lits/last.pth.tar +3 -0
- implementation/lits/segmamba_lits/train_segmamba_lits_20251228-205022.log +44 -0
- implementation/lits/segmamba_lits/train_segmamba_lits_20251228-205243.log +1 -0
- implementation/lits/segmamba_lits/train_segmamba_lits_20251228-205455.log +1 -0
- implementation/lits/segmamba_lits/train_segmamba_lits_20251228-205751.log +44 -0
- implementation/lits/segmamba_lits/train_segmamba_lits_20251228-210909.log +44 -0
- implementation/lits/segmamba_lits/train_segmamba_lits_20251228-212200.log +0 -0
- requirements.txt +16 -0
- splits/README.md +1 -0
- splits/colon/split.pkl +3 -0
- splits/kits/split.pkl +3 -0
- splits/lits/split.pkl +3 -0
- splits/pancreas/split.pkl +3 -0
- src/config/__pycache__/config_args.cpython-312.pyc +0 -0
- src/config/__pycache__/config_args.cpython-39.pyc +0 -0
- src/config/__pycache__/config_setup.cpython-312.pyc +0 -0
- src/config/__pycache__/config_setup.cpython-39.pyc +0 -0
- src/config/config_args.py +78 -0
- src/config/config_setup.py +56 -0
- src/dataset/__pycache__/__init__.cpython-39.pyc +0 -0
- src/dataset/__pycache__/base_dataset_distance_map.cpython-39.pyc +0 -0
- src/dataset/__pycache__/dataloader.cpython-312.pyc +0 -0
- src/dataset/__pycache__/dataloader.cpython-39.pyc +0 -0
- src/dataset/__pycache__/datasets_distance_map.cpython-39.pyc +0 -0
- src/dataset/dataloader.py +265 -0
- src/implementation/colon/readme.log +1 -0
- src/implementation/kits/readme.log +1 -0
- src/implementation/lits/readme.log +1 -0
- src/implementation/pancreas/readme.log +1 -0
- src/models/__pycache__/build_sam3D.cpython-312.pyc +0 -0
- src/models/__pycache__/build_sam3D.cpython-39.pyc +0 -0
- src/models/__pycache__/image_encoder.cpython-312.pyc +0 -0
- src/models/__pycache__/image_encoder.cpython-39.pyc +0 -0
- src/models/__pycache__/mask_decoder.cpython-312.pyc +0 -0
- src/models/__pycache__/mask_decoder.cpython-39.pyc +0 -0
- src/models/__pycache__/prompt_encoder.cpython-312.pyc +0 -0
- src/models/__pycache__/prompt_encoder.cpython-39.pyc +0 -0
- src/models/__pycache__/sam3D.cpython-312.pyc +0 -0
- src/models/__pycache__/sam3D.cpython-39.pyc +0 -0
- src/models/__pycache__/segmamba_encoder.cpython-312.pyc +0 -0
- src/models/__pycache__/transformer.cpython-312.pyc +0 -0
- src/models/__pycache__/transformer.cpython-39.pyc +0 -0
- src/models/__pycache__/unet.cpython-312.pyc +0 -0
LICENSE
ADDED
@@ -0,0 +1,201 @@

                                 Apache License
                           Version 2.0, January 2004
                        http://www.apache.org/licenses/

   TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION

   1. Definitions.

      "License" shall mean the terms and conditions for use, reproduction,
      and distribution as defined by Sections 1 through 9 of this document.

      "Licensor" shall mean the copyright owner or entity authorized by
      the copyright owner that is granting the License.

      "Legal Entity" shall mean the union of the acting entity and all
      other entities that control, are controlled by, or are under common
      control with that entity. For the purposes of this definition,
      "control" means (i) the power, direct or indirect, to cause the
      direction or management of such entity, whether by contract or
      otherwise, or (ii) ownership of fifty percent (50%) or more of the
      outstanding shares, or (iii) beneficial ownership of such entity.

      "You" (or "Your") shall mean an individual or Legal Entity
      exercising permissions granted by this License.

      "Source" form shall mean the preferred form for making modifications,
      including but not limited to software source code, documentation
      source, and configuration files.

      "Object" form shall mean any form resulting from mechanical
      transformation or translation of a Source form, including but
      not limited to compiled object code, generated documentation,
      and conversions to other media types.

      "Work" shall mean the work of authorship, whether in Source or
      Object form, made available under the License, as indicated by a
      copyright notice that is included in or attached to the work
      (an example is provided in the Appendix below).

      "Derivative Works" shall mean any work, whether in Source or Object
      form, that is based on (or derived from) the Work and for which the
      editorial revisions, annotations, elaborations, or other modifications
      represent, as a whole, an original work of authorship. For the purposes
      of this License, Derivative Works shall not include works that remain
      separable from, or merely link (or bind by name) to the interfaces of,
      the Work and Derivative Works thereof.

      "Contribution" shall mean any work of authorship, including
      the original version of the Work and any modifications or additions
      to that Work or Derivative Works thereof, that is intentionally
      submitted to Licensor for inclusion in the Work by the copyright owner
      or by an individual or Legal Entity authorized to submit on behalf of
      the copyright owner. For the purposes of this definition, "submitted"
      means any form of electronic, verbal, or written communication sent
      to the Licensor or its representatives, including but not limited to
      communication on electronic mailing lists, source code control systems,
      and issue tracking systems that are managed by, or on behalf of, the
      Licensor for the purpose of discussing and improving the Work, but
      excluding communication that is conspicuously marked or otherwise
      designated in writing by the copyright owner as "Not a Contribution."

      "Contributor" shall mean Licensor and any individual or Legal Entity
      on behalf of whom a Contribution has been received by Licensor and
      subsequently incorporated within the Work.

   2. Grant of Copyright License. Subject to the terms and conditions of
      this License, each Contributor hereby grants to You a perpetual,
      worldwide, non-exclusive, no-charge, royalty-free, irrevocable
      copyright license to reproduce, prepare Derivative Works of,
      publicly display, publicly perform, sublicense, and distribute the
      Work and such Derivative Works in Source or Object form.

   3. Grant of Patent License. Subject to the terms and conditions of
      this License, each Contributor hereby grants to You a perpetual,
      worldwide, non-exclusive, no-charge, royalty-free, irrevocable
      (except as stated in this section) patent license to make, have made,
      use, offer to sell, sell, import, and otherwise transfer the Work,
      where such license applies only to those patent claims licensable
      by such Contributor that are necessarily infringed by their
      Contribution(s) alone or by combination of their Contribution(s)
      with the Work to which such Contribution(s) was submitted. If You
      institute patent litigation against any entity (including a
      cross-claim or counterclaim in a lawsuit) alleging that the Work
      or a Contribution incorporated within the Work constitutes direct
      or contributory patent infringement, then any patent licenses
      granted to You under this License for that Work shall terminate
      as of the date such litigation is filed.

   4. Redistribution. You may reproduce and distribute copies of the
      Work or Derivative Works thereof in any medium, with or without
      modifications, and in Source or Object form, provided that You
      meet the following conditions:

      (a) You must give any other recipients of the Work or
          Derivative Works a copy of this License; and

      (b) You must cause any modified files to carry prominent notices
          stating that You changed the files; and

      (c) You must retain, in the Source form of any Derivative Works
          that You distribute, all copyright, patent, trademark, and
          attribution notices from the Source form of the Work,
          excluding those notices that do not pertain to any part of
          the Derivative Works; and

      (d) If the Work includes a "NOTICE" text file as part of its
          distribution, then any Derivative Works that You distribute must
          include a readable copy of the attribution notices contained
          within such NOTICE file, excluding those notices that do not
          pertain to any part of the Derivative Works, in at least one
          of the following places: within a NOTICE text file distributed
          as part of the Derivative Works; within the Source form or
          documentation, if provided along with the Derivative Works; or,
          within a display generated by the Derivative Works, if and
          wherever such third-party notices normally appear. The contents
          of the NOTICE file are for informational purposes only and
          do not modify the License. You may add Your own attribution
          notices within Derivative Works that You distribute, alongside
          or as an addendum to the NOTICE text from the Work, provided
          that such additional attribution notices cannot be construed
          as modifying the License.

      You may add Your own copyright statement to Your modifications and
      may provide additional or different license terms and conditions
      for use, reproduction, or distribution of Your modifications, or
      for any such Derivative Works as a whole, provided Your use,
      reproduction, and distribution of the Work otherwise complies with
      the conditions stated in this License.

   5. Submission of Contributions. Unless You explicitly state otherwise,
      any Contribution intentionally submitted for inclusion in the Work
      by You to the Licensor shall be under the terms and conditions of
      this License, without any additional terms or conditions.
      Notwithstanding the above, nothing herein shall supersede or modify
      the terms of any separate license agreement you may have executed
      with Licensor regarding such Contributions.

   6. Trademarks. This License does not grant permission to use the trade
      names, trademarks, service marks, or product names of the Licensor,
      except as required for reasonable and customary use in describing the
      origin of the Work and reproducing the content of the NOTICE file.

   7. Disclaimer of Warranty. Unless required by applicable law or
      agreed to in writing, Licensor provides the Work (and each
      Contributor provides its Contributions) on an "AS IS" BASIS,
      WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
      implied, including, without limitation, any warranties or conditions
      of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
      PARTICULAR PURPOSE. You are solely responsible for determining the
      appropriateness of using or redistributing the Work and assume any
      risks associated with Your exercise of permissions under this License.

   8. Limitation of Liability. In no event and under no legal theory,
      whether in tort (including negligence), contract, or otherwise,
      unless required by applicable law (such as deliberate and grossly
      negligent acts) or agreed to in writing, shall any Contributor be
      liable to You for damages, including any direct, indirect, special,
      incidental, or consequential damages of any character arising as a
      result of this License or out of the use or inability to use the
      Work (including but not limited to damages for loss of goodwill,
      work stoppage, computer failure or malfunction, or any and all
      other commercial damages or losses), even if such Contributor
      has been advised of the possibility of such damages.

   9. Accepting Warranty or Additional Liability. While redistributing
      the Work or Derivative Works thereof, You may choose to offer,
      and charge a fee for, acceptance of support, warranty, indemnity,
      or other liability obligations and/or rights consistent with this
      License. However, in accepting such obligations, You may act only
      on Your own behalf and on Your sole responsibility, not on behalf
      of any other Contributor, and only if You agree to indemnify,
      defend, and hold each Contributor harmless for any liability
      incurred by, or claims asserted against, such Contributor by reason
      of your accepting any such warranty or additional liability.

   END OF TERMS AND CONDITIONS

   APPENDIX: How to apply the Apache License to your work.

      To apply the Apache License to your work, attach the following
      boilerplate notice, with the fields enclosed by brackets "[]"
      replaced with your own identifying information. (Don't include
      the brackets!) The text should be enclosed in the appropriate
      comment syntax for the file format. We also recommend that a
      file or class name and description of purpose be included on the
      same "printed page" as the copyright notice for easier
      identification within third-party archives.

   Copyright [yyyy] [name of copyright owner]

   Licensed under the Apache License, Version 2.0 (the "License");
   you may not use this file except in compliance with the License.
   You may obtain a copy of the License at

       http://www.apache.org/licenses/LICENSE-2.0

   Unless required by applicable law or agreed to in writing, software
   distributed under the License is distributed on an "AS IS" BASIS,
   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
   See the License for the specific language governing permissions and
   limitations under the License.
README.md
CHANGED
@@ -1,3 +1,147 @@
# PRISM

[PRISM](https://arxiv.org/abs/2404.15028): A **P**romptable and **R**obust **I**nteractive **S**egmentation **M**odel with Visual Prompts

Placenta application:

[PRISM Lite](https://arxiv.org/abs/2408.05372): A lightweight model for interactive 3D placenta segmentation in ultrasound

Interactive Segmentation Model for Placenta Segmentation from 3D Ultrasound Images ([arXiv version](https://arxiv.org/abs/2407.08020))


## News
[07/07/24] Check out the well-performing version of [PRISM for placenta segmentation in ultrasound images](https://github.com/MedICL-VU/PRISM-placenta).

[05/13/24] Our work was early-accepted by MICCAI 2024.

[03/07/24] The [pretrained PRISM](https://drive.google.com/drive/u/1/folders/1B6Df44Gd9PEBGPkE1FwC8Ds4jefCekUB) models and [preprocessed datasets](https://drive.google.com/drive/folders/13uGNb2WQhSQcBQIUhnvYJere1LBYGDsW?usp=sharing) are uploaded.

## TODO

demo (Gradio)


## Introduction of PRISM
<img src='figs/framework_v1.png' width='600'>

PRISM is a robust model for interactive segmentation in medical imaging. We strive for human-level performance: a human-in-the-loop interactive segmentation model with prompts should gradually refine its outcomes until they closely match inter-rater variability.


## PRISM tumor segmentation examples
Briefly, PRISM produces tumor segmentations with mean Dice scores of **93.79 (colon), 94.48 (pancreas), 94.18 (liver), and 96.58 (kidney)**.

| | |
:-------------------------:|:-------------------------:
Iterative correction for colon tumor | ![](figs/iterative_results.png)
Iterative correction for multiple tumors | ![](figs/iterative_results_supp.png)
Qualitative results with compared methods | ![](figs/qualitative_results.png)

The quantitative results can be viewed in our [paper](https://arxiv.org/abs/2404.15028).
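The Dice score quoted above is the standard overlap metric for segmentation. As a reference, here is a minimal NumPy sketch of how it is computed for binary 3D masks; this is an illustration, not the repository's evaluation code:

```python
import numpy as np

def dice_score(pred: np.ndarray, gt: np.ndarray) -> float:
    """Dice = 2|A ∩ B| / (|A| + |B|) for binary masks; defined as 1.0 when both are empty."""
    pred = pred.astype(bool)
    gt = gt.astype(bool)
    denom = pred.sum() + gt.sum()
    if denom == 0:
        return 1.0
    return 2.0 * np.logical_and(pred, gt).sum() / denom

# Two 2x2x2 cubes that overlap in half of their voxels:
pred = np.zeros((4, 4, 4), dtype=np.uint8)
gt = np.zeros((4, 4, 4), dtype=np.uint8)
pred[1:3, 1:3, 1:3] = 1   # 8 voxels
gt[1:3, 1:3, 2:4] = 1     # 8 voxels, 4 shared with pred
print(dice_score(pred, gt))  # 0.5
```

The reported results additionally use normalized surface Dice (NSD) via the `surface-distance` package installed below.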
## Datasets
Anatomical differences among individuals and ambiguous boundaries are present in these datasets.

- Our preprocessed

  We used four public [datasets](https://drive.google.com/drive/folders/13uGNb2WQhSQcBQIUhnvYJere1LBYGDsW?usp=sharing) for 3D tumor segmentation in [colon](https://drive.google.com/drive/u/1/folders/1bt17794HCZfmJ2MLh5w0Y_IAJyUj6ti2), [pancreas](https://drive.google.com/drive/u/1/folders/1NncGDG5Cu795WJTmBse-Lm0GrJmtvTdc), [liver](https://drive.google.com/drive/u/1/folders/1vDM2VkNAT5dvFX5XTRhPe6b7zwYWqU_U) and [kidney](https://drive.google.com/drive/u/1/folders/12UDho-JEZHfK1c1laD5dBFNxvJumcoDF).

- Original

  Here are the links to the original datasets: [MSD-colon](http://medicaldecathlon.com/), [MSD-pancreas](http://medicaldecathlon.com/), [LiTS2017](https://competitions.codalab.org/competitions/17094) and [KiTS2021](https://kits-challenge.org/kits21/).
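The train/validation/test assignments ship as pickled files under `splits/<dataset>/split.pkl`. Their exact internal layout is not documented here; the sketch below assumes a plain pickled dict of case-ID lists (the key names `train`/`val`/`test` are illustrative) and demonstrates loading with a round trip on a toy split:

```python
import pickle
from pathlib import Path

def load_split(path):
    """Load a pickled split file and return the stored object."""
    with open(path, "rb") as f:
        return pickle.load(f)

# Round-trip demo with a toy split; the real key layout may differ.
toy = {"train": ["case_000", "case_001"], "val": ["case_002"], "test": ["case_003"]}
p = Path("split_demo.pkl")
p.write_bytes(pickle.dumps(toy))
split = load_split(p)
print(sorted(split))  # ['test', 'train', 'val']
```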
## Models
| colon | pancreas | liver | kidney |
|------------------------------|------------------------------|------------------------------|------------------------------|
| [Download](https://drive.google.com/drive/u/1/folders/1nPUC0cCsyA_w-tKkhL_Bw7lesBorGzCl) | [Download](https://drive.google.com/drive/u/1/folders/1JPiF7wtSnbFdl0ZLmFQt1b4H-XH4FDrM) | [Download](https://drive.google.com/drive/u/1/folders/1JAFOca1FxWebzZjRa1lKo1OAv0HXqeh6) | [Download](https://drive.google.com/drive/u/1/folders/1sN0HQLM-LfWB5Kp119YwMsZIfv3VJj7S) |


## Get Started

**Installation**
```
conda create -n prism python=3.9
conda activate prism
sudo apt-get install git  # or your platform's package manager
pip install torch==1.12.1+cu113 torchvision==0.13.1+cu113 --extra-index-url https://download.pytorch.org/whl/cu113 # install pytorch
pip install git+https://github.com/facebookresearch/segment-anything.git # install segment anything packages
pip install git+https://github.com/deepmind/surface-distance.git # for normalized surface dice (NSD) evaluation
pip install -r requirements.txt
```
**Train**

```
python train.py --data colon --data_dir your_data_directory --save_name your_save_name --multiple_outputs --dynamic --use_box --refine
```

Add "--use_scribble" and "--efficient_scribble" if you want to train with scribbles.

**Train (Distributed Data Parallel)**

The only difference from the train command above is the "--ddp" flag.
```
python train.py --data colon --data_dir your_data_directory --save_name your_save_name --multiple_outputs --dynamic --use_box --refine --ddp
```


**Test**

Put the downloaded pretrained model under the implementation directory.
```
python test.py --data colon --data_dir your_data_directory --split test --checkpoint best --save_name prism_pretrain --num_clicks 1 --iter_nums 11 --multiple_outputs --use_box --use_scribble --efficient_scribble --refine --refine_test
```
**FAQ**

If you get the error "AttributeError: module 'cv2' has no attribute 'ximgproc'", please check [this](https://stackoverflow.com/questions/57427233/module-cv2-cv2-has-no-attribute-ximgproc) out.
Training in DDP mode yields lower Dice; training for more epochs may resolve it.

On my end, combining trainer and trainer_basic speeds up training.

Training the model without the refine module (as we reported in the paper) gives better accuracy than training with the refine module but not using it.


## License

The model is licensed under the [Apache 2.0 license](LICENSE).


## Acknowledgements
Thanks for the code from: [SAM](https://github.com/facebookresearch/segment-anything), [SAM-Med3D](https://github.com/uni-medical/SAM-Med3D), [ProMISe](https://github.com/MedICL-VU/ProMISe), [ScribblePrompt](https://github.com/halleewong/ScribblePrompt), [nnU-Net](https://github.com/MIC-DKFZ/nnUNet)

If you find this repository useful, please consider citing:
```
@inproceedings{li2024prism,
  title={Prism: A promptable and robust interactive segmentation model with visual prompts},
  author={Li, Hao and Liu, Han and Hu, Dewei and Wang, Jiacheng and Oguz, Ipek},
  booktitle={International Conference on Medical Image Computing and Computer-Assisted Intervention},
  pages={389--399},
  year={2024},
  organization={Springer}
}
```
```
@inproceedings{li2024interactive,
  title={Interactive Segmentation Model for Placenta Segmentation from 3D Ultrasound Images},
  author={Li, Hao and Oguz, Baris and Arenas, Gabriel and Yao, Xing and Wang, Jiacheng and Pouch, Alison and Byram, Brett and Schwartz, Nadav and Oguz, Ipek},
  booktitle={International Workshop on Advances in Simplifying Medical Ultrasound},
  pages={132--142},
  year={2024},
  organization={Springer}
}
```
Please send an email to hao.li.1@vanderbilt.edu with any questions; we are always happy to help! :)
figs/framework_v1.png
ADDED
Git LFS Details

figs/iterative_results.png
ADDED
Git LFS Details

figs/iterative_results_supp.png
ADDED
Git LFS Details

figs/qualitative_results.png
ADDED
Git LFS Details
implementation/lits/segmamba_lits/best.pth.tar
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:5955d596bb5a0952f9b7d28f8210462fd08fbe50a351f39f9902d7460c4aab2e
size 310138845
implementation/lits/segmamba_lits/last.pth.tar
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:5955d596bb5a0952f9b7d28f8210462fd08fbe50a351f39f9902d7460c4aab2e
size 310138845
implementation/lits/segmamba_lits/train_segmamba_lits_20251228-205022.log
ADDED
@@ -0,0 +1,44 @@
[20:50:22.663] Namespace(data='lits', save_dir='./implementation/lits/segmamba_lits', data_dir='/teamspace/studios/this_studio/lits', num_workers=2, split='train', use_small_dataset=False, model_type='segmamba', lr=4e-05, lr_scheduler='linear', warm_up=False, device='cuda:0', max_epoch=20, image_size=128, batch_size=2, checkpoint='best', checkpoint_sam='./checkpoint_sam/sam_vit_b_01ec64.pth', num_classes=2, tolerance=5, boundary_kernel_size=5, use_pretrain=False, pretrain_path='', resume=False, resume_best=False, ddp=False, gpu_ids=[0, 1], accumulation_steps=20, iter_nums=11, num_clicks=50, num_clicks_validation=10, use_box=True, dynamic_box=False, use_scribble=False, num_multiple_outputs=3, multiple_outputs=True, refine=True, no_detach=False, refine_test=False, dynamic=True, efficient_scribble=False, use_sam3d_turbo=False, save_predictions=False, save_csv=False, save_test_dir='./', save_name='segmamba_lits')
[20:50:35.673] epoch: 0/20, iter: 0/42: loss:8.5574: rank:-1
[20:50:38.575] epoch: 0/20, iter: 1/42: loss:8.3818: rank:-1
[20:50:46.413] epoch: 0/20, iter: 2/42: loss:7.1325: rank:-1
[20:50:48.770] epoch: 0/20, iter: 3/42: loss:6.2642: rank:-1
[20:50:57.829] epoch: 0/20, iter: 4/42: loss:6.4799: rank:-1
[20:51:00.228] epoch: 0/20, iter: 5/42: loss:6.9034: rank:-1
[20:51:06.002] epoch: 0/20, iter: 6/42: loss:6.7887: rank:-1
[20:51:08.538] epoch: 0/20, iter: 7/42: loss:6.4513: rank:-1
[20:51:16.236] epoch: 0/20, iter: 8/42: loss:6.1899: rank:-1
[20:51:18.960] epoch: 0/20, iter: 9/42: loss:6.0729: rank:-1
[20:51:27.247] epoch: 0/20, iter: 10/42: loss:5.718: rank:-1
[20:51:30.102] epoch: 0/20, iter: 11/42: loss:4.9813: rank:-1
[20:51:39.417] epoch: 0/20, iter: 12/42: loss:5.4939: rank:-1
[20:51:41.962] epoch: 0/20, iter: 13/42: loss:5.3117: rank:-1
[20:51:44.396] epoch: 0/20, iter: 14/42: loss:5.449: rank:-1
[20:51:46.650] epoch: 0/20, iter: 15/42: loss:5.6844: rank:-1
[20:51:52.956] epoch: 0/20, iter: 16/42: loss:5.3918: rank:-1
[20:51:55.556] epoch: 0/20, iter: 17/42: loss:4.9375: rank:-1
[20:52:00.858] epoch: 0/20, iter: 18/42: loss:5.327: rank:-1
[20:52:03.344] epoch: 0/20, iter: 19/42: loss:4.1737: rank:-1
[20:52:11.251] epoch: 0/20, iter: 20/42: loss:4.7739: rank:-1
[20:52:13.459] epoch: 0/20, iter: 21/42: loss:4.5736: rank:-1
[20:52:22.639] epoch: 0/20, iter: 22/42: loss:4.376: rank:-1
[20:52:24.957] epoch: 0/20, iter: 23/42: loss:4.7952: rank:-1
[20:52:32.556] epoch: 0/20, iter: 24/42: loss:4.3449: rank:-1
[20:52:35.071] epoch: 0/20, iter: 25/42: loss:3.9444: rank:-1
[20:52:43.794] epoch: 0/20, iter: 26/42: loss:4.4685: rank:-1
[20:52:46.204] epoch: 0/20, iter: 27/42: loss:4.8515: rank:-1
[20:52:50.912] epoch: 0/20, iter: 28/42: loss:5.0115: rank:-1
[20:52:53.284] epoch: 0/20, iter: 29/42: loss:3.5668: rank:-1
[20:52:57.536] epoch: 0/20, iter: 30/42: loss:4.543: rank:-1
[20:52:59.859] epoch: 0/20, iter: 31/42: loss:4.5505: rank:-1
[20:53:06.325] epoch: 0/20, iter: 32/42: loss:4.569: rank:-1
[20:53:08.813] epoch: 0/20, iter: 33/42: loss:2.9972: rank:-1
[20:53:16.092] epoch: 0/20, iter: 34/42: loss:4.2197: rank:-1
[20:53:18.636] epoch: 0/20, iter: 35/42: loss:3.2781: rank:-1
[20:53:23.086] epoch: 0/20, iter: 36/42: loss:3.7641: rank:-1
[20:53:25.604] epoch: 0/20, iter: 37/42: loss:3.8031: rank:-1
[20:53:30.865] epoch: 0/20, iter: 38/42: loss:4.6695: rank:-1
[20:53:33.241] epoch: 0/20, iter: 39/42: loss:4.6817: rank:-1
[20:53:36.407] epoch: 0/20, iter: 40/42: loss:2.5221: rank:-1
[20:53:38.141] epoch: 0/20, iter: 41/42: loss:4.6047: rank:-1
[20:53:38.141] - Train metrics: 5.1095066
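Each progress line in these logs follows the pattern `[HH:MM:SS.mmm] epoch: E/MAX, iter: I/N: loss:L: rank:R`. A short sketch (our own helper, not part of the repo) for pulling losses out of such a log, e.g. to plot the training curve:

```python
import re

# Matches the per-iteration lines; summary lines like "- Train metrics: ..." don't match.
LOG_LINE = re.compile(
    r"\[(?P<time>[\d:.]+)\] epoch: (?P<epoch>\d+)/\d+, "
    r"iter: (?P<it>\d+)/\d+: loss:(?P<loss>[\d.]+): rank:(?P<rank>-?\d+)"
)

def parse_losses(lines):
    """Extract (epoch, iteration, loss) tuples from training-log lines."""
    out = []
    for line in lines:
        m = LOG_LINE.search(line)
        if m:
            out.append((int(m.group("epoch")), int(m.group("it")), float(m.group("loss"))))
    return out

sample = [
    "[20:50:35.673] epoch: 0/20, iter: 0/42: loss:8.5574: rank:-1",
    "[20:53:38.141] - Train metrics: 5.1095066",  # summary line, skipped
]
print(parse_losses(sample))  # [(0, 0, 8.5574)]
```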
implementation/lits/segmamba_lits/train_segmamba_lits_20251228-205243.log
ADDED
@@ -0,0 +1 @@
[20:52:43.960] Namespace(data='lits', save_dir='./implementation/lits/segmamba_lits', data_dir='/teamspace/studios/this_studio/lits', num_workers=2, split='train', use_small_dataset=False, model_type='segmamba', lr=4e-05, lr_scheduler='linear', warm_up=False, device='cuda:0', max_epoch=20, image_size=128, batch_size=2, checkpoint='best', checkpoint_sam='./checkpoint_sam/sam_vit_b_01ec64.pth', num_classes=2, tolerance=5, boundary_kernel_size=5, use_pretrain=False, pretrain_path='', resume=False, resume_best=False, ddp=False, gpu_ids=[0, 1], accumulation_steps=20, iter_nums=11, num_clicks=50, num_clicks_validation=10, use_box=True, dynamic_box=False, use_scribble=True, num_multiple_outputs=3, multiple_outputs=True, refine=True, no_detach=False, refine_test=False, dynamic=True, efficient_scribble=True, use_sam3d_turbo=False, save_predictions=False, save_csv=False, save_test_dir='./', save_name='segmamba_lits')
implementation/lits/segmamba_lits/train_segmamba_lits_20251228-205455.log
ADDED
@@ -0,0 +1 @@
[20:54:55.239] Namespace(data='lits', save_dir='./implementation/lits/segmamba_lits', data_dir='/teamspace/studios/this_studio/lits', num_workers=2, split='train', use_small_dataset=False, model_type='segmamba', lr=4e-05, lr_scheduler='linear', warm_up=False, device='cuda:0', max_epoch=20, image_size=128, batch_size=2, checkpoint='best', checkpoint_sam='./checkpoint_sam/sam_vit_b_01ec64.pth', num_classes=2, tolerance=5, boundary_kernel_size=5, use_pretrain=False, pretrain_path='', resume=False, resume_best=False, ddp=False, gpu_ids=[0, 1], accumulation_steps=20, iter_nums=11, num_clicks=50, num_clicks_validation=10, use_box=True, dynamic_box=False, use_scribble=True, num_multiple_outputs=3, multiple_outputs=True, refine=True, no_detach=False, refine_test=False, dynamic=True, efficient_scribble=True, use_sam3d_turbo=False, save_predictions=False, save_csv=False, save_test_dir='./', save_name='segmamba_lits')
implementation/lits/segmamba_lits/train_segmamba_lits_20251228-205751.log
ADDED
@@ -0,0 +1,44 @@
+[20:57:51.028] Namespace(data='lits', save_dir='./implementation/lits/segmamba_lits', data_dir='/teamspace/studios/this_studio/lits', num_workers=2, split='train', use_small_dataset=False, model_type='segmamba', lr=4e-05, lr_scheduler='linear', warm_up=False, device='cuda:0', max_epoch=20, image_size=128, batch_size=2, checkpoint='best', checkpoint_sam='./checkpoint_sam/sam_vit_b_01ec64.pth', num_classes=2, tolerance=5, boundary_kernel_size=5, use_pretrain=False, pretrain_path='', resume=False, resume_best=False, ddp=False, gpu_ids=[0, 1], accumulation_steps=20, iter_nums=11, num_clicks=50, num_clicks_validation=10, use_box=True, dynamic_box=False, use_scribble=True, num_multiple_outputs=3, multiple_outputs=True, refine=True, no_detach=False, refine_test=False, dynamic=True, efficient_scribble=True, use_sam3d_turbo=False, save_predictions=False, save_csv=False, save_test_dir='./', save_name='segmamba_lits')
+[20:58:34.483] epoch: 0/20, iter: 0/42: loss:6.7362: rank:-1
+[20:59:03.016] epoch: 0/20, iter: 1/42: loss:6.2911: rank:-1
+[20:59:20.025] epoch: 0/20, iter: 2/42: loss:6.2703: rank:-1
+[20:59:31.861] epoch: 0/20, iter: 3/42: loss:6.9132: rank:-1
+[20:59:43.729] epoch: 0/20, iter: 4/42: loss:6.2347: rank:-1
+[20:59:54.965] epoch: 0/20, iter: 5/42: loss:6.3368: rank:-1
+[21:00:16.486] epoch: 0/20, iter: 6/42: loss:5.1229: rank:-1
+[21:00:36.620] epoch: 0/20, iter: 7/42: loss:4.5165: rank:-1
+[21:00:47.361] epoch: 0/20, iter: 8/42: loss:5.8889: rank:-1
+[21:00:56.361] epoch: 0/20, iter: 9/42: loss:6.1573: rank:-1
+[21:01:19.516] epoch: 0/20, iter: 10/42: loss:4.5671: rank:-1
+[21:01:32.206] epoch: 0/20, iter: 11/42: loss:5.5973: rank:-1
+[21:01:46.110] epoch: 0/20, iter: 12/42: loss:4.5529: rank:-1
+[21:01:57.634] epoch: 0/20, iter: 13/42: loss:4.7768: rank:-1
+[21:02:09.999] epoch: 0/20, iter: 14/42: loss:5.3851: rank:-1
+[21:02:24.057] epoch: 0/20, iter: 15/42: loss:4.5796: rank:-1
+[21:02:35.541] epoch: 0/20, iter: 16/42: loss:4.9914: rank:-1
+[21:02:44.974] epoch: 0/20, iter: 17/42: loss:5.3536: rank:-1
+[21:02:54.089] epoch: 0/20, iter: 18/42: loss:5.2157: rank:-1
+[21:03:11.600] epoch: 0/20, iter: 19/42: loss:4.4662: rank:-1
+[21:03:22.128] epoch: 0/20, iter: 20/42: loss:4.7908: rank:-1
+[21:03:33.573] epoch: 0/20, iter: 21/42: loss:4.1763: rank:-1
+[21:03:52.205] epoch: 0/20, iter: 22/42: loss:4.4056: rank:-1
+[21:04:10.739] epoch: 0/20, iter: 23/42: loss:3.8492: rank:-1
+[21:04:27.835] epoch: 0/20, iter: 24/42: loss:4.091: rank:-1
+[21:04:39.345] epoch: 0/20, iter: 25/42: loss:3.913: rank:-1
+[21:05:02.657] epoch: 0/20, iter: 26/42: loss:4.5026: rank:-1
+[21:05:12.816] epoch: 0/20, iter: 27/42: loss:4.1648: rank:-1
+[21:05:24.934] epoch: 0/20, iter: 28/42: loss:4.3919: rank:-1
+[21:05:33.928] epoch: 0/20, iter: 29/42: loss:4.4407: rank:-1
+[21:05:45.244] epoch: 0/20, iter: 30/42: loss:4.4362: rank:-1
+[21:05:58.859] epoch: 0/20, iter: 31/42: loss:3.9054: rank:-1
+[21:06:08.443] epoch: 0/20, iter: 32/42: loss:4.394: rank:-1
+[21:06:20.210] epoch: 0/20, iter: 33/42: loss:3.8882: rank:-1
+[21:06:30.116] epoch: 0/20, iter: 34/42: loss:4.3971: rank:-1
+[21:06:48.494] epoch: 0/20, iter: 35/42: loss:3.8213: rank:-1
+[21:07:06.562] epoch: 0/20, iter: 36/42: loss:3.538: rank:-1
+[21:07:27.438] epoch: 0/20, iter: 37/42: loss:3.673: rank:-1
+[21:07:42.053] epoch: 0/20, iter: 38/42: loss:3.4293: rank:-1
+[21:08:00.813] epoch: 0/20, iter: 39/42: loss:4.1726: rank:-1
+[21:08:11.390] epoch: 0/20, iter: 40/42: loss:4.2546: rank:-1
+[21:08:20.427] epoch: 0/20, iter: 41/42: loss:3.4214: rank:-1
+[21:08:20.428] - Train metrics: 4.7621613
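The closing "Train metrics" line of each run is the epoch mean of the per-iteration losses: averaging the 42 values logged above reproduces 4.7621613 up to the 4-decimal rounding of the individual entries. A minimal sketch of that check:

```python
# Per-iteration losses from the 20:57:51 run above (42 iterations).
losses = [
    6.7362, 6.2911, 6.2703, 6.9132, 6.2347, 6.3368, 5.1229, 4.5165, 5.8889,
    6.1573, 4.5671, 5.5973, 4.5529, 4.7768, 5.3851, 4.5796, 4.9914, 5.3536,
    5.2157, 4.4662, 4.7908, 4.1763, 4.4056, 3.8492, 4.0910, 3.9130, 4.5026,
    4.1648, 4.3919, 4.4407, 4.4362, 3.9054, 4.3940, 3.8882, 4.3971, 3.8213,
    3.5380, 3.6730, 3.4293, 4.1726, 4.2546, 3.4214,
]

# Simple average over the epoch; matches the logged "Train metrics" value
# to within the rounding of the individual loss entries.
epoch_mean = sum(losses) / len(losses)
print(f"{epoch_mean:.4f}")  # 4.7622
```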
implementation/lits/segmamba_lits/train_segmamba_lits_20251228-210909.log
ADDED
@@ -0,0 +1,44 @@
+[21:09:09.858] Namespace(data='lits', save_dir='./implementation/lits/segmamba_lits', data_dir='/teamspace/studios/this_studio/lits', num_workers=2, split='train', use_small_dataset=False, model_type='segmamba', lr=4e-05, lr_scheduler='linear', warm_up=False, device='cuda:0', max_epoch=20, image_size=128, batch_size=2, checkpoint='best', checkpoint_sam='./checkpoint_sam/sam_vit_b_01ec64.pth', num_classes=2, tolerance=5, boundary_kernel_size=5, use_pretrain=False, pretrain_path='', resume=False, resume_best=False, ddp=False, gpu_ids=[0, 1], accumulation_steps=20, iter_nums=11, num_clicks=50, num_clicks_validation=10, use_box=True, dynamic_box=False, use_scribble=True, num_multiple_outputs=3, multiple_outputs=True, refine=True, no_detach=False, refine_test=False, dynamic=True, efficient_scribble=True, use_sam3d_turbo=False, save_predictions=False, save_csv=False, save_test_dir='./', save_name='segmamba_lits')
+[21:09:50.645] epoch: 0/20, iter: 0/42: loss:5.8351: rank:-1
+[21:10:10.653] epoch: 0/20, iter: 1/42: loss:6.1341: rank:-1
+[21:10:33.245] epoch: 0/20, iter: 2/42: loss:6.8898: rank:-1
+[21:10:57.577] epoch: 0/20, iter: 3/42: loss:6.1827: rank:-1
+[21:11:18.892] epoch: 0/20, iter: 4/42: loss:6.1998: rank:-1
+[21:11:44.095] epoch: 0/20, iter: 5/42: loss:4.9246: rank:-1
+[21:11:55.705] epoch: 0/20, iter: 6/42: loss:6.3802: rank:-1
+[21:12:07.959] epoch: 0/20, iter: 7/42: loss:6.0113: rank:-1
+[21:12:26.902] epoch: 0/20, iter: 8/42: loss:4.9691: rank:-1
+[21:12:44.210] epoch: 0/20, iter: 9/42: loss:5.3392: rank:-1
+[21:12:57.610] epoch: 0/20, iter: 10/42: loss:4.8586: rank:-1
+[21:13:09.536] epoch: 0/20, iter: 11/42: loss:5.3768: rank:-1
+[21:13:18.967] epoch: 0/20, iter: 12/42: loss:5.1704: rank:-1
+[21:13:29.575] epoch: 0/20, iter: 13/42: loss:4.6515: rank:-1
+[21:13:44.878] epoch: 0/20, iter: 14/42: loss:5.1376: rank:-1
+[21:13:55.030] epoch: 0/20, iter: 15/42: loss:4.928: rank:-1
+[21:14:04.131] epoch: 0/20, iter: 16/42: loss:4.966: rank:-1
+[21:14:21.901] epoch: 0/20, iter: 17/42: loss:4.8536: rank:-1
+[21:14:39.631] epoch: 0/20, iter: 18/42: loss:3.9355: rank:-1
+[21:14:52.881] epoch: 0/20, iter: 19/42: loss:4.1299: rank:-1
+[21:15:02.687] epoch: 0/20, iter: 20/42: loss:4.8235: rank:-1
+[21:15:15.051] epoch: 0/20, iter: 21/42: loss:4.5646: rank:-1
+[21:15:25.591] epoch: 0/20, iter: 22/42: loss:4.5925: rank:-1
+[21:15:44.577] epoch: 0/20, iter: 23/42: loss:3.6751: rank:-1
+[21:15:57.779] epoch: 0/20, iter: 24/42: loss:3.8956: rank:-1
+[21:16:19.007] epoch: 0/20, iter: 25/42: loss:3.6836: rank:-1
+[21:16:35.698] epoch: 0/20, iter: 26/42: loss:4.0975: rank:-1
+[21:16:50.358] epoch: 0/20, iter: 27/42: loss:4.3446: rank:-1
+[21:17:07.844] epoch: 0/20, iter: 28/42: loss:4.1784: rank:-1
+[21:17:18.526] epoch: 0/20, iter: 29/42: loss:4.3717: rank:-1
+[21:17:33.551] epoch: 0/20, iter: 30/42: loss:4.1323: rank:-1
+[21:17:50.768] epoch: 0/20, iter: 31/42: loss:3.241: rank:-1
+[21:18:03.168] epoch: 0/20, iter: 32/42: loss:4.0124: rank:-1
+[21:18:18.346] epoch: 0/20, iter: 33/42: loss:4.3774: rank:-1
+[21:18:36.772] epoch: 0/20, iter: 34/42: loss:4.8205: rank:-1
+[21:18:47.896] epoch: 0/20, iter: 35/42: loss:4.588: rank:-1
+[21:18:57.899] epoch: 0/20, iter: 36/42: loss:4.0828: rank:-1
+[21:19:17.407] epoch: 0/20, iter: 37/42: loss:2.9969: rank:-1
+[21:19:28.116] epoch: 0/20, iter: 38/42: loss:4.085: rank:-1
+[21:19:49.217] epoch: 0/20, iter: 39/42: loss:4.404: rank:-1
+[21:20:03.205] epoch: 0/20, iter: 40/42: loss:5.6263: rank:-1
+[21:20:12.085] epoch: 0/20, iter: 41/42: loss:3.4617: rank:-1
+[21:20:12.088] - Train metrics: 4.736407
implementation/lits/segmamba_lits/train_segmamba_lits_20251228-212200.log
ADDED
The diff for this file is too large to render. See raw diff.
requirements.txt
ADDED
@@ -0,0 +1,16 @@
+batchgenerators==0.25
+matplotlib==3.7.1
+MedPy==0.4.0
+monai==1.1.0
+nibabel==5.1.0
+numpy==1.24.3
+scipy==1.9.1
+scikit-image
+nibabel
+einops
+pandas
+torchio
+prefetch-generator
+connected-components-3d
+kornia
+opencv-python
splits/README.md
ADDED
@@ -0,0 +1 @@
+This directory contains the data splits used in our experiments.
splits/colon/split.pkl
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:5a740fec88f7ebeb88b4b416364adf6fbd3f91e6e7494bc8567ae08a0817950b
+size 20829
splits/kits/split.pkl
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:16366dcecac5e87349f320017520416738272848bdfec0807b95dad36b71382e
+size 115818
splits/lits/split.pkl
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:67bb0a58bb8a01029dd4aadeebf6a804cd13325472ae4e4a7d1ccddd7f5179c6
+size 49971
splits/pancreas/split.pkl
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:b3dc340acaf7ee0218d11864c84ccb5d90f48f786581e548c8b5e1b3a42dd4a7
+size 128678
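The split files above are Git LFS pointers to pickled dictionaries. Based on how `_set_file_paths` in `src/dataset/dataloader.py` reads them, a split can be inspected with a helper like the one below (a sketch; `load_split` is a hypothetical name, and it assumes the pickle holds a one-element list whose first item maps split names to `{case_id: (image_path, label_path)}`):

```python
import os
import pickle

def load_split(data_dir, split="train"):
    # Mirrors _set_file_paths: unpickle, take the first element, pick the split,
    # then join each relative image/label path onto the data directory.
    with open(os.path.join(data_dir, "split.pkl"), "rb") as f:
        d = pickle.load(f)[0][split]
    image_paths = [os.path.join(data_dir, d[i][0].strip("/")) for i in d]
    label_paths = [os.path.join(data_dir, d[i][1].strip("/")) for i in d]
    return image_paths, label_paths
```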
src/config/__pycache__/config_args.cpython-312.pyc
ADDED
Binary file (4.85 kB)
src/config/__pycache__/config_args.cpython-39.pyc
ADDED
Binary file (2.5 kB)
src/config/__pycache__/config_setup.cpython-312.pyc
ADDED
Binary file (2.41 kB)
src/config/__pycache__/config_setup.cpython-39.pyc
ADDED
Binary file (1.51 kB)
src/config/config_args.py
ADDED
@@ -0,0 +1,78 @@
+import argparse
+import os
+import warnings
+parser = argparse.ArgumentParser()
+
+
+# data
+parser.add_argument("--data", default=None, type=str, choices=["kits", "pancreas", "lits", "colon"])
+parser.add_argument("--save_dir", default="./implementation/", type=str)
+parser.add_argument("--data_dir", default="", type=str)
+parser.add_argument("--num_workers", default=2, type=int)
+parser.add_argument("--split", default="train", type=str)
+parser.add_argument('--use_small_dataset', action="store_true")
+
+
+# network
+parser.add_argument('--model_type', type=str, default='vit_b_ori')
+parser.add_argument("--lr", default=4e-5, type=float)
+parser.add_argument("--lr_scheduler", default='linear', type=str, choices=["linear", "exp"])
+parser.add_argument('--warm_up', action="store_true")
+parser.add_argument("--device", default="cuda:0", type=str)
+parser.add_argument("--max_epoch", default=200, type=int)
+parser.add_argument("--image_size", default=128, type=int)
+parser.add_argument("--batch_size", default=1, type=int)
+parser.add_argument("--checkpoint", default="best", type=str)
+parser.add_argument("--checkpoint_sam", default="./checkpoint_sam/sam_vit_b_01ec64.pth", type=str,
+                    help='path of pretrained SAM')
+parser.add_argument("--num_classes", default=2, type=int)
+parser.add_argument("--tolerance", default=5, type=int)
+parser.add_argument("--boundary_kernel_size", default=5, type=int,
+                    help='an integer for kernel size of avepooling layer for boundary generation')
+parser.add_argument("--use_pretrain", action="store_true")
+parser.add_argument("--pretrain_path", default="", type=str)
+parser.add_argument("--resume", action="store_true")
+parser.add_argument("--resume_best", action="store_true")
+parser.add_argument("--ddp", action="store_true")
+parser.add_argument('--gpu_ids', type=int, nargs='+', default=[0, 1])
+parser.add_argument('--accumulation_steps', type=int, default=20)
+
+parser.add_argument('--iter_nums', type=int, default=11)
+parser.add_argument('--num_clicks', type=int, default=50)
+parser.add_argument('--num_clicks_validation', type=int, default=10)
+parser.add_argument('--use_box', action="store_true")
+parser.add_argument('--dynamic_box', action="store_true")
+parser.add_argument('--use_scribble', action="store_true")
+
+
+parser.add_argument('--num_multiple_outputs', type=int, default=3)
+parser.add_argument('--multiple_outputs', action="store_true")
+parser.add_argument('--refine', action="store_true")
+parser.add_argument('--no_detach', action="store_true")
+parser.add_argument('--refine_test', action="store_true")
+
+parser.add_argument('--dynamic', action="store_true")
+parser.add_argument('--efficient_scribble', action="store_true")
+parser.add_argument("--use_sam3d_turbo", action="store_true")
+
+
+
+# saving
+parser.add_argument("--save_predictions", action="store_true")
+parser.add_argument("--save_csv", action="store_true")
+parser.add_argument("--save_test_dir", default='./', type=str)
+parser.add_argument("--save_name", default='testing_only', type=str)
+
+
+
+
+
+
+def check_and_setup_parser(args):
+    if args.save_name == 'testing_only':
+        warnings.warn("[save_name] (--save_name) should be a real name, currently is for testing purpose (--save_name=testing_only)")
+
+
+    args.save_dir = os.path.join(args.save_dir, args.data, args.save_name)
+    if not os.path.exists(args.save_dir):
+        os.makedirs(args.save_dir)
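`check_and_setup_parser` nests the run directory as `<save_dir>/<data>/<save_name>`; with the defaults above and the flags seen in the training logs, this reproduces the `./implementation/lits/segmamba_lits` path those logs were written to. A minimal sketch of just that composition (only the three relevant flags are reproduced here):

```python
import argparse
import os

# Sketch: the subset of config_args.py needed to show how the run
# directory is composed from --save_dir, --data, and --save_name.
parser = argparse.ArgumentParser()
parser.add_argument("--data", default=None, type=str)
parser.add_argument("--save_dir", default="./implementation/", type=str)
parser.add_argument("--save_name", default="testing_only", type=str)

args = parser.parse_args(["--data", "lits", "--save_name", "segmamba_lits"])
args.save_dir = os.path.join(args.save_dir, args.data, args.save_name)
print(args.save_dir)  # ./implementation/lits/segmamba_lits
```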
src/config/config_setup.py
ADDED
@@ -0,0 +1,56 @@
+from src.models.build_sam3D import sam_model_registry3D
+from src.dataset.dataloader import Dataset_promise, Dataloader_promise
+import torchio as tio
+from torch.nn.parallel import DistributedDataParallel as DDP
+from torch.utils.data.distributed import DistributedSampler
+import torch
+def get_dataloader(args, split='', use_small=False):
+    transforms_list = [tio.ToCanonical(), tio.Resample(1), ]
+    if split == 'train':
+        transforms_list.append(tio.RandomFlip(axes=(0, 1, 2)))
+
+    transforms = tio.Compose(transforms_list)
+
+    dataset = Dataset_promise(
+        data=args.data,
+        data_dir=args.data_dir,
+        split=split,
+        transform=transforms,
+        image_size=args.image_size,
+        args=args,
+    )
+
+    batch_size = args.batch_size if split == 'train' else 1
+
+    if split == 'train':
+        train_sampler = None
+        shuffle = True
+        if args.ddp:
+            train_sampler = DistributedSampler(dataset)
+            shuffle = False
+    else:
+        train_sampler = None
+        shuffle = False
+
+    pin_memory = True
+    if split != 'train' and args.data == 'lits':
+        pin_memory = False
+
+    dataloader = Dataloader_promise(
+        dataset=dataset,
+        sampler=train_sampler,
+        batch_size=batch_size,
+        shuffle=shuffle,
+        num_workers=args.num_workers,
+        pin_memory=pin_memory,
+    )
+    return dataloader
+
+
+
+def build_model(args, checkpoint=None):
+    sam_model = sam_model_registry3D[args.model_type](checkpoint=checkpoint, args=args).to(args.device)
+    if args.ddp:
+        sam_model = torch.nn.SyncBatchNorm.convert_sync_batchnorm(sam_model)
+        sam_model = DDP(sam_model, device_ids=[args.rank], output_device=args.rank)
+    return sam_model
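The sampler/shuffle/batch-size logic in `get_dataloader` reduces to three cases: training shuffles with the configured batch size, DDP training delegates shuffling to a `DistributedSampler`, and validation/test always loads one full volume at a time. Isolated as a plain function (a sketch for illustration, not part of the repo's API; the sampler is represented by a string stand-in):

```python
def loader_settings(split, ddp, batch_size):
    """Return (sampler, shuffle, batch_size) as get_dataloader would choose them."""
    if split == "train":
        # DDP: the DistributedSampler shuffles, so DataLoader shuffle is off.
        sampler = "DistributedSampler" if ddp else None
        return sampler, not ddp, batch_size
    # Validation/test: sequential, one whole volume per batch.
    return None, False, 1
```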
src/dataset/__pycache__/__init__.cpython-39.pyc
ADDED
Binary file (157 Bytes)
src/dataset/__pycache__/base_dataset_distance_map.cpython-39.pyc
ADDED
Binary file (9.77 kB)
src/dataset/__pycache__/dataloader.cpython-312.pyc
ADDED
Binary file (15.8 kB)
src/dataset/__pycache__/dataloader.cpython-39.pyc
ADDED
Binary file (7.08 kB)
src/dataset/__pycache__/datasets_distance_map.cpython-39.pyc
ADDED
Binary file (3.44 kB)
src/dataset/dataloader.py
ADDED
@@ -0,0 +1,265 @@
+from torch.utils.data import Dataset
+from torch.utils.data import DataLoader
+import torchio as tio
+import pickle
+import numpy as np
+import os
+import torch
+import SimpleITK as sitk
+from prefetch_generator import BackgroundGenerator
+from monai.transforms import (
+    Compose,
+    RandCropByPosNegLabeld,
+    ScaleIntensityRanged,
+    NormalizeIntensityd,
+    RandShiftIntensityd,
+    RandZoomd,
+)
+import cc3d, math
+
+class Dataset_promise(Dataset):
+    def __init__(self, data, data_dir, split='train', image_size=128, transform=None, pcc=False, args=None):
+        self.args = args
+        self.data = data
+        self.paths = data_dir
+
+        self._set_file_paths(self.paths, split)
+        self._set_dataset_stat()
+
+        self.image_size = (image_size, image_size, image_size)
+        self.transform = transform
+        self.threshold = 0
+        self.split = split
+        self.pcc = pcc
+        self.monai_transforms = self._get_transforms(split=split)
+
+        self.cc = 1
+
+    def __len__(self):
+        return len(self.label_paths)
+
+    def __getitem__(self, index):
+        sitk_image = sitk.ReadImage(self.image_paths[index])
+        sitk_label = sitk.ReadImage(self.label_paths[index])
+
+        if sitk_image.GetOrigin() != sitk_label.GetOrigin():
+            sitk_image.SetOrigin(sitk_label.GetOrigin())
+        if sitk_image.GetDirection() != sitk_label.GetDirection():
+            sitk_image.SetDirection(sitk_label.GetDirection())
+
+        if sitk_image.GetSpacing() != sitk_label.GetSpacing():
+            sitk_label.SetSpacing(sitk_image.GetSpacing())
+
+        subject = tio.Subject(
+            image=tio.ScalarImage.from_sitk(sitk_image),
+            label=tio.LabelMap.from_sitk(sitk_label),
+        )
+
+        subject_save = tio.Subject(
+            image=tio.ScalarImage.from_sitk(sitk_image),
+            label=tio.LabelMap.from_sitk(sitk_label),
+        )
+
+
+        if self.data == 'lits':
+            b = subject.label.data
+            a = tio.CropOrPad._bbox_mask(b[0].cpu().numpy())
+            w, h, d = a[1][0] - a[0][0], a[1][1] - a[0][1], a[1][2] - a[0][2]
+            w, h, d = max(w + 20, 128), max(h + 20, 128), max(d + 20, 128)
+            crop_transform = tio.CropOrPad(mask_name='label', target_shape=(w, h, d))
+            subject = crop_transform(subject)
+            subject_save = crop_transform(subject_save)
+
+
+
+        if self.target_label != 0:
+            subject = self._binary_label(subject)
+            subject_save = self._binary_label(subject_save)
+
+        if self.transform:
+            try:
+                subject = self.transform(subject)
+                subject_save = self.transform(subject_save)
+            except:
+                print(self.image_paths[index])
+
+        if (self.pcc):
+            subject = self._pcc(subject)
+
+
+        if subject.label.data.sum() <= self.threshold:
+            print(self.image_paths[index], 'label volume too small')
+            if self.split == 'train':
+                return self.__getitem__(np.random.randint(self.__len__()))
+                #return self.__getitem__(0)
+            else:
+                if self.data == 'lits':
+                    return subject, self.image_paths[index]
+                else:
+                    return subject.image.data.clone().detach(), subject.label.data.clone().detach(), self.image_paths[index]
+
+
+        if self.split == "train":
+            trans_dict = self.monai_transforms({"image": subject.image.data.clone().detach(),
+                                                "label": subject.label.data.clone().detach()})[0]
+            img_aug, seg_aug = trans_dict["image"], trans_dict["label"]
+            return img_aug.float(), seg_aug.float(), self.image_paths[index]
+        else:
+            if self.data == 'lits':
+                trans_dict = self.monai_transforms({"image": subject.image.data.clone().detach()})
+                subject.image.data = trans_dict["image"]
+                return subject, self.image_paths[index], subject_save
+
+            if self.data == 'kits':
+                subject = self._separate_crop(subject)
+
+            crop_transform = tio.CropOrPad(mask_name='label', target_shape=self.image_size)
+            subject = crop_transform(subject)
+            subject_save = crop_transform(subject_save)
+
+            trans_dict = self.monai_transforms({"image": subject.image.data.clone().detach()})
+            img_aug = trans_dict["image"]
+            return img_aug, subject.label.data.clone().detach(), self.image_paths[index], subject_save
+
+
+    def _separate_crop(self, subject):
+        label = subject.label.data
+        labels_out, N = cc3d.connected_components(label[0].cpu().numpy(), return_N=True)
+        crop_transform = tio.CropOrPad(mask_name='label', target_shape=self.image_size)
+        mid_cut = 0
+
+        if N > 1:
+            label_1, label_2 = torch.zeros_like(label), torch.zeros_like(label)
+
+            # left, right
+            mid_cut = math.ceil(label.size(1) / 2)
+            label_1[0, 0: mid_cut, :], label_2[0, mid_cut: -1, :] = label[0, 0: mid_cut, :], label[0, mid_cut: -1, :]  # left, right
+
+
+            image_1, image_2 = subject.image.data, subject.image.data
+
+            subject_1 = tio.Subject(image=tio.ScalarImage(tensor=image_1), label=tio.LabelMap(tensor=label_1))
+            subject_2 = tio.Subject(image=tio.ScalarImage(tensor=image_2), label=tio.LabelMap(tensor=label_2))
+
+            subject_1, subject_2 = crop_transform(subject_1), crop_transform(subject_2)
+
+            # found 2 connected components for some cases (e.g. case 289), use below to eliminate
+            # however, this will bring warnings, but it's okay
+            if torch.unique(subject_2.label.data).size(0) == 1:
+                subject.image.data, subject.label.data = subject_1.image.data, subject_1.label.data
+            elif torch.unique(subject_1.label.data).size(0) == 1:
+                subject.image.data, subject.label.data = subject_2.image.data, subject_2.label.data
+            else:
+                subject.image.data = torch.cat([subject_1.image.data, subject_2.image.data], dim=0)
+                subject.label.data = torch.cat([subject_1.label.data, subject_2.label.data], dim=0)
+        else:
+            subject = crop_transform(subject)
+
+        return subject
+
+    def _set_file_paths(self, data_dir, split):
+        self.image_paths = []
+        self.label_paths = []
+        split_file = "split.pkl"
+        dataset_split = os.path.join(data_dir, split_file)
+        if not os.path.exists(dataset_split):
+            alt_dir = os.path.join(data_dir, "Task01_LITS17")
+            alt_split = os.path.join(alt_dir, split_file)
+            if os.path.exists(alt_split):
+                data_dir = alt_dir
+                dataset_split = alt_split
+        if not os.path.exists(dataset_split):
+            raise FileNotFoundError(f"split.pkl not found under {data_dir}")
+        with open(dataset_split, "rb") as f:
+            d = pickle.load(f)[0][split]
+        self.image_paths = [os.path.join(data_dir, d[i][0].strip("/")) for i in list(d.keys())]
+        self.label_paths = [os.path.join(data_dir, d[i][1].strip("/")) for i in list(d.keys())]
+
+    def _set_dataset_stat(self):
+        self.target_label = 0
+        if self.data == 'colon':
+            self.intensity_range, self.global_mean, self.global_std = (-57, 175), 65.175035, 32.651197
+
+        elif self.data == 'pancreas':
+            self.intensity_range, self.global_mean, self.global_std = (-39, 204), 68.45214, 63.422806
+            self.target_label = 2
+
+        elif self.data == 'lits':
+            self.intensity_range, self.global_mean, self.global_std = (-48, 163), 60.057533, 40.198017
+            self.target_label = 2
+
+        elif self.data == 'kits':
+            self.intensity_range, self.global_mean, self.global_std = (-54, 247), 59.53867, 55.457336
+            self.target_label = 2
+
+
+    def _get_transforms(self, split):
+        if split == "train":
+            transforms = Compose(
+                [
+                    ScaleIntensityRanged(
+                        keys=["image"],
+                        a_min=self.intensity_range[0],
+                        a_max=self.intensity_range[1],
+                        b_min=self.intensity_range[0],
+                        b_max=self.intensity_range[1],
+                        clip=True,
+                    ),
+                    RandCropByPosNegLabeld(
+                        keys=["image", "label"],
+                        spatial_size=(128, 128, 128),
+                        label_key="label",
+                        pos=2,
+                        neg=0,
+                        num_samples=1,
+                    ),
+                    RandShiftIntensityd(keys=["image"], offsets=20, prob=0.5),
+                    NormalizeIntensityd(keys=["image"], subtrahend=self.global_mean, divisor=self.global_std),
+
+                    RandZoomd(keys=["image", "label"], prob=0.8, min_zoom=0.85, max_zoom=1.25,
+                              mode=["trilinear", "nearest"]),
+                ])
+        else:
+            transforms = Compose(
+                [
+                    ScaleIntensityRanged(
+                        keys=["image"],
+                        a_min=self.intensity_range[0],
+                        a_max=self.intensity_range[1],
+                        b_min=self.intensity_range[0],
+                        b_max=self.intensity_range[1],
+                        clip=True,
+                    ),
+                    NormalizeIntensityd(keys=["image"], subtrahend=self.global_mean, divisor=self.global_std),
+                ]
+            )
+        return transforms
+
+    def _binary_label(self, subject):
+        label = subject.label.data
+        label = (label == self.target_label)
+        subject.label.data = label.float()
+        return subject
+
+    def _pcc(self, subject):
+        print("using pcc setting")
+        # crop from random click point
+        random_index = torch.argwhere(subject.label.data == 1)
+        if (len(random_index) >= 1):
+            random_index = random_index[np.random.randint(0, len(random_index))]
+            # print(random_index)
+            crop_mask = torch.zeros_like(subject.label.data)
+            # print(crop_mask.shape)
+            crop_mask[random_index[0]][random_index[1]][random_index[2]][random_index[3]] = 1
+            subject.add_image(tio.LabelMap(tensor=crop_mask, affine=subject.label.affine), image_name="crop_mask")
+            subject = tio.CropOrPad(mask_name='crop_mask', target_shape=self.image_size)(subject)
+
+        return subject
+
+
+class Dataloader_promise(DataLoader):
+    def __iter__(self):
+        return BackgroundGenerator(super().__iter__())
+
+
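At evaluation time `_get_transforms` applies only two intensity operations: a clip to the dataset window (`ScaleIntensityRanged` with `a_min==b_min`, `a_max==b_max`, and `clip=True` acts as a pure clip) followed by a z-score with the dataset statistics from `_set_dataset_stat`. In plain NumPy, using the LiTS values from the code above (a sketch of the pipeline, not the repo's API):

```python
import numpy as np

# LiTS statistics from _set_dataset_stat: intensity window, mean, std.
LITS_RANGE, LITS_MEAN, LITS_STD = (-48, 163), 60.057533, 40.198017

def preprocess(ct):
    """Clip to the dataset intensity window, then z-score normalize."""
    ct = np.clip(ct, *LITS_RANGE)          # ScaleIntensityRanged acting as a clip
    return (ct - LITS_MEAN) / LITS_STD     # NormalizeIntensityd

out = preprocess(np.array([-1000.0, 0.0, 1000.0]))
```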
src/implementation/colon/readme.log
ADDED
@@ -0,0 +1 @@
+put the pretrained PRISM checkpoint in this directory
src/implementation/kits/readme.log
ADDED
@@ -0,0 +1 @@
+put the pretrained PRISM checkpoint in this directory
src/implementation/lits/readme.log
ADDED
@@ -0,0 +1 @@
+put the pretrained PRISM checkpoint in this directory
src/implementation/pancreas/readme.log
ADDED
@@ -0,0 +1 @@
+put the pretrained PRISM checkpoint in this directory
src/models/__pycache__/build_sam3D.cpython-312.pyc
ADDED
Binary file (4.26 kB)
src/models/__pycache__/build_sam3D.cpython-39.pyc
ADDED
Binary file (1.92 kB)
src/models/__pycache__/image_encoder.cpython-312.pyc
ADDED
Binary file (25.6 kB)
src/models/__pycache__/image_encoder.cpython-39.pyc
ADDED
Binary file (16 kB)
src/models/__pycache__/mask_decoder.cpython-312.pyc
ADDED
Binary file (11.1 kB)
src/models/__pycache__/mask_decoder.cpython-39.pyc
ADDED
Binary file (6.82 kB)
src/models/__pycache__/prompt_encoder.cpython-312.pyc
ADDED
Binary file (16.7 kB)
src/models/__pycache__/prompt_encoder.cpython-39.pyc
ADDED
Binary file (9.85 kB)
src/models/__pycache__/sam3D.cpython-312.pyc
ADDED
Binary file (8.74 kB)
src/models/__pycache__/sam3D.cpython-39.pyc
ADDED
Binary file (6.64 kB)
src/models/__pycache__/segmamba_encoder.cpython-312.pyc
ADDED
Binary file (11.6 kB)
src/models/__pycache__/transformer.cpython-312.pyc
ADDED
Binary file (11.1 kB)
src/models/__pycache__/transformer.cpython-39.pyc
ADDED
Binary file (7.12 kB)
src/models/__pycache__/unet.cpython-312.pyc
ADDED
Binary file (11.2 kB)