Update README.md

README.md CHANGED
@@ -58,167 +58,65 @@ dataset_summary: '
'
---

-# 
-
-<!-- Provide a quick summary of the dataset. -->
-
-This is a [FiftyOne](https://github.com/voxel51/fiftyone) dataset with 2161 samples.
-
-## Installation
-
-If you haven't already, install FiftyOne:
-
-```bash
-pip install -U fiftyone
-```
-
-## Usage
-
-```python
-import fiftyone as fo
-from fiftyone.utils.huggingface import load_from_hub
-
-# Load the dataset
-# Note: other available arguments include 'max_samples', etc
-dataset = load_from_hub("pjramg/deeplesion_balanced_fiftyone")
-
-# Launch the App
-session = fo.launch_app(dataset)
-```

## Dataset Details

-
-- **
-
-## Dataset Creation
-
-### Curation Rationale
-
-<!-- Motivation for the creation of this dataset. -->
-
-[More Information Needed]
-
-### Source Data
-
-<!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). -->
-
-#### Data Collection and Processing
-
-<!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. -->
-
-[More Information Needed]
-
-#### Who are the source data producers?
-
-<!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. -->
-
-[More Information Needed]
-
-### Annotations [optional]
-
-<!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. -->
-
-#### Annotation process
-
-<!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. -->
-
-[More Information Needed]
-
-#### Who are the annotators?
-
-<!-- This section describes the people or systems who created the annotations. -->
-
-[More Information Needed]
-
-#### Personal and Sensitive Information
-
-<!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. -->
-
-[More Information Needed]
-
-## Bias, Risks, and Limitations
-
-<!-- This section is meant to convey both technical and sociotechnical limitations. -->
-
-[More Information Needed]
-
-### Recommendations
-
-<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
-
-Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.
-
-## Citation [optional]
-
-<!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. -->
-
-**BibTeX:**
-
-[More Information Needed]
-
-**APA:**
-
-[More Information Needed]
-
-## Glossary [optional]
-
-<!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. -->
-
-[More Information Needed]
-
-## More Information [optional]
-
-##
-
-##
+# DeepLesion Benchmark Subset (Balanced 2K)
+
+This dataset is a curated subset of the [DeepLesion dataset](https://nihcc.app.box.com/v/DeepLesion), prepared for demonstration and benchmarking purposes. It consists of 2,000 CT lesion samples, balanced across 8 coarse lesion types and filtered to include lesions with a short diameter > 10 mm.
+
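The earlier revision of this card included a FiftyOne usage snippet that still applies to this repository; a minimal sketch, reusing the `load_from_hub` call (the `max_samples` argument is mentioned in that earlier snippet):

```python
import fiftyone as fo
from fiftyone.utils.huggingface import load_from_hub

# Load the dataset from the Hub; max_samples grabs a smaller slice for a quick look
dataset = load_from_hub("pjramg/deeplesion_balanced_fiftyone", max_samples=50)

# Launch the App to browse the lesion images
session = fo.launch_app(dataset)
```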
## Dataset Details

+- **Source**: [DeepLesion](https://nihcc.app.box.com/v/DeepLesion)
+- **Institution**: National Institutes of Health (NIH) Clinical Center
+- **Subset size**: 2,000 images
+- **Lesion types**: lung, abdomen, mediastinum, liver, pelvis, soft tissue, kidney, bone
+- **Selection criteria**:
+  - Short diameter > 10 mm
+  - Balanced sampling across all types
+- **Windowing**: All slices were windowed using DICOM parameters and converted to 8-bit PNG format
+
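The windowing step described above can be sketched with numpy; the function name and the example soft-tissue window (center 40 HU, width 400 HU) are illustrative, not the exact per-slice parameters used to build this subset, which come from each slice's DICOM metadata:

```python
import numpy as np

def window_to_uint8(hu: np.ndarray, center: float, width: float) -> np.ndarray:
    """Map a CT slice in Hounsfield units to 8-bit via a DICOM window."""
    lo = center - width / 2.0
    hi = center + width / 2.0
    x = np.clip(hu.astype(np.float32), lo, hi)
    # Scale the windowed range linearly onto [0, 255]
    return np.round((x - lo) / (hi - lo) * 255.0).astype(np.uint8)

# Illustrative soft-tissue window: center=40 HU, width=400 HU -> clips to [-160, 240]
slice_hu = np.array([[-1000.0, 140.0, 240.0]])
png8 = window_to_uint8(slice_hu, center=40.0, width=400.0)
```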
+## License
+
+This dataset is shared under the **[CC BY-NC-SA 4.0 License](https://creativecommons.org/licenses/by-nc-sa/4.0/)**, as specified by the NIH DeepLesion dataset creators.
+
+> This dataset is intended **only for non-commercial research and educational use**.
+> You must credit the original authors and the NIH Clinical Center when using this data.
+
+## Citation
+
+If you use this data, please cite:
+
+```bibtex
+@article{yan2018deeplesion,
+  title={DeepLesion: automated mining of large-scale lesion annotations and universal lesion detection with deep learning},
+  author={Yan, Ke and Wang, Xiaosong and Lu, Le and Summers, Ronald M},
+  journal={Journal of Medical Imaging},
+  volume={5},
+  number={3},
+  pages={036501},
+  year={2018},
+  publisher={SPIE}
+}
+```
+
+Curation was done with FiftyOne:
+
+```bibtex
+@article{moore2020fiftyone,
+  title={FiftyOne},
+  author={Moore, B. E. and Corso, J. J.},
+  journal={GitHub. Note: https://github.com/voxel51/fiftyone},
+  year={2020}
+}
+```
+
+## Intended Uses
+
+- Embedding demos
+- Lesion similarity and retrieval
+- Benchmarking medical image models
+- Few-shot learning on lesion types
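For the similarity and retrieval use case, FiftyOne's brain module can build an embedding index over the samples; a hedged sketch, assuming the `clip-vit-base32-torch` zoo model and an arbitrary first sample as the query (both are illustrative choices, not part of this dataset's tooling):

```python
import fiftyone.brain as fob
from fiftyone.utils.huggingface import load_from_hub

dataset = load_from_hub("pjramg/deeplesion_balanced_fiftyone")

# Index the samples by image embedding similarity
fob.compute_similarity(dataset, model="clip-vit-base32-torch", brain_key="lesion_sim")

# Retrieve the 10 samples most similar to a query lesion
query_id = dataset.first().id
view = dataset.sort_by_similarity(query_id, k=10, brain_key="lesion_sim")
```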
+
+## Limitations
+
+- This is a small subset of the full DeepLesion dataset
+- Not suitable for training full detection models
+- Labels are coarse and may contain inconsistencies
+
+## Contact
+
+Created by Paula Ramos for demo purposes using FiftyOne and the DeepLesion public metadata.