takarajordan committed
Commit 1552de2 · verified · Parent(s): febec55

Update README.md

Files changed (1): README.md (+59 -1)

README.md CHANGED

language:
- en
size_categories:
- 1K<n<10K
---
# FloodNet: High Resolution Aerial Imagery Dataset for Post-Flood Scene Understanding

This is the Hugging Face-hosted version of **FloodNet**.

The **FloodNet 2021: A High Resolution Aerial Imagery Dataset for Post-Flood Scene Understanding** dataset provides high-resolution UAS imagery with detailed semantic annotations of flood damage. To advance damage assessment in post-disaster scenarios, the dataset's authors organized a challenge built around the UAS imagery in FloodNet, covering **classification**, **semantic segmentation**, and **visual question answering (VQA)**.

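Since the dataset is hosted on the Hugging Face Hub, a minimal loading sketch with the `datasets` library is shown below. The repository id and the split names are assumptions (the card does not state them), so adjust them to the actual dataset path and schema.

```python
from datasets import load_dataset

# NOTE: the repository id below is an assumption based on the uploader's namespace,
# not something stated in this card; replace it with the actual dataset path.
ds = load_dataset("takarajordan/FloodNet")

print(ds)  # shows the available splits and their features

first_split = next(iter(ds))   # split names are not spelled out in the card
example = ds[first_split][0]   # one record; field names depend on the dataset schema
print(example.keys())
```
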
## Challenge Tracks

The challenge has two tracks:

1. **Image Classification and Semantic Segmentation** (available on DatasetNinja)
2. **Visual Question Answering** (this dataset)

---

Frequent and increasingly severe natural disasters threaten human health, infrastructure, and natural systems. Accurate, timely, and understandable information has the potential to revolutionize disaster management. For rapid, large-scale response and recovery after a natural disaster such as a hurricane, access to aerial imagery is critically important for the response team.

The emergence of small **unmanned aerial systems (UAS)** with inexpensive sensors makes it possible to collect thousands of images after each natural disaster, with the flexibility and maneuverability needed for rapid response and recovery. Moreover, UAS can reach areas and perform data-collection tasks that would be unsafe, if not impossible, for humans. Despite these advances and the effort put into collecting such large datasets, analyzing them and extracting meaningful information remains a significant challenge for the scientific community.

---

## Data Collection

The data was collected with a small UAS platform, **DJI Mavic Pro quadcopters**, after Hurricane Harvey. The entire dataset contains **2343 images**, divided into:

- **Training set (~60%)**
- **Validation set (~20%)**
- **Test set (~20%)**

For **Track 1** (Semi-supervised Classification and Semantic Segmentation), the training set contains:

- Around **400 labeled images** (~25% of the training set)
- Around **1050 unlabeled images** (~75% of the training set)

For **Track 2** (Supervised VQA), the training set contains (see the short sanity-check sketch after this list):

- Around **1450 images**
- A total of **4511 image-question pairs**
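
As a rough cross-check of the counts quoted above (all figures are approximate and come straight from this card), the Track 1 training numbers imply a training share of about 62%, in line with the ~60/20/20 split:

```python
# Quick arithmetic cross-check of the (approximate) split figures stated above.
total_images = 2343

track1_labeled = 400        # ~25% of the Track 1 training set
track1_unlabeled = 1050     # ~75% of the Track 1 training set
track1_train = track1_labeled + track1_unlabeled      # ≈ 1450 images

track2_train_images = 1450
track2_qa_pairs = 4511

print(track1_train / total_images)            # ≈ 0.62, consistent with the ~60% training share
print(track2_qa_pairs / track2_train_images)  # ≈ 3.1 questions per image on average
```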

---

## Annotations for Visual Question Answering (VQA)

This dataset contains the annotations for **Track 2**. For the **Visual Question Answering (VQA)** task, each image is associated with multiple questions, which fall into the following categories (a small exploration sketch follows the list):

- **Simple Counting**: questions that ask for the number of objects regardless of their attributes.
  _For example: "How many buildings are there in the image?"_

- **Complex Counting**: questions that ask for the number of objects with a specific attribute.
  _For example: "How many flooded buildings are there in the image?"_

- **Condition Recognition**: questions about the condition of an object or of the surrounding neighborhood.
  _For example: "What is the condition of the road in the given image?"_

- **Yes/No Questions**: questions whose answer is either ‘Yes’ or ‘No’.
  _For example: "Is there any flooded road?"_
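
As a loose illustration of how these VQA annotations might be explored once the dataset is loaded, here is a small sketch that tallies and filters questions by category. The repository id, split name, and column names (`question`, `question_type`, `answer`) are assumptions, not taken from this card; check `ds.features` and adapt accordingly.

```python
from collections import Counter
from datasets import load_dataset

# NOTE: repository id, split name, and column names below are assumptions;
# inspect ds.features first and adapt this sketch to the actual schema.
ds = load_dataset("takarajordan/FloodNet", split="train")

# Tally how many questions fall into each category (Simple Counting, Yes/No, ...).
category_counts = Counter(row["question_type"] for row in ds)
print(category_counts)

# Pull out only the complex-counting questions for a closer look.
complex_counting = [row for row in ds if row["question_type"] == "Complex Counting"]
print(complex_counting[0]["question"], "->", complex_counting[0]["answer"])
```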

---