The idea behind xView2 is relatively simple. As input, you’ve got a satellite image of an area before a disaster. As output, you’ve got a satellite image of the same area, taken immediately after the earthquake, tsunami, flood, volcanic eruption, wildfire, tornado, or apocalyptic combination of any of the above. All the algorithm has to do is identify structures and then rate each structure on a four-point damage scale that ranges from spotless to obliterated.
Fortunately, this kind of pattern recognition is something that computer vision algorithms tend to do very well. The key to their effectiveness is the training data they’re fed, and xView2 is providing a massive, hand-labeled dataset for competitors to use. Leveraging DigitalGlobe’s Open Data Program, xView2 has managed to amass 19,804 square kilometers of pre-disaster and post-disaster imagery at a resolution of 0.3 meters per pixel. The images feature 550,230 building outlines, each one drawn by a human and assigned a building damage assessment score.
The folks running xView2 have been very careful to make sure that this dataset is as accurate and as high quality as possible. Fifteen countries are represented, including Australia, Indonesia, and Bangladesh. The standardized Joint Damage Scale for buildings (which, somewhat surprisingly, did not exist before) was developed with input from FEMA, the US Air Force, and local first responders. Those agencies also had an opportunity to check the labeling for accuracy before the dataset was finalized.
The winner of the xView2 challenge will be the algorithm that performs best on a previously unseen dataset, identifying buildings and rating their smashed-up-ness on the Joint Damage Scale with the closest adherence to the ratings given by expert humans. The algorithm will have to be a generalist, able to recognize and score buildings after any of the six kinds of disasters, anywhere in the world. The hope is that the winning algorithm could be used to compare pre-disaster satellite images with post-disaster images taken from aircraft or drones, helping first responders move even more quickly and effectively. And even if the best algorithm isn’t perfect, that’s okay. Even a pretty good algorithm could be very useful, especially when time is a factor.
Refreshingly, the Defense Innovation Unit seems to be mostly interested in encouraging people to participate and do well in the xView2 challenge, without getting hung up on owning the winning software. You can compete in the Open Source track, where you can win $25,000 as long as you agree to release your code under a permissive license. If you’d rather keep your code private, but you’re okay with giving the government a non-exclusive license to use it, the Government Purpose track has a first prize of $38,000. Entries in the Open Source track are also eligible for the Government Purpose prizes. The final Evaluation Only track is for teams who really don’t want to share anything; the government will check out your algorithm and tell you how you did, but that’s it. The top prize in that case is $3,000.
The dataset for the xView2 challenge is available now, with submissions due on 22 November. Winners will be announced at the Humanitarian Assistance and Disaster Recovery (HADR) workshop at NeurIPS in December.