PyTorch implementation of "Lightweight and Robust Representation of Economic Scales from Satellite Imagery".
This research uses two types of data: demographic information on the target areas and the corresponding satellite imagery. Both were collected from ArcGIS, which provides a publicly available data repository of maps and geographic information.
The demographic information, called Esri Advanced Demographics, is accessible through the ArcGIS GeoEnrichment Service API.
The satellite images are taken from the tiles of World Imagery.
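For readers who want to reproduce the data collection, the snippet below is a minimal sketch (not part of this repository) of how a single World Imagery tile could be downloaded; the zoom level, output path, and example coordinates are assumptions used only for illustration.

```python
# Hypothetical helper (not part of this repository): fetch one World Imagery
# tile covering a given latitude/longitude at a chosen zoom level.
import math
import requests

TILE_URL = "https://server.arcgisonline.com/ArcGIS/rest/services/World_Imagery/MapServer/tile/{z}/{y}/{x}"

def latlon_to_tile(lat, lon, zoom):
    """Convert WGS84 coordinates to slippy-map tile indices."""
    lat_rad = math.radians(lat)
    n = 2 ** zoom
    x = int((lon + 180.0) / 360.0 * n)
    y = int((1.0 - math.asinh(math.tan(lat_rad)) / math.pi) / 2.0 * n)
    return x, y

def download_tile(lat, lon, zoom=15, out_path="tile.jpg"):
    # Zoom level and output path are illustrative choices, not repository settings.
    x, y = latlon_to_tile(lat, lon, zoom)
    resp = requests.get(TILE_URL.format(z=zoom, y=y, x=x), timeout=30)
    resp.raise_for_status()
    with open(out_path, "wb") as f:
        f.write(resp.content)

if __name__ == "__main__":
    download_tile(37.5665, 126.9780)  # central Seoul, as an example location
```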
The code has been tested under Python 3.5.2 with the following packages installed (along with their dependencies):
- numpy == 1.15.4
- pandas == 0.23.4
- torch == 1.0.1.post2
- torchnet == 0.0.4
- torchvision == 0.2.2.post3
- scikit-learn == 0.19.1
- xgboost == 0.90
We recommend using the open data science platform Anaconda.
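For example, an environment with the packages above could be set up as follows (the environment name is arbitrary):

```
$ conda create -n economic-scales python=3.5
$ conda activate economic-scales
$ pip install numpy==1.15.4 pandas==0.23.4 torch==1.0.1.post2 torchnet==0.0.4 \
      torchvision==0.2.2.post3 scikit-learn==0.19.1 xgboost==0.90
```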
Default values of the hyper-parameters are defined in parameters.py, in data_pruning_parser().
```
usage: 1-data_pruning.py [-h] [--lr LR] [--batch-size BATCH_SIZE]
                         [--epochs EPOCHS]
                         [--checkpoint-epochs CHECKPOINT_EPOCHS]
                         [--evaluation-epochs EVALUATION_EPOCHS]
                         [--workers WORKERS] [--load] [--modelurl MODELURL]
                         [--train-path TRAIN_PATH] [--test-path TEST_PATH]

Data Pruning Parser

optional arguments:
  -h, --help            show this help message and exit
  --lr LR, --learning-rate LR
                        learning rate
  --batch-size BATCH_SIZE
                        batch size
  --epochs EPOCHS       total epochs
  --checkpoint-epochs CHECKPOINT_EPOCHS
                        checkpoint frequency
  --evaluation-epochs EVALUATION_EPOCHS
                        evaluation frequency
  --workers WORKERS     number of workers
  --load                load trained model
  --modelurl MODELURL   model path
  --train-path TRAIN_PATH
                        Train images directory path to remove uninhabited areas
  --test-path TEST_PATH
                        Test images directory path to remove uninhabited areas
```
```
$ python3 1-data_pruning.py --train-path ./data/sample_train --test-path ./data/sample_test
```
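The flags above map one-to-one onto an argparse parser; the sketch below only illustrates what data_pruning_parser() in parameters.py roughly looks like. The default values shown here are placeholders, not the repository's actual settings.

```python
# Illustrative sketch of parameters.py::data_pruning_parser(); the default
# values below are assumptions -- consult parameters.py for the real ones.
import argparse

def data_pruning_parser():
    parser = argparse.ArgumentParser(description="Data Pruning Parser")
    parser.add_argument("--lr", "--learning-rate", type=float, default=1e-3,
                        dest="lr", help="learning rate")
    parser.add_argument("--batch-size", type=int, default=32, help="batch size")
    parser.add_argument("--epochs", type=int, default=100, help="total epochs")
    parser.add_argument("--checkpoint-epochs", type=int, default=10,
                        help="checkpoint frequency")
    parser.add_argument("--evaluation-epochs", type=int, default=1,
                        help="evaluation frequency")
    parser.add_argument("--workers", type=int, default=4, help="number of workers")
    parser.add_argument("--load", action="store_true", help="load trained model")
    parser.add_argument("--modelurl", type=str, default="./model/pruning.pth",
                        help="model path")
    parser.add_argument("--train-path", type=str, default="./data/sample_train",
                        help="Train images directory path to remove uninhabited areas")
    parser.add_argument("--test-path", type=str, default="./data/sample_test",
                        help="Test images directory path to remove uninhabited areas")
    return parser
```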
Default values of the hyper-parameters are defined in parameters.py, in extract_embeddings_parser().
```
usage: 2-extract_embeddings.py [-h] [--lr LR] [--batch-size BATCH_SIZE]
                               [--ema_decay EMA_DECAY] [--rampup RAMPUP]
                               [--consistency CONSISTENCY] [--epochs EPOCHS]
                               [--checkpoint-epochs CHECKPOINT_EPOCHS]
                               [--evaluation-epochs EVALUATION_EPOCHS]
                               [--workers WORKERS] [--load]
                               [--modelurl MODELURL]

Extract Embeddings Parser

optional arguments:
  -h, --help            show this help message and exit
  --lr LR, --learning-rate LR
                        learning rate
  --batch-size BATCH_SIZE
                        batch size
  --ema_decay EMA_DECAY
                        EMA decay rate for the teacher model
  --rampup RAMPUP       ramp-up length for the consistency weight
  --consistency CONSISTENCY
                        consistency loss weight
  --epochs EPOCHS       total epochs
  --checkpoint-epochs CHECKPOINT_EPOCHS
                        checkpoint frequency
  --evaluation-epochs EVALUATION_EPOCHS
                        evaluation frequency
  --workers WORKERS     number of workers
  --load                load trained model
  --modelurl MODELURL   model path
```
```
$ python3 2-extract_embeddings.py --batch-size 50 --epochs 100
```
Extracted embeddings from satellite images are saved to "./data/train/reduced" and "./data/test/reduced". The size of each matrix differs, since the number of satellite images varies across districts (# of satellite images x 512).
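As a quick sanity check, the saved embedding matrices can be inspected as below. The .npy file layout is an assumption; adapt the snippet to however 2-extract_embeddings.py actually stores its output.

```python
# Hypothetical inspection snippet: assumes one .npy file per district under
# the "reduced" directory; each file holds a (num_images, 512) matrix.
import os
import numpy as np

embed_dir = "./data/train/reduced"
for fname in sorted(os.listdir(embed_dir)):
    if not fname.endswith(".npy"):
        continue
    emb = np.load(os.path.join(embed_dir, fname))
    # Shapes differ per district because the number of images differs.
    print(fname, emb.shape)
```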
Default values of the hyper-parameters are defined in parameters.py, in predict_demographics_parser().
```
usage: 3-predict_demographics.py [-h] [--idx IDX]

Predict Demographics Parser

optional arguments:
  -h, --help  show this help message and exit
  --idx IDX   select which demographics to predict
```
```
$ python3 3-predict_demographics.py --idx 0
```
Prediction results (R-squared and Mean Squared Error) will be printed to the command line. You can add your own code to save the results in another file format, as sketched below.
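For instance, a small helper such as the following (hypothetical, not included in the repository) could persist the same metrics to a CSV file, given the ground-truth values and model predictions as arrays:

```python
# Hypothetical helper for saving the printed metrics to CSV.
import pandas as pd
from sklearn.metrics import mean_squared_error, r2_score

def save_metrics(y_true, y_pred, idx, out_path="results.csv"):
    row = {
        "idx": idx,                                 # which demographic was predicted
        "r2": r2_score(y_true, y_pred),             # R-squared
        "mse": mean_squared_error(y_true, y_pred),  # Mean Squared Error
    }
    pd.DataFrame([row]).to_csv(out_path, index=False)
```

Calling save_metrics(y_true, y_pred, idx=0) after the prediction step would write a one-row CSV; the same dictionary could just as easily be dumped to JSON or appended to an existing results file.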