AP Latam


This is the main repository of the AP Latam project.

For more information on the website frontend, see the repository at https://github.com/dymaxionlabs/ap-latam-web.

Dependencies

  • Python 3+
  • GDAL
  • Proj4
  • libspatialindex
  • Dependencies for TensorFlow with GPU support
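
To quickly verify that the core system dependencies are available, you can check their versions from a shell (this assumes the standard Python and GDAL command-line tooling is on your PATH):

python3 --version
gdal-config --version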

Install

Quick install and usage: Docker image

If you have Docker installed on your machine, with NVIDIA CUDA set up and configured, you can simply pull our image and run the scripts for training and detection.
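
For example, to pull the image explicitly (this is the same image name used in the run commands below):

docker pull dymaxionlabs/ap-latam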

Otherwise, follow the steps in this tutorial to install Docker, CUDA and nvidia-docker. This has been tested on an Ubuntu 16.04 LTS instance on Google Cloud Platform.

For all scripts you will need to mount a data volume so that the scripts can read the input rasters and vector files, and write the resulting vector file.

It is recommended that you first set an environment variable that points to the data directory on your host machine, like this:

export APLATAM_DATA=$HOME/aplatam-data
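
If that directory does not exist yet, create it before starting any container (the path is just the example value from above):

mkdir -p $APLATAM_DATA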

Then, run any of the scripts with nvidia-docker, mounting $APLATAM_DATA as the container's /data volume, like this:

nvidia-docker run -ti -v $APLATAM_DATA:/data dymaxionlabs/ap-latam SCRIPT_TO_RUN [ARGS...]

where SCRIPT_TO_RUN is either ap_train or ap_detect, and [ARGS...] are the command-line arguments of the chosen script. Run either script with --help to see all of its available options.
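
To print the available options of the training script:

nvidia-docker run -ti -v $APLATAM_DATA:/data dymaxionlabs/ap-latam ap_train --help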

For example, suppose you have the following files inside the $APLATAM_DATA directory:

  • Training rasters in images/
  • A settlements vector file settlements.geojson

To prepare a dataset and train a model you would run:

nvidia-docker run -ti -v $APLATAM_DATA:/data dymaxionlabs/ap-latam \
  ap_train /data/images /data/settlements.geojson /data/dataset
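
Detection follows the same pattern; the exact arguments of ap_detect are not shown here, so replace [ARGS...] with whatever its --help output lists:

nvidia-docker run -ti -v $APLATAM_DATA:/data dymaxionlabs/ap-latam \
  ap_detect [ARGS...]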

When using [nvidia-]docker run for the first time, it will pull the image automatically, so it is not necessary to run [nvidia-]docker pull first.

run_with_docker.sh

You can also use run_with_docker.sh to do the same:

export APLATAM_DATA=$HOME/data/
./run_with_docker.sh ap_train /data/images /data/settlements.geojson /data/dataset
...
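
The wrapper accepts either script, assuming it passes its arguments straight through to the container; for example, to inspect the detection script's options:

./run_with_docker.sh ap_detect --help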

Development

First you will need to install the following packages. On Debian-based distros run:

sudo apt install libproj-dev gdal-bin build-essential libgdal-dev libspatialindex-dev python3-venv virtualenv

Clone the repository and run python setup.py install to install the package with its dependencies. Add --extras gpu to also install the GPU dependencies (TensorFlow with GPU support).
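
A typical development setup might look like the following (the virtualenv step is optional, but matches the python3-venv package installed above):

git clone https://github.com/dymaxionlabs/ap-latam.git
cd ap-latam
python3 -m venv .venv && source .venv/bin/activate
python setup.py install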

Run make to run the tests and make cov to build a code coverage report; plain make does both.

Issue tracker

Please report any bugs and enhancement ideas using the GitHub issue tracker:

https://github.com/dymaxionlabs/ap-latam/issues

Feel free to also ask questions on our Gitter channel (https://gitter.im/dymaxionlabs/ap-latam), or by email.

Help wanted

Any help in testing, development, documentation and other tasks is highly appreciated and useful to the project.

For more details, see the file CONTRIBUTING.md.

License

Source code is released under the BSD 2-Clause license. Please refer to LICENSE.md for more information.