Merge branch 'docker'
limbo018 committed Nov 5, 2019
2 parents 9d94df8 + 5f5fd6a commit d09d8da
Showing 4 changed files with 75 additions and 11 deletions.
2 changes: 1 addition & 1 deletion CMakeLists.txt
@@ -30,7 +30,7 @@ set(CMAKE_CXX_STANDARD 11)

find_program(PYTHON "python" REQUIRED)
find_package(ZLIB REQUIRED)
find_package(Boost 1.62.0 REQUIRED)
find_package(Boost 1.55.0 REQUIRED)
get_filename_component(Boost_DIR ${Boost_INCLUDE_DIRS}/../ ABSOLUTE)
find_package(CUDA 9.0)
find_package(Cairo)
29 changes: 29 additions & 0 deletions Dockerfile
@@ -0,0 +1,29 @@
FROM pytorch/pytorch:1.0-cuda10.0-cudnn7-devel
LABEL maintainer="Yibo Lin <yibolin@pku.edu.cn>"

# install system dependencies
RUN apt-get update \
&& apt-get install -y \
wget \
flex \
bison \
libcairo2-dev \
libboost-all-dev

# install cmake
ADD https://cmake.org/files/v3.8/cmake-3.8.2-Linux-x86_64.sh /cmake-3.8.2-Linux-x86_64.sh
RUN mkdir /opt/cmake \
&& sh /cmake-3.8.2-Linux-x86_64.sh --prefix=/opt/cmake --skip-license \
&& ln -s /opt/cmake/bin/cmake /usr/local/bin/cmake \
&& cmake --version

# install python dependencies
RUN pip install \
pyunpack>=0.1.2 \
patool>=1.12 \
matplotlib>=2.2.2 \
cairocffi>=0.9.0 \
pkgconfig>=1.4.0 \
setuptools>=39.1.0 \
scipy>=1.1.0 \
numpy>=1.15.4
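
The ```pip install``` layer above pins minimum versions with ```>=```. A small pure-Python sketch (the helper names are illustrative, not from the repository) of how such pins break down into a name and a comparable version tuple:

```python
# Minimal sketch: parse ">=" requirement pins like those in the Dockerfile
# above and compare version tuples. Helper names are illustrative only.

def parse_pin(pin):
    """Split a 'name>=x.y.z' requirement into (name, version tuple)."""
    name, _, version = pin.partition(">=")
    return name, tuple(int(part) for part in version.split("."))

def satisfies(installed, required):
    """True if an installed version tuple meets a '>=' requirement."""
    return installed >= required

pins = ["scipy>=1.1.0", "numpy>=1.15.4"]
parsed = dict(parse_pin(p) for p in pins)
print(parsed["numpy"])                          # (1, 15, 4)
print(satisfies((1, 16, 0), parsed["numpy"]))   # True
```

Tuple comparison is lexicographic, so ```(1, 16, 0) >= (1, 15, 4)``` holds even though the later components are smaller.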
51 changes: 43 additions & 8 deletions README.md
@@ -3,9 +3,9 @@
Deep learning toolkit-enabled VLSI placement.
By drawing an analogy between nonlinear VLSI placement and the deep learning training problem, this tool is developed with a deep learning toolkit for flexibility and efficiency.
The tool runs on both CPU and GPU.
Over 30X speedup over the CPU implementation ([RePlAce](https://doi.org/10.1109/TCAD.2018.2859220)) is achieved in global placement and legalization on ISPD 2005 contest benchmarks with a Nvidia Tesla V100 GPU.
Over ```30X``` speedup over the CPU implementation ([RePlAce](https://doi.org/10.1109/TCAD.2018.2859220)) is achieved in global placement and legalization on ISPD 2005 contest benchmarks with an Nvidia Tesla V100 GPU.

DREAMPlace runs on both CPU and GPU. If it is installed on a machine without GPU, only CPU support will be enabled.
DREAMPlace runs on both CPU and GPU. If it is installed on a machine without GPU, only CPU support will be enabled with multi-threading.

| Bigblue4 | Density Map | Electric Potential | Electric Field |
| -------- | ----------- | ------------------ | -------------- |
@@ -24,17 +24,17 @@ DREAMPlace runs on both CPU and GPU. If it is installed on a machine without GPU

# Dependency

- Pytorch 1.0.0
- Python 2.7 or Python 3.5/3.6/3.7

- Python 2.7 or Python 3.5
- [Pytorch](https://pytorch.org/) 1.0.0
- Other versions around 1.0.0 may also work, but are not tested

- [GCC](https://gcc.gnu.org/)
- Recommend GCC 5.1 or later.
- Other compilers may also work, but are not tested.

- [Boost](https://www.boost.org)
- Must be installed and visible for linking
- Must be compiled with C++11 and the same ```_GLIBCXX_USE_CXX11_ABI``` setting as PyTorch

- [Limbo](https://github.com/limbo018/Limbo)
- Integrated as a git submodule
@@ -57,7 +57,7 @@ DREAMPlace runs on both CPU and GPU. If it is installed on a machine without GPU
- Otherwise, the Python implementation is used.

- [NTUPlace3](http://eda.ee.ntu.edu.tw/research.htm) (Optional)
- If the binary is provided, it can be used to perform detailed placement
- If the binary is provided, it can be used to perform detailed placement.
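
The Boost dependency above must be compiled with the same ```_GLIBCXX_USE_CXX11_ABI``` as PyTorch. A small sketch (the helper name is hypothetical; only ```torch._C._GLIBCXX_USE_CXX11_ABI``` comes from PyTorch) to query the flag your PyTorch build uses:

```python
# Minimal sketch: report which C++ ABI the installed PyTorch was built
# with, so Boost and other C++ dependencies can be compiled to match.

def torch_cxx11_abi():
    """Return PyTorch's _GLIBCXX_USE_CXX11_ABI as a bool,
    or None if torch is not installed."""
    try:
        import torch
    except ImportError:
        return None
    return bool(torch._C._GLIBCXX_USE_CXX11_ABI)

print(torch_cxx11_abi())  # True, False, or None depending on the environment
```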

To pull git submodules in the root directory
```
@@ -79,12 +79,44 @@ pip install -r requirements.txt

# How to Build

Two options are provided for building: with and without [Docker](https://hub.docker.com).

## Build with Docker

You can use the Docker container to avoid building all the dependencies yourself.
1. Install Docker on [Windows](https://docs.docker.com/docker-for-windows/), [Mac](https://docs.docker.com/docker-for-mac/) or [Linux](https://docs.docker.com/install/).
2. To enable the GPU features, install [NVIDIA-docker](https://github.com/NVIDIA/nvidia-docker); otherwise, skip this step.
3. Navigate to the repository.
4. Get the docker container with either of the following options.
- Option 1: pull from the cloud [limbo018/dreamplace](https://hub.docker.com/r/limbo018/dreamplace).
```
docker pull limbo018/dreamplace:cuda
```
- Option 2: build the container.
```
docker build . --file Dockerfile --tag your_name/dreamplace:cuda
```
5. Enter the bash environment of the container. Replace ```limbo018``` with your name if you chose option 2 in the previous step.

Run with GPU.
```
docker run --gpus 1 -it -v $(pwd):/DREAMPlace limbo018/dreamplace:cuda bash
```
Run without GPU.
```
docker run -it -v $(pwd):/DREAMPlace limbo018/dreamplace:cuda bash
```
6. ```cd /DREAMPlace```.
7. Go to the next section to complete the build.

## Build without Docker

[CMake](https://cmake.org) is adopted as the build system.
To build, go to the root directory.
```
mkdir build
cd build
cmake ..
cmake .. -DCMAKE_INSTALL_PREFIX=your_install_path
make
make install
```
@@ -142,7 +174,7 @@ python dreamplace/Placer.py --help
# Features

* [0.0.2](https://github.com/limbo018/DREAMPlace/releases/tag/0.0.2)
- Multi-thread CPU and optional GPU acceleration support
- Multi-threaded CPU and optional GPU acceleration support

* [0.0.5](https://github.com/limbo018/DREAMPlace/releases/tag/0.0.5)
- Net weighting support through .wts files in Bookshelf format
@@ -154,3 +186,6 @@

* [1.0.0](https://github.com/limbo018/DREAMPlace/releases/tag/1.0.0)
- Improved efficiency for wirelength and density operators from TCAD extension

* [1.1.0](https://github.com/limbo018/DREAMPlace/releases/tag/1.1.0)
- Docker container for the build environment
@@ -264,7 +264,7 @@ def forward(ctx, pos, flat_netpin, netpin_start, pin2net_map, net_weights, net_m
ctx.pos = pos
if pos.is_cuda:
torch.cuda.synchronize()
print("\t\twirelength forward %.3f ms" % ((time.time()-tt)*1000))
logger.debug("wirelength forward %.3f ms" % ((time.time()-tt)*1000))
return output[0]

@staticmethod
@@ -288,7 +288,7 @@ def backward(ctx, grad_pos):
output[int(output.numel()//2):].masked_fill_(ctx.pin_mask, 0.0)
if grad_pos.is_cuda:
torch.cuda.synchronize()
print("\t\twirelength backward kernel %.3f ms" % ((time.time()-tt)*1000))
logger.debug("wirelength backward kernel %.3f ms" % ((time.time()-tt)*1000))
return output, None, None, None, None, None, None, None

class WeightedAverageWirelength(nn.Module):
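
The diff above replaces raw ```print``` timing with ```logger.debug```. A standalone sketch of that timing pattern (the logger name and helper are illustrative, not from the repository):

```python
# Minimal sketch: log elapsed wall time at DEBUG level instead of
# printing it, mirroring the print -> logger.debug change above.
import logging
import time

logger = logging.getLogger("dreamplace_sketch")  # illustrative name
logging.basicConfig(level=logging.DEBUG)

def timed_ms(fn, *args, **kwargs):
    """Run fn, log the elapsed milliseconds at DEBUG level, return its result."""
    tt = time.time()
    result = fn(*args, **kwargs)
    logger.debug("%s %.3f ms", fn.__name__, (time.time() - tt) * 1000)
    return result

total = timed_ms(sum, range(1000))
print(total)  # 499500
```

Routing timings through a logger lets users silence or redirect them by configuring the logging level, instead of always writing to stdout.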
