Get Started

SSSegmentation is an open-source supervised semantic segmentation toolbox based on PyTorch.

In this chapter, we introduce the necessary preparations before developing with or using SSSegmentation.

Install SSSegmentation for Developing

SSSegmentation works on Linux, Windows and macOS. It requires Python 3.7+, CUDA 10.2+ and PyTorch 1.8+.

If you are experienced with Python and PyTorch and have already installed them, you can skip this section and jump to the next section, Prepare Datasets. Otherwise, you can follow the instructions in this section to install SSSegmentation.

Install Anaconda

Anaconda is an open-source package and environment management system that runs on Windows, macOS, and Linux.

We recommend that users download and install Anaconda to create an independent environment for SSSegmentation. Specifically, you can download and install Anaconda or Miniconda from the official website. If you have any questions about installing Anaconda, you can refer to the official document for more details.

After installing Anaconda, you can create a conda environment for SSSegmentation and activate it, e.g.,

conda create --name ssseg python=3.8 -y
conda activate ssseg

For more advanced usage of Anaconda, please also refer to the official document.

Install Requirements

Now, we can install the necessary requirements in the created environment ssseg.

Step 1: Install Basic Requirements (Necessary)

Specifically, we can first install some essential third-party packages,

# clone the source codes from official repository
git clone https://github.com/CharlesPikachu/sssegmentation
cd sssegmentation
# install some essential requirements
pip install -r requirements.txt
# install ssseg in develop mode
python setup.py develop

With the above commands, the following Python packages will be installed,

  • chainercv: set in requirements/evaluate.txt,

  • cityscapesscripts: set in requirements/evaluate.txt,

  • pycocotools: set in requirements/evaluate.txt,

  • pillow: set in requirements/io.txt,

  • pandas: set in requirements/io.txt,

  • opencv-python: set in requirements/io.txt,

  • numpy: set in requirements/science.txt,

  • scipy: set in requirements/science.txt,

  • tqdm: set in requirements/terminal.txt,

  • argparse: set in requirements/terminal.txt,

  • cython: set in requirements/misc.txt,

  • fvcore: set in requirements/misc.txt.

All requirements are also summarized in our official repository.
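To quickly verify the develop-mode installation, you can try importing the package (a minimal check, assuming the import name is ssseg, matching the repository layout),

import ssseg  # should succeed if `python setup.py develop` completed without errors
print(ssseg.__file__)  # for a develop-mode install, this points into the cloned repository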

Step 2: Install PyTorch and Torchvision (Necessary)

Next, you need to install PyTorch and torchvision. PyTorch is "Tensors and Dynamic neural networks in Python with strong GPU acceleration". In particular, we recommend the users to follow the official instructions to install them.

Here, we also provide some example commands for installing PyTorch and torchvision,

# OSX (conda)
conda install pytorch==1.13.0 torchvision==0.14.0 torchaudio==0.13.0 -c pytorch
# Linux and Windows (conda), CUDA 11.6
conda install pytorch==1.13.0 torchvision==0.14.0 torchaudio==0.13.0 pytorch-cuda=11.6 -c pytorch -c nvidia
# Linux and Windows (conda), CUDA 11.7
conda install pytorch==1.13.0 torchvision==0.14.0 torchaudio==0.13.0 pytorch-cuda=11.7 -c pytorch -c nvidia
# Linux and Windows (conda), CPU only
conda install pytorch==1.13.0 torchvision==0.14.0 torchaudio==0.13.0 cpuonly -c pytorch
# OSX (pip)
pip install torch==1.13.0 torchvision==0.14.0 torchaudio==0.13.0
# Linux (pip), ROCm 5.2
pip install torch==1.13.0+rocm5.2 torchvision==0.14.0+rocm5.2 torchaudio==0.13.0 --extra-index-url https://download.pytorch.org/whl/rocm5.2
# Linux and Windows (pip), CUDA 11.6
pip install torch==1.13.0+cu116 torchvision==0.14.0+cu116 torchaudio==0.13.0 --extra-index-url https://download.pytorch.org/whl/cu116
# Linux and Windows (pip), CUDA 11.7
pip install torch==1.13.0+cu117 torchvision==0.14.0+cu117 torchaudio==0.13.0 --extra-index-url https://download.pytorch.org/whl/cu117
# Linux and Windows (pip), CPU only
pip install torch==1.13.0+cpu torchvision==0.14.0+cpu torchaudio==0.13.0 --extra-index-url https://download.pytorch.org/whl/cpu

Please note that SSSegmentation requires torch.cuda.is_available() to be True, and thus does not support the CPU-only builds of PyTorch and torchvision for now.
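You can quickly check whether the installed build meets this requirement,

import torch

print(torch.__version__)          # the installed PyTorch version
print(torch.cuda.is_available())  # must print True to train segmentors with SSSegmentation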

Step 3: Install MMCV (Optional)

Some of the algorithms integrated in SSSegmentation rely on MMCV, which is a foundational library for computer vision research. So, you are required to install MMCV if you want to use these MMCV-dependent algorithms.

Specifically, the users can follow the official instructions to install MMCV.

Here, we recommend the users to install the pre-built package according to your CUDA and PyTorch version,

# for mmcv < 2.0.0, where the released packages include mmcv-full and mmcv
pip install mmcv-full -f https://download.openmmlab.com/mmcv/dist/{cu_version}/{torch_version}/index.html
# for mmcv >= 2.0.0, where the released packages include mmcv and mmcv-lite
pip install mmcv -f https://download.openmmlab.com/mmcv/dist/{cu_version}/{torch_version}/index.html
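For instance, with the CUDA 11.7 and PyTorch 1.13 installation from Step 2, the placeholders would be filled in roughly as follows (please double-check the exact {cu_version}/{torch_version} strings against the official MMCV instructions, as the expected format can differ across MMCV versions),

pip install mmcv -f https://download.openmmlab.com/mmcv/dist/cu117/torch1.13/index.html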

Please note that, if you do not plan to use these MMCV-dependent algorithms, it is not necessary to install MMCV.

Step 4: Install Apex (Optional)

Apex holds NVIDIA-maintained utilities to streamline mixed precision and distributed training in PyTorch. So, you can install it to train the segmentors with the Mixed Precision (FP16) Training supported by Apex.

In detail, you can follow the official instructions to install Apex.

Also, you can leverage the following commands to install Apex,

git clone https://github.com/NVIDIA/apex
cd apex
# if pip >= 23.1 (ref: https://pip.pypa.io/en/stable/news/#v23-1) which supports multiple `--config-settings` with the same key... 
pip install -v --disable-pip-version-check --no-cache-dir --no-build-isolation --config-settings "--build-option=--cpp_ext" --config-settings "--build-option=--cuda_ext" ./
# otherwise
pip install -v --disable-pip-version-check --no-cache-dir --no-build-isolation --global-option="--cpp_ext" --global-option="--cuda_ext" ./
# a Python-only build
pip install -v --disable-pip-version-check --no-build-isolation --no-cache-dir ./

Please note that, SSSegmentation supports two types of mixed precision training, i.e., apex and pytorch,

import torch

# use Mixed Precision (FP16) Training supported by Apex
SEGMENTOR_CFG['fp16_cfg'] = {'type': 'apex', 'initialize': {'opt_level': 'O1'}, 'scale_loss': {}}
# use Mixed Precision (FP16) Training supported by Pytorch
SEGMENTOR_CFG['fp16_cfg'] = {'type': 'pytorch', 'autocast': {'dtype': torch.float16}, 'grad_scaler': {}}

So, if you find it difficult to install Apex in your environment, you can choose the Mixed Precision (FP16) Training supported by PyTorch to train the segmentors instead.
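For reference, the pytorch option corresponds to PyTorch's native automatic mixed precision utilities (torch.autocast and torch.cuda.amp.GradScaler). Below is a minimal sketch of a training step built on them, using a placeholder model rather than SSSegmentation internals,

import torch

# placeholder model and optimizer (requires a CUDA-capable GPU)
model = torch.nn.Linear(8, 2).cuda()
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
scaler = torch.cuda.amp.GradScaler()  # corresponds to 'grad_scaler' in fp16_cfg

inputs = torch.randn(4, 8).cuda()
targets = torch.randint(0, 2, (4,)).cuda()

optimizer.zero_grad()
# corresponds to 'autocast' in fp16_cfg: run the forward pass in float16
with torch.autocast(device_type='cuda', dtype=torch.float16):
    loss = torch.nn.functional.cross_entropy(model(inputs), targets)
scaler.scale(loss).backward()  # scale the loss to avoid gradient underflow in fp16
scaler.step(optimizer)
scaler.update()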

Step 5: Install TIMM (Optional)

Timm is a library containing SOTA computer vision models, layers, utilities, optimizers, schedulers, data-loaders, augmentations, and training/evaluation scripts. It comes packaged with >700 pretrained models, and is designed to be flexible and easy to use.

SSSegmentation provides support for importing backbone networks from timm to train the segmentors. So, if you want to leverage this feature, you can follow the official instructions to install timm.

Of course, you can also simply install it with the following command,

pip install timm

For more details, you can refer to TIMM official repository and TIMM official document.
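For reference, a timm backbone can directly expose the multi-scale feature maps that a segmentation head consumes. Below is a minimal standalone example, independent of SSSegmentation's config format,

import timm
import torch

# build a ResNet-18 backbone that returns the intermediate feature maps of each stage
backbone = timm.create_model('resnet18', pretrained=False, features_only=True)
feats = backbone(torch.randn(1, 3, 224, 224))
for feat in feats:
    print(feat.shape)  # one feature map per stage, with decreasing spatial resolution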

Step 6: Install Albumentations (Optional)

Albumentations is a Python library for fast and flexible image augmentations.

SSSegmentation provides support for importing data augmentation transforms from albumentations to train the segmentors. Thus, if you want to utilize this feature, you can follow the official instructions to install albumentations.

Of course, you can also simply install it with the following command,

pip install -U albumentations

For more details, you can refer to Albumentations official repository and Albumentations official document.
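For reference, albumentations applies the same spatial transform to an image and its segmentation mask, which is exactly what segmentation training requires. Below is a minimal standalone example with dummy data,

import albumentations as A
import numpy as np

transform = A.Compose([
    A.HorizontalFlip(p=0.5),
    A.RandomBrightnessContrast(p=0.2),
])
image = np.zeros((256, 256, 3), dtype=np.uint8)  # dummy image
mask = np.zeros((256, 256), dtype=np.uint8)      # dummy segmentation mask
augmented = transform(image=image, mask=mask)    # the same flip is applied to both
image, mask = augmented['image'], augmented['mask']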

Install SSSegmentation as Third-party Package

If you just want to use SSSegmentation as a dependency or third-party package, you can install it with pip as follows,

# from pypi
pip install SSSegmentation
# from Github repository
pip install git+https://github.com/CharlesPikachu/sssegmentation.git

Here, we assume that you have installed a suitable version of Python, PyTorch and other optional requirements (e.g., mmcv and timm) in your environment before importing SSSegmentation.
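A quick way to confirm that these requirements are available before importing SSSegmentation is a sketch like the following,

# check which (optional) requirements are available in the current environment
for name in ['torch', 'torchvision', 'mmcv', 'timm', 'albumentations']:
    try:
        __import__(name)
        print(name, 'is installed')
    except ImportError:
        print(name, 'is NOT installed')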

Prepare Datasets

In addition to installing SSSegmentation, you are also required to download the benchmark datasets before training the integrated segmentation frameworks.

Supported Dataset List

Here is a summary of the supported benchmark datasets and the corresponding download sources,

All of the datasets below can be downloaded from the same Baidu Disk link, i.e., https://pan.baidu.com/s/1TZbgxPnY0Als6LoiV80Xrw with access code fn1i. Alternatively, each dataset can be obtained from its official website, or downloaded and prepared automatically with the provided script,

  • VSPW: bash scripts/prepare_datasets.sh vspw

  • Supervisely: bash scripts/prepare_datasets.sh supervisely

  • Dark Zurich: bash scripts/prepare_datasets.sh darkzurich

  • Nighttime Driving: bash scripts/prepare_datasets.sh nighttimedriving

  • CIHP: bash scripts/prepare_datasets.sh cihp

  • COCOStuff10k: bash scripts/prepare_datasets.sh cocostuff10k

  • COCOStuff164k: bash scripts/prepare_datasets.sh coco

  • MHPv1: bash scripts/prepare_datasets.sh mhpv1

  • MHPv2: bash scripts/prepare_datasets.sh mhpv2

  • LIP: bash scripts/prepare_datasets.sh lip

  • ADE20k: bash scripts/prepare_datasets.sh ade20k

  • SBUShadow: bash scripts/prepare_datasets.sh sbushadow

  • CityScapes: bash scripts/prepare_datasets.sh cityscapes

  • ATR: bash scripts/prepare_datasets.sh atr

  • Pascal Context: bash scripts/prepare_datasets.sh pascalcontext

  • MS COCO: bash scripts/prepare_datasets.sh coco

  • HRF: bash scripts/prepare_datasets.sh hrf

  • CHASE DB1: bash scripts/prepare_datasets.sh chase_db1

  • PASCAL VOC: bash scripts/prepare_datasets.sh pascalvoc

  • DRIVE: bash scripts/prepare_datasets.sh drive

  • STARE: bash scripts/prepare_datasets.sh stare

For easier I/O, we generate train.txt/val.txt/test.txt to record the image ids of the training, validation and test images for each dataset. So, it is recommended to download the supported datasets with the provided script (i.e., scripts/prepare_datasets.sh) or from the provided network disk link, rather than from the official websites.
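For illustration, these split files are plain text and can be read directly; below is a minimal sketch, assuming one image id per line,

# read the image ids of the training split (assumes one image id per line)
with open('train.txt') as fp:
    imageids = [line.strip() for line in fp if line.strip()]
print(len(imageids), 'training images')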

Prepare Datasets with Provided Scripts

We strongly recommend using the provided script to prepare the datasets. Before using it, you need to install wget (for downloading the datasets) as well as 7z and tar (for unpacking the compressed packages) in your environment.

For example, for Linux users, you can run the following commands to install them,

# wget
apt-get install wget
# 7z (on Debian/Ubuntu, the p7zip-full package provides the 7z command)
apt-get install p7zip-full
# tar
apt-get install tar

Note that most Linux distributions ship with these three tools by default, in which case you do not need to install them again.

For Windows users, you can download and run the corresponding software installation packages to install them.

Besides, Windows users also need to install Cmder to execute the provided script.

After installing these prerequisites, you can use scripts/prepare_datasets.sh to prepare the supported benchmark datasets. Its usage is as follows,

------------------------------------------------------------------------------------
scripts/prepare_datasets.sh - prepare datasets for training and inference of SSSegmentation.
------------------------------------------------------------------------------------
Usage:
    bash scripts/prepare_datasets.sh <dataset name>
Options:
    <dataset name>: The dataset name you want to download and prepare.
                    The keyword should be in ['ade20k', 'lip', 'pascalcontext', 'cocostuff10k',
                                              'pascalvoc', 'cityscapes', 'atr', 'chase_db1',
                                              'cihp', 'hrf', 'drive', 'stare', 'nighttimedriving',
                                              'darkzurich', 'sbushadow', 'supervisely', 'vspw',
                                              'mhpv1', 'mhpv2', 'coco',]
    <-h> or <--help>: Show this message.
Examples:
    If you want to fetch ADE20k dataset, you can run 'bash scripts/prepare_datasets.sh ade20k'.
    If you want to fetch Cityscapes dataset, you can run 'bash scripts/prepare_datasets.sh cityscapes'.
------------------------------------------------------------------------------------

For example, if you want to train the segmentors on the ADE20k dataset, you can prepare it with the following command,

bash scripts/prepare_datasets.sh ade20k

If the terminal finally outputs "Download ade20k done.", the dataset has been downloaded and prepared successfully. Otherwise, you may need to check and fix your environment before re-executing the provided script.