segger Installation Guide
segger provides multiple installation options to suit your requirements. You can install it using:
- Virtual environments (recommended for most users)
- Containerized environments (Docker or Singularity)
- Editable mode from GitHub (for developers or users who want to modify the source code)
Recommendation
To avoid dependency conflicts, we recommend installing segger in a virtual environment or a container environment.
segger requires CUDA 11 or CUDA 12 for GPU acceleration.
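Before you start, a quick pre-flight check of the interpreter and the GPU driver can save time; the two commands below assume an NVIDIA GPU with drivers already installed.
# Confirm the Python version (3.10 or newer is required).
python3 --version
# Confirm the NVIDIA driver is visible and note the CUDA version it reports.
nvidia-smi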
Installation in Virtual Environment
Using venv
# Step 1: Create and activate the virtual environment.
python3.10 -m venv segger-venv
source segger-venv/bin/activate
# Step 2: Install segger with CUDA support (run from the root of the cloned segger_dev repository).
pip install --upgrade pip
pip install ".[cuda12]"
# Step 3: Verify the installation.
python --version
pip show segger
# Step 4 [Optional]: If your system doesn't have a system-wide CUDA toolkit, link CuPy to PyTorch's bundled CUDA runtime library.
export LD_LIBRARY_PATH=$(pwd)/segger-venv/lib/python3.10/site-packages/nvidia/cuda_nvrtc/lib:$LD_LIBRARY_PATH
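To confirm that GPU acceleration actually works inside the environment, a minimal check is shown below; it assumes the cuda12 extra pulled in PyTorch and CuPy.
# Should print True if PyTorch can reach the GPU.
python -c "import torch; print(torch.cuda.is_available())"
# Should print the CUDA runtime version CuPy is linked against (e.g. 12010 for 12.1).
python -c "import cupy; print(cupy.cuda.runtime.runtimeGetVersion())"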
Using conda
# Step 1: Create and activate the conda environment.
conda create -n segger-env python=3.10
conda activate segger-env
# Step 2: Install segger with CUDA support (run from the root of the cloned segger_dev repository).
pip install --upgrade pip
pip install ".[cuda12]"
# Step 3: Verify the installation.
python --version
pip show segger
# Step 4 [Optional]: If your system doesn't have a system-wide CUDA toolkit, link CuPy to PyTorch's bundled CUDA runtime library.
export LD_LIBRARY_PATH=$(conda info --base)/envs/segger-env/lib/python3.10/site-packages/nvidia/cuda_nvrtc/lib:$LD_LIBRARY_PATH
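The export above only lasts for the current shell session. One way to apply it automatically on every activation (assuming conda >= 4.8) is to store it as an environment variable of the conda env; note that this overwrites any existing LD_LIBRARY_PATH rather than appending to it.
# Persist the library path so it is set whenever segger-env is activated.
conda env config vars set LD_LIBRARY_PATH=$(conda info --base)/envs/segger-env/lib/python3.10/site-packages/nvidia/cuda_nvrtc/lib -n segger-env
# Re-activate the environment for the change to take effect.
conda deactivate && conda activate segger-env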
How to Choose Between [cuda11] and [cuda12]
- Check Your NVIDIA Driver Version: Run `nvidia-smi`. Use [cuda11] for driver version ≥ 450.80.02 or [cuda12] for driver version ≥ 525.60.13 (a scripted version of this check is sketched after this list).
- Check for a CUDA Toolkit: Run `nvcc --version`. If it outputs a CUDA version (11.x or 12.x), choose the corresponding [cuda11] or [cuda12] extra.
- Default to PyTorch CUDA Runtime: If no CUDA toolkit is installed, segger can use PyTorch's bundled CUDA runtime. You can link CuPy to it as shown in Step 4 of the venv/conda installation.
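If you prefer to script the driver check, nvidia-smi can report the driver version directly; the snippet below is a rough sketch that prints which extra matches, using the CUDA 12.x minimum as the cutoff.
# Read the driver version of the first GPU.
driver=$(nvidia-smi --query-gpu=driver_version --format=csv,noheader | head -n 1)
echo "Driver version: $driver"
# If the driver is at least 525.60.13, the cuda12 extra is safe to use.
if [ "$(printf '%s\n' 525.60.13 "$driver" | sort -V | head -n 1)" = "525.60.13" ]; then
    echo 'Install with: pip install ".[cuda12]"'
else
    echo 'Install with: pip install ".[cuda11]"'
fi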
Installation in Container Environment
Using docker
# Step 1: Pull the official Docker image.
docker pull danielunyi42/segger_dev:cuda121
# Step 2: Run the Docker container with GPU support.
docker run --gpus all -it danielunyi42/segger_dev:cuda121
The official Docker image comes with all dependencies pre-installed, including the CUDA toolkit, PyTorch, and CuPy. The current images support CUDA 11.8 and CUDA 12.1, which can be specified in the image tag.
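Before starting interactive work, you can verify that the container actually sees the GPU by running nvidia-smi in a throwaway container:
# The output should list your GPU; --rm removes the container afterwards.
docker run --gpus all --rm danielunyi42/segger_dev:cuda121 nvidia-smi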
Using singularity
# Step 1: Pull the official Docker image.
singularity pull docker://danielunyi42/segger_dev:cuda121
# Step 2: Run the Singularity container with GPU support.
singularity exec --nv segger_dev_cuda121.sif bash
The Singularity image is derived from the official Docker image and includes all pre-installed dependencies.
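A similar sanity check works for Singularity; the second command assumes the image's Python environment exposes the package under the import name segger.
# Check GPU visibility inside the image.
singularity exec --nv segger_dev_cuda121.sif nvidia-smi
# Check that segger is importable inside the image (import name assumed to be `segger`).
singularity exec --nv segger_dev_cuda121.sif python -c "import segger; print(segger.__file__)"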
Directory Mapping for Input and Output Data
Directory mapping allows:
- Access to input data (spatial transcriptomics datasets) from your local machine inside the container.
- Saving output data (segmentation results and logs) generated by segger to your local machine.
Setting up directory mapping is straightforward:
- For Docker:
  docker run --gpus all -it -v /path/to/local/data:/workspace/data danielunyi42/segger_dev:cuda121
- For Singularity:
  singularity exec --nv -B /path/to/local/data:/workspace/data segger_dev_cuda121.sif bash

- Place your input datasets in /path/to/local/data on your host machine.
- Inside the container, access these datasets from /workspace/data.
- Save results to /workspace/data, which will be available in /path/to/local/data on the host machine (a quick check of the mapping is sketched below).
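Once the container is running, you can confirm the mapping from inside it; the filename in the second command is arbitrary, and any file written under /workspace/data should immediately appear in /path/to/local/data on the host.
# Inside the container: the host datasets should be listed here.
ls /workspace/data
# Writes under /workspace/data propagate back to the host directory.
touch /workspace/data/segger_mount_test && echo "mount is writable"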
Editable GitHub Installation
For developers or users who want to modify the source code:
git clone https://github.com/EliHei2/segger_dev.git
cd segger_dev
pip install -e ".[cuda12]"
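To confirm that the editable install points at your clone rather than a copy in site-packages, print the package location (this assumes the package is importable as segger):
# The printed path should point inside the cloned segger_dev directory.
python -c "import segger; print(segger.__file__)"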
Common Installation Issues
- Python Version: Ensure you are using Python >= 3.10. Check your version by running `python --version`. If it is lower than 3.10, please upgrade Python.
- CUDA Compatibility (GPU): For GPU installations, verify that your system has the correct NVIDIA drivers installed. Run `nvidia-smi` and ensure that the displayed CUDA version is compatible with your selected [cuda11] or [cuda12] extra.
  - Minimum driver version for CUDA 11.x: 450.80.02
  - Minimum driver version for CUDA 12.x: 525.60.13
- Permissions: If you encounter permission errors during installation, use the --user flag to install the package without requiring administrative privileges: `pip install --user ".[cuda12]"`. Alternatively, consider using a virtual environment (venv or conda) to isolate the installation.
- Environment Configuration: Ensure that all required dependencies are installed in your environment; a combined diagnostic sketch follows this list.
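When debugging or reporting an installation problem, it helps to collect the basics in one place; the commands below are a minimal diagnostic sketch (the last line assumes a GPU install with one of the CUDA extras).
# Interpreter and package information.
python --version
pip show segger
# Driver and CUDA visibility.
nvidia-smi
# PyTorch version and whether it can reach a GPU.
python -c "import torch; print(torch.__version__, torch.cuda.is_available())"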