Evaluating and Optimizing Training and Inference Performance Between Variations of UNet Models for Nuclei Detection and Segmentation

This is the repo for the final project of MBP1413 Winter 2024, by Richard, Sylvia, and Chris.

Description

In this project, we compared the performance of two UNet variants (UNet and UNETR) for general nuclei segmentation across various imaging modalities, and tried to optimize model performance through data processing methods and hyperparameter tuning.
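
Segmentation performance in tasks like this is typically scored with an overlap metric such as the Dice coefficient. The helper below is a minimal illustrative sketch (not code from this repo) showing how Dice is computed for binary masks given as flat 0/1 lists:

```python
def dice_score(pred, target):
    """Dice = 2*|A intersect B| / (|A| + |B|) for binary masks as flat 0/1 lists."""
    intersection = sum(p * t for p, t in zip(pred, target))
    total = sum(pred) + sum(target)
    if total == 0:
        return 1.0  # both masks empty: treat as perfect overlap
    return 2.0 * intersection / total

# Toy example: one overlapping foreground pixel out of 2 + 1 foreground pixels
print(dice_score([1, 1, 0, 0], [1, 0, 0, 0]))  # prints 0.6666666666666666
```

A Dice score of 1.0 means the predicted and ground-truth nuclei masks overlap perfectly; 0.0 means no overlap.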

For details of the findings, please refer to the report (final-report.pdf) and the presentation (final-presentation.pdf) in the /docs folder. The code used to generate the results in the report and presentation is also found in this repo.

Data Availability

The data used to train the models, as well as the unprocessed results of this project (such as loss curves and model validation results), can be found at this Google Drive.

Environment Installation (tested on Ubuntu 22.04)

Prerequisites

For Mac Users

brew install graphviz

For Debian System Users (Ubuntu, etc.)

sudo apt install graphviz

For Redhat System Users (CentOS, Fedora, etc.)

sudo yum install graphviz

Conda Environment Setup

git clone https://github.com/YinniKun/mbp1413-final.git
cd mbp1413-final
conda env create -f environment.yml
conda activate monai

Usage

To run locally, use:

python main.py
-c /path/to/config/yaml/file
-m train # mode, can be train or test (default: train)
-d # flag for downloading the dataset; no action if omitted
-r # flag for resuming the training process; no action if omitted
-e 200 # number of epochs (default: 200)
-l 0.005 # learning rate (default: 0.005)
-mo unetr # model to be trained/tested, can be unet or unetr (default: unet)
-sch # flag for using the learning rate scheduler
-opt SGD # optimizer, can be SGD or Adam (default: Adam)
-sa # flag for saving the model architecture plot

Detailed usage information can be shown with -h.
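
The flags above could be wired up with Python's standard argparse roughly as follows. This is a hedged sketch of the interface, not the actual main.py; the real option names and defaults may differ:

```python
import argparse

# Sketch of the CLI described above (illustrative only, not the repo's main.py)
parser = argparse.ArgumentParser(
    description="Train or test UNet/UNETR models for nuclei segmentation"
)
parser.add_argument("-c", "--config", help="path to the config YAML file")
parser.add_argument("-m", "--mode", choices=["train", "test"], default="train")
parser.add_argument("-d", "--download", action="store_true", help="download the dataset")
parser.add_argument("-r", "--resume", action="store_true", help="resume training")
parser.add_argument("-e", "--epochs", type=int, default=200)
parser.add_argument("-l", "--lr", type=float, default=0.005)
parser.add_argument("-mo", "--model", choices=["unet", "unetr"], default="unet")
parser.add_argument("-sch", "--scheduler", action="store_true", help="use lr scheduler")
parser.add_argument("-opt", "--optimizer", choices=["Adam", "SGD"], default="Adam")
parser.add_argument("-sa", "--save_arch", action="store_true", help="save architecture plot")

# Simulate a command line instead of reading sys.argv, for demonstration
args = parser.parse_args(["-m", "train", "-e", "100", "-mo", "unetr"])
print(args.mode, args.epochs, args.model)  # prints: train 100 unetr
```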
To run on Compute Canada, use:

sbatch run.sh
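
run.sh is a SLURM batch script. A minimal sketch of what such a script might contain is shown below; the time, memory, GPU, and account values are placeholders, not the repo's actual settings:

```shell
#!/bin/bash
#SBATCH --time=12:00:00      # wall-clock limit (placeholder)
#SBATCH --gres=gpu:1         # request one GPU
#SBATCH --mem=32G            # memory request (placeholder)
#SBATCH --account=def-xxxx   # replace with your Compute Canada allocation

# Activate the conda environment created from environment.yml
source activate monai

python main.py -c config.yaml -m train
```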
