In this project, we implement a CycleGAN framework to improve the quality of CT images. The framework performs unpaired image-to-image translation, transforming noisy, low-dose CT images into their clean, high-dose counterparts. We compare generators trained under two regimes: CycleGAN losses and supervised losses. Our results demonstrate that CycleGAN effectively denoises quarter-dose images into clear full-dose images without the need for paired training data. We evaluate our implementation both qualitatively and quantitatively, using PSNR and SSIM on a test dataset.
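As a rough sketch of the quantitative evaluation, the snippet below scores one denoised slice against its full-dose reference with PSNR and SSIM via scikit-image; the function name evaluate_pair and the assumption that slices are NumPy arrays with a known data range are ours, not taken from the project code.

    import numpy as np
    from skimage.metrics import peak_signal_noise_ratio, structural_similarity

    def evaluate_pair(denoised: np.ndarray, full_dose: np.ndarray, data_range: float = 1.0):
        # Compare the denoised slice against the full-dose reference.
        psnr = peak_signal_noise_ratio(full_dose, denoised, data_range=data_range)
        ssim = structural_similarity(full_dose, denoised, data_range=data_range)
        return psnr, ssim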
Experiments were performed on an NVIDIA A6000 48 GB GPU. We established a controlled experimental environment by fixing the random seed to 42 to ensure reproducibility. Training was conducted over 120 epochs with a batch size of 16, and the learning rate was set to 2e-4 with optimizer betas of 0.5 and 0.999.
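A minimal sketch of how such a fixed seed is typically applied across Python, NumPy, and PyTorch follows; the project's actual seeding code may differ in detail.

    import random
    import numpy as np
    import torch

    def set_seed(seed: int = 42):
        # Fix every relevant random number generator so runs are reproducible.
        random.seed(seed)
        np.random.seed(seed)
        torch.manual_seed(seed)
        torch.cuda.manual_seed_all(seed)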
For our CycleGAN model, the generator and discriminator architectures were configured with channel multipliers of [1, 2, 4, 8] to progressively increase feature-extraction capacity at deeper levels. The base number of channels was set to 32 for the generator and 64 for the discriminator. Each model incorporated three residual blocks to enhance learning capacity without a significant increase in computational cost.
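To make the architecture description concrete, here is an illustrative PyTorch sketch of a residual block and of the channel progression implied by a base width of 32 with multipliers [1, 2, 4, 8]; the normalization and activation choices are assumptions, not the project's exact modules.

    import torch.nn as nn

    class ResidualBlock(nn.Module):
        # Two conv-norm layers with a skip connection (illustrative).
        def __init__(self, channels: int):
            super().__init__()
            self.block = nn.Sequential(
                nn.Conv2d(channels, channels, kernel_size=3, padding=1),
                nn.InstanceNorm2d(channels),
                nn.ReLU(inplace=True),
                nn.Conv2d(channels, channels, kernel_size=3, padding=1),
                nn.InstanceNorm2d(channels),
            )

        def forward(self, x):
            return x + self.block(x)

    # Base width 32 with multipliers [1, 2, 4, 8] gives 32, 64, 128, 256 channels.
    generator_widths = [32 * m for m in (1, 2, 4, 8)]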
Two crucial hyperparameters, the cycle-consistency weight lambda_cycle and the identity-loss weight lambda_iden, were set to 10 and 5, respectively, to balance the cycle-consistency and identity terms against the adversarial loss.
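The sketch below shows how these weights typically enter the generator objective in a CycleGAN setup: an adversarial term plus lambda_cycle times an L1 cycle-consistency term and lambda_iden times an L1 identity term; the least-squares adversarial loss and the function signature are illustrative assumptions, not the project's exact implementation.

    import torch
    import torch.nn as nn

    adv_criterion = nn.MSELoss()  # least-squares GAN loss (assumed)
    l1 = nn.L1Loss()
    lambda_cycle, lambda_iden = 10.0, 5.0

    def generator_loss(d_fake_pred, real, reconstructed, identity_out):
        # Weighted sum of adversarial, cycle-consistency, and identity terms.
        adv = adv_criterion(d_fake_pred, torch.ones_like(d_fake_pred))
        cycle = l1(reconstructed, real)
        iden = l1(identity_out, real)
        return adv + lambda_cycle * cycle + lambda_iden * iden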
The AAPM data used in this project can be obtained from the provided drive link; all input data was accessed and processed consistently across the experimental runs.
Arguments for train.py:
    --path_checkpoint PATH_CHECKPOINT
                        checkpoint path when using pre-trained model (default: ./CT_denoising)
    --model_name MODEL_NAME
                        model name (default: cyclegan_v1)
    --path_data PATH_DATA
                        path to data (default: ./AAPM_data)
    --batch_size BATCH_SIZE
                        batch size (default: 16)
    --lambda_cycle LAMBDA_CYCLE
                        lambda cycle (default: 10)
    --lambda_iden LAMBDA_IDEN
                        lambda identity (default: 5)  
    --beta1 BETA1         beta1 (default: 0.5)
    --beta2 BETA2         beta2 (default: 0.999)
    --num_epoch NUM_EPOCH
                        number of epochs (default: 120)
    --g_channels G_CHANNELS
                        number of channels in the generator (default: 32)
    --d_channels D_CHANNELS
                        number of channels in the discriminator (default: 64)
    --ch_mult CH_MULT [CH_MULT ...]
                        channel multipliers (default: [1, 2, 4, 8])
    --num_res_blocks NUM_RES_BLOCKS
                        number of residual blocks (default: 3)  
    --lr LR              learning rate (default: 2e-4)
    --seed SEED          random seed (default: 42)
    --use_checkpoint      use pre-trained checkpoint
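For reference, an argparse definition consistent with the defaults listed above might look like the following; this is a sketch inferred from the help text, not necessarily the exact code in train.py.

    import argparse

    parser = argparse.ArgumentParser()
    parser.add_argument('--path_checkpoint', default='./CT_denoising')
    parser.add_argument('--model_name', default='cyclegan_v1')
    parser.add_argument('--path_data', default='./AAPM_data')
    parser.add_argument('--batch_size', type=int, default=16)
    parser.add_argument('--lambda_cycle', type=float, default=10)
    parser.add_argument('--lambda_iden', type=float, default=5)
    parser.add_argument('--beta1', type=float, default=0.5)
    parser.add_argument('--beta2', type=float, default=0.999)
    parser.add_argument('--num_epoch', type=int, default=120)
    parser.add_argument('--g_channels', type=int, default=32)
    parser.add_argument('--d_channels', type=int, default=64)
    parser.add_argument('--ch_mult', type=int, nargs='+', default=[1, 2, 4, 8])
    parser.add_argument('--num_res_blocks', type=int, default=3)
    parser.add_argument('--lr', type=float, default=2e-4)
    parser.add_argument('--seed', type=int, default=42)
    parser.add_argument('--use_checkpoint', action='store_true')
    args = parser.parse_args()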
To train the model, run:

    python train.py --path_data {DATASET_PATH}

Below is an example of training the model with the provided dataset.

    python train.py --path_data ../data/AAPM_data

Arguments for test.py:
    --path_checkpoint PATH_CHECKPOINT
                        checkpoint path when using pre-trained model (default: ./CT_denoising)
    --model_name MODEL_NAME
                        model name (default: cyclegan_v1)
    --path_data PATH_DATA
                        path to data (default: ./AAPM_data)
    --g_channels G_CHANNELS
                        number of channels in the generator (default: 32)
    --ch_mult CH_MULT [CH_MULT ...]
                        channel multipliers (default: [1, 2, 4, 8])
    --num_res_blocks NUM_RES_BLOCKS
                        number of residual blocks (default: 3)
    --seed SEED          random seed (default: 42)
    --num_visualize NUM_VISUALIZE
                        number of images to visualize (default: 6)
To test the model, run:

    python test.py --path_data {DATASET_PATH} --use_checkpoint

Below is an example of testing the model with the provided dataset.

    python test.py --path_data ../data/AAPM_data --use_checkpoint

This project was conducted as part of the KAIST AI618 course.





