The dataset that I used was from the Iranian researcher Mohammad Rahimzadeh. From the dataset that he shared, I took 1000 images: 500 Covid-19 infected images and 500 normal images.
The subset that I used is shared in this folder: https://drive.google.com/drive/folders/1sqKQh_Kbi7h8ao-u0TFmY5DST4lga36F?usp=sharing
The whole dataset can be seen here: https://github.com/mr7495/COVID-CTset
The raw images are 16-bit grayscale images in TIFF format, so normal monitors cannot display them clearly. According to Mohammad Rahimzadeh's instructions, each image must first be normalized by converting it to float and dividing every pixel value by the maximum pixel value of that image. After this normalization the images have 32-bit float pixel values and can be viewed on normal monitors.
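As an illustration, a minimal sketch of this normalization step could look like the following (reading the TIFF with the `tifffile` package is my assumption; the example path is hypothetical):

```python
import numpy as np
import tifffile

def normalize_ct(path):
    """Read a 16-bit TIFF CT slice and scale it to 32-bit floats in [0, 1]."""
    img = tifffile.imread(path)        # 16-bit grayscale array
    img = img.astype(np.float32)       # convert to float
    img /= img.max()                   # divide by that image's maximum pixel value
    return img

# Example (hypothetical file name):
# slice_01 = normalize_ct("COVID-CTset/covid/patient1_slice01.tif")
```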
After normalizing the images, I applied two preprocessing methods: a Gaussian filter and CLAHE. The Gaussian filter reduces the noise in the images and also helps with detecting their edges, while CLAHE improves the visibility level of the images.
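A sketch of these two preprocessing steps with OpenCV is shown below. The kernel size, clip limit, and tile grid size are assumed example values, not taken from the original project, and the normalized float image is rescaled to 8-bit before CLAHE because OpenCV's CLAHE expects 8-bit or 16-bit input:

```python
import cv2
import numpy as np

def preprocess(img_float):
    """Apply a Gaussian filter and CLAHE to a normalized float image in [0, 1]."""
    # Gaussian blur to reduce noise (5x5 kernel is an assumed choice)
    blurred = cv2.GaussianBlur(img_float, (5, 5), 0)

    # Rescale to 8-bit because OpenCV's CLAHE works on 8-bit/16-bit images
    img_8bit = np.uint8(np.clip(blurred * 255.0, 0, 255))
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
    enhanced = clahe.apply(img_8bit)

    # Scale back to [0, 1] floats for the network input
    return enhanced.astype(np.float32) / 255.0
```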
In this project, I used the AlexNet architecture with 5 optimizers: SGD, Adadelta, RMSprop, Adam, and AdaMax. The learning rates that were used are 0.1, 0.001, and 0.0001. After the training process, I analyzed and compared the accuracy, loss, and precision.
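The sketch below shows one way these optimizer/learning-rate combinations could be compiled in Keras. It is only an illustration: `build_alexnet()` is a hypothetical helper standing in for the AlexNet definition used in this project, and the loss choice is an assumption for a two-class (Covid-19 vs. normal) problem:

```python
import tensorflow as tf

# Hypothetical helper: build_alexnet() is assumed to return the AlexNet model used here.
def compile_model(model, optimizer_name, lr):
    optimizers = {
        "SGD": tf.keras.optimizers.SGD,
        "Adadelta": tf.keras.optimizers.Adadelta,
        "RMSprop": tf.keras.optimizers.RMSprop,
        "Adam": tf.keras.optimizers.Adam,
        "AdaMax": tf.keras.optimizers.Adamax,
    }
    model.compile(
        optimizer=optimizers[optimizer_name](learning_rate=lr),
        loss="binary_crossentropy",                        # Covid-19 vs. normal
        metrics=["accuracy", tf.keras.metrics.Precision()],
    )
    return model

# Example: train every optimizer/learning-rate combination
# for name in ["SGD", "Adadelta", "RMSprop", "Adam", "AdaMax"]:
#     for lr in [0.1, 0.001, 0.0001]:
#         model = compile_model(build_alexnet(), name, lr)
#         model.fit(train_data, validation_data=val_data, epochs=...)
```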
The results of the project can be seen here: https://drive.google.com/drive/folders/1QoH7QBv56g0BoOm3jCDRbkFBTzQkwcEJ?usp=sharing
I also set a checkpoint for each training run so that the best weights could be saved. I then loaded the best weights into the application prototype that I built, so the application can detect whether a CT-Scan image is Covid-19 or normal.
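A minimal sketch of this checkpointing and reloading step in Keras is shown below; the weights file name and the monitored metric are assumptions, not taken from the original project:

```python
import tensorflow as tf

# Keep only the best weights seen during training (file name and metric are assumed)
checkpoint = tf.keras.callbacks.ModelCheckpoint(
    "best_weights.h5",
    monitor="val_accuracy",
    save_best_only=True,
    save_weights_only=True,
)
# model.fit(train_data, validation_data=val_data, epochs=..., callbacks=[checkpoint])

# Later, inside the application prototype, reload the best weights and classify a slice:
# model.load_weights("best_weights.h5")
# prob = model.predict(image[np.newaxis, ..., np.newaxis])[0, 0]
# label = "Covid-19" if prob >= 0.5 else "Normal"
```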
If you have any questions, contact me at this email: [email protected]