[This article isn’t complete, but you can still get a brief overview of the project.]
Again, for the lazy people, here’s the devpost
Let me cite https://www.mayoclinic.org really quick:
“Melanoma, the most serious type of skin cancer, develops in the cells (melanocytes) that produce melanin — the pigment that gives your skin its color. Melanoma can also form in your eyes and, rarely, inside your body, such as in your nose or throat.”
Yeah, this is no good, especially knowing that a melanoma can look exactly like a benign mole yet be fatal. That’s why many people don’t bother going to the doctor until it’s too late and the cancer has grown large enough to send them to their grave.
Melanomore is an Android app that lets you take pictures with your camera and send them directly to our machine learning model (more on that later), which predicts whether the mole is benign or malignant.
On the frontend, we have a full Android app that has access to your camera and auto-focuses on the mole.
The app then fetches the output from our model and displays the results.
The backend is simple too: it listens for requests, preprocesses the images before passing them into the model, and sends back the softmax of the results, which can be interpreted as the model’s confidence percentage (a 51% score doesn’t tell you much, while 97% is quite sure).
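To make that confidence interpretation concrete, here’s a minimal sketch of how softmax turns raw model outputs (logits) into probabilities that sum to 1; the logit values below are made up for illustration:

```python
import math

def softmax(logits):
    # Subtract the max for numerical stability before exponentiating.
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical two-class output: [benign, malignant]
probs = softmax([0.1, 3.5])
print([round(p, 3) for p in probs])  # → [0.032, 0.968], i.e. ~97% confident it's malignant
```

A near-even split like [0.49, 0.51] is essentially the model shrugging, which is why only high softmax scores are worth acting on.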
Here’s the part I participated in. Since we were at a hackathon, we went for a tried-and-true CNN classification model, with depthwise convolutions and MaxPooling layers.
At the end, we have an AveragePooling layer and a single probabilistic linear layer.
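The architecture described above can be sketched in Keras (the article doesn’t name the framework, and the input size, filter counts, and number of blocks here are illustrative guesses, not our exact hackathon model):

```python
from tensorflow import keras
from tensorflow.keras import layers

# Illustrative sketch: depthwise conv + MaxPooling blocks,
# then average pooling and one softmax ("probabilistic linear") layer.
model = keras.Sequential([
    layers.Input(shape=(224, 224, 3)),
    layers.DepthwiseConv2D(3, padding="same", activation="relu"),
    layers.Conv2D(32, 1, activation="relu"),  # pointwise projection
    layers.MaxPooling2D(2),
    layers.DepthwiseConv2D(3, padding="same", activation="relu"),
    layers.Conv2D(64, 1, activation="relu"),
    layers.MaxPooling2D(2),
    layers.GlobalAveragePooling2D(),
    layers.Dense(2, activation="softmax"),  # [benign, malignant]
])
```

Depthwise convolutions keep the parameter count low compared with standard convolutions, which matters when you’re training on hackathon time and serving predictions from a small backend.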