AI Security Evaluator
This project tests and validates how resilient a given AI model is against adversarial attack algorithms.

Upload your neural network model as a Docker image together with the command to launch it, or try the system on the pre-uploaded image classification models.
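As a rough illustration of such a packaged upload (the base image, file names, serving script, and launch command below are placeholders, not the service's actual requirements), a submitted model might look like:

```dockerfile
# Hypothetical Dockerfile packaging a Python image classifier.
FROM python:3.11-slim
WORKDIR /app

# Install the model's dependencies.
COPY requirements.txt .
RUN pip install -r requirements.txt

# Copy the trained model weights and the serving script.
COPY model.pt serve.py ./

# The launch command submitted alongside the image would then be, e.g.:
#   python serve.py --model model.pt
CMD ["python", "serve.py", "--model", "model.pt"]
```

The launch command you provide should match the entry point baked into the image, so the evaluator can start the model the same way you would locally.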

Start Test
To start your test, please select the attack algorithms to apply and choose a classification model.
Select adversarial attack algorithms to apply:
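One widely used example of such an algorithm is the Fast Gradient Sign Method (FGSM), which nudges each input feature in the direction that most increases the model's loss. The sketch below is only illustrative (it uses a toy logistic-regression "model" in NumPy rather than an uploaded network, and all names are hypothetical):

```python
import numpy as np

def loss(x, w, b, y):
    """Binary cross-entropy loss of a toy logistic-regression model."""
    z = x @ w + b                     # logit
    p = 1.0 / (1.0 + np.exp(-z))     # sigmoid probability
    return -(y * np.log(p) + (1 - y) * np.log(1 - p))

def fgsm_perturb(x, w, b, y, eps):
    """FGSM: move x by eps in the sign of the input gradient of the loss."""
    z = x @ w + b
    p = 1.0 / (1.0 + np.exp(-z))
    grad_x = (p - y) * w             # d(loss)/dx for this model
    return x + eps * np.sign(grad_x)

# Toy data: a random linear classifier and one input labeled y = 1.
rng = np.random.default_rng(0)
w = rng.normal(size=4)
b = 0.1
x = rng.normal(size=4)
y = 1.0

x_adv = fgsm_perturb(x, w, b, y, eps=0.1)
print(loss(x_adv, w, b, y) > loss(x, w, b, y))  # → True: the attack raises the loss
```

For a real network the input gradient would come from backpropagation rather than this closed-form expression, but the perturbation rule is the same.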

Below are the image classification neural networks that are already uploaded and deployed; please select one of them to test. If the model you would like to test is not in the list, choose "None" and upload your own.
Model upload:
The Docker image and its launch command must follow the requirements described in the instructions.