Publications

  1. Multiview Robust Adversarial Stickers for Arbitrary Objects in the Physical World
    S Oslund, C Washington, A So, T Chen, H Ji
    Journal of Computational and Cognitive Engineering 1 (4), 152-158, 2022

    Summary: Among adversarial attacks on deep learning models for image classification, physical attacks are considered easier to implement because they do not assume access to victims’ devices.

  2. On Multiview Robustness of 3D Adversarial Attacks
    P Yao, A So, T Chen, H Ji
    Practice and Experience in Advanced Research Computing, 372-378, 2020

    Summary: Deep neural networks are now applied widely across computer vision, including medical diagnosis and self-driving cars. However, they are threatened by adversarial examples, in which image pixels are perturbed in ways that are unnoticeable to humans yet sufficient to fool the networks.
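
    A minimal illustrative sketch of a pixel-level adversarial perturbation (an FGSM-style step in PyTorch); it conveys the general idea only and is not the multiview 3D attack studied in this paper. The names model, image, and label are assumed placeholders.

      import torch
      import torch.nn.functional as F

      def fgsm_perturb(model, image, label, eps=2/255):
          # Perturb `image` (shape [1, C, H, W], values in [0, 1]) so that the
          # loss w.r.t. the true `label` increases, keeping the change small.
          image = image.clone().detach().requires_grad_(True)
          loss = F.cross_entropy(model(image), label)
          loss.backward()
          adv = image + eps * image.grad.sign()  # step along the gradient sign
          return adv.clamp(0.0, 1.0).detach()    # keep pixels in a valid range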

  3. Privacy preserving inference with convolutional neural network ensemble
    A Xiong, M Nguyen, A So, T Chen
    2020 IEEE 39th International Performance Computing and Communications Conference (IPCCC), 2020

    Summary: Machine Learning as a Service on the cloud not only provides a way to scale demanding workloads, but also gives broader access to trained deep neural networks.

  4. Multiview-Robust 3D Adversarial Examples of Real-world Objects
    P Yao, A So, T Chen, H Ji
    CVPR 2020 Workshop

    Summary: We implement a method for robust 3D adversarial attacks that accounts for the different viewpoints at which the victim camera can be placed. In particular, we find a method to create 3D adversarial examples that achieve a 100% attack success rate from all viewpoints with integer spherical coordinates; a sketch of such a viewpoint sweep follows.
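
    A minimal illustrative sketch of the viewpoint sweep described above: it checks attack success at every integer spherical coordinate of the camera. The renderer render_view is a hypothetical placeholder, and this is not the authors' implementation.

      import itertools
      import torch

      def multiview_success_rate(model, adv_mesh, true_label, dist=2.0):
          # Render the adversarial object from every integer (elevation, azimuth)
          # camera angle and measure how often the classifier is fooled.
          hits, total = 0, 0
          with torch.no_grad():
              for elev, azim in itertools.product(range(-89, 90), range(360)):
                  img = render_view(adv_mesh, elev_deg=elev, azim_deg=azim, dist=dist)  # hypothetical renderer
                  pred = model(img.unsqueeze(0)).argmax(dim=1).item()
                  hits += int(pred != true_label)  # success when the prediction is wrong
                  total += 1
          return hits / total  # 1.0 corresponds to a 100% multiview attack success rate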