Deep learning has achieved unprecedented performance in object recognition and scene understanding. However, deep models have also been found vulnerable to adversarial attacks. Of particular relevance to robotic systems are pixel-level attacks that can completely fool a neural network by altering very few pixels (e.g., 1-5) in an image. We present the first technique to detect the presence of adversarial pixels in images for robotic systems, employing an Adversarial Detection Network (ADNet). The proposed network efficiently recognizes an input as adversarial or clean by discriminating the peculiar activation signals of adversarial samples from those of clean ones. It acts as a defense mechanism for a robotic vision system by detecting and rejecting adversarial samples. We thoroughly evaluate our technique on three benchmark datasets: CIFAR-10, CIFAR-100, and Fashion-MNIST. Results demonstrate that ADNet effectively detects adversarial samples.
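The core idea above, training a detector to separate the activation signals of adversarial inputs from those of clean ones, can be illustrated with a minimal sketch. This is not the authors' ADNet architecture; it assumes simulated activation vectors (a real detector would consume a deep network's feature maps) and uses a simple logistic-regression detector as a stand-in for the detection network.

```python
import numpy as np

# Hedged sketch of the detection-by-activations idea, NOT the paper's ADNet.
# We simulate activation vectors: clean samples cluster around one activation
# pattern, while (hypothetically) adversarial samples show a shifted,
# higher-variance pattern. A binary detector is trained to tell them apart.
rng = np.random.default_rng(0)

clean = rng.normal(0.0, 1.0, size=(500, 32))   # simulated clean activations
adv = rng.normal(0.8, 1.5, size=(500, 32))     # simulated adversarial activations
X = np.vstack([clean, adv])
y = np.concatenate([np.zeros(500), np.ones(500)])  # label 1 = adversarial

# Logistic-regression detector trained by batch gradient descent.
w = np.zeros(32)
b = 0.0
lr = 0.5
for _ in range(300):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))     # sigmoid probabilities
    w -= lr * X.T @ (p - y) / len(y)           # gradient of logistic loss
    b -= lr * np.mean(p - y)

# Defense rule: flag (and reject) an input when the detector scores it
# as adversarial with probability above 0.5.
scores = 1.0 / (1.0 + np.exp(-(X @ w + b)))
pred = (scores > 0.5).astype(float)
accuracy = np.mean(pred == y)
print(f"detector accuracy on simulated activations: {accuracy:.2f}")
```

In this toy setting the detector separates the two activation distributions easily; the paper's contribution is showing that such a separation exists and is learnable for real pixel-level attacks on deep classifiers.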