The loss function plays a key role in self-supervised monocular depth estimation. Current reprojection losses are hand-designed and focus mainly on local patch similarity, overlooking the global distribution differences between a synthesized image and the target image. In this paper, we leverage these global distribution differences by introducing an adversarial loss into the training stage of self-supervised depth estimation. Specifically, we formulate this task as a novel view synthesis problem: a depth estimation module and a pose estimation module together form a generator, and a discriminator is designed to learn the global distribution differences between real and synthesized images. With the learned global distribution differences, the adversarial loss can be back-propagated to the depth estimation module to improve its performance. Experiments on the KITTI dataset demonstrate the effectiveness of the adversarial loss; combined with the reprojection loss, it achieves state-of-the-art performance on KITTI.
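As a rough illustration of how the two loss terms described above might be combined, the following is a minimal NumPy sketch. It is not the paper's implementation: the discriminator here is a toy logistic function of mean image intensity, the reprojection term is plain L1 photometric error, and the weight `lam` is a hypothetical hyperparameter; a real system would use convolutional networks and a learned discriminator.

```python
import numpy as np

def reprojection_loss(synth, target):
    """Local photometric term: mean absolute (L1) difference
    between the synthesized view and the target view."""
    return float(np.mean(np.abs(synth - target)))

def discriminator(img, w, b):
    """Toy logistic discriminator on mean intensity (illustrative only).
    Returns a probability that the image is 'real'."""
    return 1.0 / (1.0 + np.exp(-(w * img.mean() + b)))

def adversarial_loss(synth, w, b):
    """Generator-side adversarial term: -log D(synthetic).
    Small when the discriminator is fooled into scoring the
    synthesized image as real."""
    return float(-np.log(discriminator(synth, w, b) + 1e-8))

def total_loss(synth, target, w, b, lam=0.01):
    """Combined objective: reprojection (local patch similarity)
    plus a weighted adversarial term (global distribution match).
    `lam` is a hypothetical balancing weight."""
    return reprojection_loss(synth, target) + lam * adversarial_loss(synth, w, b)
```

In training, the adversarial term's gradient would flow back through the view-synthesis step into the depth and pose modules, while the discriminator is updated separately to distinguish real from synthesized images.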