Extending cone-beam CT (CBCT) use toward dose accumulation and adaptive radiotherapy (ART) necessitates more accurate HU reproduction, since cone-beam geometries are heavily degraded by photon scatter. This study proposes a novel method and demonstrates that deep learning trained on phantom data can effectively correct CBCT intensities in patient images. Four anthropomorphic phantoms were scanned on both a CBCT system and a conventional fan-beam CT system. Intensity correction was performed by estimating the cone-beam intensity deviations from prior information contained in the CT. Residual projections were extracted by subtracting raw cone-beam projections from virtual CT projections. An improved version of U-net was trained on a total of 2001 projection pairs. Once trained, the network could estimate intensity deviations from input patient head-and-neck raw projections. Corrected CBCT images improved the contrast-to-noise ratio over uncorrected reconstructions by a factor of 2.08. The mean absolute error improved from 318 HU to 74 HU, and the structural similarity index from 0.750 to 0.812. Visual assessment based on line-profile measurements and difference-image analysis indicated that the proposed method reduced noise and beam-hardening artefacts compared to uncorrected and manufacturer reconstructions. Projection-domain intensity correction for cone-beam acquisitions of patients was shown to be feasible using a convolutional neural network trained on phantom data. The method shows promise for further improvements, which may eventually facilitate dose monitoring and ART in the clinical radiotherapy workflow.
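The residual-projection scheme described above can be sketched as follows. This is a minimal illustration only, assuming the residual is defined as the virtual CT projection minus the raw cone-beam projection; the function names, array shapes, and stand-in values are hypothetical, and the trained U-net is abstracted away as a predicted residual.

```python
import numpy as np

def residual_projection(virtual_ct_proj: np.ndarray,
                        raw_cbct_proj: np.ndarray) -> np.ndarray:
    """Training target: deviation of the scatter-degraded raw cone-beam
    projection from the virtual CT projection (hypothetical definition
    matching the subtraction described in the abstract)."""
    return virtual_ct_proj - raw_cbct_proj

def correct_projection(raw_cbct_proj: np.ndarray,
                       predicted_residual: np.ndarray) -> np.ndarray:
    """At inference time, add the network-estimated residual back onto
    the raw projection before tomographic reconstruction."""
    return raw_cbct_proj + predicted_residual

# Toy example with constant projections: if the network recovered the
# residual exactly, the corrected projection would match the virtual
# CT projection.
virtual = np.full((4, 4), 2.0)   # stand-in virtual CT projection
raw = np.full((4, 4), 1.5)       # stand-in raw, scatter-degraded projection
target = residual_projection(virtual, raw)
corrected = correct_projection(raw, target)
assert np.allclose(corrected, virtual)
```

In practice the predicted residual would come from the trained network rather than the exact subtraction, so the corrected projection only approximates the virtual CT projection.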