Satellite images have become more easily accessible owing to government programmes and the availability of commercial Earth observation satellites. The increased volume of data and recent advances in deep learning have enabled fast and accurate methods for satellite image segmentation, which aims to automatically assign each pixel in a satellite image a label denoting the underlying object or ground type. However, despite the abundant supply of raw satellite images, labelling them is a manual task and is therefore time- and labour-intensive. In the absence of a large labelled dataset, training on a small one significantly hampers the performance of a machine learning model. An approach that mitigates the effects of small labelled datasets should therefore be investigated. This paper addresses the problem of insufficient data samples for satellite image segmentation. To this end, two methods are developed and compared against the benchmark U-Net model: transfer learning on an Xception architecture, and the use of steerable filters. The first model transfers knowledge from a pre-trained Xception model and predicts with a multi-resolution feature fusion module specifically designed to recover fine details. The second further improves the network's ability to handle data with high rotational variation. Both models are experimentally shown to significantly enhance segmentation performance despite the small size of the training dataset.