Groundwater management and allocation planning requires rigorous assessment of how operational decisions, such as extraction and injection rates, affect community and environmental objectives. Maximizing performance through numerical optimization can be essential for high-value resources, but it is often computationally infeasible due to long simulation model run times combined with nonconvex objectives and constraints. To mitigate these drawbacks, surrogate models can be used in place of complex models during the optimization process. A number of machine learning techniques can be used to develop a data-driven surrogate model. However, the curse of dimensionality, common in groundwater management, limits the use of these techniques because of the large training data sets they require. Although it is now possible to handle large data sets, generating them remains computationally prohibitive, as producing an accurate surrogate requires numerous simulations. In this study, we integrate a dimensionality reduction method based on truncated singular value decomposition to reduce the number of decision variables, and thereby the size of the training data set needed. We also demonstrate a simple technique for acquiring an approximate minimax Latin hypercube design from within the reduced subspace, and implement a novel technique for adaptive resampling through particle swarm optimization to maintain the accuracy of the surrogate model throughout the optimization process. The resulting accurate surrogate model of the Perth regional aquifer system of Western Australia runs in a matter of seconds. Adopting this approach can produce timely solutions, making formal optimization tractable for practitioners.
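As a rough illustration of the dimensionality-reduction and sampling steps described above, the sketch below (assuming NumPy; all function names, parameters, and the candidate-screening strategy are illustrative assumptions, not the study's implementation) extracts a truncated-SVD subspace from a matrix of candidate decision vectors and selects an approximately minimax Latin hypercube design in that subspace by screening random candidate designs against a cloud of probe points:

```python
import numpy as np

def truncated_svd_basis(X, k):
    """Top-k right singular vectors of the centered decision-variable
    matrix X (n_samples x n_vars); rows of the result span the reduced
    k-dimensional subspace."""
    _, _, Vt = np.linalg.svd(X - X.mean(axis=0), full_matrices=False)
    return Vt[:k]

def latin_hypercube(n, k, rng):
    """One random Latin hypercube design of n points in [0, 1]^k:
    each dimension places exactly one point in each of n equal strata."""
    pts = (np.arange(n)[:, None] + rng.random((n, k))) / n
    for j in range(k):
        pts[:, j] = pts[rng.permutation(n), j]
    return pts

def approx_minimax_lhs(n, k, n_candidates=50, n_probe=2000, seed=0):
    """Screen random LHS candidates and keep the one minimizing the
    maximum distance from any probe point to its nearest design point --
    a simple Monte Carlo approximation of the minimax criterion."""
    rng = np.random.default_rng(seed)
    probes = rng.random((n_probe, k))
    best, best_cost = None, np.inf
    for _ in range(n_candidates):
        design = latin_hypercube(n, k, rng)
        # Distance of every probe point to its nearest design point.
        d = np.sqrt(((probes[:, None, :] - design[None, :, :]) ** 2).sum(-1))
        cost = d.min(axis=1).max()
        if cost < best_cost:
            best, best_cost = design, cost
    return best
```

To generate training runs for the simulation model, each unit-cube sample would be rescaled to the bounds of the reduced coordinates and mapped back to the full decision space via the subspace basis (e.g., `X.mean(axis=0) + z_scaled @ truncated_svd_basis(X, k)`); the details of that mapping depend on the problem's variable bounds.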