In acoustic event detection, the amount of training data for some acoustic event classes is often small and imbalanced. To address this, this paper proposes to generate virtual training data per category using an auxiliary classifier generative adversarial network (AC-GAN). Soft labels of acoustic events are first calculated to encode acoustic event localization information: the closer a frame is to the middle of a manually labeled acoustic event, the higher its soft label, so the soft labels are positively correlated with the event's temporal position. The acoustic event class and the quantized soft labels are then used as the conditioning input to the AC-GAN, which can generate an arbitrary number of training samples. Experimental results on TUT Sound Event 2016 (home environment) and TUT Sound Event 2017 (street environment) demonstrate improved performance of the proposed technique compared to existing acoustic event detection systems.
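As a minimal sketch of the soft-labeling idea described above (an illustration under our own assumptions, not the paper's exact formula): each frame inside an annotated event receives a label that rises linearly toward 1 at the event's midpoint, and the labels are then quantized into a small number of levels so they can serve, together with the event class, as a discrete GAN condition. The function names, the triangular shape, and the number of quantization levels are all hypothetical choices for illustration.

```python
# Illustrative sketch (not from the paper): triangular per-frame soft labels
# for an annotated event spanning frames [onset, offset), peaking near the
# event's midpoint, then quantized into discrete levels for conditioning.

def soft_labels(onset, offset, n_frames):
    """Return per-frame soft labels in [0, 1]: 0 outside the event,
    rising linearly toward 1 at the event's middle frame."""
    labels = [0.0] * n_frames
    mid = (onset + offset - 1) / 2.0          # center frame of the event
    half = max((offset - onset) / 2.0, 1e-9)  # half-width; guard zero length
    for t in range(onset, offset):
        labels[t] = max(0.0, 1.0 - abs(t - mid) / half)
    return labels

def quantize(labels, n_levels=4):
    """Map each soft label to an integer level in {0, ..., n_levels - 1},
    giving a discrete condition usable alongside the event class."""
    return [min(int(v * n_levels), n_levels - 1) for v in labels]
```

Frames near the event center thus get the highest (quantized) label, which is what lets the generator be steered to produce samples from specific temporal regions of an event.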