Graph convolutional networks (GCNs) generalize convolutional neural networks to irregular graph-structured data. Typically, the graph topology is handcrafted and fixed across all layers. Such handcrafted connections may not be optimal and cannot fully exploit the self-learning ability of deep learning. In this work, we explore a topology-learnable graph convolution for skeleton-based action recognition. Specifically, a spatial graph convolution can be decomposed into a feature-learning component, which evolves the features of each graph vertex, and a graph-vertex-fusion component, in which latent graph topologies can be learned adaptively. We evaluate different initialization strategies for the learnable fusion matrix. Experimental results on spatial-temporal GCNs for skeleton-based action recognition demonstrate that convolution can work on graphs as it does on images, provided a specific fusion-matrix initialization based on adjacency matrices is applied. Moreover, the self-learning process can discover latent graph topologies beyond the handcrafted one, making graph convolution more flexible and universal.
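The decomposition described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the spatial graph convolution is written as a per-vertex feature transform (multiplication by a weight matrix `W`) followed by vertex fusion (multiplication by a matrix `A_fuse`), where `A_fuse` would be a learnable parameter initialized here, as one of the strategies mentioned, from a normalized adjacency matrix. The toy 5-joint chain graph and all variable names are assumptions for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

V, C_in, C_out = 5, 3, 4            # vertices (joints), input/output channels
X = rng.standard_normal((V, C_in))  # vertex (joint) features

# Handcrafted skeleton graph: a simple 5-joint chain, with self-loops.
A = np.eye(V)
for i, j in [(0, 1), (1, 2), (2, 3), (3, 4)]:
    A[i, j] = A[j, i] = 1.0

# Adjacency-based initialization of the fusion matrix:
# symmetric normalization D^{-1/2} A D^{-1/2}.
D_inv_sqrt = np.diag(1.0 / np.sqrt(A.sum(axis=1)))
A_fuse = D_inv_sqrt @ A @ D_inv_sqrt  # would be a learnable parameter in training

W = rng.standard_normal((C_in, C_out))  # feature-learning weights

H = X @ W           # feature learning: evolve each vertex's features
X_out = A_fuse @ H  # vertex fusion: mix vertices along the (learnable) topology

print(X_out.shape)  # (5, 4)
```

Because `A_fuse` is an ordinary matrix rather than a fixed adjacency, gradient descent can adjust its entries during training, letting the model learn latent connections beyond the handcrafted skeleton.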