Geometric deep learning has emerged as a new field of research by approaching graph neural networks from a geometric perspective. Graphs, however, are limited to encoding binary relations between vertices. Higher-order relations are a key tool for capturing both local and global information on graphs. Existing higher-order domains, such as simplicial complexes, cell complexes and hypergraphs, have rigid definitions. We introduce a more flexible domain, called the combinatorial complex (CC), which combines desirable traits of simplicial/cell complexes and hypergraphs. Building on CCs, we introduce combinatorial complex neural networks (CCNNs), which unify existing operations such as message passing, pooling and unpooling. Our foundational work thus generalizes contemporary advances in topological deep learning, both in the flexibility of the underlying domain and in the associated operations. Beyond the mathematical formalism, we demonstrate that CCNNs remain competitive with state-of-the-art task-specific deep learning models.
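To make the notion of a combinatorial complex concrete, the following is a minimal illustrative sketch, not the paper's implementation: it assumes a CC is represented as a set of cells (non-empty vertex subsets), each assigned an integer rank, with rank non-decreasing under set inclusion. The class name and methods are hypothetical.

```python
class CombinatorialComplex:
    """Illustrative sketch of a combinatorial complex (CC): each cell is a
    non-empty set of vertices carrying a rank, with rank 0 reserved for
    single vertices and rank non-decreasing under inclusion."""

    def __init__(self):
        self.ranks = {}  # maps frozenset(cell) -> integer rank

    def add_cell(self, cell, rank):
        cell = frozenset(cell)
        if not cell:
            raise ValueError("cells must be non-empty")
        if rank == 0 and len(cell) != 1:
            raise ValueError("rank-0 cells are single vertices")
        # Enforce that rank does not decrease under set inclusion.
        for other, r in self.ranks.items():
            if other < cell and r > rank:
                raise ValueError("rank must be non-decreasing under inclusion")
            if cell < other and rank > r:
                raise ValueError("rank must be non-decreasing under inclusion")
        self.ranks[cell] = rank

    def rank(self, cell):
        return self.ranks[frozenset(cell)]


# Unlike a simplicial complex, the faces of a cell need not all be present;
# unlike a hypergraph, cells carry ranks that order them hierarchically.
cc = CombinatorialComplex()
for v in "abc":
    cc.add_cell({v}, 0)
cc.add_cell({"a", "b"}, 1)
cc.add_cell({"a", "b", "c"}, 2)
print(cc.rank({"a", "b"}))  # → 1
```

This flexibility, cells of arbitrary rank without the closure requirement of simplicial complexes, is what lets a CC interpolate between cell complexes and hypergraphs.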