A foundational hypothesis in cognitive science is that some human thinking happens in a language of thought (LoT) that is universal across humans (Fodor, 1975). According to this hypothesis, words in different natural languages are labels for primitive concepts, or combinations of primitive concepts, in LoT. What are LoT's primitives? Answering this question is a major challenge because LoT is not directly observable and must therefore be inferred, or reverse-engineered (Piantadosi, 2016). We put forward a novel approach to reverse-engineering LoT from cross-linguistic data, capitalizing on existing knowledge about how natural languages optimize the trade-off between complexity and informativeness (Kemp and Regier, 2012; Kemp et al., 2018). As a case study, we use this approach to ask what LoT's number primitives are, drawing on cross-linguistic data on numeral systems.
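To make the complexity-informativeness trade-off concrete, consider the following minimal sketch. It is not the model used in this work or in Kemp and Regier (2012); it only illustrates the general idea under simplifying assumptions: a "numeral system" is a map from the numbers 1-10 to terms, complexity is proxied by the number of distinct terms, and informativeness is the expected accuracy of a listener who guesses uniformly among the numbers sharing the heard term.

```python
# Toy illustration (not the authors' actual model) of the
# complexity-informativeness trade-off in numeral systems.

def complexity(system):
    """Complexity: number of distinct terms (a crude proxy)."""
    return len(set(system.values()))

def informativeness(system):
    """Expected accuracy: the listener hears a term and guesses
    uniformly among the numbers that share that term."""
    numbers = list(system)
    total = 0.0
    for n in numbers:
        extension = [m for m in numbers if system[m] == system[n]]
        total += 1.0 / len(extension)
    return total / len(numbers)

# An exact system: one term per number -- maximally informative.
exact = {n: str(n) for n in range(1, 11)}
# An approximate system: "few" vs. "many" -- simple but lossy.
approx = {n: ("few" if n <= 3 else "many") for n in range(1, 11)}

print(complexity(exact), informativeness(exact))    # 10 1.0
print(complexity(approx), informativeness(approx))  # 2 0.2
```

A language near the optimal frontier achieves the highest informativeness attainable at its level of complexity; the exact system above buys full accuracy with ten terms, while the approximate system sacrifices accuracy for simplicity.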