In this paper, we introduce a minimal cognitive architecture designed to explore the mechanisms underlying human language learning. Our model, inspired by research in artificial intelligence, incorporates sequence memory, chunking, and schematizing as key domain-general cognitive mechanisms. It combines an emergentist approach with the generativist theory of type systems. By modifying the type system to operationalize theories of usage-based learning and emergent grammar, we build a bridge between theoretical paradigms that are usually considered incompatible. Using a minimal error-correction reinforcement learning approach, we show that the model can extract functional grammatical systems from limited exposure to small artificial languages. Our results challenge the need for complex predispositions for language and offer a promising path toward understanding the cognitive prerequisites for language and the emergence of grammar during learning.
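To make the abstract's central claim concrete, the following is a minimal sketch, in Python, of what error-correction reinforcement learning over a small type system might look like. It is not the authors' implementation: the toy corpus, the two-type inventory (a primitive N and a functor S/N), the sampling scheme, and the reward and penalty values are all illustrative assumptions introduced here.

```python
import random

# Toy artificial language: two-word "sentences" (illustrative assumption).
CORPUS = [("the", "dog"), ("the", "cat"), ("a", "dog"), ("a", "cat")]

# Minimal categorial type inventory (hypothetical):
# "N"   - a primitive type
# "S/N" - a functor that combines with a following N to yield S
TYPES = ["N", "S/N"]

# One weight per (word, type) pair; learning nudges these by error correction.
WORDS = {w for pair in CORPUS for w in pair}
weights = {(w, t): 1.0 for w in WORDS for t in TYPES}

def sample_type(word):
    """Sample a type for a word in proportion to its current weights."""
    total = sum(weights[(word, t)] for t in TYPES)
    r = random.uniform(0, total)
    for t in TYPES:
        r -= weights[(word, t)]
        if r <= 0:
            return t
    return TYPES[-1]

def derives_s(t1, t2):
    """Forward application: S/N followed by N yields S."""
    return t1 == "S/N" and t2 == "N"

def train(epochs=200, reward=0.1, penalty=0.05):
    for _ in range(epochs):
        w1, w2 = random.choice(CORPUS)
        t1, t2 = sample_type(w1), sample_type(w2)
        if derives_s(t1, t2):   # successful parse: reinforce both choices
            weights[(w1, t1)] += reward
            weights[(w2, t2)] += reward
        else:                   # failed parse: error-correct both choices
            weights[(w1, t1)] = max(0.01, weights[(w1, t1)] - penalty)
            weights[(w2, t2)] = max(0.01, weights[(w2, t2)] - penalty)

train()
for word in sorted(WORDS):
    best = max(TYPES, key=lambda t: weights[(word, t)])
    print(word, "->", best)  # determiners drift toward S/N, nouns toward N
```

In this sketch, weights are floored at a small positive value so that no type assignment is ever ruled out permanently, which keeps the learner simple and exploratory; whether the paper's model uses such a floor is an open assumption here.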