Natural language is arguably one of evolution's most impressive achievements. Most humans acquire their first language naturally and effortlessly during the early years of life. Although language acquisition may appear to reduce to learning simple, typically reinforced conditional rules, deeper mechanisms facilitate its emergence. Logical categories, or equivalence relations, form the core of these mechanisms. In a logical category, perceptually unrelated stimuli become equivalent with respect to properties such as identity, symmetry, and transitivity after the reinforcement of simple if-then conditionals. Notably, human subjects who are unable to learn any language also struggle to establish stimulus equivalence, even after successfully learning those simple conditionals. Here, we demonstrate that Large Language Models (LLMs), currently used to assist people in their jobs or, more consequentially, to replace them, can learn simple conditionals but fall short in tests for the emergence of equivalence relations.
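To make the construct concrete, the sketch below spells out which relations are directly reinforced and which must emerge untrained. It is a minimal illustration of the equivalence-class logic, not the paper's actual test procedure; the stimulus labels (A1, B1, ...) and the train/test split are hypothetical.

```python
# Minimal sketch of the stimulus-equivalence logic described above.
# Stimulus labels and the trained pairs are illustrative assumptions.

trained = {("A1", "B1"), ("B1", "C1"),   # class 1: reinforced A1->B1, B1->C1
           ("A2", "B2"), ("B2", "C2")}   # class 2: reinforced A2->B2, B2->C2

def derived_relations(pairs):
    """Relations that should emerge without any further reinforcement."""
    reflexivity = {(s, s) for pair in pairs for s in pair}          # identity
    symmetry = {(b, a) for a, b in pairs}                           # B1->A1, ...
    transitivity = {(a, c)                                          # A1->C1, ...
                    for a, b1 in pairs
                    for b2, c in pairs
                    if b1 == b2}
    return reflexivity | symmetry | transitivity

# A learner that forms equivalence classes should pass all of these probes
# after being reinforced only on the `trained` conditionals.
for probe in sorted(derived_relations(trained)):
    print(probe)
```

Under this framing, the paper's claim is that LLMs succeed on the `trained` pairs but fail the derived probes.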