Cognitive Science Modeling Laboratory

The Cognitive Science Modeling Laboratory (CSML) provides a space for students and the lab director to work on projects that use computers to help understand cognition. Much of this work has involved Dynamic Associative Networks (DANs), as opposed to standard artificial neural networks (ANNs). DANs differ from ANNs in that they use no predetermined network structure; rather, we promiscuously add nodes to the network wherever they are needed to improve cognitive function. They also differ in that there are no fixed weights; instead, weights are dynamic, determined by information-theoretic methods. Training in a DAN is done by way of case-based reasoning, and new information can be added without re-training the network, a genuine advantage over more traditional models.
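The lab's actual DAN code is not reproduced here, but the general idea can be sketched in miniature. The toy below is purely illustrative, not CSML's implementation: it creates a node for each new feature on demand, treats "training" as simply storing co-occurrence counts from cases, and computes weights dynamically at query time using pointwise mutual information (one possible information-theoretic choice). All class and method names are invented for the sketch.

```python
import math
from collections import defaultdict

class ToyDAN:
    """Illustrative sketch of a DAN-like model: nodes appear on demand,
    and weights are computed dynamically from stored case counts."""

    def __init__(self):
        self.feature_counts = defaultdict(int)  # how often each feature node fires
        self.label_counts = defaultdict(int)    # how often each label occurs
        self.pair_counts = defaultdict(int)     # feature/label co-occurrence
        self.total = 0                          # number of cases stored

    def observe(self, features, label):
        # Adding a case is just counting -- no re-training pass is needed.
        self.total += 1
        self.label_counts[label] += 1
        for f in features:
            self.feature_counts[f] += 1         # creates the node if absent
            self.pair_counts[(f, label)] += 1

    def weight(self, feature, label):
        # Pointwise mutual information between a feature and a label,
        # recomputed on the fly rather than stored as a fixed weight.
        joint = self.pair_counts[(feature, label)]
        if joint == 0:
            return 0.0
        p_joint = joint / self.total
        p_f = self.feature_counts[feature] / self.total
        p_l = self.label_counts[label] / self.total
        return math.log2(p_joint / (p_f * p_l))

    def classify(self, features):
        # Score each known label by summing the dynamic weights of the
        # active feature nodes, and return the best-scoring label.
        scores = {label: sum(self.weight(f, label) for f in features)
                  for label in self.label_counts}
        return max(scores, key=scores.get)

dan = ToyDAN()
dan.observe({"barks", "fur"}, "dog")
dan.observe({"meows", "fur"}, "cat")
dan.observe({"barks", "tail"}, "dog")
print(dan.classify({"barks"}))  # -> dog
```

Note that calling `observe` with a never-before-seen feature or label simply grows the network: nothing is re-fit, which is the property the paragraph above highlights.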

Even though DANs have been a primary focus in the lab, work here has also included several agent-based modeling projects and some involving prediction engines. Recently, for instance, students in the lab worked on a project, Remodeling Kallipolis, to create an agent-based model of parts of Plato's Republic. Before that, students worked on a two-year project, Predicting the NFL, in which network structures were built to predict the winners of NFL games in advance. (We managed to achieve 70% accuracy in prediction, well above a coin toss.)

The lab has a long history of different kinds of projects. It began in the mid-1990s as the "Internet Applications Laboratory," developing search engines for academic use. As the "Digital Humanities" grew as an area of study, the lab's name was changed to the "Digital Humanities Lab" and then, recently, to the "Cognitive Science Modeling Lab," to reflect more closely what we do here. Over the years, the lab has been staffed by more than fifty students working on a variety of projects, including Internet search engine design, agent-based exploration of traffic-light patterns in Evansville, Indiana, and classroom simulations. Regarding DANs in particular, we have worked on a range of models that include:

  1. Object identification based on properties and context-sensitivity
  2. Comparison of similarities and differences among properties and objects
  3. Recognition of simple shapes regardless of where they appear in an artificial visual field
  4. Association across simulated sense modalities
  5. Primary sequential memory for any seven-digit number
  6. Network branching from one subnet to another based on the presence of a single stimulus
  7. Eight-bit register control capable of standard machine-level operations, as in conventional Turing-style computational devices
  8. Rudimentary natural language processing based on a stimulus/response (i.e., anti-Chomskian) conception of language