
# Computational Approaches to Learning and Modeling Adaptive Biological Systems

### Prerequisites: Calculus, Elements of Linear Algebra, Probability and Statistics

Requirements: Students will be given lectures and readings in Cognitive and Behavioral Neuroscience.

A midterm and a final exam will be given.

A project: program a neural network implementation of a model from one of the readings.

**Justification and Course Description:**

Pattern Recognition, Artificial Intelligence, and Neural Modeling have a common history, which surged beginning in the 1950s; some of this work has since been subsumed under the general heading of Machine Learning. The underlying theory and the development of model-based data analysis have been published in a wide range of journals, including physiologically and experimentally based journals. At the same time, the NASA space program engendered a strong interest in adaptive control theory, whose purpose was to design systems that could adapt their parameters to reflect changes in environmental conditions. Neurocomputing takes its models from biological systems, and by the same token the understanding of biological systems has benefited greatly from system-theoretic and neural network models whose purpose is to mimic behavior and the underlying physiological processes. These methodologies have been important in neuroscience and in medicine and have shaped computational approaches.

This course will introduce students to the mathematical models, computational methods, and experimental basis for learning processes in biological systems (animals and humans), and to how modeling these systems has influenced computational algorithms. Topics will include elements of statistical decision theory, adaptive control, machine learning approaches to pattern classification, and neural networks, and will provide a formal structure for solving problems in behavioral, physiological, and representation-mediated behavior. Students completing this course will have the background not only to apply this knowledge to research problems in behavioral and cognitive neuroscience and artificial intelligence, but also to pattern matching in other settings and to natural language processing.

**Learning Goal:**

The learning goal of this course is to teach students how to meld what has been done in Pattern Recognition, Adaptive Control, Machine Learning, and Neural Networks with experimental results from behavioral learning and physiological findings on the underlying behavior of sensorimotor systems. The course will achieve this goal by teaching students not only the mathematical and computational foundations of these techniques, but also their applications in behaviorally and physiologically based studies. As no single text covers the proposed material, students will be given reading assignments. An important goal of the course is for students to learn how to read and understand primary source material and papers.

**Assessment:**

To assess whether students have achieved the goals of this course, the following criteria will be used:

- Midterm and final exams (about 70% of grade: 40% and 30%, with the higher exam score receiving the 40% weight).
- Comprehension of assigned readings, assessed through class participation in lectures (10%).
- Group Programming assignments (Programming Language of Choice) will be assigned to implement different types of Neural Networks and Learning Algorithms (20% of grade). This will assess whether the students have sufficiently grasped the material presented to implement the systems and learning algorithms that will be discussed.

**Lecture Outline:**

- Introduction (Raphan and Delamater)

- Basic concepts in Pattern Classification
- Design Sets and Test Sets
- Probabilistic Models of Pattern Classification
- Simple Pattern Classification Algorithms
- Application of Pattern Classification in Neuroscience
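To make the flavor of these introductory topics concrete, consider a minimal nearest-centroid classifier (an illustrative sketch, not an algorithm from the assigned readings): it summarizes a design set by one mean per class, then labels test patterns by the nearest mean.

```python
import numpy as np

def fit_centroids(X, y):
    """Compute one mean (centroid) per class from a labeled design set."""
    classes = np.unique(y)
    return classes, np.array([X[y == c].mean(axis=0) for c in classes])

def predict(X, classes, centroids):
    """Assign each pattern to the class of its nearest centroid."""
    dists = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
    return classes[np.argmin(dists, axis=1)]

# Two well-separated 2-D classes (design set)
X = np.array([[0.0, 0.0], [0.2, 0.1], [5.0, 5.0], [5.2, 4.9]])
y = np.array([0, 0, 1, 1])
classes, centroids = fit_centroids(X, y)
print(predict(np.array([[0.1, 0.0], [5.1, 5.0]]), classes, centroids))  # [0 1]
```

The design-set/test-set split in the outline corresponds to fitting the centroids on one sample and evaluating `predict` on a held-out sample.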

- Mathematical Preliminaries of Pattern Classification (Raphan)

- Feature Selection for Describing a Pattern Space
- Specifying the Loss Function
- Minimizing Approximate Risk
- The N-Class problem
- Linear and Non-Linear Discriminant Functions
- Statistical Approaches to Pattern Classification

- Cluster Analysis (Raphan)

- Describing Sub-regions in Pattern Space
- Statistical Methods for Describing Sub-regions
- Clustering Algorithms
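As one concrete instance of the clustering algorithms above, a basic k-means sketch alternates a nearest-centroid assignment step with a centroid-update step (the initialization scheme and data here are assumed purely for illustration):

```python
import numpy as np

def kmeans(X, k, iters=20):
    """Basic k-means clustering: alternate nearest-centroid assignment
    and centroid recomputation over the pattern space."""
    centroids = X[:k].copy()  # simple deterministic initialization
    for _ in range(iters):
        # Assignment step: label each pattern by its nearest centroid
        dists = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
        labels = np.argmin(dists, axis=1)
        # Update step: move each centroid to the mean of its sub-region
        centroids = np.array([X[labels == j].mean(axis=0) if np.any(labels == j)
                              else centroids[j] for j in range(k)])
    return labels, centroids

# Two well-separated groups of 2-D patterns (first two rows seed the centroids)
X = np.array([[10.0, 10.0], [0.0, 0.0], [0.1, 0.1],
              [9.9, 10.1], [0.2, 0.0], [10.1, 9.9]])
labels, centroids = kmeans(X, k=2)
print(labels)  # [0 1 1 0 1 0]
```

Each converged centroid describes one sub-region of the pattern space, connecting this algorithm to the statistical descriptions above.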

- Potential Functions in Pattern Classification (Raphan)

- Choosing the Form of the Potential Function
- Efficient Decision Rules based on Potential Function Methods

- Neural Network Approaches to Pattern Classification (Raphan)

- The Perceptron as a Linear Discriminant Classifier
- Three-Layered Architectures
- Neural Nets as Approximation and Interpolation Functions
- The Hopfield-Tank Model
- The Hoppensteadt Model

- Simple Approaches to Training Neural Networks (Raphan)

- Supervised and Unsupervised Learning
- The Perceptron, Gradient Descent, and Backpropagation
- Hebbian Learning
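The supervised perceptron update covered in this unit can be sketched in a few lines: whenever a pattern is misclassified, the weights are nudged toward it. The data and learning-rate values here are illustrative.

```python
import numpy as np

def train_perceptron(X, y, lr=1.0, epochs=20):
    """Classic perceptron rule: nudge weights toward misclassified patterns.
    Labels y are +1/-1; a bias is folded in as an extra constant input."""
    Xb = np.hstack([X, np.ones((len(X), 1))])  # append bias input
    w = np.zeros(Xb.shape[1])
    for _ in range(epochs):
        for xi, yi in zip(Xb, y):
            if yi * np.dot(w, xi) <= 0:   # misclassified (or on the boundary)
                w += lr * yi * xi         # error-correcting update
    return w

def classify(X, w):
    Xb = np.hstack([X, np.ones((len(X), 1))])
    return np.sign(Xb @ w)

# A linearly separable (AND-like) problem
X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
y = np.array([-1, -1, -1, 1])
w = train_perceptron(X, y)
print(classify(X, w))  # [-1. -1. -1.  1.]
```

Gradient descent and backpropagation generalize this error-correcting idea to multi-layer networks with differentiable activation functions.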

- Competitive Learning in Neural Nets (Raphan)

- The Competitive Learning Paradigm
- Competitive Learning Models (the Linsker and Fukushima models)
- Adaptive Resonance Theory: A Stabilized Form of Competitive Learning
- The Boltzmann Machine Learning Algorithm

- Formulation and Results from Animal and Human Learning (Delamater)

- Paradigms for Studying Associative Learning (Pavlovian, Instrumental)
- Classic associative learning phenomena
- Classification learning
- Reinforcement learning
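The error-correcting rule underlying many of these associative learning models is the Rescorla-Wagner update (Rescorla & Wagner, 1972, in the bibliography): on each trial, every present cue's associative strength V changes in proportion to the shared prediction error, V ← V + αβ(λ − ΣV). A runnable sketch with illustrative parameter values:

```python
def rescorla_wagner(trials, alpha=0.3, beta=1.0, lam=1.0):
    """Rescorla-Wagner: each present cue's associative strength V moves
    in proportion to the shared prediction error (lambda - sum of V)."""
    V = {}
    history = []
    for cues, reinforced in trials:
        total = sum(V.get(c, 0.0) for c in cues)
        target = lam if reinforced else 0.0
        error = target - total                  # prediction error this trial
        for c in cues:
            V[c] = V.get(c, 0.0) + alpha * beta * error
        history.append(dict(V))
    return V, history

# Simple acquisition: a light paired with food for 30 trials
V, hist = rescorla_wagner([(("light",), True)] * 30)
print(round(V["light"], 3))  # 1.0 (strength approaches the asymptote lambda)
```

Because the error term is shared across all present cues, the same few lines also reproduce classic compound-conditioning phenomena such as blocking.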

- Neural Net Applications to Modeling Associative Learning Phenomena (Delamater)

- Neural Mechanisms and the Prediction Error Concept
- Adaptive Learning of Attention (Mackintosh, Pearce & Hall, Kruschke)

- Configural and Elemental Theories of Learning (Delamater)

- Wagner's Model and a Critique of Configural Theories

- Real-Time Theories of Reinforcement Learning (Delamater)

- The Sutton-Barto Model and Eligibility Traces
- Time-Derivative Models, Dopamine, and the Reward System
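The time-derivative idea behind the Sutton-Barto model can be reduced to a tabular temporal-difference sketch (a simplification of the full real-time model; the chain of states, reward placement, and parameters here are assumed for illustration): each prediction is corrected by the difference between successive predictions.

```python
def td0_chain(n_states=5, alpha=0.1, gamma=0.9, episodes=500):
    """TD(0) on a simple chain: states 0..n-1 visited in order, reward 1
    only at the final state. Each V[s] is corrected by the temporal-
    difference error between successive predictions."""
    V = [0.0] * n_states
    for _ in range(episodes):
        for s in range(n_states):
            terminal = (s == n_states - 1)
            r = 1.0 if terminal else 0.0
            v_next = 0.0 if terminal else V[s + 1]
            V[s] += alpha * (r + gamma * v_next - V[s])  # TD error drives learning
    return V

V = td0_chain()
print([round(v, 2) for v in V])  # [0.66, 0.73, 0.81, 0.9, 1.0]
```

The learned values grow exponentially toward the reward (V[i] → γ^(n−1−i)), mirroring the anticipatory dopamine responses discussed in this unit.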

- Pattern Completion/Separation and the Hippocampus (Delamater)

- Gluck and Myers' Predictive Autoencoding Neural Network
- O'Reilly & Rudy's Pattern Completion/Separation Neural Network
- Hippocampal Processes

- Interval Timing (Delamater)

- Basic Behavioral Facts (peak timing, scalar timing, within-trial dynamics)
- Information Processing Theory (Church, Meck, & Gibbon)
- Multiple Oscillator Theory and Drift Diffusion Theory
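A drift-diffusion timer of the sort covered here can be sketched as a noisy accumulator that "times out" an interval when it crosses a threshold; all parameter values below are illustrative, not taken from the readings.

```python
import random

def drift_diffusion_interval(threshold=1.0, drift=0.01, noise=0.02, seed=0):
    """Drift-diffusion timer sketch: a noisy accumulator climbs at a fixed
    drift rate; the timed interval ends when it crosses the threshold.
    Mean crossing time is about threshold/drift steps."""
    rng = random.Random(seed)
    x, t = 0.0, 0
    while x < threshold:
        x += drift + noise * rng.gauss(0, 1)  # deterministic drift + noise
        t += 1
    return t

# Distribution of timed intervals across runs (mean near 100 steps)
times = [drift_diffusion_interval(seed=s) for s in range(200)]
print(sum(times) / len(times))
```

Because the noise accumulates with elapsed time, the spread of crossing times grows with the mean interval, in the spirit of the scalar timing property listed above.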

**Bibliography:**

__Raphan Sessions:__

Tarun Khanna, Foundations of Neural Networks, Addison Wesley 1990, 196 pp.

Anil K. Jain and Richard C. Dubes, Algorithms for Clustering Data, Prentice Hall, 1988, 320 pp.

Simon Haykin, Neural Networks: A Comprehensive Foundation, Pearson, 1999, 842 pp.

Christopher M. Bishop, Pattern Recognition and Machine Learning, Springer 2006, 738 pp.

__Delamater Sessions:__

Buhusi, C.V., & Schmajuk, N.A. (1999). Timing in simple conditioning and occasion setting: A neural network approach. *Behavioural Processes*, 45(1–3), 33–57.

Church, R.M., & Broadbent, H.A. (1990). Alternative representations of time, number, and rate. *Cognition*, 37, 55–81.

Gluck, M. A., & Myers, C. E. (1993). Hippocampal mediation of stimulus representation: A computational theory. *Hippocampus*, 3, 491-516.

Gibbon, J. (1977). Scalar expectancy theory and Weber’s law in animal timing. *Psychological Review*, 84, 279–325.

Kruschke, J.K. (1992). ALCOVE: an exemplar-based connectionist model of category learning. *Psychological Review*, 99, 22-44.

Nasser, H.M., Delamater, A.R. (In Press). The determining conditions for Pavlovian learning: Psychological and neurobiological considerations. In R. Murphy, R. Honey (Eds.) *The Wiley-Blackwell Handbook on the Cognitive Neuroscience of Learning*, John Wiley & Sons, Ltd.

Mackintosh, N. J. (1975). A theory of attention: Variations in the associability of stimuli with reinforcement. *Psychological Review*, 82(4), 276–298.

O'Reilly, R. C. (1996). Biologically plausible error-driven learning using local activation differences: The generalized recirculation algorithm. *Neural Computation*, 8, 895–938.

O'Reilly, R. C., & Rudy, J. W. (2001). Conjunctive representations in learning and memory: Principles of cortical and hippocampal function. *Psychological Review*, 108, 311–345.

Pearce, J. M., & Hall, G. (1980). A model for Pavlovian learning: Variations in the effectiveness of conditioned but not of unconditioned stimuli. *Psychological Review*, 87(6), 532–552.

Pearce, J. M. (1994). Similarity and discrimination: A selective review and a connectionist model. *Psychological Review*, 101, 587–607.

Rescorla, R.A., & Wagner, A.R. (1972). A theory of Pavlovian conditioning: Variations in the effectiveness of reinforcement and nonreinforcement. In A.H. Black & W.F. Prokasy (Eds.), *Classical Conditioning: II. Current Research and Theory* (pp. 64–99). New York: Appleton-Century-Crofts.

Simen, P., Balci, F., deSouza, L., Cohen, J.D., and Holmes, P. (2011). A Model of Interval Timing by Neural Integration. *Journal of Neuroscience*, 31, 9238-9253.

Sutton, R.S., & Barto, A.G. (1990). Time-derivative models of Pavlovian reinforcement. In M. Gabriel & J. Moore (Eds.), *Learning and Computational Neuroscience: Foundations of Adaptive Networks* (pp. 497–537). Cambridge, MA: MIT Press.

Wagner, A. R., & Brandon, S. E. (2001). A componential theory of Pavlovian conditioning. In R. R. Mowrer & S. B. Klein (Eds.), *Handbook of contemporary learning theories* (pp. 23–64). Mahwah, NJ: Erlbaum.
