 
 

Computational Approaches to Learning and Modeling Adaptive Biological Systems

Prerequisites: Calculus, Elements of Linear Algebra, Probability and Statistics
 
Requirements: Students will be given lectures and readings in cognitive and behavioral neuroscience.
              A midterm and a final exam will be given.
              A project: program a neural net implementation of a model from one of the readings.

Justification and Course Description:

 

Pattern recognition, artificial intelligence, and neural modeling share a common history, which surged beginning in the 1950s and much of which has since been subsumed under the general heading of machine learning. The underlying theory and the development of model-based data analysis have been published in a wide range of journals, including physiologically and experimentally oriented ones. At the same time, the NASA space program engendered strong interest in adaptive control theory, which investigates the design of systems that can adapt their parameters to reflect changing environmental conditions. Neurocomputing takes its models from biological systems, and by the same token our understanding of biological systems has benefited greatly from system-theoretic and neural network models whose purpose is to mimic behavior and its underlying physiological processes. These methodologies have been important in neuroscience and in medicine and have shaped computational approaches.

This course will introduce students to the mathematical models, computational methods, and experimental basis of learning processes in biological systems (animals and humans), and to the ways in which modeling these systems has influenced computational algorithms. Topics will include elements of statistical decision theory, adaptive control, machine learning approaches to pattern classification, and neural nets, providing a formal structure for solving problems in behavioral, physiological, and representation-mediated behavior. Students completing this course will have the background to apply this knowledge not only to research problems in behavioral and cognitive neuroscience and artificial intelligence, but also to pattern matching in other settings and to natural language processing.

Learning Goal:

The learning goal of this course is to teach students how to meld what has been done in pattern recognition, adaptive control, machine learning, and neural networks with experimental results from behavioral learning and physiological findings on the underlying behavior of sensorimotor systems. The course will achieve this goal by teaching students not only the mathematical and computational foundations of these techniques, but also their applications in behaviorally and physiologically based studies. As no single text covers the proposed material, students will be given reading assignments. An important goal of the course is for students to learn how to read and understand primary source material.
 
Assessment:

To assess whether students have achieved the goals of this course, the following criteria will be used:

  1. Midterm and Final Exams (about 70% of grade: 40% and 30%, with the larger weight given to the higher exam score).
  2. Comprehension of assigned readings, assessed through class participation in lectures (10%).
  3. Group programming assignments (in a programming language of each group's choice) implementing different types of neural networks and learning algorithms (20% of grade). These assess whether students have grasped the material well enough to implement the systems and learning algorithms discussed.
 

Lecture Outline:

 
  1. Introduction                                                                          (Raphan and Delamater)
      a) Basic Concepts in Pattern Classification
      b) Design Sets and Test Sets
      c) Probabilistic Models of Pattern Classification
      d) Simple Pattern Classification Algorithms
      e) Applications of Pattern Classification in Neuroscience
 
  2. Mathematical Preliminaries of Pattern Classification                                  (Raphan)
      a) Feature Selection for Describing a Pattern Space
      b) Specifying the Loss Function
      c) Minimizing Approximate Risk (see the decision-rule sketch below)
      d) The N-Class Problem
      e) Linear and Non-Linear Discriminant Functions
      f) Statistical Approaches to Pattern Classification
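To make the risk-minimization idea concrete, here is a minimal sketch of a two-class minimum-risk (Bayes) decision rule with an explicit loss matrix. The Gaussian class-conditional densities, priors, and loss values are invented for illustration; they are not taken from the readings.

```python
import numpy as np

priors = np.array([0.6, 0.4])             # P(class 0), P(class 1)
means, stds = np.array([0.0, 2.0]), np.array([1.0, 1.0])
# loss[i, j] = cost of deciding class i when the true class is j.
loss = np.array([[0.0, 2.0],
                 [1.0, 0.0]])

def gauss_pdf(x, mu, sigma):
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

def decide(x):
    post = gauss_pdf(x, means, stds) * priors   # posterior, up to a constant
    post /= post.sum()
    risk = loss @ post                          # conditional risk of each decision
    return int(risk.argmin())                   # minimum-risk decision rule

print([decide(x) for x in (-1.0, 0.8, 1.2, 3.0)])
```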
 
  3. Cluster Analysis                                                                      (Raphan)
      a) Describing Sub-regions in Pattern Space
      b) Statistical Methods for Describing Sub-regions
      c) Clustering Algorithms (see the k-means sketch below)
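As one concrete clustering algorithm, here is a minimal k-means sketch in NumPy. The two-cluster toy data and the choice of k-means itself are illustrative assumptions, not the specific algorithms assigned in the readings.

```python
import numpy as np

def kmeans(X, k, n_iters=100, seed=0):
    rng = np.random.default_rng(seed)
    # Initialize centroids by sampling k distinct data points.
    centroids = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(n_iters):
        # Assign each point to its nearest centroid (Euclidean distance).
        dists = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        # Recompute each centroid as the mean of its assigned points.
        new_centroids = np.array([X[labels == j].mean(axis=0) if np.any(labels == j)
                                  else centroids[j] for j in range(k)])
        if np.allclose(new_centroids, centroids):
            break
        centroids = new_centroids
    return centroids, labels

# Example: two well-separated Gaussian blobs.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 0.5, (50, 2)), rng.normal(3, 0.5, (50, 2))])
centroids, labels = kmeans(X, k=2)
print(centroids)
```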
 
  4. Potential Functions in Pattern Classification                                         (Raphan)
      a) Choosing the Form of the Potential Function
      b) Efficient Decision Rules Based on Potential Function Methods
 
  5. Neural Network Approaches to Pattern Classification                                   (Raphan)
      a) The Perceptron as a Linear Discriminant Classifier
      b) Three-Layered Architectures
      c) Neural Nets as Approximators and Interpolators
      d) The Hopfield-Tank Model (see the Hopfield-style sketch below)
      e) The Hoppensteadt Model
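The sketch below implements a plain Hopfield associative memory with Hebbian outer-product storage, a simpler relative of the Hopfield-Tank optimization network named above; the bipolar patterns are invented for illustration.

```python
import numpy as np

def store(patterns):
    # Hebbian outer-product storage; zero the diagonal (no self-connections).
    n = patterns.shape[1]
    W = sum(np.outer(p, p) for p in patterns) / n
    np.fill_diagonal(W, 0.0)
    return W

def recall(W, state, n_sweeps=10, seed=0):
    rng = np.random.default_rng(seed)
    state = state.copy()
    for _ in range(n_sweeps):
        # Asynchronous updates pull the state toward a stored attractor.
        for i in rng.permutation(len(state)):
            state[i] = 1 if W[i] @ state >= 0 else -1
    return state

patterns = np.array([[1, -1, 1, -1, 1, -1],
                     [1, 1, 1, -1, -1, -1]])
W = store(patterns)
noisy = np.array([1, -1, 1, -1, 1, 1])    # first pattern with one bit flipped
print(recall(W, noisy))                   # recovers [ 1 -1  1 -1  1 -1]
```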
 
  6. Simple Approaches to Training Neural Networks                                         (Raphan)
      a) Supervised and Unsupervised Learning
      b) The Perceptron, Gradient Descent, and Backpropagation (see the sketch below)
      c) Hebbian Learning
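Here is a minimal sketch of the perceptron error-correction rule, the simplest case of the supervised training methods listed above. The logical-OR training set and the learning rate are illustrative choices.

```python
import numpy as np

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
t = np.array([0, 1, 1, 1])                 # logical OR: linearly separable
w, b, lr = np.zeros(2), 0.0, 0.1

for epoch in range(50):
    errors = 0
    for x, target in zip(X, t):
        y = 1 if w @ x + b > 0 else 0      # threshold unit
        # Supervised error-correction: move the boundary toward mistakes.
        w += lr * (target - y) * x
        b += lr * (target - y)
        errors += int(y != target)
    if errors == 0:                        # converged on the training set
        break

print(w, b, [1 if w @ x + b > 0 else 0 for x in X])
```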
 
  7. Competitive Learning in Neural Nets                                                   (Raphan)
      a) The Competitive Learning Paradigm (see the sketch below)
      b) Competitive Learning Models (the Linsker and Fukushima models)
      c) Adaptive Resonance Theory: A Stabilized Version of Competitive Learning
      d) The Boltzmann Machine Learning Algorithm
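A minimal winner-take-all sketch of the competitive learning paradigm: each unit's weight vector drifts toward the inputs it wins, so the units discover the input clusters without labels. The data and learning rate are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
# Two input clusters; the network must discover them without labels.
X = np.vstack([rng.normal([0, 0], 0.3, (100, 2)),
               rng.normal([2, 2], 0.3, (100, 2))])
W = rng.normal(1.0, 0.1, (2, 2))            # two competing units
lr = 0.05

for epoch in range(20):
    for x in rng.permutation(X):
        winner = np.linalg.norm(W - x, axis=1).argmin()   # competition
        W[winner] += lr * (x - W[winner])                 # only the winner learns
print(W)    # weight vectors end up near the two cluster centers
```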
 
  8. Formulation and Results from Animal and Human Learning                                (Delamater)
      a) Paradigms for Studying Associative Learning (Pavlovian, Instrumental)
      b) Classic Associative Learning Phenomena
      c) Classification Learning
      d) Reinforcement Learning
 
  9. Neural Net Applications to Modeling Associative Learning Phenomena                    (Delamater)
      a) The Rescorla-Wagner Model and the Delta Rule (see the sketch below)
      b) Neural Mechanisms and the Prediction Error Concept
      c) Adaptive Learning of Attention (Mackintosh; Pearce & Hall; Kruschke)
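Because the Rescorla-Wagner model is the delta rule applied to compound stimuli, it can be sketched in a few lines. The demonstration below reproduces blocking: after stimulus A alone predicts the US, compound AB training leaves B with little associative strength. The salience, US-rate, and asymptote parameters are illustrative.

```python
alpha, beta, lam = 0.3, 1.0, 1.0   # stimulus salience, US learning rate, asymptote
V = {"A": 0.0, "B": 0.0}           # associative strengths

def trial(stimuli, us):
    # The prediction error is shared by all stimuli present on the trial.
    error = (lam if us else 0.0) - sum(V[s] for s in stimuli)
    for s in stimuli:
        V[s] += alpha * beta * error

for _ in range(50):
    trial({"A"}, us=True)           # Phase 1: A -> US
for _ in range(50):
    trial({"A", "B"}, us=True)      # Phase 2: AB -> US (B is blocked)

print(V)   # V["A"] near 1.0, V["B"] near 0.0
```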
  
 10. Configural and Elemental Theories of Learning                                         (Delamater)
      a) The Pearce Model and a Critique of Elemental Theories
      b) Wagner's Model and a Critique of Configural Theories
 
 11. Real-Time Theories of Reinforcement Learning                                          (Delamater)
      a) Behavioral Effects of Temporal Variables in Pavlovian Learning
      b) The Sutton-Barto Model and Eligibility Traces (see the TD sketch below)
      c) Time-Derivative Models, Dopamine, and the Reward System
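A minimal temporal-difference (TD(0)) sketch in the spirit of the Sutton-Barto time-derivative account: with repeated trials, the prediction error migrates backward from the US so that earlier time steps come to predict it. The trial structure and parameters are invented for illustration.

```python
import numpy as np

n_steps, us_at = 10, 8               # one trial; the US (reward) arrives at t = 8
alpha, gamma = 0.1, 0.95
V = np.zeros(n_steps + 1)            # predicted value at each time step

for trial in range(500):
    for t in range(n_steps):
        r = 1.0 if t == us_at else 0.0
        # TD error: reward plus discounted next prediction, minus current prediction.
        delta = r + gamma * V[t + 1] - V[t]
        V[t] += alpha * delta

print(np.round(V[:n_steps], 2))      # predictions ramp up toward the US
```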
 
 12. Pattern Completion/Separation and the Hippocampus                                     (Delamater)
      a) Acquired Equivalence/Distinctiveness; Context Generalization/Discrimination
      b) Gluck and Myers's Predictive Autoencoding Neural Network (see the sketch below)
      c) O'Reilly & Rudy's Pattern Completion/Separation Neural Network
      d) Hippocampal Processes
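As a gesture at the compressed re-representation idea in Gluck and Myers's model, here is a minimal linear autoencoder trained by gradient descent on reconstruction error. This is a loose illustration only: the architecture, random data, and plain reconstruction objective are assumptions, not the predictive autoencoder of the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((200, 8))         # 8-D input patterns (illustrative)
W1 = rng.normal(0, 0.1, (8, 3))           # encoder: 8 -> 3 bottleneck
W2 = rng.normal(0, 0.1, (3, 8))           # decoder: 3 -> 8
lr = 0.01

for epoch in range(200):
    H = X @ W1                            # hidden (compressed) code
    Y = H @ W2                            # reconstruction
    err = Y - X
    # Gradient descent on mean squared reconstruction error.
    W2 -= lr * H.T @ err / len(X)
    W1 -= lr * X.T @ (err @ W2.T) / len(X)

print(np.mean((X @ W1 @ W2 - X) ** 2))    # reconstruction MSE shrinks
```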
 
 13. Interval Timing                                                                       (Delamater)
      a) Basic Behavioral Facts (peak timing, scalar timing, within-trial dynamics)
      b) Information Processing Theory (Church, Meck, & Gibbon)
      c) Multiple Oscillator Theory; Drift Diffusion Theory (see the sketch below)
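A minimal drift-diffusion timing sketch in the spirit of Simen et al. (2011): a noisy accumulator with drift calibrated to the target interval crosses a fixed threshold, and response-time variability grows with the interval (the scalar property). Drift and noise values are illustrative.

```python
import numpy as np

def time_interval(target_s, dt=0.001, noise=0.05, seed=None):
    rng = np.random.default_rng(seed)
    drift = 1.0 / target_s            # calibrated so the threshold is hit at target
    x, t = 0.0, 0.0
    while x < 1.0:                    # accumulate to a fixed threshold of 1
        x += drift * dt + noise * np.sqrt(dt) * rng.standard_normal()
        t += dt
    return t

estimates = [time_interval(4.0, seed=s) for s in range(200)]
# Mean near the 4 s target; the spread grows with the timed interval.
print(np.mean(estimates), np.std(estimates))
```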
 
 
 

Bibliography:


Raphan Sessions:
Khanna, T. (1990). Foundations of Neural Networks. Addison-Wesley. 196 pp.
Jain, A. K., & Dubes, R. C. (1988). Algorithms for Clustering Data. Prentice Hall. 320 pp.
Haykin, S. (1999). Neural Networks: A Comprehensive Foundation. Pearson. 842 pp.
Bishop, C. M. (2006). Pattern Recognition and Machine Learning. Springer. 738 pp.
 
Delamater Sessions:
Buhusi, C. V., & Schmajuk, N. A. (1999). Timing in simple conditioning and occasion setting: A neural network approach. Behavioural Processes, 45(1–3), 33–57.

Church, R. M., & Broadbent, H. A. (1990). Alternative representations of time, number, and rate. Cognition, 37, 55–81.

Gibbon, J. (1977). Scalar expectancy theory and Weber's law in animal timing. Psychological Review, 84, 279–325.

Gluck, M. A., & Myers, C. E. (1993). Hippocampal mediation of stimulus representation: A computational theory. Hippocampus, 3, 491–516.

Kruschke, J. K. (1992). ALCOVE: An exemplar-based connectionist model of category learning. Psychological Review, 99, 22–44.

Mackintosh, N. J. (1975). A theory of attention: Variations in the associability of stimuli with reinforcement. Psychological Review, 82(4), 276–298.

Nasser, H. M., & Delamater, A. R. (in press). The determining conditions for Pavlovian learning: Psychological and neurobiological considerations. In R. Murphy & R. Honey (Eds.), The Wiley-Blackwell Handbook on the Cognitive Neuroscience of Learning. John Wiley & Sons.

O'Reilly, R. C. (1996). Biologically plausible error-driven learning using local activation differences: The generalized recirculation algorithm. Neural Computation, 8, 895–938.

O'Reilly, R. C., & Rudy, J. W. (2001). Conjunctive representations in learning and memory: Principles of cortical and hippocampal function. Psychological Review, 108, 311–345.

Pearce, J. M. (1994). Similarity and discrimination: A selective review and a connectionist model. Psychological Review, 101, 587–607.

Pearce, J. M., & Hall, G. (1980). A model for Pavlovian learning: Variations in the effectiveness of conditioned but not of unconditioned stimuli. Psychological Review, 87(6), 532–552.

Rescorla, R. A., & Wagner, A. R. (1972). A theory of Pavlovian conditioning: Variations in the effectiveness of reinforcement and nonreinforcement. In A. H. Black & W. F. Prokasy (Eds.), Classical Conditioning II: Current Research and Theory (pp. 64–99). Appleton-Century-Crofts.

Simen, P., Balci, F., deSouza, L., Cohen, J. D., & Holmes, P. (2011). A model of interval timing by neural integration. Journal of Neuroscience, 31, 9238–9253.

Sutton, R. S., & Barto, A. G. (1990). Time-derivative models of Pavlovian reinforcement. In M. Gabriel & J. Moore (Eds.), Learning and Computational Neuroscience: Foundations of Adaptive Networks (pp. 497–537). MIT Press.

Wagner, A. R., & Brandon, S. E. (2001). A componential theory of Pavlovian conditioning. In R. R. Mowrer & S. B. Klein (Eds.), Handbook of Contemporary Learning Theories (pp. 23–64). Erlbaum.