
Superiorization: An Automated Computational Model for Constrained Optimization

 
 


Instructor: Distinguished Professor Gabor T. Herman

Thursdays 2:00-4:00 p.m., 3 credits

Outline:

Many applications use constrained optimization, with the constraints arising from the desire to produce a solution that is constraints-compatible. Typically, a large number of solutions would be considered good enough from the point of view of constraints-compatibility. In such cases, an optimization criterion is introduced to help us distinguish the "better" constraints-compatible solutions.
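One standard way to formalize the setting just described (this notation is illustrative, not taken verbatim from the course materials) is as a minimization over the intersection of constraint sets:

```latex
\min_{x}\ \phi(x) \quad \text{subject to} \quad x \in C = \bigcap_{i=1}^{m} C_i ,
```

where each set $C_i$ encodes one of the constraints, $C$ is the set of constraints-compatible solutions, and $\phi$ is the optimization criterion used to pick out the "better" points of $C$.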

The superiorization methodology is a recently-developed heuristic approach to constrained optimization. The underlying idea is that in many applications there exist computationally-efficient iterative algorithms that produce constraints-compatible solutions. Often the algorithm is perturbation resilient in the sense that, even if certain kinds of changes are made at the end of each iterative step, the algorithm still produces a constraints-compatible solution. This property is exploited by using such perturbations to steer the algorithm to a solution that is not only constraints-compatible, but is also desirable according to a specified optimization criterion. The approach is very general: it is applicable to many iterative procedures and optimization criteria. Most importantly, superiorization is a totally automatic procedure that turns an iterative algorithm into its superiorized version. This, and its practical consequences, will be discussed in various application areas for a variety of constrained optimization tasks.
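A minimal sketch of this idea (written for illustration; the function names, constraint sets, and objective below are the author's assumptions, not code from the course): the feasibility-seeking algorithm is a sequential-projection method for linear inequality constraints, and before each projection sweep the iterate is perturbed by a normalized descent step of the objective with summable step sizes β_k = γ^k. Because the perturbations are summable, the projection method's constraints-compatible behavior is preserved, while the perturbations steer it toward points with a smaller objective value.

```python
import numpy as np

def project_halfspace(x, a, b):
    """Project x onto the halfspace {y : a.y <= b}."""
    viol = a @ x - b
    if viol > 0:
        x = x - viol * a / (a @ a)
    return x

def superiorized_feasibility(x, halfspaces, grad_f, n_iters=200, gamma=0.99):
    """Superiorization sketch: perturb with summable steps beta_k = gamma**k
    in a descent direction of the objective, then run a feasibility sweep."""
    for k in range(n_iters):
        g = grad_f(x)
        norm = np.linalg.norm(g)
        if norm > 0:
            # summable perturbation toward smaller objective values
            x = x - (gamma ** k) * g / norm
        for (a, b) in halfspaces:
            # feasibility-seeking sweep (sequential projections)
            x = project_halfspace(x, a, b)
    return x

# Illustrative problem: find a point in the intersection of two halfspaces
# that (heuristically) has a small squared norm.
halfspaces = [(np.array([1.0, 1.0]), -1.0),   #  x + y <= -1
              (np.array([-1.0, 2.0]), 4.0)]   # -x + 2y <= 4
grad_f = lambda x: 2.0 * x                    # gradient of f(x) = ||x||^2
x = superiorized_feasibility(np.array([5.0, 5.0]), halfspaces, grad_f)
```

In this toy example the superiorized iterates settle at the constraints-compatible point closest to the origin, which is also the true constrained minimizer; in general superiorization only aims for a constraints-compatible solution that is superior with respect to the criterion, not a provable minimizer.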

 

Administrative:

The course will consist of weekly lectures by the Instructor based on papers published on the topic within the last decade; an abbreviated history of this work is presented below. There will be no exams, but there will be computer projects that will provide the students with the opportunity to implement superiorization algorithms and to compare them, in a rigorous fashion, with classical constrained optimization methods. These will be carried out using the SNARK09 software, whose use for traditional constrained optimization will be introduced in the early lectures. The grade will be determined by the quality of the material submitted by the student on the computer projects, but for a good grade the student will also be expected to attend all lectures from beginning to end.

 

Student learning outcome:

After completing this course the student will be able to:
1. Present the theory and applications of constrained optimization.
2. Explain the principles behind superiorization and their practical usefulness.
3. Implement and rigorously evaluate the efficacy of algorithms for constrained optimization.

 

An abbreviated history of superiorization:

Superiorization was born in 2006 in the Discrete Imaging and Graphics Group at the Graduate Center of CUNY. Two international visitors to the group (Dan Butnariu and Ivan Kazantsev) worked with the Instructor and his then doctoral student Ran Davidi (now at Stanford University) on the paper Butnariu, D., Davidi, R., Herman, G.T., Kazantsev, I.G.: Stable convergence behavior under summable perturbations of a class of projection methods for convex feasibility and optimization problems, IEEE Journal of Selected Topics in Signal Processing 1:540-547, 2007. This publication was the first one presenting the idea of superiorization (although it was not identified in it by that name). The following figure (Figure 1) is from that paper; it shows plots of the values of a function that we are supposed to minimize using superiorization (the curve in the bottom left of the figure) and using a traditional constrained optimization method (the other curve) against computer time in seconds. Clearly, superiorization is much more efficacious.

 

Figure 1

Superiorization became the central topic of Ran Davidi's CUNY doctoral dissertation. One of the publications based on it, namely Herman, G.T., Davidi, R.: Image reconstruction from a small number of projections, Inverse Problems 24:045011, 2008, has been cited 80 times according to Google Scholar.

Another well-received paper, written by the Instructor, Ran Davidi and another former PhD student of the Instructor, Edgar Garduno, now in the Department of Computer Science of the Applied Mathematics and Systems Research Institute of the Universidad Nacional Autonoma de Mexico, is Garduno, E., Davidi, R., Herman, G.T.: Reconstruction from a few projections by l1-minimization of the Haar transform, Inverse Problems 27:055006, 2011, which was selected for an Insights report of the Institute of Physics.

The superiorization methodology has also been adopted by others working in various fields; see, for example, Penfold, S.N., Schulte, R.W., Censor, Y., Rosenfeld, A.B.: Total variation superiorization schemes in proton computed tomography image reconstruction, Medical Physics, 37:5887-5895, 2010.

The Instructor's most recent work on superiorization is an application to Positron Emission Tomography (PET): Garduno, E., Herman, G.T.: Superiorization of the ML-EM algorithm, IEEE Transactions on Nuclear Science, accepted for publication.

The images below show results of reconstructions of activity in a slice through the brain from simulated PET data: Figure 2 is a reconstruction using traditional constrained optimization, Figure 3 is a reconstruction using superiorization, and Figure 4 shows plots of the activity along the 323rd column in the actual brain slice (red), in the reconstruction by the traditional method (green) and in the superiorization reconstruction (blue).


Figure 2

Figure 3

Figure 4