# Alumni Dissertations and Theses


### RANDOMIZED SEARCH OF GRAPHS IN LOG SPACE AND PROBABILISTIC COMPUTATION

Author: Wen-Ju Cheng · Year of Dissertation: 2014 · Program: Computer Science · Advisor: James Cox

### The Bilinear Brain, Bilinear Methods For EEG Analysis And Brain Computer Interfaces.

Author: Christoforos Christoforou · Year of Dissertation: 2009 · Program: Computer Science · Advisor: Robert Haralick

Abstract: Analysis of electro-encephalographic (EEG) signals has proven an extremely useful research tool for studying the neural correlates of behavior. Single-trial analysis of these signals is essential for the development of non-invasive brain-computer interfaces. Analysis of these signals is often expressed as a single-trial classification problem: the goal is to infer the underlying cognitive state of an individual using EEG signals alone. The high-dimensional space of EEG observations, the low signal-to-noise ratio (SNR), often −20 dB or less, as well as the inter-subject variability and the limited observations available for training, make single-trial classification of EEG an extremely challenging computational task. To address these challenges we introduce concepts from multilinear algebra and incorporate EEG domain knowledge. More precisely, we formulate the problem in a matrix space and introduce a bilinear combination of a matrix to reduce the space dimensions; hence the title of this dissertation, "The Bilinear Brain". We also address the issue of inter-subject variability by defining a model that is partially subject-invariant. We develop two classification algorithms based on the bilinear model. The first, Second Order Bilinear Discriminant Analysis (SOBDA), combines first-order and second-order statistics of the observation space. The second, Bilinear Feature Based Discriminant (BFBD), addresses the issue of inter-subject variability. We evaluate our methods on both simulated and real human EEG datasets and show that our method outperforms state-of-the-art methods on different experimental paradigms.
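The bilinear reduction the abstract describes can be sketched in a few lines. Everything below is illustrative: the filter dimensions, random data, and variable names are assumptions, not parameters from the dissertation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy EEG trial: channels x time samples (e.g., 32 channels, 200 samples).
X = rng.standard_normal((32, 200))

# Bilinear projection: a spatial filter U and a temporal filter V compress
# the trial to the small feature matrix U^T X V instead of vectorizing X.
U = rng.standard_normal((32, 4))   # 4 spatial components (assumed)
V = rng.standard_normal((200, 6))  # 6 temporal components (assumed)

F = U.T @ X @ V                    # feature matrix, shape (4, 6)

# The bilinear form needs 32*4 + 200*6 parameters, far fewer than the
# 32*200 a full linear weight on vec(X) would require.
print(F.shape)                     # (4, 6)
print(U.size + V.size, X.size)     # 1328 6400
```

The parameter saving in the last comment is why a matrix-space formulation helps with the small-sample, high-dimensional regime the abstract mentions.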

### Hash Functions, Latin Squares and Secret Sharing Schemes

Author: Chi Chum · Year of Dissertation: 2010 · Program: Computer Science · Advisor: Xiaowen Zhang

Abstract: A secret sharing scheme provides an effective method to safeguard a secret by dividing it among several participants. Since the original idea introduced by Shamir and Blakley in 1979, a variety of threshold secret sharing schemes and other types have been suggested by researchers. The first part of this thesis shows how to apply hash functions in secret sharing scheme designs. Using hash functions and the herding-hashes technique, we first set up a (t + 1, n) threshold scheme that is perfect and ideal, and then extend it to schemes for any general access structure. The schemes can further be made verifiable if necessary. The secret can be recovered quickly because the hash function is fast to compute. In particular, secret sharing schemes based on Latin squares are discussed. The practical hash functions used today, such as the SHA-1 and SHA-2 families, are iterative hash functions. Although there are many suggestions for improving the security of an iterative hash function, the general idea of processing the message block by block still enables many attacks that make use of the intermediate hash values. The second part of this thesis proposes a new hash function construction scheme that applies the randomize-then-combine technique, used in incremental hash functions, to the iterative hash construction to prevent those attacks.
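For context, the 1979 construction cited as the starting point can be sketched as follows. This is the classical Shamir polynomial scheme, not the hash-based schemes the thesis itself proposes, and the field modulus is an arbitrary illustrative choice.

```python
import random

# Toy Shamir (t+1, n) threshold scheme over a prime field: any t+1
# shares recover the secret; t or fewer reveal nothing about it.
P = 2**61 - 1  # Mersenne prime used as the field modulus (illustrative)

def make_shares(secret, t, n):
    # Random degree-t polynomial whose constant term is the secret.
    coeffs = [secret] + [random.randrange(P) for _ in range(t)]
    def f(x):
        return sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P
    return [(x, f(x)) for x in range(1, n + 1)]

def recover(shares):
    # Lagrange interpolation at x = 0 recovers the constant term.
    total = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        total = (total + yi * num * pow(den, -1, P)) % P
    return total

shares = make_shares(123456789, t=2, n=5)   # any t+1 = 3 shares suffice
print(recover(shares[:3]))  # 123456789
```

The hash-based schemes of the thesis replace the polynomial-evaluation machinery with fast hash computations, which is what makes recovery quick.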

### Algorithms for Superiorization and their Applications to Image Reconstruction

Author: Ran Davidi · Year of Dissertation: 2010 · Program: Computer Science · Advisor: Gabor Herman

Abstract: Computational tractability with limited computing resources is a major barrier for the ever-increasing problem sizes of constrained optimization models, which seek a minimum of an objective function satisfying a set of constraints. On the other hand, there exist efficient and computationally much less demanding iterative methods for finding a feasible solution that only fulfills the constraints. These methods can handle problem sizes beyond which existing optimization algorithms cannot function. To bridge this gap we present a new concept called superiorization, envisioned methodologically as lying between optimization and feasibility seeking. It enables us to use efficient iterative methods to steer the iterates toward a point that is feasible and superior, but not necessarily optimal, with respect to the given objective/merit function. Using efficient iterative methods to do "superiorization" instead of "full constrained optimization" or only "feasibility seeking" is a new tool for handling mathematical models that include constraints and a merit function. The aim of the superiorization methodology is to shape the computational treatment of such models so that we can reach solutions that are desirable from the point of view of the application at hand, at a relatively small computational cost. The key to superiorization is our discovery that two principal prototypical algorithmic schemes, string-averaging projections and block-iterative projections, which include many projection methods, are bounded-perturbation resilient. While stability of algorithms under perturbations is usually invoked to cope with all kinds of imperfections in the data, here we take a proactive approach designed to extract specific benefits from the kind of stability that we term perturbation resilience. Superiorization uses perturbations proactively to reach feasible points that are superior, according to some criterion, to those we would reach without employing perturbations. In this work, we set forth the fundamental principle of the superiorization methodology, give mathematical formulations, theorems and results, and show potential benefits in the field of image reconstruction from projections.
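A minimal numerical sketch of the idea follows. It is entirely illustrative: the two half-space constraints, the merit function phi(x) = ||x||², and the step-size schedule are assumptions for demonstration, not the dissertation's algorithms.

```python
import numpy as np

def project(x, a, b):
    # Orthogonal projection onto the half-space {x : a.x <= b}.
    v = a @ x - b
    return x if v <= 0 else x - v / (a @ a) * a

a1, b1 = np.array([-1.0, -1.0]), -4.0   # encodes x1 + x2 >= 4
a2, b2 = np.array([1.0, -1.0]), 1.0     # encodes x1 - x2 <= 1

x = np.array([5.0, 5.0])                # feasible start, merit phi(x) = 50
for k in range(60):
    # Bounded, summable perturbation: step 3*(0.5^k) along -grad(phi),
    # steering the iterate toward a smaller merit value ...
    g = 2 * x
    x = x - 3.0 * 0.5**k * g / max(np.linalg.norm(g), 1e-12)
    # ... followed by the feasibility-seeking projections; bounded-
    # perturbation resilience is what keeps them converging anyway.
    x = project(project(x, a1, b1), a2, b2)

print(np.round(x, 6))  # lands at (2, 2): feasible, merit 8 instead of 50
```

Without the perturbation step, the projections would leave the feasible starting point (5, 5) untouched; with it, the same feasibility-seeking machinery delivers a feasible point with a much smaller merit value.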

### Epistemic paradox and explicit modal logic

Author: Walter Dean · Year of Dissertation: 2010 · Program: Computer Science · Advisor: Sergei Artemov

Abstract: The dissertation presents a unified treatment of what I refer to as the epistemic paradoxes (i.e., the Knower Paradox of Montague and Kaplan, and Fitch's Paradox of Knowability) in the context of a quantified extension of Artemov's Logic of Proofs (LP). The system adopted is a modified form of Fitting's Quantified Logic of Proofs (QLP) and contains not only so-called explicit modalities (i.e., statements of the form t:F with the intended interpretation "t denotes a proof of F") but also first-order quantifiers intended to range over proofs or informal justifications. This allows for a formalization of a justification-based notion of knowledge via statements of the form (exists x) x:F. The first chapter seeks to motivate the use of explicit modalities for reasoning about knowledge and informal provability relative to the tradition of epistemic logic. The second chapter contains a consolidated treatment of the syntax and arithmetic semantics of LP and QLP. A relational semantics for QLP is developed and several new results are presented. The third chapter focuses on the reconstruction of the Knower Paradox in QLP. The central observation is that by analyzing the K(x) predicate employed by Montague and Kaplan in terms of proof quantification, replacing K("F") with (exists x) x:F, we see that an additional logical principle is required for the derivation of the Knower. This principle, which is similar to a form of Universal Generalization for proofs anticipated by Gödel [1938a], is examined both proof-theoretically and through the use of an arithmetic semantics for QLP. The fourth chapter focuses on the reconstruction of the Knowability Paradox in QLP. The central observation is that the so-called Knowability Principle on which the paradox is based (i.e., the claim that if F is true, then it is possible to know F) is most appropriately formulated via the use of proof quantifiers as (exists x) x:F, as opposed to via propositional possibility and knowledge operators.
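The contrast between the two formulations can be written out explicitly. This is a paraphrase of the operators named in the abstract, with notation assumed from standard presentations of LP:

```latex
% Knowability Principle: modal form versus proof-quantifier form.
\[
  F \rightarrow \Diamond K F
  \qquad\text{versus}\qquad
  F \rightarrow \exists x\,(x\!:\!F)
\]
% Montague-Kaplan knowledge predicate analyzed via proof quantification:
\[
  K(\ulcorner F \urcorner)
  \quad\text{replaced by}\quad
  \exists x\,(x\!:\!F)
\]
```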

### A SCALABLE AGENT-BASED SYSTEM FOR NETWORK FLOW RECONSTRUCTION WITH APPLICATIONS TO DETERMINING THE STRUCTURE AND DYNAMICS OF DISTRIBUTED DENIAL OF SERVICE ATTACKS

Author: Omer Demir · Year of Dissertation: 2010 · Program: Computer Science · Advisor: Bilal Khan

Abstract: In this thesis we describe a novel agent-based architecture for flow reconstruction and demonstrate how it can be applied to obtain a description of the structure and dynamics of distributed denial of service (DDoS) attacks. We show that the system can operate in a decentralized manner, effectively providing a description of the structure and dynamics of traffic flows even with very modest levels of agent deployment. By providing structural information, the system facilitates the execution of DDoS mitigation strategies close to the actual sources of attack traffic. Through simulations, we validate the efficacy with which the system discovers traffic source locations and the structure of traffic flows. Through packet-level simulations, we show favorable convergence properties for the system. We describe several schemes for selecting the precise links on which agents should be placed, and show that these placement schemes yield marked improvements in system performance and scalability. Finally, we introduce a prototype attacker localization scheme called SLANT, which combines information from a sequence of attacks on different victims in order to further isolate traffic source locations. SLANT shows promise for using data from multiple attacks to determine the exact locations of the attackers, even at moderate agent deployment levels.
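The way SLANT combines evidence across attacks can be illustrated with a toy set intersection. The candidate sets and "AS" labels below are invented for illustration; the abstract does not specify SLANT's internal representation.

```python
# Each observed attack narrows the candidate source locations to a set
# of network regions; intersecting the sets across attacks on different
# victims isolates the sources further (hypothetical data).
attack_candidates = [
    {"AS1", "AS4", "AS7", "AS9"},   # candidates inferred from attack 1
    {"AS4", "AS7", "AS9", "AS12"},  # attack 2, different victim
    {"AS2", "AS4", "AS9"},          # attack 3
]

sources = set.intersection(*attack_candidates)
print(sorted(sources))  # ['AS4', 'AS9']
```

Each additional attack can only shrink the candidate set, which is why combining a sequence of attacks narrows in on the true source locations.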

### Simplifying Network Testing: Techniques and Approaches Towards Automating and Simplifying the Testing Process

Author: Constantinos Djouvas · Year of Dissertation: 2009 · Program: Computer Science · Advisor: Nancy Griffeth

Abstract: The dramatic increase in companies and consumers that depend heavily on networks mandates the creation of reliable network devices. Such reliability can be achieved by testing both the conformance of an implementation's individual protocols to their corresponding specifications and the interaction between different protocols. With the increase in computing power and the advances in network-testing research, one would expect efficient approaches for testing network implementations to be available. However, such approaches are lacking for reasons such as the complexity of network protocols, the need for different protocols to interoperate, the limited information on an implementation because of proprietary code, and the potentially unbounded size of the network to be tested. To address these issues, a novel technique is proposed that improves the quality of testing while reducing the time and effort it requires. The proposed approach achieves these goals by automating the process of creating models to be used for validating an implementation. More precisely, it utilizes observations acquired by monitoring the behavior of the implementation to generate models automatically. In this way, generated models can accurately represent the actual implementation, and testing reduces to the problem of verifying that certain properties hold on the generated model. This work presents algorithms that efficiently create models from observations and shows their effectiveness through three different examples. In addition, the difficulty of validating models using theorem provers is addressed; techniques available in the literature are utilized, and approaches that assist testers in completing proofs are proposed. Results suggest that the complexity of constructing proofs with theorem proving can be reduced when models are members of the same class, i.e., when their structure can be predicted. A final problem this work addresses is that of scale, i.e., the impracticality or even impossibility of testing every possible network configuration. To address this problem, the concept of self-similarity is introduced. A self-similar network has the property that it can be sufficiently represented by a smaller network; thus, proving the correctness of the smaller network suffices to prove the correctness of any self-similar network it represents.
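The model-from-observations step can be sketched as follows. The trace format, state names, and toy handshake are assumptions for illustration, not the dissertation's actual algorithms.

```python
# Fold observed traces of protocol events into a finite-state
# transition map, which can then be checked against desired properties.
def build_model(traces):
    transitions = {}
    for trace in traces:
        state = "INIT"
        for event, next_state in trace:
            transitions.setdefault((state, event), set()).add(next_state)
            state = next_state
    return transitions

# Two observed runs of a toy handshake (hypothetical data).
traces = [
    [("SYN", "SYN_SENT"), ("SYN_ACK", "ESTABLISHED")],
    [("SYN", "SYN_SENT"), ("RST", "CLOSED")],
]
model = build_model(traces)

# Property check: ESTABLISHED is only ever reached via a SYN_ACK event.
established_via = {e for (s, e), nxt in model.items() if "ESTABLISHED" in nxt}
print(sorted(established_via))  # ['SYN_ACK']
```

Because the model is built directly from monitored behavior, it represents the implementation as deployed, and testing becomes property checking on this structure.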