Alumni Dissertations and Theses

  • RANDOMIZED SEARCH OF GRAPHS IN LOG SPACE AND PROBABILISTIC COMPUTATION

    Author:
    Wen-Ju Cheng
    Year of Dissertation:
    2014
    Program:
    Computer Science
    Advisor:
    James Cox
    Abstract:


  • The Bilinear Brain: Bilinear Methods for EEG Analysis and Brain Computer Interfaces

    Author:
    Christoforos Christoforou
    Year of Dissertation:
    2009
    Program:
    Computer Science
    Advisor:
    Robert Haralick
    Abstract:

    Analysis of electro-encephalographic (EEG) signals has proven to be an extremely useful research tool for studying the neural correlates of behavior. Single-trial analysis of these signals is essential for the development of non-invasive Brain Computer Interfaces. Analysis of these signals is often expressed as a single-trial classification problem: the goal is to infer the underlying cognitive state of an individual using EEG signals alone. The high-dimensional space of EEG observations and the low signal-to-noise ratio (SNR), often -20 dB or less, as well as the inter-subject variability and the limited observations available for training, make single-trial classification of EEG an extremely challenging computational task. To address these challenges we introduce concepts from multi-linear algebra and incorporate EEG domain knowledge. More precisely, we formulate the problem in a matrix space and introduce a bilinear combination of a matrix to reduce the space dimensions; thus the title of this dissertation: "The Bilinear Brain". We also address the issue of inter-subject variability by defining a model that is partially subject-invariant. We develop two classification algorithms based on the bilinear model. The first, which we term Second Order Bilinear Discriminant Analysis (SOBDA), combines first-order and second-order statistics of the observation space. The second, which we term Bilinear Feature Based Discriminant (BFBD), addresses the issue of inter-subject variability. We evaluate our methods on both simulated and real human EEG data sets and show that our method outperforms state-of-the-art methods on different experimental paradigms.
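
    A minimal sketch of the bilinear dimensionality-reduction idea described above, with invented trial sizes and random (untrained) projection matrices; it is not the SOBDA or BFBD algorithm itself:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Simulated EEG: each trial is a channels-by-time matrix (illustrative sizes only).
    n_trials, n_channels, n_samples = 200, 32, 128
    X = rng.standard_normal((n_trials, n_channels, n_samples))
    y = rng.integers(0, 2, size=n_trials)             # hypothetical binary cognitive-state labels

    # Hypothetical low-rank spatial and temporal projection matrices (would be learned in practice).
    r_spatial, r_temporal = 4, 8
    U = rng.standard_normal((n_channels, r_spatial))  # spatial weights
    V = rng.standard_normal((n_samples, r_temporal))  # temporal weights

    def bilinear_features(trial):
        """Reduce a channels-by-time trial to a small matrix via U.T @ trial @ V, then flatten."""
        return (U.T @ trial @ V).ravel()

    features = np.stack([bilinear_features(x) for x in X])   # (n_trials, r_spatial * r_temporal)
    print(features.shape)
    ```

    In a real classifier the matrices U and V would be learned from the labeled trials rather than drawn at random; the point here is only how a bilinear combination shrinks the observation space before discriminant analysis.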

  • Hash Functions, Latin Squares and Secret Sharing Schemes

    Author:
    Chi Chum
    Year of Dissertation:
    2010
    Program:
    Computer Science
    Advisor:
    Xiaowen Zhang
    Abstract:

    A secret sharing scheme creates an effective method to safeguard a secret by dividing it among several participants. Since the original idea introduced by Shamir and Blakley in 1979, researchers have suggested a variety of threshold secret sharing schemes and other types of schemes. The first part of this thesis shows how to apply hash functions in secret sharing scheme designs. By using hash functions and the herding hashes technique, we first set up a (t + 1, n) threshold scheme which is perfect and ideal, and then extend it to schemes for any general access structure. The schemes can further be made verifiable if necessary. The secret can be recovered quickly thanks to the fast computation of the hash function. In particular, secret sharing schemes based on Latin squares are discussed. The practical hash functions used today, such as the SHA-1 and SHA-2 families, are iterative hash functions. Although there are many suggestions for improving the security of an iterative hash function, the general idea of processing the message block by block still makes possible many attacks that exploit the intermediate hash values. The second part of this thesis proposes a new hash function construction scheme that applies the randomize-then-combine technique, used in incremental hash functions, to the iterative hash construction to prevent those attacks.
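
    For background, a minimal sketch of the classic Shamir (t, n) threshold scheme that the abstract cites as the starting point of the field; this is ordinary polynomial secret sharing over a prime field, not the hash-function or Latin-square constructions the thesis proposes:

    ```python
    import random

    PRIME = 2**127 - 1  # a Mersenne prime used as the field modulus (illustrative choice)

    def make_shares(secret, t, n):
        """Split `secret` into n shares, any t of which can reconstruct it."""
        coeffs = [secret] + [random.randrange(PRIME) for _ in range(t - 1)]
        def f(x):
            return sum(c * pow(x, i, PRIME) for i, c in enumerate(coeffs)) % PRIME
        return [(x, f(x)) for x in range(1, n + 1)]

    def recover(shares):
        """Lagrange interpolation at x = 0 recovers the polynomial's constant term."""
        secret = 0
        for i, (xi, yi) in enumerate(shares):
            num, den = 1, 1
            for j, (xj, _) in enumerate(shares):
                if i != j:
                    num = (num * -xj) % PRIME
                    den = (den * (xi - xj)) % PRIME
            secret = (secret + yi * num * pow(den, PRIME - 2, PRIME)) % PRIME
        return secret

    shares = make_shares(secret=123456789, t=3, n=5)
    assert recover(shares[:3]) == 123456789   # any 3 of the 5 shares suffice
    ```

    Any t of the n shares reconstruct the secret, while fewer than t reveal nothing about it; the thesis replaces the polynomial machinery with hash-function evaluations to make recovery fast.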

  • Algorithms for Superiorization and their Applications to Image Reconstruction

    Author:
    Ran Davidi
    Year of Dissertation:
    2010
    Program:
    Computer Science
    Advisor:
    Gabor Herman
    Abstract:

    Computational tractability with limited computing resources is a major barrier for the ever-increasing problem sizes of constrained optimization models that seek a minimum of an objective function satisfying a set of constraints. On the other hand, there exist efficient and computationally much less demanding iterative methods for finding a feasible solution that only fulfills the constraints. These methods can handle problem sizes beyond which existing optimization algorithms cannot function. To bridge this gap we present a new concept called superiorization, envisioned methodologically as lying between optimization and feasibility seeking. It enables us to use efficient iterative methods to steer the iterates toward a point that is feasible and superior, but not necessarily optimal, with respect to the given objective/merit function. Using efficient iterative methods to do "superiorization" instead of "full constrained optimization" or only "feasibility" is a new tool for handling mathematical models that include constraints and a merit function. The aim of the superiorization methodology is to improve the computational treatment of such mathematical models so that we can reach solutions that are desirable from the point of view of the application at hand at a relatively small computational cost. The key to superiorization is our discovery that two principal prototypical algorithmic schemes, string-averaging projections and block-iterative projections, which include many projection methods, are bounded perturbation resilient. While stability of algorithms under perturbations is usually sought to cope with all kinds of imperfections in the data, here we take a proactive approach designed to extract specific benefits from the kind of stability that we term perturbation resilience. Superiorization uses perturbations proactively to reach feasible points that are superior, according to some criterion, to those we would reach without employing perturbations. In this work, we set forth the fundamental principle of the superiorization methodology, give some mathematical formulations, theorems and results, and show potential benefits in the field of image reconstruction from projections.
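
    A minimal sketch of the interplay the abstract describes between feasibility seeking and perturbations, using assumed toy data (random half-space constraints, a squared-norm merit function); it illustrates the general pattern of interleaving projection sweeps with bounded, shrinking perturbation steps, not the dissertation's string-averaging or block-iterative algorithms:

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    A = rng.standard_normal((20, 5))           # constraint normals for half-spaces a_i . x <= b_i
    x_true = rng.standard_normal(5)
    b = A @ x_true + 1.0                       # offsets chosen so the system is feasible

    def project_halfspace(x, a, beta):
        """Project x onto the half-space {y : a . y <= beta}."""
        violation = a @ x - beta
        return x if violation <= 0 else x - (violation / (a @ a)) * a

    def phi(x):                                # merit function to be superiorized
        return float(x @ x)

    x = rng.standard_normal(5)
    for k in range(200):
        # Perturbation step: a bounded, summable step in a descent direction of phi.
        d = -x / (np.linalg.norm(x) + 1e-12)
        x = x + 0.5 ** k * d
        # Feasibility-seeking step: one sweep of sequential half-space projections.
        for a_i, b_i in zip(A, b):
            x = project_halfspace(x, a_i, b_i)

    print("max constraint violation:", float(np.max(A @ x - b)), " phi:", phi(x))
    ```

    Because the perturbation sizes are summable, the projection sweeps still drive the iterate to feasibility, while the perturbations bias it toward points with a smaller merit value than plain feasibility seeking would deliver.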

  • Epistemic paradox and explicit modal logic

    Author:
    Walter Dean
    Year of Dissertation:
    2010
    Program:
    Computer Science
    Advisor:
    Sergei Artemov
    Abstract:

    The dissertation presents a unified treatment of what I refer to as the epistemic paradoxes (i.e. the Knower Paradox of Montague and Kaplan and Fitch's Paradox of Knowability) in the context of a quantified extension of Artemov's Logic of Proofs [LP]. The system adopted is a modified form of Fitting's Quantified Logic of Proofs [QLP] and contains not only so-called explicit modalities (i.e. statements of the form t:F with the intended interpretation "t denotes a proof of F") but also first-order quantifiers intended to range over proofs or informal justifications. This allows for a formalization of a justification-based notion of knowledge via statements of the form (exists x) x:F. The first chapter seeks to motivate the use of explicit modalities for reasoning about knowledge and informal provability relative to the tradition of epistemic logic. The second chapter contains a consolidated treatment of the syntax and arithmetic semantics of LP and QLP. A relational semantics for QLP is developed and several new results are presented. The third chapter focuses on the reconstruction of the Knower Paradox in QLP. The central observation is that by analyzing the K(x) predicate employed by Montague and Kaplan in terms of proof quantification, replacing K("F") with (exists x) x:F, we see that an additional logical principle is required for the derivation of the Knower. This principle, which is similar to a form of Universal Generalization for proofs anticipated by Gödel [1938a], is examined both proof-theoretically and through the use of an arithmetic semantics for QLP. The fourth chapter focuses on the reconstruction of the Knowability Paradox in QLP. The central observation is that the so-called Knowability Principle on which the paradox is based, i.e. the claim that if F is true, then it is possible to know F, is most appropriately formulated via the use of proof quantifiers as (exists x) x:F rather than via propositional possibility and knowledge operators.
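
    A schematic restatement, in LaTeX, of the two reformulations the abstract describes (assuming amsmath and amssymb for the symbols); the notation is an illustration of the abstract's informal statements, not a quotation from the dissertation:

    ```latex
    % Knower-style knowledge predicate reanalyzed via proof quantification:
    % the sentential predicate K("F") is replaced by an existential over explicit proof terms.
    \[
      K(\text{``}F\text{''}) \;\longmapsto\; \exists x\,(x \colon F)
    \]
    % Knowability Principle: the modal formulation versus the explicit,
    % proof-quantifier formulation favored in the fourth chapter.
    \[
      F \rightarrow \Diamond K F
      \qquad\text{versus}\qquad
      F \rightarrow \exists x\,(x \colon F)
    \]
    ```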

  • A SCALABLE AGENT-BASED SYSTEM FOR NETWORK FLOW RECONSTRUCTION WITH APPLICATIONS TO DETERMINING THE STRUCTURE AND DYNAMICS OF DISTRIBUTED DENIAL OF SERVICE ATTACKS

    Author:
    Omer Demir
    Year of Dissertation:
    2010
    Program:
    Computer Science
    Advisor:
    Bilal Khan
    Abstract:

    In this thesis we describe a novel agent-based architecture for flow reconstruction, and demonstrate how it can be applied to obtain a description of the structure and dynamics of distributed denial of service (DDoS) attacks. We show that the system can operate in a decentralized manner, effectively providing a description of the structure and dynamics of traffic flows even with very modest levels of agent deployment. By providing structural information, the system facilitates the execution of DDoS mitigation strategies close to the actual sources of attack traffic. Through simulations, we validate the efficacy with which the system is able to discover traffic source locations and the structure of traffic flows. Through packet-level simulations, we show favorable convergence properties for the system. We describe several schemes for selecting the precise links on which agents should be placed, and show that these placement schemes yield marked improvements in system performance and scalability. Finally, we introduce a prototype attacker localization scheme called SLANT, which combines information from a sequence of attacks on different victims in order to further isolate traffic source locations. SLANT shows promise for using data from multiple attacks to determine the exact locations of the attackers, even at moderate agent deployment levels.
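
    A minimal, centralized toy sketch of merging per-link agent observations into a flow structure that points back toward traffic sources; the router names, report format, and merging rule are invented for illustration and are not the agent architecture or the SLANT scheme described above:

    ```python
    from collections import defaultdict

    # Hypothetical per-agent reports: (link tail, link head, observed packets/sec toward the victim).
    agent_reports = [
        ("r3", "r1", 400), ("r4", "r1", 350),   # two upstream branches feeding r1
        ("r1", "victim", 750),
        ("r5", "r2", 120), ("r2", "victim", 120),
    ]

    def reconstruct_flow_tree(reports):
        """Merge per-link observations into a child -> parent map plus per-link volumes."""
        tree, volume = {}, defaultdict(int)
        for tail, head, pps in reports:
            tree[tail] = head
            volume[(tail, head)] += pps
        return tree, volume

    tree, volume = reconstruct_flow_tree(agent_reports)
    leaves = set(tree) - set(tree.values())          # nodes no agent reports traffic arriving from
    print("suspected source-side nodes:", sorted(leaves))
    print("heaviest monitored link:", max(volume, key=volume.get))
    ```

    The structural output (which branches carry attack traffic, and where the monitored flow begins) is what lets mitigation be pushed toward the source side rather than applied at the victim.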

  • Simplifying Network Testing: Techniques and Approaches Towards Automating and Simplifying the Testing Process

    Author:
    Constantinos Djouvas
    Year of Dissertation:
    2009
    Program:
    Computer Science
    Advisor:
    Nancy Griffeth
    Abstract:

    The dramatic increase in companies and consumers that depend heavily on networks mandates the creation of reliable network devices. Such reliability can be achieved by testing both the conformance of an implementation's individual protocols to their corresponding specifications and the interaction between different protocols. With the increase in computing power and the advances in network testing research, one would expect efficient approaches for testing network implementations to be available. However, such approaches are not available, for reasons including the complexity of network protocols, the need for different protocols to interoperate, the limited information on implementations due to proprietary code, and the potentially unbounded size of the network to be tested. To address these issues, a novel technique is proposed that improves the quality of the test while reducing the time and effort network testing requires. The proposed approach achieves these goals by automating the process of creating models to be used for validating an implementation. More precisely, it uses observations acquired by monitoring the behavior of the implementation to automatically generate models. In this way, generated models can accurately represent the actual implementation, and testing is reduced to the problem of verifying that certain properties hold on the generated model. This work presents algorithms that efficiently create models from observations and shows their effectiveness through three different examples. In addition, the difficulty of validating models using theorem provers is addressed: techniques available in the literature are utilized, and approaches that assist testers with completing proofs are proposed. Results suggest that the complexity of constructing proofs with a theorem prover can be reduced when models are members of the same class, i.e., when their structure can be predicted. A final problem this work addresses is that of scale, i.e., the impracticality or even impossibility of testing every possible network configuration. To address this problem, the concept of self-similarity is introduced. A self-similar network has the property that it can be sufficiently represented by a smaller network; thus, proving the correctness of the smaller network is sufficient for proving the correctness of any self-similar network that it represents.
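
    A minimal sketch of deriving a model from observed behavior and then checking a property against it; the traces, event names, and property below are hypothetical, and the dissertation's model-generation algorithms are considerably more involved:

    ```python
    from collections import defaultdict

    # Hypothetical observed traces of a device's protocol events.
    traces = [
        ["INIT", "HELLO", "ESTABLISHED", "CLOSE"],
        ["INIT", "HELLO", "RETRY", "HELLO", "ESTABLISHED", "CLOSE"],
        ["INIT", "HELLO", "ESTABLISHED", "DATA", "DATA", "CLOSE"],
    ]

    def build_model(traces):
        """Treat each event name as a state and record every observed transition."""
        transitions = defaultdict(set)
        for trace in traces:
            for a, b in zip(trace, trace[1:]):
                transitions[a].add(b)
        return transitions

    def check_property(transitions, must_precede, target):
        """Verify that `target` is only ever entered from states in `must_precede`."""
        return all(target not in nxt or state in must_precede
                   for state, nxt in transitions.items())

    model = build_model(traces)
    print(check_property(model, must_precede={"HELLO"}, target="ESTABLISHED"))  # True for these traces
    ```

    Because the model is generated from what the implementation actually did, verifying properties on it amounts to testing the implementation's observed behavior rather than a hand-written specification.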

  • PROTRU: Leveraging Provenance to Enhance Network Trust in a Wireless Sensor Network

    Author:
    Gulustan Dogan
    Year of Dissertation:
    2013
    Program:
    Computer Science
    Advisor:
    Ted Brown
    Abstract:

    Trust can be an important component of wireless sensor networks for establishing the believability of the produced data, and historical information is a crucial asset in deciding the trustworthiness of that data. A node's trust can change over time after its initial deployment for various reasons, such as energy loss, environmental conditions, or exhausted resources. Provenance can play a significant role in supporting the calculation of information trust by recording the data flow and snapshots of the network. Furthermore, provenance can be used to register previous trust records and other information such as node type, data type, node location, and the average of the historical data. We introduce a node-level trust-enhancing architecture for sensor networks using provenance. Our network is cognitive in the sense that our system reacts automatically upon detecting anomalies. Through simulations we verify our model and show that our approach can provide substantial enhancements in information trust compared to traditional network approaches.
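
    A minimal sketch, under assumed scoring rules, of how a provenance-style historical record could feed a node-level trust value; the class name, tolerance, and update rule are invented for illustration and are not the PROTRU design itself:

    ```python
    from dataclasses import dataclass, field

    @dataclass
    class NodeProvenance:
        node_id: str
        readings: list = field(default_factory=list)    # historical data values kept as provenance
        trust: float = 0.8                               # assumed initial deployment trust

        def historical_average(self):
            return sum(self.readings) / len(self.readings) if self.readings else 0.0

        def update(self, value, tolerance=5.0, step=0.05):
            """Record the reading and adjust trust based on deviation from the historical average."""
            if self.readings:
                deviation = abs(value - self.historical_average())
                delta = step if deviation <= tolerance else -step
                self.trust = min(1.0, max(0.0, self.trust + delta))
            self.readings.append(value)

    node = NodeProvenance("sensor-17")
    for reading in [20.1, 20.4, 19.9, 45.0, 20.2]:       # the 45.0 is an anomalous spike
        node.update(reading)
    print(round(node.trust, 2))
    ```

    Keeping the historical average inside the provenance record is what lets the network react to anomalies automatically, since each new reading can be scored against the node's own recorded past.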

  • Multiscale Feature Extraction and Matching with Applications to 3D Face Recognition and 2D Shape Warping

    Author:
    Hadi Fadaifard
    Year of Dissertation:
    2011
    Program:
    Computer Science
    Advisor:
    George Wolberg
    Abstract:

    Shape matching is defined as the process of computing a dissimilarity measure between shapes. Partial 3D shape matching refers to a more difficult subproblem that deals with measuring the dissimilarity between partial regions of 3D objects. Despite a great deal of attention drawn to 3D shape matching in the fields of computer vision and computer graphics, partial shape matching applied to objects of arbitrary scale remains a difficult problem. This work addresses the problem of partial 3D shape matching with no assumptions about the scale factors of the input objects. We introduce a multiscale feature extraction and matching technique that employs a new scale-space based representation for 3D surfaces. The representation is shown to be insensitive to noise, computationally efficient, and capable of automatic scale selection. Applications of the proposed representation are presented for automatic 3D surface registration, face detection, and face recognition. Test results involving two well-known 3D face datasets consisting of several thousand scanned human faces demonstrate that the performance of our recognition system is superior to that of competing methods. Estimating differential surface attributes, such as normals and curvatures, plays an important role in the performance of 3D matching systems. Noise in the data, however, poses the main challenge in estimating these attributes. Surface reconstruction methods, such as Moving Least Squares (MLS), help in minimizing the effects of noise. In this work, we also review the MLS approach for surface reconstruction, and show how the input noise affects the estimated differential attributes of the surface. We demonstrate how these results, together with statistical hypothesis testing, may be used to determine the smallest neighborhood size needed to estimate surface attributes. MLS reconstruction and the discrete Laplace-Beltrami operator are well-known geometric tools that have a wide range of applications. In addition to their prominent use in our 3D work, we describe a novel use of these tools in a 2D shape deformation system for retargeting garments among arbitrary poses.
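
    A minimal one-dimensional analogue of the scale-space idea described above, assuming NumPy and SciPy are available: a signal is smoothed at several scales, and the strongest scale-normalized second-derivative response selects a scale per point. The dissertation's representation operates on 3D surfaces, not 1D signals, so this is illustration only:

    ```python
    import numpy as np
    from scipy.ndimage import gaussian_filter1d

    rng = np.random.default_rng(2)
    signal = np.sin(np.linspace(0, 6 * np.pi, 400)) + 0.1 * rng.standard_normal(400)

    sigmas = [1, 2, 4, 8, 16]                         # assumed scale levels
    responses = np.stack([
        sigma ** 2 * np.abs(gaussian_filter1d(signal, sigma, order=2))   # scale-normalized |2nd derivative|
        for sigma in sigmas
    ])

    best_scale_index = responses.argmax(axis=0)       # per-point automatic scale selection
    print(np.bincount(best_scale_index, minlength=len(sigmas)))
    ```

    The scale normalization (multiplying the response by sigma squared) is what makes responses at different smoothing levels comparable, which is the mechanism behind automatic scale selection.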

  • Automatic Readability Assessment

    Author:
    Lijun Feng
    Year of Dissertation:
    2010
    Program:
    Computer Science
    Advisor:
    Matt Huenerfauth
    Abstract:

    We describe the development of an automatic tool to assess the readability of text documents. Our readability assessment tool predicts elementary school grade levels of texts with high accuracy. The tool is developed using supervised machine learning techniques on text corpora annotated with grade levels and other indicators of reading difficulty. Various independent variables or features are extracted from texts and used for automatic classification. We systematically explore different feature inventories and evaluate the grade-level prediction of the resulting classifiers. Our evaluation comprises well-known features at various linguistic levels from the existing literature, such as those based on language modeling, part-of-speech, syntactic parse trees, and shallow text properties, including classic readability formulas like the Flesch-Kincaid Grade Level formula. We focus in particular on discourse features, including three novel feature sets based on the density of entities, lexical chains, and coreferential inference, as well as features derived from entity grids. We evaluate and compare these different feature sets in terms of accuracy and mean squared error by cross-validation. Generalization to different corpora or domains is assessed in two ways. First, using two corpora of texts and their manually simplified versions, we evaluate how well our readability assessment tool can discriminate between original and simplified texts. Second, we measure the correlation between grade levels predicted by our tool, expert ratings of text difficulty, and estimated latent difficulty derived from experiments involving adult participants with mild intellectual disabilities. The applications of this work include selection of reading material tailored to varying proficiency levels, ranking of documents by reading difficulty, and automatic document summarization and text simplification.
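
    A minimal sketch of one of the shallow features mentioned above, the Flesch-Kincaid Grade Level formula, using a crude syllable approximation; the dissertation's feature inventories and classifiers go far beyond this single score:

    ```python
    import re

    def count_syllables(word):
        """Crude approximation: count groups of consecutive vowels."""
        return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

    def flesch_kincaid_grade(text):
        """Flesch-Kincaid Grade Level: 0.39*(words/sentences) + 11.8*(syllables/words) - 15.59."""
        sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
        words = re.findall(r"[A-Za-z']+", text)
        syllables = sum(count_syllables(w) for w in words)
        return (0.39 * len(words) / len(sentences)
                + 11.8 * syllables / len(words)
                - 15.59)

    sample = "The cat sat on the mat. It was a sunny day and the cat was happy."
    print(round(flesch_kincaid_grade(sample), 2))
    ```

    In a grade-level classifier, a score like this would be just one entry in a feature vector alongside the language-model, syntactic, and discourse features the abstract describes.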