Alumni Dissertations and Theses

  • A Symbolic Exploration of the Joint State Space and the Underlying Argumentation-based Reasoning Processes for Multiagent Planning

    Author:
    Yuqing Tang
    Year of Dissertation:
    2012
    Program:
    Computer Science
    Advisor:
    Simon Parsons
    Abstract:

    Coming up with coherent behaviors for a team of agents in a non-deterministic environment is a complex process. The problem is further complicated when information regarding the environment is defeasible (new information can disqualify old information) while at the same time being distributed, uncertain, and possibly inconsistent. In this dissertation, I propose an approach based on symbolic model checking techniques to implicitly explore the state space of a system of agents and form a coherent set of joint behaviors for them, with uncertainty captured as non-determinism in the state space. The exploration is driven by the individual agents' defeasible, factored information about the dynamics of the state space, directed towards a set of possibly inconsistent goals, and the process can be interpreted in argumentation-based terms. Both the exploration of the state space and the argumentation-based reasoning are carried out as sequences of logical operations in the logic of Quantified Boolean Formulae (QBFs) and its model representation, Binary Decision Diagrams (BDDs). All the algorithms that carry out these processes require only a polynomial number of QBF operations in the number of inputs and the number of Boolean variables in the domain. Although solving QBFs in general is PSPACE-complete, practical QBF solvers and BDD implementations admit techniques and application-specific heuristics that make many applications tractable. In summary, the approaches proposed in this dissertation provide a formal system and implementation techniques that integrate non-monotonic reasoning and non-deterministic planning into defeasible, decentralized multiagent planning, so that coherent behaviors can be formed for a system of agents in a very demanding setting where the information regarding the environment, and the goals regarding what to do, are distributed, incomplete, possibly inconsistent, and uncertain.
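
    The skeleton of such an exploration is a least-fixed-point reachability computation over the joint state space. The Python sketch below illustrates only that skeleton: plain sets stand in for the BDD-encoded state sets, and the toy transition relation is an invented example, not the argumentation-based machinery of the dissertation.

        # Fixed-point reachability, the core loop of symbolic exploration.
        # In the dissertation, state sets are encoded as BDDs and the image
        # is computed with QBF/BDD operations; Python sets stand in here.

        def image(states, transitions):
            """One-step successors of `states` under a relation given as
            a set of (source, target) pairs."""
            return {t for (s, t) in transitions if s in states}

        def reachable(initial, transitions):
            """All states reachable from `initial` (least fixed point)."""
            reached = set(initial)
            frontier = set(initial)
            while frontier:
                frontier = image(frontier, transitions) - reached
                reached |= frontier
            return reached

        # Toy joint state space for two agents: states are (pos_a, pos_b).
        T = {((0, 0), (1, 0)), ((0, 0), (0, 1)),
             ((1, 0), (1, 1)), ((0, 1), (1, 1))}
        print(reachable({(0, 0)}, T))  # all four joint states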

  • 3D SCENE MODELING AND UNDERSTANDING FROM IMAGE SEQUENCES

    Author:
    Hao Tang
    Year of Dissertation:
    2013
    Program:
    Computer Science
    Advisor:
    Zhigang Zhu
    Abstract:

    A new method for 3D modeling is proposed, which generates a content-based 3D mosaic (CB3M) representation for long video sequences of 3D, dynamic urban scenes captured by a camera on a mobile platform. In the first phase, a set of parallel-perspective (pushbroom) mosaics with varying viewing directions is generated to capture both the 3D and dynamic aspects of the scene under the camera coverage. In the second phase, a unified patch-based stereo matching algorithm is applied to extract parametric representations of the color, structure, and motion of the dynamic and/or 3D objects in urban scenes, where many planar surfaces exist. Multiple pairs of stereo mosaics are used to facilitate reliable stereo matching, occlusion handling, accurate 3D reconstruction, and robust moving target detection. The outcome of this phase is a CB3M representation: a highly compressed visual representation of a dynamic 3D scene that carries object contents with both 3D and motion information. In the third phase, a multi-layer scene understanding algorithm is proposed, resulting in a planar surface model for higher-level object representations. Experimental results are given for both simulated and several different real video sequences of large-scale 3D scenes to show the accuracy and effectiveness of the representation. We also show that the patch-based stereo matching algorithm and the CB3M representation generalize to 3D modeling with perspective views, using either a single camera or a stereovision head carried by a ground mobile platform or a pedestrian. Applications of the proposed method include airborne or ground video surveillance, 3D urban scene modeling, traffic surveys, transportation planning, and visual aids for the perception and navigation of blind people.
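
    The basic operation inside any patch-based stereo matcher is comparing a reference patch against candidate patches along the matching direction. Below is a minimal sketch of generic block matching over rectified grayscale images; it is a stand-in to show the idea, not the thesis's unified algorithm over stereo mosaics.

        import numpy as np

        def match_patch(left, right, row, col, patch=5, max_disp=32):
            """Disparity of the patch centered at (row, col) in `left`,
            found by minimizing sum-of-squared-differences along the same
            row of `right`. Generic block matching, for illustration."""
            h = patch // 2
            ref = left[row - h:row + h + 1, col - h:col + h + 1].astype(float)
            best_d, best_cost = 0, np.inf
            for d in range(max_disp):
                c = col - d
                if c - h < 0:
                    break  # candidate window would leave the image
                cand = right[row - h:row + h + 1, c - h:c + h + 1].astype(float)
                cost = np.sum((ref - cand) ** 2)
                if cost < best_cost:
                    best_d, best_cost = d, cost
            return best_d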

  • Efficient Communication Through Structured Node Labeling in Peer-to-Peer Networks

    Author:
    Andi Toce
    Year of Dissertation:
    2013
    Program:
    Computer Science
    Advisor:
    Abbe Mowshowitz
    Abstract:

    Peer-to-peer (P2P) networks have become increasingly popular in recent years as administrators choose to move away from monolithic centralized networks. While these networks offer significant advantages in a variety of applications, they face a new set of challenges, from communication efficiency to network vulnerability and security. Numerous researchers have proposed a variety of solutions to improve the quality of service in P2P networks. A critical factor in maintaining an acceptable level of service quality is the efficiency and reliability of communication among nodes. One way of improving communication is to assign identifiers (labels) to each node and then use these labels to facilitate and improve message routing. This work introduces two labeling schemata for P2P networks. Each participating node is assigned a label set. Labels are then used to determine node positions within an engineered logical overlay and to identify routing paths during communication. We prove that the assignment of labels reduces the overall cost of communication, thus saving valuable network resources. These theoretical findings are confirmed by experimental results with randomly selected P2P networks of various sizes. Detailed statistics on the performance of each protocol are provided, clearly showing the practical utility of the labeling approach.
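
    One classic flavor of label-based routing, sketched below under assumptions of our own (a tree overlay whose labels are root-to-node paths), routes by climbing to the longest common prefix of the two labels and then descending; the two schemata of the dissertation are more elaborate than this.

        def route(src_label, dst_label):
            """Route between two nodes in a tree overlay whose labels are
            root-to-node paths (tuples of child indices): climb to the
            longest common prefix, then descend. Returns the hop labels."""
            k = 0  # length of the longest common prefix
            while (k < min(len(src_label), len(dst_label))
                   and src_label[k] == dst_label[k]):
                k += 1
            up = [src_label[:i] for i in range(len(src_label) - 1, k - 1, -1)]
            down = [dst_label[:i] for i in range(k + 1, len(dst_label) + 1)]
            return [src_label] + up + down

        # Example: nodes labeled by paths in a tree rooted at ().
        print(route((0, 1, 1), (0, 2)))  # [(0, 1, 1), (0, 1), (0,), (0, 2)]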

  • AUTOMATED LEARNER CLASSIFICATION THROUGH INTERFACE EVENT STREAM AND SUMMARY STATISTICS ANALYSIS

    Author:
    Edgar Troudt
    Year of Dissertation:
    2014
    Program:
    Computer Science
    Advisor:
    Danny Kopec
    Abstract:

    Reading comprehension is predominantly measured through multiple-choice examinations. Yet, as we discuss in this thesis, such exams are often criticized for their inaccuracies. With the advent of "big data" and the rise of Intelligent Tutoring Systems (ITS), increasing focus will be placed on finding dynamic, automated ways of measuring students' aptitude and progress. This work takes a first step towards automated learner classification based on the use of graphic organizers. We address the following specific problem experimentally: How effectively can we measure task comprehension via human translation of written text into a visual representation on a computer? Can an algorithm employ data from user interface (UI) interaction during the problem-solving process to classify the user's abilities? Specifically, from these data we show machine-learning predictions of what a human expert would say about: (1) the integrity of the visual representation produced; (2) the level of logical problem-solving strategy the user applies to the exercise; and (3) the level of effort the user gives to the exercise. The core of the experiment is a software system that allows a human subject to read a preselected text and then "draw" a diagram by manipulating icons on a grid-canvas using standard transforms.
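
    A minimal sketch of the classification step follows. The per-session summary statistics (feature names and values) are invented for illustration; the thesis's actual event-stream features and expert labels would take their place.

        import numpy as np
        from sklearn.ensemble import RandomForestClassifier

        # Hypothetical per-session summary statistics from the UI event
        # stream (names are illustrative, not from the thesis): icon moves,
        # deletions, mean pause between events (s), total time (s).
        X = np.array([
            [14,  2, 3.1, 240.0],
            [55, 20, 0.8, 610.0],
            [ 9,  1, 5.2, 150.0],
            [40, 11, 1.1, 480.0],
        ])
        # Expert labels for the "level of effort" dimension: 0 low, 1 high.
        y = np.array([0, 1, 0, 1])

        clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
        print(clf.predict([[30, 8, 1.5, 400.0]]))  # predicted effort class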

  • Algorithms and Hypothesis Selection in Dynamic Homology Phylogenetic Analysis

    Author:
    Andres Varon
    Year of Dissertation:
    2010
    Program:
    Computer Science
    Advisor:
    Amotz Bar-Noy
    Abstract:

    Phylogeny and alignment estimation are two important and closely related biological problems. In the typical alignment problem, insertions, deletions, and substitutions must be inferred in order to understand the evolutionary patterns of life. With the technological advances of the last 20 years, phylogenetic analyses will grow to include complete chromosomes and genomes. With these data sets, not only insertions, deletions, and substitutions, but also rearrangements such as duplications, translocations, transpositions, and inversions must be taken into consideration. In this study, phylogenetic analysis is explored at three different levels. At the first level, new heuristic algorithms for the joint estimation of phylogenies and alignments under the Maximum Parsimony optimality criterion are described. Our experimental study showed that the new algorithms perform dramatically better than previous heuristics. These new algorithms will allow biologists to analyze larger data sets in shorter periods of time. At the second level, new and existing algorithms were implemented in the computer program POY version 4. POY has had a significant impact on the biology community, and is used by hundreds of biologists around the world. POY will serve as a platform for long-term research in both algorithm engineering and biology. At the third level, the problem of parameter and model selection in complete chromosome analyses is explored. We propose and describe the use of Kolmogorov Complexity (KC) as a unifying optimality criterion for phylogenetic analysis. Our evaluation using simulations showed that KC correctly identifies the phylogeny and model with high frequency. These results are evidence that KC is well suited to the difficulties posed by the phylogenetic analysis of complete genomes.
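
    For the parsimony criterion itself, the standard Fitch (1971) small-parsimony algorithm counts the substitutions a fixed tree requires for one aligned character; a minimal sketch is below. The dissertation's dynamic-homology setting additionally infers the alignment, which this sketch assumes as given.

        def fitch(tree, leaf_states, root="root"):
            """Small-parsimony count for one character on a rooted binary
            tree. `tree` maps internal node -> (left, right); leaves map to
            their states in `leaf_states`. Returns (state set, cost)."""
            def visit(node):
                if node in leaf_states:
                    return {leaf_states[node]}, 0
                left, right = tree[node]
                ls, lc = visit(left)
                rs, rc = visit(right)
                inter = ls & rs
                if inter:  # children agree: no extra substitution
                    return inter, lc + rc
                return ls | rs, lc + rc + 1
            return visit(root)

        tree = {"root": ("x", "y"), "x": ("A", "B"), "y": ("C", "D")}
        states = {"A": "G", "B": "T", "C": "T", "D": "T"}
        print(fitch(tree, states))  # ({'T'}, 1): one substitution suffices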

  • GEOMETRIC SEPARATION AND PACKING PROBLEMS

    Author:
    Ivo Vigan
    Year of Dissertation:
    2015
    Program:
    Computer Science
    Advisor:
    Peter Brass
    Abstract:

    The first part of this thesis investigates combinatorial and algorithmic aspects of geometric separation problems in the plane. In such a setting one is given a set of points and a set of separators, such as lines, line segments, or disks. The goal is to select a small subset of those separators such that every path between any two points is intersected by at least one separator. We first look at several problems which arise when one is given a set of points and a set of unit disks embedded in the plane, and the goal is to separate the points using a small subset of the given disks. Next, we focus on a separation problem involving only one region: given a region in the plane bounded by a piecewise linear closed curve, such as a fence, place a small number of guards inside the fenced region such that wherever an intruder cuts through the fence, the closest guard is at most distance one away. Restricting the separating objects to be lines, we investigate combinatorial aspects which arise when we use them to pairwise separate a set of points in the plane; we then generalize the notion of separability to arbitrary sets and present several enumeration results. Lastly, we investigate a packing problem with a non-convex shape in R^3. We show that R^3 can be packed at a density of 0.222 with tori of major radius one and minor radius going to zero. Furthermore, we show that the same torus arrangement yields the asymptotically optimal number of pairwise linked tori.
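
    Selecting few separators is naturally attacked greedily, set-cover style. The sketch below (helper names are ours) is a deliberately simplified stand-in: it only requires the straight segment between each point pair to be blocked by a chosen disk, whereas the thesis's separation criterion is topological (every path between the points must be blocked).

        import math

        def blocks(disk, p, q, samples=64):
            """Crude test: does `disk` = (cx, cy, r) meet segment pq?
            Sampled for simplicity; an exact segment-disk test would do."""
            (cx, cy, r) = disk
            for i in range(samples + 1):
                t = i / samples
                x = p[0] + t * (q[0] - p[0])
                y = p[1] + t * (q[1] - p[1])
                if math.hypot(x - cx, y - cy) <= r:
                    return True
            return False

        def greedy_separators(points, disks):
            """Greedy set cover: repeatedly pick the disk blocking the most
            still-unblocked point pairs."""
            pairs = {(i, j) for i in range(len(points))
                     for j in range(i + 1, len(points))}
            chosen = []
            while pairs:
                best = max(disks, key=lambda d: sum(
                    blocks(d, points[i], points[j]) for (i, j) in pairs))
                covered = {(i, j) for (i, j) in pairs
                           if blocks(best, points[i], points[j])}
                if not covered:
                    break  # remaining pairs cannot be separated
                chosen.append(best)
                pairs -= covered
            return chosen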

  • Timed Modal Epistemic Logic

    Author:
    Ren-June Wang
    Year of Dissertation:
    2012
    Program:
    Computer Science
    Advisor:
    Sergei Artemov
    Abstract:

    This thesis has three parts. The first part is a survey of epistemic logic. Epistemic logic was first introduced by philosophers, and later found applications in fields such as Computer Science and Economics. The survey covers both the philosophical debates and the applications of epistemic logic, followed by a discussion of the logical omniscience problem. The second part introduces a new logical framework, timed Modal Epistemic Logic (tMEL). tMEL extends ordinary modal epistemic logic (MEL) by adding numerical labels to knowledge statements to indicate when each statement is known. We argue that a logical framework that reasons about both knowledge and the time of reasoning can help resolve the problem of logical omniscience, and that tMEL serves well as a logically non-omniscient epistemic system. Finally, we discuss the syntactical relations between MEL, tMEL, and Justification Logic, from which the study of MEL originated. Our focus is on the relations between axiomatic proofs in these logical frameworks. We first identify a proper subclass of modal logic proofs, called non-circular proofs, and prove that this class is complete. We then show that every non-circular MEL proof can be turned into a tMEL proof by finding suitable numerical labels, and prove that there is a two-way translation between proofs in tMEL and Justification Logic. Combining these results, a formal connection between non-circular proofs and proofs in Justification Logic is established, and the whole procedure gives an alternative algorithm for the realization of modal logic theorems in Justification Logic.
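
    To give the flavor of the labeling (the notation below is illustrative, not necessarily the thesis's own): where MEL asserts K(phi), "phi is known", tMEL asserts t : K(phi), "phi is known at time t", and a statement derived from known premises only becomes known after the reasoning step that derives it, for instance:

        t : K(phi),   t : K(phi -> psi)
        --------------------------------
                t+1 : K(psi)

    Under such a reading no agent is logically omniscient at any fixed time: the consequences of what is known at time t become known only through further reasoning steps.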

  • AN ADAPTIVE AND INTEGRATED MULTIMODAL SENSING AND PROCESSING FRAMEWORK FOR LONG RANGE MOVING OBJECT DETECTION AND CLASSIFICATION

    Author:
    Tao Wang
    Year of Dissertation:
    2013
    Program:
    Computer Science
    Advisor:
    Zhigang Zhu
    Abstract:

    In applications such as surveillance, inspection, and traffic monitoring, long-range detection and classification of targets (vehicles, humans, etc.) is a highly desired feature of a sensing system. A single modality can no longer provide the required performance, owing to the low resolutions, noisy sensor signals, and environmental factors that come with large sensing distances. Multimodal sensing and processing, on the other hand, can provide complementary information from heterogeneous sensor modalities, such as audio, visual, and range sensors. However, there is a lack of effective sensing mechanisms and systematic approaches for multimodal sensing and processing. In this thesis, we describe a systematic framework for Adaptive and Integrated Multimodal Sensing and Processing (hereafter, the AIM-SP framework) that integrates novel multimodal long-range sensors, adaptive feature selection, and learning-based object detection and classification. Based on the AIM-SP framework, we make three unique contributions. First, we have designed a novel multimodal sensor system called Vision-Aided Automated Vibrometry (VAAV), which is capable of automatically obtaining visual, range, and acoustic signatures for moving object detection at a large distance. Second, multimodal data acquired from multiple sensing sources are integrated and represented in a Multimodal Temporal Panorama (MTP) for easy alignment and fast labeling, and the accuracy of target detection can be improved by using multiple modalities. Further, a visual reconstruction method is developed to remove occlusions, motion blur, and perspective distortions of moving vehicles. Third, with various types of features extracted from aligned multimodal samples, we address feature modality selection using two approaches: multi-branch sequential-based feature searching (MBSF) and boosting-based feature learning (BBFL).
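
    The sketch below shows plain sequential forward selection, the greedy core that sequential feature-searching schemes build on; MBSF's multi-branch search and BBFL's boosting are not reproduced here. The `score` function (for example, cross-validated accuracy of a classifier restricted to the chosen features) is assumed to be supplied by the caller and to accept an empty feature list.

        def forward_select(features, score, k):
            """Sequential forward selection: greedily grow a feature set,
            at each step adding the feature whose inclusion most improves
            `score`. A multi-branch variant would keep several candidate
            sets alive instead of this single greedy path."""
            selected = []
            remaining = list(features)
            while remaining and len(selected) < k:
                best = max(remaining, key=lambda f: score(selected + [f]))
                if score(selected + [best]) <= score(selected):
                    break  # no remaining feature improves the score
                selected.append(best)
                remaining.remove(best)
            return selected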

  • SYSTEMATIC COMPARISON OF CROSS-LINGUAL PROJECTION TECHNIQUES FOR LOW-DENSITY NLP UNDER STRICT RESOURCE CONSTRAINTS

    Author:
    Joshua Waxman
    Year of Dissertation:
    2014
    Program:
    Computer Science
    Advisor:
    Matt Huenerfauth
    Abstract:

    The field of low-density NLP is often approached from an engineering perspective, and evaluations are typically haphazard, considering different architectures, different languages, and different available resources, without a systematic comparison. The resulting architectures are then tested on the unique corpus and language for which the approach was designed. This makes it difficult to evaluate which approach is truly the "best," or which approaches are best for a given language. In this dissertation, several state-of-the-art architectures and approaches to low-density language Part-Of-Speech tagging are reimplemented; all of these techniques exploit a relationship between a high-density (HD) language and a low-density (LD) language. As a novel contribution, a testbed is created using a representative sample of seven HD-LD language pairs, all drawn from the same massively parallel corpus, Europarl, and selected for their particular linguistic features. With this testbed in place, never-before-possible comparisons are conducted to evaluate which broad approach performs best for particular language pairs, and to investigate whether particular language features should suggest a particular NLP approach. A survey of the field suggested some unexplored approaches with the potential to yield better performance, be quicker to implement, and require less intensive linguistic resources. Under strict resource limitations, which are typical of low-density NLP environments, these characteristics are important. The approaches investigated in this dissertation are each a form of language-ifier, which modifies an LD corpus to be more like an HD corpus or, alternatively, modifies an HD corpus to be more like an LD corpus, prior to supervised training. Four variations of the language-ifier design, each relying on relatively few linguistic resources, are implemented and evaluated in this dissertation: lexical replacement, affix replacement, cognate replacement, and exemplar replacement. Based on linguistic properties of the languages drawn from the Europarl corpus, predictions were made as to which prior and novel approaches would be most effective for languages with specific linguistic properties, and these predictions were evaluated systematically on the testbed of languages. The results of this dissertation serve as guidance for future researchers who must select an appropriate cross-lingual projection approach (and a high-density language from which to project) for a given low-density language. Finally, although all the languages drawn from the Europarl corpus are actually HD, for the sake of the evaluation testbed certain languages are treated as if they were LD (ignoring any available HD resources). To evaluate how the various approaches perform on an actual LD language, a case study was conducted in which part-of-speech taggers were implemented for Tajiki, harnessing linguistic resources from a related HD language, Farsi, using all of the prior and novel approaches investigated in this dissertation. Observations from this case study are documented so that future researchers can anticipate what implementing NLP tools for an LD language might involve under the strict resource limitations considered in this dissertation.
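
    Lexical replacement, the simplest of the four language-ifier variations, amounts to mapping tokens through a bilingual lexicon before supervised training. The sketch below uses a made-up toy lexicon and illustrates the HD-to-LD direction; a real lexicon would come from the limited HD-LD resources available.

        # Map HD-corpus tokens through a bilingual lexicon so the tagged
        # corpus looks more like the LD language before training a tagger.
        lexicon = {"the": "la", "white": "blanca", "house": "casa"}  # toy

        def languageify(token, lexicon):
            """Replace a token that has a lexicon entry; leave the rest."""
            return lexicon.get(token.lower(), token)

        tagged_hd = [("the", "DET"), ("white", "ADJ"), ("house", "NOUN")]
        tagged_ld_like = [(languageify(w, lexicon), t) for (w, t) in tagged_hd]
        print(tagged_ld_like)  # [('la', 'DET'), ('blanca', 'ADJ'), ('casa', 'NOUN')]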

  • A Parametrization-Based Surface Reconstruction System for Triangular Mesh Simplification with Application to Large Scale Scenes

    Author:
    Adriana Wise
    Year of Dissertation:
    2014
    Program:
    Computer Science
    Advisor:
    Ioannis Stamos
    Abstract:

    Laser scanners are now widely used to capture the geometry of art, animation maquettes, and large architectural, industrial, and landform models, and each model scale poses its own problems. This thesis provides a solution for the simplification of triangulated data and for surface reconstruction of large data sets in which feature edges provide an obvious segmentation structure. It also explores a new method for model segmentation, with the goal of applying multiresolution techniques to data sets characterized by curved areas and a lack of clear demarcation features. The preliminary stage of surface segmentation, which takes as input single or multiple scan data files, generates surface patches that are processed independently. The surface components are mapped onto a two-dimensional domain with boundary constraints, using a novel parametrization weight coefficient. This stage generates valid parameter-domain points, which can be fed as arguments to parametric modeling functions or surface approximation schemes. On this domain, our approach explores two types of remeshing. First, we generate points in a regular grid pattern, achieving multiresolution through a flexible grid step that is nevertheless designed to produce a globally uniform resampling aspect. In this case, for reconstruction, we attempt to solve the open problem of border reconciliation across adjacent domains by retriangulating the border gap between the grid and the fixed irregular border. Alternatively, we straighten the domain borders in the parameter domain and coarsely triangulate the resulting simplified polygons, resampling the base-domain triangles in a 1-4 subdivision pattern and achieving multiresolution through the number of subdivision steps. For mesh reconstruction, we use a linear interpolation method based on the original mesh triangles as control points on local planes, using a saved triangle correspondence between the original mesh and the parametric domain. We also use a region-wide approximation method, applied to the parameter grid points, which first generates data-trained control points and then uses them to obtain the reconstruction values at the resamples. In the grid resampling scheme, due to the border constraints, the reassembly of the segmented, sequentially processed data sets is seamless. In the subdivision scheme, we align adjacent border fragments in the parameter space and use a region-to-fragment map to achieve the same border reconstruction across two neighboring components. We successfully process data sets of up to 1,000,000 points in a single pass of our program, without intermediate storage, and we can assemble larger scenes from sequential runs. For large input data files, we fragment the input using a nested application of our segmentation algorithm to reduce the size of the input scenes, and our pipeline reassembles the reconstruction output from multiple data files into a single view.
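
    The mapping stage can be illustrated with the classical Tutte-style parametrization: pin the boundary of a disk-topology patch to a convex polygon, then place each interior vertex at a weighted average of its neighbors by solving a linear system. The sketch below uses uniform weights; the thesis's novel parametrization weight coefficient is not reproduced.

        import numpy as np

        def tutte_parametrize(n_vertices, edges, boundary):
            """Map a disk-topology mesh to the plane: pin boundary vertices
            to the unit circle, then place each interior vertex at the
            average of its neighbors (Tutte embedding, uniform weights)."""
            nbrs = [[] for _ in range(n_vertices)]
            for (a, b) in edges:
                nbrs[a].append(b)
                nbrs[b].append(a)
            uv = np.zeros((n_vertices, 2))
            for i, v in enumerate(boundary):  # pin boundary to a circle
                ang = 2 * np.pi * i / len(boundary)
                uv[v] = (np.cos(ang), np.sin(ang))
            interior = [v for v in range(n_vertices) if v not in set(boundary)]
            idx = {v: i for i, v in enumerate(interior)}
            A = np.zeros((len(interior), len(interior)))
            rhs = np.zeros((len(interior), 2))
            for v in interior:  # deg(v)*u_v - sum(interior nbrs) = sum(pinned nbrs)
                A[idx[v], idx[v]] = len(nbrs[v])
                for w in nbrs[v]:
                    if w in idx:
                        A[idx[v], idx[w]] -= 1.0
                    else:
                        rhs[idx[v]] += uv[w]
            if interior:
                uv[interior] = np.linalg.solve(A, rhs)
            return uv

        # Tiny example: 4 boundary vertices around 1 interior vertex.
        edges = [(0, 1), (1, 2), (2, 3), (3, 0), (4, 0), (4, 1), (4, 2), (4, 3)]
        print(tutte_parametrize(5, edges, boundary=[0, 1, 2, 3])[4])  # ~ (0, 0)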