
Big Spatial Data Management

 
 


Instructor: Assistant Professor Jianting Zhang


Background

The ever-growing volumes and increasingly complex semantics of spatial information demand more and more computing power to turn such data into knowledge that supports decision making in applications ranging from location-based services to intelligent transportation systems. The current generation of spatial database and moving-object database technologies, built on aging hardware architectures, cannot process such data with reasonable effort, giving rise to Spatial Big-Data (SBD) challenges. In particular, locating and navigation devices (e.g., GPS, cellular/WiFi network-based positioning, and their combinations) embedded in smartphones (nearly 500 million sold in 2011) have already generated large volumes of location and trajectory data, and the next generation of consumer electronics, such as Google Glass, is likely to generate even larger volumes of location-dependent multimedia data, where spatial and trajectory data management techniques will play critical roles in understanding the data.

Graphics Processing Units (GPUs) are massively data-parallel devices featuring a much larger number of processing cores (~10^3) and concurrent threads (~10^5), which makes them significantly different from CPUs, which currently support far fewer processing cores (~10^1) and concurrent threads (<10^2). In addition, current GPU memory bandwidth (~10^2 GB/s) is more than an order of magnitude higher than that of CPUs (~10^1 GB/s) and three orders of magnitude higher than that of disks (~10^2 MB/s). Unlike high-performance computing resources of the past, which were typically available only to highly selective research groups, GPUs nowadays are quite affordable to virtually all research groups and individuals. For example, the Nvidia GTX Titan GPU, with 2,688 cores supporting 15*2048 concurrent threads, 6 GB of memory, and 1.3 and 4.5 teraflops of computing power (double and single precision, respectively), is currently available on the market for around $1,000. On the other hand, Intel Xeon Phi accelerators, based on Intel's Many-Integrated-Core (MIC) architecture, represent a hybridization of classic multi-core CPUs and GPUs and are suitable for speeding up a variety of applications.


Course Description & Content

This course will first overview the impacts of commodity parallel hardware on the research and practice of large-scale data management, including both relational and non-relational data. The second part of the course will introduce the basics of OpenMP-, Nvidia CUDA-, and Intel TBB-based parallel programming techniques, with a focus on high-level parallel primitives and their realizations on multi-core CPUs and GPUs. The third part of the course will focus on parallel indexing and query processing on multidimensional spatial and trajectory data, including grid- and tree-based indexing, selectivity estimation, and various types of spatial joins and their optimization following the filtering-refinement scheme.
 

Learning Objectives

While students are encouraged to exercise and implement such parallel data management techniques using native parallel programming tools, the course will focus on identifying inherent parallelisms in processing large-scale multidimensional data and mapping them to high-level parallel primitives that can be efficiently realized on modern commodity hardware, including multi-core CPUs, GPUs, and Intel MICs, balancing portability and efficiency.
 

Course Work

Course assignments will include a few small individual projects and a relatively comprehensive group project to encourage collaboration among students with different backgrounds. Templates for the individual projects will be provided to flatten the learning curve of parallel techniques and allow students to focus on the topics of their interest.
 

Notes

  • Although no prerequisites other than standing in a graduate CSc program are required, students will benefit from prior knowledge of multidimensional indexing, theoretical parallel algorithms, and distributed/parallel numerical computing gained by taking relevant courses.

  • The course, which focuses on practical data management techniques on shared-memory architectures involving significant irregular data accesses, is designed to complement the parallel processing courses that have been (or are being) offered at the Graduate Center.

  • The course WILL NOT cover MapReduce/Hadoop- or Message Passing Interface (MPI)-based parallel data processing techniques. However, the techniques covered by the course (for a single computing node) can serve as building blocks for MapReduce/Hadoop- or MPI-based techniques in distributed computing environments.