Algorithms for Autonomous Near Earth Sensing with Robot Teams

Friday, October 18, 2019 - 02:20 pm
Storey Innovation Center, Room 1400
Speaker: Pratap Tokekar
Affiliation: University of Maryland
Location: Innovation Center, Room 1400
Time: Friday 10/18/2019 (2:20 - 3:10 pm)

Abstract: A connected network of robots and smart sensors can solve grand challenges in domains such as agronomy, oceanography, and infrastructure monitoring. Robots can collect data from hard-to-reach places at unprecedented spatio-temporal scales. In this talk, I will present our recent work on devising efficient algorithms for data collection with heterogeneous robot teams. We use tools from stochastic optimization, Bayesian statistics, combinatorial optimization, and information theory to design these algorithms. I will present our experimental results on bridge inspection with aerial robots, precision agriculture with aerial and ground robots, and monitoring of marine environments with aerial robots and robotic boats. I will conclude by discussing our recent efforts to bridge the gap between algorithmic and field robotics.

Bio: Pratap Tokekar is an Assistant Professor in the Department of Computer Science at the University of Maryland. Between 2015 and 2019, he was an Assistant Professor in the Department of Electrical and Computer Engineering at Virginia Tech. Previously, he was a Postdoctoral Researcher at the GRASP Lab of the University of Pennsylvania. He obtained his Ph.D. in Computer Science from the University of Minnesota in 2014 and his Bachelor of Technology degree in Electronics and Telecommunication from the College of Engineering Pune, India, in 2008. He is a recipient of the NSF CISE Research Initiation Initiative award and an Associate Editor for IEEE Robotics & Automation Letters and IEEE Transactions on Automation Science & Engineering.

AI-Enabled Learning. Anyone. Anywhere. Anytime.

Friday, October 18, 2019 - 11:00 am
Sumwalt 305
ABSTRACT: The grand challenge for the Georgia Tech emPrize team in the XPrize AI competition is to make human learning more accessible, effective, and achievable. We are developing a coordinated set of AI techniques to assist human learning, including (1) virtual cognitive tutors for learning domain concepts and skills; (2) virtual teaching assistants that answer questions and respond to student introductions on online discussion forums; (3) a question-asking virtual teaching assistant for formative assessment that helps learners refine their business models for spawning startups; (4) a virtual research assistant for literature review that helps engineering students locate and understand biology articles relevant to design problems; and (5) a virtual research assistant that helps biology students conduct computational experiments in ecology. I will present the current status of these technologies, including their assessment and evaluation, focusing on the virtual teaching and research assistants. I will describe how our notion of AI has changed during the course of this program from a technology into a socio-technical system, and, in particular, from a tool to a medium to a social actor.

BIO: Ashok Goel is Professor of Computer Science and Human-Centered Computing in the School of Interactive Computing at the Georgia Institute of Technology and Chief Scientist for Georgia Tech's Center for 21st Century Universities. He conducts research in artificial intelligence and cognitive science with a focus on computational design and creativity and, more recently, AI-powered learning and education. He is the Editor of AAAI's AI Magazine and a Co-Chair of the 41st Annual Meeting of the Cognitive Science Society. He is a co-editor of the volume Blended Learning in Practice: A Guide for Practitioners and Researchers, published by MIT Press in April 2019. dilab.gatech.edu/ashok-k-goel

Technology and Ocean Exploration

Wednesday, October 9, 2019 - 11:50 am
Bert Storey Innovation Center, Room 1400
Wednesday, October 9th, 11:50 am
Bert Storey Innovation Center, Room 1400
550 Assembly St., Columbia, SC

Abstract: The seminar will include a segment on how to properly set up a program/project for the ocean environment. "The Navy selected Boeing and Lockheed Martin to pursue the Extra Large Unmanned Underwater Vehicle (XLUUV) program, which will test the Navy's ability to manage a rapid-acquisition project and the Navy-industry team's ability to develop and integrate an unmanned system that operates completely independently of manned ships. ... Lance Towers, Boeing's director of Advanced Technology Programs for Autonomous Systems, told USNI News that the extra-large UUV concept will ultimately cost the Navy less money than trying to conduct similar missions with a smaller unmanned vehicle, due to the XLUUV operating completely independently of a manned ship or submarine. Smaller 'tactical' UUVs not only require a manned ship to be nearby to serve as a host, but their operations can also be limited by bad weather or other factors that affect that host ship."

Lance Towers, P.E.
Director, Autonomous Maritime
Site Executive, Huntington Beach, Calif., and Herndon, Va.
Autonomous Systems, Defense, Space & Security

Cybersecurity Issues in the Context of Cryptographic Shuffling Algorithms and Concept Drift: Challenges and Solutions

Monday, October 7, 2019 - 03:30 pm
Meeting Room 2267, Innovation Center
DISSERTATION DEFENSE
Department of Computer Science and Engineering
University of South Carolina
Author: Hatim Alsuwat
Advisors: Dr. Csilla Farkas and Dr. Marco Valtorta
Date: Oct 7, 2019
Time: 3:30 pm
Place: Meeting Room 2267, Innovation Center

Abstract: In this dissertation, we investigate and address two kinds of data integrity threats. We first study the limitations of secure cryptographic shuffling algorithms with respect to preserving data dependencies. We then study the limitations of machine learning models with respect to concept drift detection, and we propose solutions to address both threats.

Shuffling algorithms have been used to protect the confidentiality of sensitive data. However, these algorithms may not preserve data dependencies, such as functional dependencies and data-driven associations. We present two solutions to address these shortcomings: (1) a functional-dependency-preserving shuffle and (2) a data-driven-association-preserving shuffle. For preserving functional dependencies, we propose a method based on Boyce-Codd Normal Form (BCNF) decomposition. Instead of shuffling the original relation, we recommend shuffling each BCNF decomposition; the final shuffled relation is constructed by joining the shuffled decompositions. We show that our approach is lossless and preserves functional dependencies if the BCNF decomposition is dependency preserving. For preserving data-driven associations, we generate the transitive closure of the sets of associated attributes, and the attributes of each set are bundled together during shuffling.

Concept drift is a significant challenge that greatly influences the accuracy and reliability of machine learning models. There is, therefore, a need to detect concept drift in order to ensure the validity of learned models. We study concept drift in the context of discrete Bayesian networks and propose a probabilistic graphical model framework that uses latent variables to explicitly detect the presence of concept drift. We employ latent variables to model real concept drift and uncertainty drift over time. For real concept drift, we monitor the mean of the distribution of the latent variable over time; for uncertainty drift, we monitor the change in belief of the latent variable over time, i.e., the maximum value that the probability density function of the distribution takes over time. We also propose a probabilistic graphical model framework, based on latent variables, that explains the detected posterior probability drift across time.

Our results show that neither cryptographic shuffling algorithms nor machine learning models are robust against data integrity threats. However, our proposed approaches are capable of detecting and mitigating such threats.
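
To make the functional-dependency-preserving shuffle concrete, here is a minimal illustrative sketch (not the dissertation's implementation) using pandas and a hypothetical toy schema Employee(emp_id, dept, dept_city) with the functional dependency dept -> dept_city: each BCNF component is shuffled independently and the components are then rejoined, so the dependency still holds in the result.

import pandas as pd

# Hypothetical toy relation with the functional dependency dept -> dept_city.
employees = pd.DataFrame({
    "emp_id":    [1, 2, 3, 4],
    "dept":      ["HR", "IT", "IT", "Sales"],
    "dept_city": ["Rome", "Pisa", "Pisa", "Turin"],
})

# BCNF decomposition for dept -> dept_city: R1(dept, dept_city) and R2(emp_id, dept).
r1 = employees[["dept", "dept_city"]].drop_duplicates()
r2 = employees[["emp_id", "dept"]]

# Shuffle each component independently (here by permuting one column per component).
r1_shuffled = r1.assign(dept_city=r1["dept_city"].sample(frac=1, random_state=0).values)
r2_shuffled = r2.assign(dept=r2["dept"].sample(frac=1, random_state=1).values)

# Rejoin the shuffled components; dept -> dept_city still holds by construction,
# because each dept appears exactly once in r1_shuffled.
shuffled_relation = r2_shuffled.merge(r1_shuffled, on="dept")
print(shuffled_relation)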

On the complexity of fault-tolerant consensus

Friday, September 27, 2019 - 02:20 pm
Storey Innovation Center (Room 1400)
Speaker: Dariusz Kowalski
Affiliation: Augusta University
Location: Innovation Center, Room 1400
Time: Friday 9/27/2019 (2:20 - 3:10 pm)

Abstract: The problem of reaching agreement in a distributed message-passing system prone to failures is one of the most important and vibrant problems in distributed computing. This talk concentrates on the impact of process crashes on the performance of consensus. Crashes are generated by constrained adversaries: a weakly-adaptive adversary, who must fix in advance the set of f crash-prone processes, and a chain-adaptive adversary, who orders all the processes into k disjoint chains and must follow this order when crashing them. Apart from these constraints, both adversaries may crash processes adaptively at any time, thus emulating (almost) worst-case failure scenarios against a given algorithm. While commonly used strongly-adaptive adversaries model attacks, and non-adaptive adversaries model pre-defined faults, the constrained adversaries model more realistic scenarios in which there are fault-prone dependent processes, e.g., in hierarchical or dependable software/hardware systems. In this view, our approach helps to better understand crash-tolerant consensus in more realistic executions. We propose time-efficient consensus algorithms against such adversaries. We complement our algorithmic results with (almost) tight lower bounds, and we extend the lower bound for weakly-adaptive adversaries to (syntactically) weaker non-adaptive adversaries. Together with the consensus algorithm against weakly-adaptive adversaries (which automatically translates to non-adaptive adversaries), these results extend the state of the art for the popular class of non-adaptive adversaries, in particular the result of Chor, Merritt, and Shmoys [JACM 1989], and they prove a separation gap between the constrained adversaries (including the non-adaptive ones) and the strongly-adaptive adversaries analyzed by Bar-Joseph and Ben-Or [PODC 1998] and others.

Pooyan Jamshidi https://pooyanjamshidi.github.io/
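
As a rough illustration of the adversary models (a toy sketch under assumptions, not the algorithms or bounds from the talk), the snippet below simulates a round-based flood-min computation in which a weakly-adaptive adversary must commit to the set of f crash-prone processes in advance but may choose adaptively when to crash them.

import random

def simulate(n=8, f=3, rounds=5, seed=0):
    rng = random.Random(seed)
    crash_prone = set(rng.sample(range(n), f))          # fixed in advance (weakly adaptive)
    alive = set(range(n))
    values = {p: rng.randint(0, 1) for p in range(n)}   # binary proposals

    for _ in range(rounds):
        # The adversary adaptively decides, per round, which pre-selected processes crash.
        for p in list(alive & crash_prone):
            if rng.random() < 0.3:
                alive.discard(p)
        # Surviving processes exchange values and adopt the minimum seen (flood-min style).
        smallest = min(values[p] for p in alive)
        for p in alive:
            values[p] = smallest

    return {values[p] for p in alive}   # agreement holds if exactly one value remains

print(simulate())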

Women in Computing Professional Development Meeting

Wednesday, September 11, 2019 - 06:00 pm
Room 1400, IBM Innovation Center/Horizon 2
When: 6:00 pm – 7:00 pm, Wednesday, September 11
Where: Room 1400, IBM Innovation Center/Horizon 2 (the building next to the Strom Thurmond Fitness Center that has the IBM logo on the side)
Main agenda: We will be sharing tips on finding and applying for internships, interacting with recruiters, keeping resumes and LinkedIn accounts up to date, and strategies for preparing for technical interviews. Bring your resume along if you're interested in getting it reviewed!

Women in Computing Welcome Meeting

Wednesday, September 4, 2019 - 06:00 pm
Room 2277, IBM Innovation Center/Horizon 2
Women in Computing will hold a welcome meeting on Wednesday, September 4.
When: 6:00 pm – 7:00 pm, Wednesday, September 4
Where: Room 2277, IBM Innovation Center/Horizon 2 (the building next to the Strom Thurmond Fitness Center that has the IBM logo on the side)
Main agenda: Administrative business and elections

Learning Discriminative Features for Facial Expression Recognition

Wednesday, August 28, 2019 - 09:30 am
Seminar Room 2277, Innovation Center
DISSERTATION DEFENSE
Department of Computer Science and Engineering
University of South Carolina
Author: Jie Cai
Advisor: Dr. Yan Tong
Date: Aug 28, 2019
Time: 9:30 am
Place: Seminar Room 2277, Innovation Center

Abstract: Over the past few years, deep learning methods, e.g., Convolutional Neural Networks (CNNs) and Generative Adversarial Networks (GANs), have shown promise for facial expression recognition. However, performance degrades dramatically, especially in close-to-real-world settings, due to high intra-class variations and high inter-class similarities introduced by subtle facial appearance changes, head pose variations, illumination changes, occlusions, and identity-related attributes such as age, race, and gender. In this work, we developed two novel CNN frameworks and one novel GAN approach to learn discriminative features for facial expression recognition.

First, a novel island loss is proposed to enhance the discriminative power of the learned deep features. Specifically, the island loss is designed to reduce intra-class variations while simultaneously enlarging inter-class differences. Experimental results on two posed facial expression datasets and, more importantly, two spontaneous facial expression datasets show that the proposed island loss outperforms baseline CNNs with the traditional softmax loss or the center loss and achieves better or at least comparable performance compared with state-of-the-art methods.

Second, we proposed a novel Probabilistic Attribute Tree-CNN (PAT-CNN) to explicitly deal with the large intra-class variations caused by identity-related attributes. Specifically, a novel PAT module with an associated PAT loss was proposed to learn features in a hierarchical tree structure organized according to identity-related attributes, so that the final features are less affected by those attributes. We further proposed a semi-supervised strategy to learn the PAT-CNN from limited attribute-annotated samples in order to make the best use of the available data. Experimental results on three posed facial expression datasets as well as three spontaneous facial expression datasets demonstrate that the proposed PAT-CNN achieves the best performance compared with state-of-the-art methods by explicitly modeling attributes. Impressively, the PAT-CNN using a single model achieves the best performance on the SFEW test dataset, compared with state-of-the-art methods using an ensemble of hundreds of CNNs.

Last, we present a novel Identity-Free conditional Generative Adversarial Network (IF-GAN) to explicitly reduce the high inter-subject variations caused by identity-related attributes. Specifically, for any given input facial expression image, a conditional generative model transforms it into an "average" identity expressive face with the same expression as the input image. Since the generated images share the same synthetic "average" identity, they differ from each other only in the displayed expression and thus can be used for identity-free facial expression classification. Experimental results on three well-known facial expression datasets demonstrate that the proposed IF-GAN outperforms the baseline CNN model and achieves the best or at least comparable performance compared with state-of-the-art methods.
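
To illustrate the general shape of an island-loss-style objective (a minimal NumPy sketch, not the authors' implementation or exact formulation), the snippet below combines a center-loss term that pulls features toward their class centers with a pairwise term that pushes distinct class centers apart via cosine similarity.

import numpy as np

def island_like_loss(features, labels, centers, lam=0.5):
    # Center-loss term: squared distance of each feature to its class center.
    center_term = 0.5 * np.sum((features - centers[labels]) ** 2)

    # Pairwise term: penalize similar directions between different class centers,
    # encouraging the "islands" of classes to spread apart.
    normed = centers / np.linalg.norm(centers, axis=1, keepdims=True)
    cosine = normed @ normed.T
    k = centers.shape[0]
    pair_term = sum(cosine[i, j] + 1.0 for i in range(k) for j in range(k) if i != j)

    return center_term + lam * pair_term

rng = np.random.default_rng(0)
features = rng.normal(size=(6, 4))       # six 4-dimensional feature vectors
labels = np.array([0, 0, 1, 1, 2, 2])    # their expression class labels
centers = rng.normal(size=(3, 4))        # one learnable center per class
print(island_like_loss(features, labels, centers))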

Degraded Image Segmentation, Global Context Embedding, and Data Balancing in Semantic Segmentation

Friday, August 9, 2019 - 10:30 am
Seminar Room 2277, Innovation Center
DISSERTATION DEFENSE
Department of Computer Science and Engineering
University of South Carolina
Author: Dazhou Guo
Advisor: Dr. Song Wang
Date: Aug 9, 2019
Time: 10:30 am
Place: Seminar Room 2277, Innovation Center

Abstract: Semantic segmentation, i.e., assigning a categorical label to each pixel in an image, plays an important role in image understanding applications such as autonomous driving, human-machine interaction, and medical imaging. Semantic segmentation has made great progress with deep convolutional neural networks (CNNs), which surpass traditional methods by a large margin. Despite this success, three major challenges remain.

The first challenge is how to semantically segment degraded images. In general, image degradations increase the difficulty of semantic segmentation and usually lead to decreased accuracy. While supervised deep learning has substantially improved the state of the art of semantic image segmentation, the gap between the feature distribution learned from clean images and the one learned from degraded images poses a major obstacle to improving degraded image segmentation performance. We propose a novel Dense-Gram Network that reduces this gap more effectively than conventional strategies and segments degraded images. Extensive experiments demonstrate that the proposed Dense-Gram Network yields state-of-the-art semantic segmentation performance on degraded images synthesized using the PASCAL VOC 2012, SUNRGBD, CamVid, and CityScapes datasets.

The second challenge is how to embed global context into the segmentation network. Existing semantic segmentation networks usually exploit local context information to infer the label of a single pixel or patch; without global context, CNNs can misclassify objects with similar colors and shapes. In this dissertation, we propose to embed global context into the segmentation network using objects' spatial relationships. In particular, we introduce a boundary-based metric that measures the level of spatial adjacency between each pair of object classes and find that this metric is robust against biases induced by object size. We develop a new method to enforce this metric in the segmentation loss: the proposed network starts with a segmentation network, followed by a new encoder that computes the boundary-based metric, and is trained end to end. We evaluate the proposed method on the CamVid and CityScapes datasets and achieve favorable overall performance and a substantial improvement in segmenting small objects.

The third challenge is how to address the performance decrease induced by imbalanced data. Contemporary CNN-based methods typically follow classic strategies such as class re-sampling or cost-sensitive training; however, for a multi-label segmentation problem, this becomes a non-trivial task. At the image level, one semantic class may occur in more images than another; at the pixel level, one semantic class may cover more pixels than another. Here, we propose a selective-weighting strategy that considers image- and pixel-level data balancing simultaneously when a batch of images is fed into the network. The experimental results on the CityScapes and BRATS2015 benchmark datasets show that the proposed method effectively improves performance.
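
A minimal sketch of the kind of joint image- and pixel-level balancing described above (an illustrative assumption, not the dissertation's selective-weighting method): for one batch, per-class weights are derived from both how many images contain a class and how many pixels it covers, and are then broadcast to per-pixel loss weights.

import numpy as np

def batch_pixel_weights(label_maps, num_classes):
    """Per-pixel weights reflecting image- and pixel-level class frequency in a batch."""
    label_maps = np.asarray(label_maps)                              # shape: (batch, H, W)
    img_freq = np.array([sum((m == c).any() for m in label_maps)
                         for c in range(num_classes)], dtype=float)  # images containing class c
    pix_freq = np.array([(label_maps == c).sum()
                         for c in range(num_classes)], dtype=float)  # pixels labeled class c

    present = pix_freq > 0
    class_w = np.zeros(num_classes)
    class_w[present] = 1.0 / (img_freq[present] * pix_freq[present]) # rarer classes weigh more
    class_w /= class_w.max()                                         # normalize to [0, 1]
    return class_w[label_maps]                                       # broadcast to each pixel

batch = [np.array([[0, 0], [0, 1]]), np.array([[0, 0], [2, 2]])]
print(batch_pixel_weights(batch, num_classes=3))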

Person Identification with Convolutional Neural Networks

Friday, August 9, 2019 - 09:00 am
Seminar Room 2277, Innovation Center
DISSERTATION DEFENSE
Department of Computer Science and Engineering
University of South Carolina
Author: Kang Zheng
Advisor: Dr. Song Wang
Date: Aug 9, 2019
Time: 9:00 am
Place: Seminar Room 2277, Innovation Center

Abstract: Person identification aims at matching persons across images or videos captured by different cameras without requiring the presence of persons' faces. It is an important problem in the computer vision community and has many real-world applications, such as person search, security surveillance, and no-checkout stores. However, the problem is very challenging due to factors such as illumination variation, view changes, human pose deformation, and occlusion. Traditional approaches generally rely on hand-crafted features and/or learned distance metrics to tackle these challenges. With Convolutional Neural Networks (CNNs), feature extraction and metric learning can be combined in a unified framework. In this work, we study two important sub-problems of person identification: cross-view person identification and visible-thermal person re-identification. Cross-view person identification aims to match persons across temporally synchronized videos taken by wearable cameras. Visible-thermal person re-identification aims to match persons between images taken by visible cameras under normal illumination conditions and thermal cameras under poor illumination conditions, such as at night.

For cross-view person identification, we focus on addressing the challenge of view changes between cameras. Since the videos are taken by wearable cameras, the underlying 3D motion pattern of the same person should be consistent and can thus be used for effective matching. In light of this, we propose to extract view-invariant motion features to match persons. Specifically, we propose a CNN-based triplet network that learns view-invariant features by establishing correspondences between 3D human MoCap data and the projected 2D optical flow data. After training, the triplet network is used to extract view-invariant features from the 2D optical flow of videos for matching persons. We collect three datasets for evaluation, and the experimental results demonstrate the effectiveness of this method.

For visible-thermal person re-identification, we focus on the challenge of domain discrepancy between visible and thermal images. We propose to address this issue at the class level with a CNN-based two-stream network. Specifically, our idea is to learn a center for the features of each person in each domain (visible and thermal) using a new relaxed center loss. Instead of imposing constraints between pairs of samples, we enforce the centers of the same person in the visible and thermal domains to be close and the centers of different persons to be distant. We also enforce the feature vector from the center of one person to another in the visible feature space to be similar to that in the thermal feature space. Using this network, we can learn domain-independent features for visible-thermal person re-identification. Experiments on two public datasets demonstrate the effectiveness of this method.
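
As a rough illustration of the intuition behind the relaxed center loss (a hedged NumPy sketch under assumed notation, not the authors' exact formulation), the snippet below keeps each person's visible and thermal centers close while pushing centers of different persons at least a margin apart.

import numpy as np

def relaxed_center_like_loss(vis_centers, thm_centers, margin=1.0):
    # vis_centers, thm_centers: (num_persons, feat_dim), one center per person per domain.
    # Pull term: the two domain centers of the same person should nearly coincide.
    pull = np.sum((vis_centers - thm_centers) ** 2)

    # Push term: centers of different persons (across domains) should be far apart.
    push = 0.0
    num_persons = vis_centers.shape[0]
    for i in range(num_persons):
        for j in range(num_persons):
            if i == j:
                continue
            dist_sq = np.sum((vis_centers[i] - thm_centers[j]) ** 2)
            push += max(0.0, margin - dist_sq)    # hinge: penalize only if too close

    return pull + push

rng = np.random.default_rng(0)
visible_centers = rng.normal(size=(4, 8))   # 4 persons, 8-dimensional feature centers
thermal_centers = rng.normal(size=(4, 8))
print(relaxed_center_like_loss(visible_centers, thermal_centers))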