A Neurosymbolic AI Approach to Scene Understanding.

Friday, January 19, 2024 - 02:20 pm
SWGN 2A27

Abstract: Scene understanding is a major challenge for autonomous systems. It requires combining diverse information sources, background knowledge, and different sensor data to understand the physical and semantic aspects of dynamic environments. Current technology for scene understanding relies heavily on computer vision and deep learning techniques to perform tasks such as object detection and localization. However, due to the complex and dynamic nature of driving scenes, the technology's complete reliance on raw data poses challenges, especially with respect to edge cases. In this talk, I will discuss some of these challenges along with how they are currently being handled. Next, I will discuss a novel perspective that we have introduced as part of my dissertation, which leverages the use of external knowledge, representation learning, and neurosymbolic AI to address some of these challenges. Finally, I will share my thoughts on directions for future research and new applications and domains where we can apply this technology to improve machine perception in autonomous systems. 

Bio: Ruwan Wickramarachchi is a Ph.D. candidate at the AI Institute, University of South Carolina. His dissertation research focuses on introducing expressive knowledge representation and neurosymbolic AI techniques to improve machine perception and context understanding in autonomous systems. He has published several research papers, co-invented patents, and co-organized multiple tutorials on neurosymbolic AI and its emerging uses in addressing scene understanding challenges in autonomous systems. Prior to joining the doctoral program, he worked as a senior engineer in the machine learning research group at the London Stock Exchange Group (LSEG).

Location: SWGN 2A27

 

We would love in-person attendance (required for registered students),
but remote attendance is possible on Zoom:

https://us06web.zoom.us/j/8440139296?pwd=b09lRCtJR0FCTWcyeGtCVVlUMDNKQT…

Meeting ID: 844 013 9296
Passcode: 12345

On Parallelization of Graph Algorithms, Performance Modeling, and Autonomous 3D Printable Object Synthesis

Monday, December 11, 2023 - 11:00 am
Meeting room 2267 (Innovation Building)

DISSERTATION DEFENSE

Author : Shams-Ul-Haq Syed

Advisor : Dr. Manton Matthews

Date : December 11, 2023

Time:  11 am – 1 pm

Place : Meeting room 2267 (Innovation Building)

 

Abstract

 

The degree of hardware-level parallelism offered by today's GPU architecture makes it ideal for problem domains with massive inherent parallelism potential, in fields such as computer vision, image processing, graph theory, and graph computations. We have identified three problem areas for this dissertation, under the umbrella of performance improvement by harnessing the power of GPUs for novel applications. The first area concerns k-vertex connectivity in graph theory, the second deals with performance evaluation using extended roofline models for GPU parallel applications, and the third is related to the synthesis of 3D printable objects from 2D images.

In this thesis we examine k-vertex connectivity in undirected graphs and its applications, and measure the performance of GPU computations using the CUDA Toolkit. Matthews and Sumner in 1984 presented the conjecture that every 4-connected claw-free graph is Hamiltonian. In the initial paper [1] it was shown that every 3-connected claw-free graph on fewer than 20 vertices is Hamiltonian. Over the years there have been several papers establishing the result for connectivity higher than 4. So, all that remains is the case of 4-connected claw-free graphs conjectured by C. Thomassen [2]. We present a new CUDA-based parallel k-vertex connectivity test algorithm to determine the connectivity of any given claw-free graph. The parallel algorithm is several orders of magnitude faster than its serial counterpart. It is a major step toward efficiently determining whether the conjecture holds for graphs with connectivity exactly equal to 4. Our parallel algorithm can also be applied to find the value of k (connectedness) for a given graph. It is validated using a number of different types of graphs, such as complete graphs, complete bipartite graphs, and chorded cycle graphs G_{n,k} of sizes 20 ≤ n ≤ 300.
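The serial baseline for such a connectivity test can be sketched via Menger's theorem: for non-adjacent vertices s and t, the number of internally vertex-disjoint s-t paths equals a unit-capacity max-flow on a node-split graph, and the global connectivity is the minimum over non-adjacent pairs. A minimal Python sketch follows (illustrative only; the thesis' CUDA implementation parallelizes this computation on the GPU):

```python
from collections import deque

def vertex_connectivity(adj):
    """Global vertex connectivity kappa(G) of an undirected graph.

    adj maps each vertex to the set of its neighbours. Each vertex v
    is split into v_in -> v_out with unit capacity, so a max-flow
    between (s, 'out') and (t, 'in') counts internally
    vertex-disjoint s-t paths (Menger's theorem).
    """
    verts = list(adj)
    n = len(verts)
    if n < 2:
        return 0

    def disjoint_paths(s, t):
        cap = {}

        def arc(u, v):
            # Forward capacity 1; ensure a zero-capacity residual arc exists.
            cap.setdefault(u, {})
            cap.setdefault(v, {})
            cap[u][v] = cap[u].get(v, 0) + 1
            cap[v].setdefault(u, 0)

        for v in verts:
            arc((v, "in"), (v, "out"))
        for u in verts:
            for v in adj[u]:
                arc((u, "out"), (v, "in"))
        src, sink = (s, "out"), (t, "in")
        flow = 0
        while True:  # Edmonds-Karp: BFS augmenting paths of capacity 1
            parent, q = {src: None}, deque([src])
            while q and sink not in parent:
                u = q.popleft()
                for v, c in cap[u].items():
                    if c > 0 and v not in parent:
                        parent[v] = u
                        q.append(v)
            if sink not in parent:
                return flow
            v = sink
            while parent[v] is not None:  # augment along the path found
                u = parent[v]
                cap[u][v] -= 1
                cap[v][u] += 1
                v = u
            flow += 1

    # kappa(G) is the minimum over all non-adjacent pairs; if the graph
    # is complete there is no such pair, and kappa(K_n) = n - 1.
    best = n - 1
    for s in verts:
        for t in verts:
            if t != s and t not in adj[s]:
                best = min(best, disjoint_paths(s, t))
    return best
```

For example, a 5-cycle has connectivity 2 and a complete graph K4 has connectivity 3. The serial cost of the nested pair loop is exactly what motivates the GPU parallelization described above.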

For GPU architectures we propose the unified cache-aware roofline model, which provides better insights by capturing details such as memory transfers between host and device, unlike traditional roofline models that focus strictly on the memory bandwidth and computational performance of either the CPU or the GPU alone. Our model provides a more holistic picture of application performance in a single view by capturing computations on CPUs and GPUs along with data transfers from host to device, including the theoretical bandwidths of host and device memories.
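The core roofline arithmetic can be sketched as follows: attainable throughput is bounded by the slower of compute time and memory time, where the unified model also charges the memory term for host-device (e.g., PCIe) transfers. This is a generic sketch under stated assumptions, not the thesis' exact model; the parameter names are illustrative:

```python
def roofline_bound(flops, bytes_device, bytes_pcie,
                   peak_gflops, dev_bw_gbs, pcie_bw_gbs):
    """Attainable GFLOP/s under a unified roofline.

    Classic roofline: performance <= min(peak, AI * bandwidth).
    Here the memory term includes host<->device transfer time, so a
    kernel with slow PCIe traffic sees a lower attainable ceiling.
    """
    compute_time = flops / (peak_gflops * 1e9)
    memory_time = (bytes_device / (dev_bw_gbs * 1e9)
                   + bytes_pcie / (pcie_bw_gbs * 1e9))
    # The kernel cannot finish faster than the slower resource.
    runtime = max(compute_time, memory_time)
    return flops / runtime / 1e9  # attainable GFLOP/s
```

For instance, a kernel doing 1 GFLOP over 1 GB of device traffic on a 100 GFLOP/s, 100 GB/s device would be compute/memory balanced, but adding 100 MB of PCIe traffic at 10 GB/s halves the attainable rate to 50 GFLOP/s.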

Finally, a novel approach to synthesizing 3D printable objects from a single input image is presented. The algorithm employs a probabilistic machine learning framework along with multiple shape and depth cues, such as manifolds, contours, gradients, and prior knowledge about the shapes, to generate a plausible 3D model from a single 2D image. Our algorithm intelligently combines isolated shapes into a single object while retaining their relative positions. It also considers the minimal 3D printable area and strength while generating a watertight mesh object. Consequently, the resulting 3D model is 3D printer compatible, and the actual 3D printed object is sturdy and less prone to breakage. In addition, our scalable algorithm runs in time quadratic in the size of the image. Preliminary results have demonstrated several different 2D images turned into actual 3D printed objects that are sturdy and aesthetically pleasing.

Object Classification, Detection and Tracking in Challenging Underwater Environment

Friday, December 8, 2023 - 04:00 pm
online

DISSERTATION DEFENSE

 

Author : Md Modasshir

Advisor : Dr. Ioannis Rekleitis

Date : December 8, 2023

Time:  4 pm – 5 pm

Place : Virtual

Meeting Link : https://teams.microsoft.com/l/meetup-join/19%3ameeting_ZWVjMjFkNzEtNDQx…

Abstract

The main contributions of this thesis are the applicability and architectural designs of deep learning algorithms in underwater imagery. In recent times, deep learning techniques for object classification and detection have achieved exceptional levels of accuracy that surpass human capabilities. However, the effectiveness of these techniques in underwater environments has not been thoroughly researched. This thesis delves into various research areas related to underwater environments, such as object classification, detection, semantic segmentation, pose regression, and semi-supervised retraining of detection models.

The first part of the thesis studies image classification and detection. Image classification is a fundamental process that involves assigning a label to an image from a predetermined set of categories. Detection, on the other hand, refers to the process of locating an object within an image, along with its label. We have developed a coral classification model, MDNet, for object classification that is trained using point-annotated visual data and is capable of classifying eight species of corals. MDNet incorporates state-of-the-art convolutional neural network architectural designs that allow for acceleration on embedded devices. To further enhance its capabilities, we utilize the detection capability of MDNet along with Kernelized Correlation Filters (KCF)-based trackers to identify unique coral objects. For a given trajectory on the seafloor, we can track unique coral objects and estimate coral population density. This population estimate is a valuable tool for marine biologists to analyze the effects of climate change and water pollution on coral reefs over time. To deploy the system on embedded devices such as Aqua2, we have conducted a comprehensive study of available neural network accelerators based on field-programmable gate arrays (FPGAs) and optimized MDNet to achieve real-time performance. For object detection, we combine the output of the classifier model with a crowd-annotated dataset to develop a robust model for detecting relevant species of corals. We also test the generalization capability of models designed for underwater images in the medical domain. Similar models were trained to classify and quantify nuclei from human blood neutrophils, achieving over 94% accuracy in differentiating different cell types.
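The density-estimation step above reduces to counting unique tracks over a surveyed area. A toy sketch of that bookkeeping (the data layout is an assumption for illustration, not MDNet's actual interface):

```python
def coral_density(tracked_detections, surveyed_area_m2):
    """Estimate coral population density from detector+tracker output.

    tracked_detections: iterable of (frame, track_id, species) tuples,
    where track_id stays stable across frames for the same physical
    coral (the job of the KCF-based tracker).
    Returns (overall density, per-species density) in corals per m^2.
    """
    unique = {}
    for _frame, track_id, species in tracked_detections:
        unique[track_id] = species  # one track = one physical coral
    per_species = {}
    for species in unique.values():
        per_species[species] = per_species.get(species, 0) + 1
    total = len(unique) / surveyed_area_m2
    return total, {s: n / surveyed_area_m2 for s, n in per_species.items()}
```

The key design point is that detections alone overcount (the same coral appears in many frames); deduplicating by track identity is what turns per-frame detections into a population estimate.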

The next part of the thesis explores how to integrate deep learning based object detection with a SLAM system to create a semantic 3D map. A semantic 3D map is required for seafloor exploration and coral reef monitoring systems. In our research, we integrate a coral reef detection algorithm with Direct Sparse Odometry (DSO), a Simultaneous Localization and Mapping (SLAM) method. By combining the output of the detection system with DSO feature mapping, we have developed a semantic 3D map of the environment that allows for efficient navigation and better coverage of the coral reef.
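One common way to fuse a detector with a sparse SLAM map is to project map points into each keyframe and let points inside a detection box inherit its label. This is a generic sketch of that fusion idea, not the thesis' exact pipeline:

```python
def label_map_points(points_uvc, detections):
    """Attach semantic labels to sparse SLAM map points.

    points_uvc: list of (point_id, u, v), the image-plane projections
    of map points into the current keyframe.
    detections: list of (label, u_min, v_min, u_max, v_max) boxes.
    A point inherits the label of the first box containing it;
    unmatched points stay unlabeled.
    """
    labels = {}
    for pid, u, v in points_uvc:
        for label, u0, v0, u1, v1 in detections:
            if u0 <= u <= u1 and v0 <= v <= v1:
                labels[pid] = label
                break
    return labels  # point_id -> semantic label
```

In a full system these per-keyframe votes would typically be accumulated across frames before committing a label to the 3D map.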

In the subsequent part of the thesis, we extend object detection neural networks to predict the 6D pose of underwater vehicles. Pose regression, the process of predicting 6D poses, in deep learning involves using monocular images to predict the 3D location and orientation of an object. To facilitate cooperative localization, we have created a vision-based localization system called DeepURL for Aqua2 robots operating underwater. The DeepURL system first detects objects in the images and then predicts their 3D positions and orientations.

Finally, in the fourth part of the thesis, we have developed a semi-supervised approach for training the detection algorithm using a dataset with labels for a subset of samples. This allows the algorithm to use unlabeled visual data from future experiments and scuba diving. We have found that this semi-supervised approach has improved the performance and robustness of the detection algorithm.
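The semi-supervised retraining loop described above is commonly realized as confidence-thresholded self-training: predict on unlabeled data, promote high-confidence predictions to pseudo-labels, and retrain. A generic sketch of that loop (not the thesis' exact pipeline; `train` and `predict` are caller-supplied placeholders):

```python
def pseudo_label_rounds(train, predict, labeled, unlabeled,
                        threshold=0.9, rounds=2):
    """Self-training with confidence-thresholded pseudo-labels.

    train(dataset) -> model, where dataset is a list of (x, y) pairs.
    predict(model, x) -> (label, confidence).
    Unlabeled samples whose predicted confidence clears the threshold
    are folded into the training set for the next round.
    """
    data = list(labeled)
    pool = list(unlabeled)
    model = train(data)
    for _ in range(rounds):
        keep, rest = [], []
        for x in pool:
            label, conf = predict(model, x)
            (keep if conf >= threshold else rest).append((x, label))
        data.extend(keep)               # promote confident predictions
        pool = [x for x, _ in rest]     # low-confidence samples wait
        model = train(data)
    return model, data
```

The threshold trades label coverage against pseudo-label noise; too low a threshold lets detector mistakes reinforce themselves in later rounds.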

The thesis aims to develop deep learning based object understanding in underwater environments while maintaining the generalization capability of the models. We demonstrate how object classification and detection can be redesigned and repurposed for underwater environments. We also provide the intuitions behind the model design and evaluate against state-of-the-art models.

From Neural Certificates to Certificate-carrying RL in Large-Scale Autonomy Design

Friday, December 1, 2023 - 02:20 pm
Innovation Center Building 1400

Abstract
Learning-enabled control systems have demonstrated impressive empirical performance on challenging control problems in robotics. However, this performance often arrives with the trade-off of diminished transparency and the absence of guarantees regarding the safety and stability of the learned controllers. In recent years, new techniques have emerged to provide these guarantees by learning certificates alongside control policies — these certificates provide concise, data-driven proofs that guarantee the safety and stability of the learned control system. These methods not only allow the user to verify the safety of a learned controller but also provide supervision during training, allowing safety and stability requirements to influence the training process itself. In this talk, we present two exciting updates on neural certificates. In the first work, we explore the use of graph neural networks to learn collision-avoidance certificates that can generalize to unseen and very crowded environments. The second work presents a novel reinforcement learning approach that can produce certificate functions with the policies while addressing the instability issues in the optimization process.
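As a sketch of what such certificates assert (generic textbook conditions, not the speaker's exact formulation): for closed-loop dynamics x' = f(x, pi(x)), a barrier certificate B guarantees safety and a Lyapunov certificate V guarantees stability, with alpha a class-K function:

```latex
% Safety: the zero-superlevel set of B is forward invariant.
B(x) \ge 0 \quad \forall x \in \mathcal{X}_{\text{safe}}, \qquad
\dot{B}(x) = \nabla B(x)^{\top} f(x, \pi(x)) \ge -\alpha\big(B(x)\big)

% Stability: V decreases along closed-loop trajectories.
V(x^{*}) = 0, \quad V(x) > 0 \;\; \forall x \ne x^{*}, \qquad
\dot{V}(x) = \nabla V(x)^{\top} f(x, \pi(x)) < 0
```

In the neural setting, B and V are parameterized as networks trained jointly with the policy so these inequalities hold on sampled states, and then verified post hoc.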

Bio
Dr. Chuchu Fan is an Assistant Professor in AeroAstro and LIDS at MIT. Before that, she was a postdoc researcher at Caltech and got her Ph.D. from ECE at the University of Illinois at Urbana-Champaign. Her research group, Realm at MIT, works on using rigorous mathematics, including formal methods, machine learning, and control theory, for the design, analysis, and verification of safe autonomous systems. Chuchu is the recipient of the 2020 ACM Doctoral Dissertation Award, an NSF CAREER Award, and an AFOSR Young Investigator Program (YIP) Award.

Location:
In-person
Innovation Center Building 1400


Neuromorphic Computing: Bridging the gap between Nanoelectronics, Neuroscience and Machine Learning

Friday, November 10, 2023 - 02:20 pm
Innovation Center Building 1400

Abstract: 
While research in designing brain-inspired algorithms has reached a stage where such Artificial Intelligence platforms are able to outperform humans at several cognitive tasks, an often-unnoticed cost is the huge computational expense required to run these algorithms in hardware. Recent explorations have also revealed several algorithmic vulnerabilities of deep learning systems, such as adversarial susceptibility, lack of explainability, and catastrophic forgetting, to name a few. Bridging the computational and algorithmic efficiency gap necessitates the exploration of hardware and algorithms that provide a better match to the computational primitives of biological processing, namely neurons and synapses, and that require a significant rethinking of traditional von Neumann based computing. This talk reviews recent developments in the domain of neuromorphic computing paradigms from an overarching system science perspective with an end-to-end co-design focus from computational neuroscience and machine learning to hardware and applications. Such neuromorphic systems can potentially provide significantly lower computational overhead in contrast to standard deep learning platforms, especially in sparse, event-driven application domains with temporal information processing.
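The event-driven primitive underlying such systems is the spiking neuron. A textbook leaky integrate-and-fire (LIF) sketch illustrates why computation is sparse: the neuron emits discrete spikes only when integrated input crosses a threshold (parameters are illustrative, not tied to any specific hardware discussed in the talk):

```python
def simulate_lif(current, dt=1.0, tau=20.0, v_rest=0.0,
                 v_thresh=1.0, v_reset=0.0):
    """Leaky integrate-and-fire neuron over a sequence of input drives.

    current: input drive per timestep.
    Returns the list of timestep indices at which the neuron spiked.
    """
    v = v_rest
    spikes = []
    for t, i_in in enumerate(current):
        # Leak toward rest and integrate input (forward-Euler step).
        v += dt / tau * (-(v - v_rest) + i_in)
        if v >= v_thresh:
            spikes.append(t)   # emit an event...
            v = v_reset        # ...and reset the membrane potential
    return spikes
```

Because downstream work happens only on spike events, quiet inputs cost almost nothing, which is the source of the efficiency advantage in sparse, temporal workloads.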

 

Dr. Abhronil Sengupta is an Assistant Professor in the School of Electrical Engineering and Computer Science at Penn State University and holds the Joseph R. and Janice M. Monkowski Career Development Professorship. He is also affiliated with the Department of Materials Science and Engineering and the Materials Research Institute (MRI).

Dr. Sengupta received the PhD degree in Electrical and Computer Engineering from Purdue University in 2018 and the B.E. degree from Jadavpur University, India in 2013. He worked as a DAAD (German Academic Exchange Service) Fellow at the University of Hamburg, Germany in 2012, and as a graduate research intern at Circuit Research Labs, Intel Labs in 2016 and Facebook Reality Labs in 2017.

The ultimate goal of Dr. Sengupta’s research is to bridge the gap between Nanoelectronics, Neuroscience and Machine Learning. He is pursuing an inter-disciplinary research agenda at the intersection of hardware and software across the stack of sensors, devices, circuits, systems and algorithms for enabling low-power event-driven cognitive intelligence. Dr. Sengupta has published over 85 articles in refereed journals and conferences and holds 3 granted/pending US patents. He serves on the IEEE Circuits and Systems Society Technical Committee on Neural Systems and Applications, the Editorial Boards of IEEE Transactions on Cognitive and Developmental Systems, Scientific Reports, Neuromorphic Computing and Engineering, and Frontiers in Neuroscience, and the Technical Program Committees of several international conferences, including DAC, ICCAD, ISLPED, ISQED, AICAS, ICONS, GLSVLSI, ICEE, SOCC, ISVLSI, MWSCAS and VLSID. He has been awarded the IEEE Electron Devices Society (EDS) Early Career Award (2023), IEEE Circuits and Systems Society (CASS) Outstanding Young Author Award (2019), IEEE SiPS Best Paper Award (2018), Schmidt Science Fellows Award nomination (2017), Bilsland Dissertation Fellowship (2017), CSPIN Student Presenter Award (2015), Birck Fellowship (2013), and the DAAD WISE Fellowship (2012). His work on neuromorphic computing has been highlighted in the media by MIT Technology Review, ZDNet, the US Department of Defense, the American Institute of Physics, IEEE Spectrum, and Nature Materials, among others. Dr. Sengupta is a member of the IEEE Electron Devices Society (EDS), Magnetics Society and Circuits and Systems (CAS) Society, the Association for Computing Machinery (ACM) and the American Physical Society (APS).
 

Performance Debugging, Optimization, and Modeling of Configurable Computer Systems

Friday, November 10, 2023 - 09:00 am
Innovation Building, Room 2267 & Virtual

DISSERTATION DEFENSE

Author : Md Shahriar Iqbal

Advisor : Dr. Pooyan Jamshidi

Date : November 10, 2023

Time:  9 am - 10:30 am

Place : Innovation Building, Room 2267 & Virtual

Topic: Shahriar Iqbal's PhD Thesis Defense

Time: Nov 10, 2023 09:00 AM Eastern Time (US and Canada)

Meeting Link : https://us02web.zoom.us/j/84549470782?pwd=TmpHaG44NVVMT0FLb3N1SmFZWWZBQ…

Meeting ID: 845 4947 0782
Passcode: 5L6tHf



Abstract

Modern computer systems are highly configurable, with hundreds of configuration options that interact, resulting in an enormous configuration space. Understanding and reasoning about the performance behavior of highly configurable systems is challenging. This is further worsened by several system constraints, such as long configuration evaluation times, noisy measurements, limited experimentation budgets, and accessibility, which restrict the capacity to troubleshoot or optimize highly configurable systems. As a result, the significant performance potential already built into many of our modern computer systems remains untapped. Unfortunately, manual configuration is labor-intensive, time-consuming, and often infeasible, even for domain experts. Recently, several search-based and learning-based automatic configuration approaches have been proposed to overcome these issues; nevertheless, critical challenges remain, as they (i) are unaware of the variations in evaluation times between certain performance goals, (ii) may produce incorrect explanations, and (iii) become unreliable in unseen environments (e.g., different hardware, workloads). The primary goal of this thesis is to overcome the aforementioned limitations by adopting a data-driven strategy to design performance debugging and optimization tools that are efficient, scalable, and can be reliably used by developers across different deployment scenarios. We developed a novel cost-aware acquisition function for multi-objective optimization, called FlexiBO, that addresses sub-optimality on resource-constrained devices. Instead of evaluating all objective functions, our optimization approach chooses the one for evaluation that has the potential to provide the maximum benefit weighted by the objective evaluation cost.
Later, we also developed a performance debugging technique, known as Unicorn, which captures intricate interactions between configuration options across the software-hardware stack and describes how such interactions can impact performance variations via causal inference. Finally, we proposed CAMEO - a method that identifies invariant causal predictors under deployment environmental changes, allowing the optimization process to operate in a reduced search space, leading to faster optimization of system performance. We showed the promise of our debugging and optimization techniques through extensive and thorough evaluation on a wide range of software systems over a large design space.
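The cost-aware selection at the heart of the FlexiBO idea can be sketched as picking, at each iteration, the objective whose expected benefit per unit of evaluation cost is greatest. This is an illustrative toy under stated assumptions; the actual acquisition function in the thesis is more involved:

```python
def pick_next_objective(candidates):
    """Choose which objective to evaluate next, cost-aware style.

    candidates: list of (objective_name, expected_gain, eval_cost),
    where expected_gain is the estimated improvement from evaluating
    that objective and eval_cost its measurement expense.
    Returns the name of the objective with the best gain/cost ratio.
    """
    best = max(candidates, key=lambda c: c[1] / c[2])
    return best[0]
```

For example, a cheap energy measurement with modest expected gain can beat an expensive latency measurement with a larger raw gain, which is exactly the trade-off that makes cost-awareness matter on resource-constrained devices.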

 

Data Annotation for Pervasive Sensing Systems: Challenges and Opportunities

Friday, November 3, 2023 - 02:20 pm
Innovation Center Building 1400

Abstract: 
Smart infrastructures are usually driven by intelligent context-aware services that often need strong ML (or DL) models working in the backend. However, building such sophisticated models often requires a significant amount of labeled training data. The most accepted and conventional source of such data annotation is a human-in-the-loop. Nevertheless, this task is becoming expensive and often highly tedious with multimodal data sources in place. Considering the broad use case of sensor-based human activity recognition, in this talk we first discuss the possibility of choosing auxiliary modalities that can provide enough information to get the data annotated without a human-in-the-loop. Subsequently, we also look into zero-shot approaches to fine-tune the coarse-grained annotations received from external annotators to capture more information regarding the confounded actions hidden within any complex activity of daily living.

Bio
Sandip Chakraborty is an Associate Professor in the Department of Computer Science and Engineering at the Indian Institute of Technology (IIT) Kharagpur. He obtained his Ph.D. from IIT Guwahati, India in 2014, and was a visiting DAAD Fellow at MPI Saarbrücken, Germany, in 2016. His current research explores the design of ubiquitous systems for computer-human interaction, particularly multi-modal sensing, pervasive systems development, and distributed computing. His work has been published in conferences such as ACM CHI, ACM BuildSys, ACM RecSys, ACM MobileHCI, TheWebConf (WWW), IEEE PerCom, IEEE INFOCOM, and ACM SIGSPATIAL. He is one of the founding members of ACM IMOBILE, the ACM SIGMOBILE chapter in India. He serves as an Area Editor of the Elsevier Ad Hoc Networks journal and the Elsevier Pervasive and Mobile Computing journal. He has received various awards and accolades, including the INAE Young Engineers' Award and a Fellowship of the National Internet Exchange of India (NIXI). He is actively involved in organizing various conferences, including IEEE PerCom, IEEE SmartComp, COMSNETS, and ICDCN.

Location: Innovation Center Building 1400


Sensing the Future: Unveiling the Benefits and Risks of Sensing in Cyber-Physical Security

Friday, October 27, 2023 - 02:20 pm
Innovation Center Building 1400

Abstract:
With the emergence of the Internet-of-Things (IoT) and Cyber-Physical Systems (CPS), we are witnessing a wealth of exciting applications that enable computational devices to interact with the physical world via an overwhelming number of sensors and actuators. However, such interactions pose new challenges to traditional approaches to security and privacy. In this talk, I will present how I utilize sensor data to provide security and privacy protections for IoT/CPS scenarios, and further introduce novel security threats arising from similar sensor data. Specifically, I will highlight some of our recent projects that leverage sensor data for attack and defense in various IoT settings. I will also introduce my future research directions such as identifying and defending against unforeseen security challenges from newer domains including smart homes, buildings, and vehicles.

Bio:
Jun Han is an Assistant Professor at Yonsei University with an appointment in the School of Electrical and Electronic Engineering. He founded and directs the Cyber-Physical Systems and Security (CyPhy) Lab at Yonsei. Prior to joining Yonsei, he was at the National University of Singapore with an appointment in the Department of Computer Science, School of Computing. His research interests lie at the intersection of sensing and mobile computing systems and security and focus on utilizing contextual information for security applications in the Internet-of-Things and Cyber-Physical Systems. He publishes at top-tier venues across various research communities spanning mobile computing, sensing systems, and security (including MobiSys, MobiCom, SenSys, Ubicomp, IPSN, S&P/Oakland, CCS, and Usenix Security). He received multiple awards including Google Research Scholar Award. He received his Ph.D. from the Electrical and Computer Engineering Department at Carnegie Mellon University as a member of the Mobile, Embedded, and Wireless (MEWS) Group. He received his M.S. and B.S. degrees in Electrical and Computer Engineering also at Carnegie Mellon University. Jun also worked as a software engineer at Samsung Electronics.

Location:
In-person
Innovation Center Building 1400
