Neuromorphic Computing: Bridging the gap between Nanoelectronics, Neuroscience and Machine Learning

Friday, November 10, 2023 - 02:20 pm
Innovation Center Building 1400

Abstract: 
While research on brain-inspired algorithms has reached a stage where Artificial Intelligence platforms can outperform humans at several cognitive tasks, an often-unnoticed cost is the huge computational expense of running these algorithms in hardware. Recent explorations have also revealed several algorithmic vulnerabilities of deep learning systems, such as adversarial susceptibility, lack of explainability, and catastrophic forgetting, to name a few. Bridging this computational and algorithmic efficiency gap necessitates the exploration of hardware and algorithms that better match the computational primitives of biological processing, neurons and synapses, and that require a significant rethinking of traditional von Neumann computing. This talk reviews recent developments in neuromorphic computing paradigms from an overarching system-science perspective, with an end-to-end co-design focus spanning computational neuroscience and machine learning through to hardware and applications. Such neuromorphic systems can potentially incur significantly lower computational overhead than standard deep learning platforms, especially in sparse, event-driven application domains with temporal information processing.
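
As a toy illustration of the event-driven primitives the abstract refers to, the following sketch implements a standard leaky integrate-and-fire neuron (illustrative only, not code from the speaker; all constants are arbitrary). Note that work is done only when an input spike arrives, so computation scales with activity rather than with layer size:

    # Minimal leaky integrate-and-fire (LIF) neuron -- an illustrative sketch of
    # the event-driven primitive behind spiking neuromorphic hardware; the
    # constants are arbitrary, not taken from the talk.
    import math

    class LIFNeuron:
        def __init__(self, tau=20.0, v_threshold=1.0, v_reset=0.0):
            self.tau = tau                  # membrane time constant (ms)
            self.v_threshold = v_threshold  # firing threshold
            self.v_reset = v_reset          # potential after a spike
            self.v = v_reset                # membrane potential
            self.last_t = 0.0               # time of last input event (ms)

        def receive(self, t, weight):
            """Process one input spike at time t; return True if the neuron fires."""
            # Between events the membrane just leaks, so the decay is applied
            # lazily -- no work is done while the input is silent.
            self.v *= math.exp(-(t - self.last_t) / self.tau)
            self.last_t = t
            self.v += weight                # integrate the synaptic input
            if self.v >= self.v_threshold:
                self.v = self.v_reset
                return True                 # emit an output spike event
            return False

    # Sparse input: three events in 100 ms instead of a dense activation vector.
    neuron = LIFNeuron()
    for t, w in [(5.0, 0.6), (12.0, 0.7), (80.0, 0.3)]:
        if neuron.receive(t, w):
            print(f"spike at t={t} ms")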

 

Dr. Abhronil Sengupta is an Assistant Professor in the School of Electrical Engineering and Computer Science at Penn State University and holds the Joseph R. and Janice M. Monkowski Career Development Professorship. He is also affiliated with the Department of Materials Science and Engineering and the Materials Research Institute (MRI).

Dr. Sengupta received the PhD degree in Electrical and Computer Engineering from Purdue University in 2018 and the B.E. degree from Jadavpur University, India in 2013. He worked as a DAAD (German Academic Exchange Service) Fellow at the University of Hamburg, Germany in 2012, and as a graduate research intern at Circuit Research Labs, Intel Labs in 2016 and Facebook Reality Labs in 2017.

The ultimate goal of Dr. Sengupta’s research is to bridge the gap between Nanoelectronics, Neuroscience and Machine Learning. He is pursuing an interdisciplinary research agenda at the intersection of hardware and software, across the stack of sensors, devices, circuits, systems and algorithms, to enable low-power event-driven cognitive intelligence. Dr. Sengupta has published over 85 articles in refereed journals and conferences and holds 3 granted/pending US patents. He serves on the IEEE Circuits and Systems Society Technical Committee on Neural Systems and Applications, on the Editorial Boards of the IEEE Transactions on Cognitive and Developmental Systems, Scientific Reports, Neuromorphic Computing and Engineering, and Frontiers in Neuroscience journals, and on the Technical Program Committees of several international conferences, including DAC, ICCAD, ISLPED, ISQED, AICAS, ICONS, GLSVLSI, ICEE, SOCC, ISVLSI, MWSCAS and VLSID. He has been awarded the IEEE Electron Devices Society (EDS) Early Career Award (2023), the IEEE Circuits and Systems Society (CASS) Outstanding Young Author Award (2019), the IEEE SiPS Best Paper Award (2018), a Schmidt Science Fellows Award nomination (2017), the Bilsland Dissertation Fellowship (2017), the CSPIN Student Presenter Award (2015), the Birck Fellowship (2013), and the DAAD WISE Fellowship (2012). His work on neuromorphic computing has been highlighted in the media by MIT Technology Review, ZDNet, the US Department of Defense, the American Institute of Physics, IEEE Spectrum, and Nature Materials, among others. Dr. Sengupta is a member of the IEEE Electron Devices Society (EDS), Magnetics Society and Circuits and Systems (CAS) Society, the Association for Computing Machinery (ACM) and the American Physical Society (APS).
 

Performance Debugging, Optimization, and Modeling of Configurable Computer Systems

Friday, November 10, 2023 - 09:00 am
Innovation Building, Room 2267 & Virtual

DISSERTATION DEFENSE

Author : Md Shahriar Iqbal

Advisor : Dr. Pooyan Jamshidi

Date : November 10, 2023

Time:  9 am - 10:30 am

Place : Innovation Building, Room 2267 & Virtual

Meeting Link: https://us02web.zoom.us/j/84549470782?pwd=TmpHaG44NVVMT0FLb3N1SmFZWWZBQ…

Meeting ID: 845 4947 0782
Passcode: 5L6tHf



Abstract

Modern computer systems are highly configurable, with hundreds of configuration options that interact, resulting in an enormous configuration space. Understanding and reasoning about the performance behavior of highly configurable systems is challenging, and the difficulty is compounded by system constraints such as long configuration evaluation times, noisy measurements, limited experimentation budgets, and restricted accessibility, all of which limit the capacity to troubleshoot or optimize these systems. As a result, the significant performance potential already built into many of our modern computer systems remains untapped. Unfortunately, manual configuration is labor-intensive, time-consuming, and often infeasible, even for domain experts. Recently, several search-based and learning-based automatic configuration approaches have been proposed to overcome these issues; nevertheless, critical challenges remain, as these approaches (i) are unaware of variations in evaluation time across performance goals, (ii) may produce incorrect explanations, and (iii) become unreliable in unseen environments (e.g., different hardware or workloads). The primary goal of this thesis is to overcome these limitations by adopting a data-driven strategy to design performance debugging and optimization tools that are efficient, scalable, and reliably usable by developers across different deployment scenarios. We developed FlexiBO, a novel cost-aware acquisition function for multi-objective optimization that addresses sub-optimality on resource-constrained devices: instead of evaluating all objective functions, it chooses for evaluation the objective with the greatest potential benefit weighted by its evaluation cost. We then developed Unicorn, a performance debugging technique that captures intricate interactions between configuration options across the software-hardware stack and uses causal inference to describe how such interactions impact performance variations. Finally, we proposed CAMEO, a method that identifies causal predictors that remain invariant under changes in the deployment environment, allowing the optimization process to operate in a reduced search space and yielding faster optimization of system performance. We demonstrated the promise of these debugging and optimization techniques through extensive and thorough evaluation on a wide range of software systems over a large design space.
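
To make the cost-weighting idea concrete, here is a minimal sketch (hypothetical numbers and interfaces, not the thesis implementation): the next objective to measure is the one with the best ratio of estimated benefit to evaluation cost.

    # Illustrative sketch of cost-aware objective selection in the spirit of
    # FlexiBO: evaluate only the objective whose expected benefit per unit of
    # evaluation cost is largest. The benefit estimates and costs below are
    # placeholders, not the thesis' actual acquisition function.

    def select_objective(benefits, costs):
        """benefits[i]: estimated benefit (e.g., reduction in uncertainty about
        the Pareto front) from evaluating objective i at a candidate config;
        costs[i]: measured or predicted time to evaluate objective i."""
        ratios = [b / c for b, c in zip(benefits, costs)]
        return max(range(len(ratios)), key=ratios.__getitem__)

    # Example: accuracy is far costlier to measure than energy, so even a
    # smaller expected benefit can make energy the better objective to query.
    objectives = ["accuracy", "energy"]
    benefits = [0.30, 0.12]   # hypothetical expected benefit per objective
    costs = [900.0, 30.0]     # hypothetical evaluation time in seconds
    print(objectives[select_objective(benefits, costs)])  # -> "energy"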

 

Data Annotation for Pervasive Sensing Systems: Challenges and Opportunities

Friday, November 3, 2023 - 02:20 pm
Innovation Center Building 1400

Abstract: 
Smart infrastructures are usually driven by intelligent context-aware services that often need strong ML (or DL) models working in the backend. However, building such sophisticated models requires a significant amount of labeled training data, and the most widely accepted and conventional approach to data annotation keeps a human in the loop. This task is becoming expensive and often highly tedious as multimodal data sources proliferate. Taking sensor-based human activity recognition as a broad use case, this talk first discusses the possibility of choosing auxiliary modalities that provide enough information to annotate the data without a human in the loop. We then look at zero-shot approaches that refine the coarse-grained annotations received from external annotators, capturing more information about the confounded actions hidden within any complex activity of daily living.
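
As a toy illustration of the auxiliary-modality idea (entirely hypothetical thresholds and labels, not the speaker's system), one easy-to-interpret sensor can generate labels for the modality that actually needs annotation:

    # Toy sketch of annotation without a human in the loop: a simple rule on an
    # auxiliary modality (here, a smart-plug power reading) labels time windows
    # of the target modality (wrist accelerometer features). Thresholds and
    # labels are hypothetical, for illustration only.

    def label_from_auxiliary(power_watts):
        """Map an appliance power reading to a coarse activity label."""
        if power_watts > 1000.0:
            return "cooking"       # e.g., an induction stove drawing load
        if power_watts > 50.0:
            return "watching_tv"
        return None                # not confident: leave the window unlabeled

    # Each window: (accelerometer features, synchronized auxiliary reading).
    windows = [([0.1, 0.9, 0.2], 1500.0),
               ([0.4, 0.1, 0.3], 70.0),
               ([0.0, 0.0, 0.1], 2.0)]
    labeled = [(x, label_from_auxiliary(p)) for x, p in windows
               if label_from_auxiliary(p) is not None]
    print(labeled)  # only confidently labeled windows enter the training set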

Bio
Sandip Chakraborty is an Associate Professor in the Department of Computer Science and Engineering at the Indian Institute of Technology (IIT) Kharagpur. He obtained his Ph.D. from IIT Guwahati, India, in 2014 and was a visiting DAAD Fellow at MPI Saarbrucken, Germany, in 2016. His current research explores the design of ubiquitous systems for human-computer interaction, particularly multi-modal sensing, pervasive systems development, and distributed computing. His work has been published in conferences such as ACM CHI, ACM BuildSys, ACM RecSys, ACM MobileHCI, TheWebConf (WWW), IEEE PerCom, IEEE INFOCOM, and ACM SIGSPATIAL. He is one of the founding members of ACM IMOBILE, the ACM SIGMOBILE chapter in India. He serves as an Area Editor of the Elsevier Ad Hoc Networks and Elsevier Pervasive and Mobile Computing journals. He has received various awards and accolades, including the INAE Young Engineers’ Award and a Fellowship of the National Internet Exchange of India (NIXI). He is actively involved in organizing various conferences, including IEEE PerCom, IEEE SmartComp, COMSNETS, and ICDCN.

Location: Innovation Center Building 1400

Teams Link

Sensing the Future: Unveiling the Benefits and Risks of Sensing in Cyber-Physical Security

Friday, October 27, 2023 - 02:20 pm
Innovation Center Building 1400

Abstract:
With the emergence of the Internet-of-Things (IoT) and Cyber-Physical Systems (CPS), we are witnessing a wealth of exciting applications that enable computational devices to interact with the physical world via an overwhelming number of sensors and actuators. However, such interactions pose new challenges to traditional approaches to security and privacy. In this talk, I will present how I utilize sensor data to provide security and privacy protections for IoT/CPS scenarios, and further introduce novel security threats arising from similar sensor data. Specifically, I will highlight some of our recent projects that leverage sensor data for attack and defense in various IoT settings. I will also introduce my future research directions such as identifying and defending against unforeseen security challenges from newer domains including smart homes, buildings, and vehicles.

Bio:
Jun Han is an Assistant Professor at Yonsei University with an appointment in the School of Electrical and Electronic Engineering. He founded and directs the Cyber-Physical Systems and Security (CyPhy) Lab at Yonsei. Prior to joining Yonsei, he was at the National University of Singapore with an appointment in the Department of Computer Science, School of Computing. His research interests lie at the intersection of sensing, mobile computing systems, and security, focusing on utilizing contextual information for security applications in the Internet-of-Things and Cyber-Physical Systems. He publishes at top-tier venues across research communities spanning mobile computing, sensing systems, and security (including MobiSys, MobiCom, SenSys, Ubicomp, IPSN, S&P/Oakland, CCS, and Usenix Security). He has received multiple awards, including the Google Research Scholar Award. He received his Ph.D. from the Electrical and Computer Engineering Department at Carnegie Mellon University as a member of the Mobile, Embedded, and Wireless (MEWS) Group, and his M.S. and B.S. degrees in Electrical and Computer Engineering, also from Carnegie Mellon. Jun previously worked as a software engineer at Samsung Electronics.

Location:
In-person
Innovation Center Building 1400

Virtual audience

Enhancing Relational Database Security with Shuffling

Wednesday, October 25, 2023 - 12:00 pm
Innovation Center, Room 2265

DISSERTATION DEFENSE

Department of Computer Science and Engineering
University of South Carolina
Author : Tieming Geng
Advisor : Dr. Chin-Tser Huang
Date : October 25, 2023
Time: 12 pm
Place : Innovation Center, Room 2265 & Virtual
Meeting Link : Teams

  • Meeting ID: 287 744 722 437
  • Passcode: ZegM7A


Real-time Computing for Cyberphysical Systems

Friday, October 13, 2023 - 02:20 pm
Innovation Center Building 1400

Abstract:

Recently, cyberphysical systems (CPS) have gained significant traction in various engineering fields. One of the challenges for CPS is to develop lightweight, real-time computational models that enable in-situ evaluation and decision-making on mobile, decentralized platforms. This seminar presents multiple research efforts along this frontier at the Integrated Multiphysics & Systems Engineering Laboratory (iMSEL) at the University of South Carolina (USC). It starts with a fundamental introduction to key methodologies for lightweight, real-time computation in engineering, including reduced order modeling (ROM) and data-driven modeling. The extension of the data-driven methods by leveraging recent advances in deep learning will then be discussed, followed by strategies for integrating real-time evaluation and decision-making on edge computing devices to enable field deployment of CPS. Several real-world applications of significant interest to federal agencies, demonstrated by iMSEL for real-time computing, such as design automation, massive data analytics, anomaly detection, and system autonomy, will also be presented.
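
For readers unfamiliar with reduced order modeling, here is a minimal proper orthogonal decomposition (POD) sketch on synthetic snapshot data (one standard route to ROM; illustrative only, not iMSEL code):

    # Minimal proper orthogonal decomposition (POD) sketch: compress full-state
    # simulation snapshots into a few basis modes so that later evaluations run
    # in a tiny subspace. Synthetic data; the rank choice is arbitrary.
    import numpy as np

    rng = np.random.default_rng(0)
    n_dof, n_snapshots, rank = 1000, 50, 5

    # Fake snapshot matrix (each column is one full-order simulation state).
    modes_true = rng.standard_normal((n_dof, rank))
    coeffs = rng.standard_normal((rank, n_snapshots))
    snapshots = modes_true @ coeffs + 0.01 * rng.standard_normal((n_dof, n_snapshots))

    # POD basis = leading left singular vectors of the snapshot matrix.
    U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
    basis = U[:, :rank]                      # n_dof x rank reduced basis

    # Project a new full state into the reduced space and reconstruct it.
    x = snapshots[:, 0]
    x_reduced = basis.T @ x                  # rank-dimensional representation
    x_approx = basis @ x_reduced
    print("relative error:", np.linalg.norm(x - x_approx) / np.linalg.norm(x))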

Bio:

Yi Wang is an Associate Professor in Mechanical Engineering at the University of South Carolina (USC). He completed his PhD at Carnegie Mellon University in 2005 and obtained his B.S. and M.S. from Shanghai Jiaotong University in China in 1998 and 2000, respectively. From 2005 to 2017, he held several positions of increasing responsibility at CFD Research Corporation (CFDRC), Huntsville, Alabama. In 2017, he joined the University of South Carolina to start his academic career. His research interests focus on computational and data-enabled science and engineering (CDS&E), including reduced order modeling, large-scale and real-time data analytics, system-level simulation, computer vision, and cyberphysical systems and autonomy, with applications in aerospace, naval perception, unmanned systems, manufacturing, and biomedical devices. His research has been sponsored by several federal funding agencies, including DoD, NIH, NASA, and DOT, as well as by industry. He has published over 150 papers in refereed journals and conference proceedings. He is the recipient of the 2021 Research Breakthrough Star Award at USC.

Virtual audience

Robust Underwater State Estimation and Mapping

Wednesday, October 11, 2023 - 03:00 pm
Innovation Center, Room 2277 & Virtual

DISSERTATION DEFENSE

Author: Bharat Joshi
Advisor: Dr. Ioannis Rekleitis
Date: October 11, 2023
Time: 3 pm - 5 pm
Place: Innovation Center, Room 2277 & Virtual

Meeting Link: 

Abstract:

The ocean covers two-thirds of Earth yet remains relatively unexplored compared to the landmass. Mapping underwater structures is essential for both archaeological and conservation purposes. This dissertation focuses on employing a robot team to map underwater structures using vision-based simultaneous localization and mapping (SLAM). The overarching goal of this research is to create a team of autonomous robots that maps large underwater structures in a coordinated fashion. This requires maintaining an accurate, robust pose estimate of oneself and knowing the relative pose of the other robots in the team. However, the GPS-denied and communication-constrained underwater environment, along with low visibility, poses several challenges for state estimation. This dissertation aims to diagnose the challenges of underwater vision-based state estimation algorithms and provide solutions to improve their robustness and accuracy. Moreover, robust state estimation combined with deep learning-based relative localization forms the backbone for cooperative mapping by a team of robots.
 

The performance of open-source state-of-the-art visual-inertial SLAM algorithms is compared in multiple underwater environments to understand the challenges of state estimation underwater. Extensive evaluation showed that consumer-level imaging sensors are ill-equipped to handle challenging underwater image formation, low intensity, and artificial lighting fluctuations. Thus, the GoPro action camera, which captures high-definition video along with synchronized IMU measurements embedded within a single MP4 file, is presented as a substitute. Along with enhanced images, fast sparse map deformation is performed for globally consistent mapping after loop closure. However, in some environments, such as underwater caves, narrow passages and turbulent flows make loop closure difficult, resulting in yaw drift over long trajectories. Tightly-coupled fusion of high-frequency magnetometer measurements in optimization-based visual-inertial odometry using IMU preintegration is therefore performed, producing a significant reduction in yaw drift. Even with good-quality cameras, there are scenarios during underwater deployments where visual SLAM fails. Robust state estimation is proposed by switching between visual-inertial odometry and a model-based estimator to keep track of the Aqua2 Autonomous Underwater Vehicle (AUV) during underwater operations. Finally, mapping large underwater structures cooperatively requires a team of robots equipped with robust state estimation and capable of relative localization with respect to each other. A deep learning framework is designed for real-time 6D pose estimation of an Aqua2 AUV with respect to an observing camera, trained only on synthetic images. Together, the robust state estimation and accurate relative localization developed in this dissertation contribute to mapping underwater structures using multiple AUVs.
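
To see why magnetometer fusion bounds yaw drift, consider this deliberately simplified, loosely-coupled complementary filter (a stand-in illustration with made-up constants; the dissertation's tightly-coupled IMU-preintegration formulation is different):

    # Simplified illustration of why magnetometer fusion bounds yaw drift.
    # This is a loosely-coupled complementary filter, NOT the tightly-coupled
    # optimization-based approach used in the dissertation.
    import math

    def wrap(a):
        """Wrap an angle to (-pi, pi]."""
        return math.atan2(math.sin(a), math.cos(a))

    yaw = 0.0           # estimated yaw (rad)
    alpha = 0.02        # small gain pulling toward the magnetometer heading
    dt = 0.01           # 100 Hz IMU rate
    gyro_bias = 0.01    # rad/s of uncompensated bias -> drift if uncorrected

    for step in range(10000):  # 100 s of data; the true yaw rate is zero
        gyro_z = 0.0 + gyro_bias
        yaw = wrap(yaw + gyro_z * dt)      # dead-reckoned integration drifts
        mag_heading = 0.0                  # magnetometer: still facing north
        yaw = wrap(yaw + alpha * wrap(mag_heading - yaw))  # correction step

    print(f"yaw after 100 s: {yaw:.4f} rad (pure integration would drift to 1.0)")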

Codesigning Computing Systems for Artificial Intelligence

Tuesday, October 10, 2023 - 11:40 am
online

Speakers: Amir Yazdanbakhsh (Google DeepMind) and Suvinay Subramanian (Google)


Teams Link


Abstract:

The rapid advancement of artificial intelligence (AI) has ushered in an era of unprecedented computational demands, necessitating continuous innovation in computing systems. In this talk, we will highlight how codesign has been a key paradigm in enabling innovative solutions and state-of-the-art performance in Google's AI computing systems, namely the Tensor Processing Units (TPUs). We present several codesign case studies across different layers of the stack, spanning hardware, systems, software, and algorithms, all the way up to the datacenter. We discuss how the TPU team has made judicious yet opinionated design bets, and how these choices have not only kept pace with the blistering rate of change but also enabled many of the breakthroughs in AI.

Bio:

Amir Yazdanbakhsh received his Ph.D. degree in computer science from the Georgia Institute of Technology. His Ph.D. work was recognized by various awards, including the Microsoft PhD Fellowship and the Qualcomm Innovation Fellowship. Amir is currently a Research Scientist at Google DeepMind, where he is the co-founder and co-lead of the Machine Learning for Computer Architecture team. His work focuses on leveraging recent machine learning methods and advances to innovate and design better hardware accelerators. He is also interested in designing large-scale distributed systems for training machine learning applications, and he led the development of a massively large-scale distributed reinforcement learning system that scales to TPU Pods and efficiently manages thousands of actors to solve complex, real-world tasks. His team's work has been covered by media outlets including WIRED, ZDNet, AnalyticsInsight, and InfoQ. Amir was inducted into the ISCA Hall of Fame in 2023.

Suvinay Subramanian is a Staff Software Engineer at Google, where he works on the architecture and codesign for Google's ML supercomputers, Tensor Processing Units (TPUs). His work has directly impacted innovative architecture and systems features in multiple generations of TPUs, and empowered performant training and serving of Google's research and production AI workloads. Suvinay received a Ph.D. from MIT, and a B.Tech from the Indian Institute of Technology Madras. He also co-hosts the Computer Architecture Podcast that spotlights cutting-edge developments in computer architecture and systems.