Towards Automotive Radar Networks for Enhanced Detection/Cognition

Friday, September 22, 2023 - 02:20 pm
Innovation Center, Room 1400

SUMMARY: This talk will present an overview of recent research at UW FUNLab on the use of vehicular radar for advanced driver assistance systems (en route to a future vision of autonomous driving). Wideband (typically FMCW or chirp) radars are increasingly deployed onboard vehicles as key high-resolution sensors for environmental mapping/imaging and various safety features. The talk is divided into two parts, centered on the evolving role of radar ‘cognition’ in complex operating environments, addressing two important future challenges:
 

  1. Mitigating multi-access interference among radars (e.g., in dense traffic scenarios)
    This part will first illustrate the impact of mutual interference on detection performance in commercial chirp/FMCW radars and then highlight multi-access protocol design approaches for effective resource sharing among multiple radars.
  2. Enhancing radar vision via new radar hardware (MIMO radar) combined with advanced signal processing (synthetic aperture) principles and Convolutional Neural Network (‘Radar Net’)-based machine learning approaches for improved object detection/classification in challenging conditions.
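As background for the first part, the chirp-radar principle the summary alludes to fits in two one-line formulas: a target's range maps to a beat frequency through the chirp slope, and the sweep bandwidth sets the range resolution. The sketch below is illustrative only; the parameter values are assumptions, not figures from the talk.

```python
# Illustrative back-of-envelope FMCW ranging (parameter values are assumptions,
# not figures from the talk). A chirp radar mixes its transmitted ramp with the
# received echo; the resulting beat frequency is proportional to target range.
C = 3e8  # speed of light, m/s

def beat_frequency(range_m, chirp_slope_hz_per_s):
    """Beat frequency for a target at range_m, given the chirp slope (Hz/s)."""
    round_trip_delay_s = 2.0 * range_m / C
    return chirp_slope_hz_per_s * round_trip_delay_s

def range_resolution(sweep_bandwidth_hz):
    """Best-case range resolution set by the chirp's sweep bandwidth."""
    return C / (2.0 * sweep_bandwidth_hz)

# Example: a 4 GHz sweep over 40 microseconds (a 77 GHz automotive-style chirp).
slope = 4e9 / 40e-6                      # 1e14 Hz/s
print(beat_frequency(50.0, slope))       # target at 50 m -> ~33.3 MHz beat
print(range_resolution(4e9))             # 0.0375 m
```

Mutual interference arises exactly here: another radar's chirp mixing into the receiver produces spurious beat-frequency content, i.e., ghost targets or a raised noise floor.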

 

Trustworthy Artificial Intelligence Using Knowledge-powered CREST Framework

Friday, September 15, 2023 - 02:20 pm
Innovation Center Building 1400

Abstract

Large Language Models (LLMs) have garnered significant attention from researchers, including clinicians, due to their ability to respond to a wide range of human queries. Innovations like ChatGPT's groundbreaking reinforcement learning with human feedback and Google's domain-specific fine-tuning in Med-PaLM have introduced two potent information-providing platforms for general health inquiries. The 2023 Gartner Hype Cycle places such LLMs at the pinnacle, foreseeing translational impact in the next 2-3 years. This foresight is grounded in comprehensive assessments of recent studies that have illuminated the limitations of these LLMs.

The remarkable potential of these LLMs, when fortified with features like human-level explainability, consistency, reliability, and safety, holds the promise of making deployable systems usable and readily adaptable to various scenarios where human lives may be affected. The talk will introduce a suite of methodologies (methods+metrics) under the Knowledge-powered CREST Framework for LLMs. This practical approach harnesses declarative, procedural, and graph-based knowledge within a neurosymbolic framework to shed light on the challenges associated with LLMs. 
 

Bio

Manas Gaur is an assistant professor in the Department of Computer Science and Electrical Engineering at the University of Maryland, Baltimore County (UMBC). At UMBC, he leads the Knowledge-infused AI and Inference (KAI2) lab. Before entering academia, he was the lead research scientist in Natural Language Processing (NLP) at the AI Center within Samsung Research America. He also held a visiting researcher role at the Alan Turing Institute. Dr. Gaur earned his Ph.D. under the guidance of Prof. Amit P. Sheth at the Artificial Intelligence Institute, University of South Carolina. Together, they played a pivotal role in the development of Knowledge-infused Learning, a paradigm that harmonizes seamlessly with NeuroSymbolic AI. He has been recognized as AAAI New Faculty for 2023 and is currently an advisor to Balm.ai, a mental-health startup. More details about him are at: https://manasgaur.github.io/
 

Location:

In-person

Innovation Center Building 1400

 

Online

 

Resource-Aware Approximate Dynamic Programming and Reinforcement Learning for Optimal Control of Dynamic Cyber-Physical Systems

Friday, September 8, 2023 - 02:30 pm
Online

Abstract
The “curse of dimensionality” afflicting dynamic programming-based control of systems or agents with large state and action spaces led to the development of approximate dynamic programming (ADP). ADP unifies the theories of optimal control, adaptive control, and reinforcement learning (RL) to obtain an approximate solution to the Bellman equation online and forward in time. In general, the value function, which solves the Bellman equation in a discrete-time framework or the Hamilton-Jacobi-Bellman (HJB) equation in a continuous-time framework, is approximated using a neural network-based approximator. The learning/adaptive nature of the solution often partially or fully relaxes the assumption of complete system information, enabling optimal decision-making in uncertain or unknown environments.

This presentation will trace the evolution of ADP/RL-based optimal control designs for dynamic cyber-physical systems, moving from traditional iterative solutions to time-based solutions, with particular focus on the computation- and communication-saving aspects of these designs. The resource-aware ADP scheme, referred to as event-driven ADP, will be discussed in detail using Q-learning and temporal-difference learning approaches. Event-driven approaches train the neural network approximators and update the control actions only at certain events, thereby considerably reducing the computational and communication requirements of implementing learning-based control over a communication network. The presentation will conclude by probing some unresolved challenges of ADP/RL schemes, emphasizing their potential vulnerabilities in a cyber-physical framework.
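The event-driven idea can be made concrete with a toy example. The sketch below is a minimal illustration under simplifying assumptions, not the speaker's algorithm: for a scalar linear system, a feedback gain is "learned" and a new action is broadcast only when the state has drifted far enough from its value at the last event, so updates are far sparser than time steps.

```python
# Minimal sketch of event-driven learning control (a simplifying assumption,
# not the speaker's algorithm): for the scalar plant x <- 0.9*x + u, the gain
# is updated and the action re-broadcast only when an event condition fires.
def event_triggered_run(steps=200, threshold=0.05, lr=0.1):
    x = 1.0          # system state
    gain = 0.0       # learned feedback gain (stand-in for an actor network)
    u = 0.0          # last broadcast control action (held between events)
    x_last = x       # state at the most recent event
    updates = 0
    for _ in range(steps):
        if updates == 0 or abs(x - x_last) > threshold:
            # "Learning" step: nudge the gain toward the stabilizing value
            # -0.9 (a crude stand-in for a Q-learning/TD update).
            gain += lr * (-0.9 - gain)
            u = gain * x          # new action, broadcast to the actuator
            x_last = x
            updates += 1
        x = 0.9 * x + u           # plant evolves; u is held constant otherwise
    return x, updates

x_final, n_updates = event_triggered_run()
# Far fewer updates than time steps, yet the state settles near the origin.
print(round(x_final, 3), n_updates)
```

The communication saving comes from the `if` condition: between events the actuator simply holds the last action, so nothing is transmitted and no approximator training occurs.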


Bio
Avimanyu Sahoo received his Ph.D. in Electrical Engineering from Missouri University of Science and Technology, Rolla, MO, USA, in 2015, and his Master of Technology (MTech) from the Indian Institute of Technology (BHU), Varanasi, India, in 2011. He is currently an Assistant Professor in the Electrical and Computer Engineering Department at the University of Alabama in Huntsville (UAH), AL. Prior to joining UAH, Dr. Sahoo was an Associate Professor in the Division of Engineering Technology at Oklahoma State University, Stillwater, OK.

 

Dr. Sahoo’s research interests include learning-based control and its applications in lithium-ion battery pack modeling, diagnostics, and prognostics, cyber-physical systems, and electric machinery health monitoring. His current research focuses on developing intelligent battery management systems (BMS) for lithium-ion battery packs onboard electric vehicles, and on computation- and communication-efficient distributed intelligent control schemes for cyber-physical systems using approximate dynamic programming, reinforcement learning, and distributed adaptive state estimation.


Link
 

Doing AI Research in the Age of ChatGPT - Has Anything Changed?

Friday, September 1, 2023 - 02:20 pm
Innovation Center Building 1400

Abstract

Ever since ChatGPT was launched last November, it has captured the public's imagination, quickly leading to upheaval and excitement across business, research, and government communities. But how is it changing the AI research community from which the technology emerged? In this talk, I will take a practical perspective on the potential and challenges of working with Large Language Model (LLM)-based technologies. We are using LLMs for the core AI tasks of generating plans and knowledge graphs, and exploring their use for decision support in finance, water, and elections. The TL;DR is that LLMs can be quite useful but unreliable, and this opens up exciting research opportunities for trusted AI.

Bio

Biplav Srivastava is a Professor of Computer Science at the AI Institute and Department of Computer Science at the University of South Carolina, which he joined in 2020 after two decades in industrial research. He directs the 'AI for Society' group, which investigates how to enable people to make rational decisions despite the real-world complexities of poor data, changing goals, and limited resources by augmenting their cognitive limitations with technology. His work in Artificial Intelligence spans the sub-fields of reasoning (planning, scheduling), knowledge extraction and representation (ontology, open data), learning (classification, deep, adversarial) and interaction (collaborative assistants), and extends to their application for Services (process automation, composition) and Sustainability (water, traffic, health, governance). In particular, he has been involved with building innovative systems for decision support in domains as diverse as governance (IJCAI 2016), astronomy (AAAI 2018 best demo award), water (AAAI 2018), smart room (ICAPS 2018 demo runner up, IJCAI 2018), career planning (commercial product), market intelligence (AAAI 2020 deployed AI award), dialogs for information retrieval (ICAPS 2021), fairness assessment (AAAI 2021), computer games (AAAI 2022), generalized planning (IJCAI 2023), transportation, set recommendation (teaming, meals) and health. Biplav’s works have led to many science firsts and high-impact commercial innovations valued over billions of dollars, 200+ papers and 70 US patents issued, and awards for papers, demos and hacks. He is an ACM Distinguished Scientist, AAAI Senior Member, IEEE Senior Member and AAAS Leshner Fellow for Public Engagement on AI (2020-2021). More details about him are at:

https://sites.google.com/site/biplavsrivastava/

 

Computerized Psychological Testing: Designing and Developing an Efficient Test Suite using HCI and Reinforcement Learning Techniques

Monday, August 7, 2023 - 12:30 pm
Online

DISSERTATION DEFENSE 
Author : William Hoskins

Advisor : Dr. Jijun Tang

Date : August 7, 2023

Time: 12:30 pm - 1:30 pm

Place : Virtual

Meeting Link:

Abstract 


In this work we discuss the design and development of the Carolina Automated Reading Evaluation (CARE), created to facilitate the identification of deficits in the reading ability of children from four to nine years of age. Designed to automate the process of screening for reading deficits, the CARE is an interactive computer-based tool that helps eliminate the need for one-on-one evaluations of pupils to detect dyslexia and other reading deficits, and facilitates the creation of new reading tests within the platform.

While other tests collect specific data points in order to determine whether a pupil has dyslexia, they typically focus on only a few metrics for diagnosis, such as handwriting analysis or eye tracking. The CARE collects data across up to 16 different subtests, each built to test proficiency in various reading skills. These skills include reading fluency, phoneme manipulation, sound blending, and many other essential skills for reading. This wide variety of measurements allows for a more focused intervention to be created for the pupil. 

The first chapter of this work recounts the design and development process for the CARE platform, describing the creation of the test-development tools and the individual subtests. The second chapter focuses on using eye tracking to optimize the teacher-facing user interface. Chapter three discusses the use of reinforcement learning to create a Computerized Adaptive Test for the CARE.
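To make the adaptive-testing loop concrete, here is a hypothetical sketch. The dissertation uses reinforcement learning for item selection; this toy substitutes a simple greedy rule (pick the unused item nearest the current ability estimate), and every name and value below is an assumption for illustration.

```python
# Hypothetical sketch of a Computerized Adaptive Test loop (greedy item
# selection stands in for the dissertation's reinforcement learning; all
# names and values are assumptions for illustration).
def run_adaptive_test(item_difficulties, respond, theta=0.0, steps=5, lr=0.5):
    """Adaptively administer items, updating the ability estimate theta."""
    remaining = list(item_difficulties)
    history = []
    for _ in range(min(steps, len(remaining))):
        item = min(remaining, key=lambda d: abs(d - theta))  # most informative
        remaining.remove(item)
        correct = respond(item)
        theta += lr if correct else -lr       # nudge the estimate up or down
        history.append((item, correct, theta))
    return theta, history

# Simulated pupil with true ability 1.0: answers correctly whenever the item
# is no harder than that.
theta, hist = run_adaptive_test(
    item_difficulties=[-2.0, -1.0, 0.0, 1.0, 2.0],
    respond=lambda difficulty: difficulty <= 1.0,
)
print(theta, [h[0] for h in hist])  # estimate climbs toward the true ability
```

The point of the adaptive loop is efficiency: items far from the pupil's ability level carry little information, so the test converges on a difficulty estimate with fewer questions than a fixed-form test.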

Predicting Material Structures and Properties Using Deep Learning and Machine Learning Algorithms

Wednesday, June 7, 2023 - 11:00 am
Online

DISSERTATION DEFENSE 
Author : Yuqi Song

Advisor : Dr. Jianjun Hu

Date : June 7, 2023

Time: 11:00 am - 12:30 pm

Place : Virtual

Meeting Link : https://us04web.zoom.us/j/6926040413?pwd=Ab5_mUtciTlbxlZr0Lmx1ktxIi75VK.1

Abstract 

Discovering new materials and understanding their crystal structures and chemical properties are critical tasks in the materials sciences. Although computational methodologies such as Density Functional Theory (DFT) provide a convenient means of calculating certain material properties, or of predicting crystal structures when combined with search algorithms, DFT is computationally too demanding for structure prediction and property calculation across most material families, especially materials with a large number of atoms. This dissertation aims to address this limitation by developing novel deep learning and machine learning algorithms for effective prediction of material crystal structures and properties. Our data-driven machine learning approaches allow us to learn both explicit and implicit chemical and geometric knowledge, in the form of patterns and constraints, from known materials, and then to exploit it for efficient sampling in crystal structure prediction and for feature extraction in material property prediction.

 

In the first topic, we present DeltaCrystal, a new deep learning-based method for crystal structure prediction (CSP). This data-driven algorithm learns and exploits the abundant atom-interaction distributions of known crystal structures to achieve an efficient structure search. It first learns to predict the atomic distance matrix for a given material composition using a deep residual neural network, and then employs this matrix to reconstruct the 3D crystal structure using a genetic algorithm. Through extensive experiments, we demonstrate that our model can learn implicit inter-atomic relationships and exploit them effectively and reliably for crystal structure prediction. Compared to a global-optimization-based CSP method, our algorithm achieves better structure prediction performance on more complex crystals.
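To illustrate the reconstruction step in isolation (DeltaCrystal reconstructs the structure with a genetic algorithm; here classical multidimensional scaling stands in as a simpler substitute), the following sketch recovers 3D coordinates, up to rotation and translation, from a pairwise distance matrix:

```python
# Toy sketch of the reconstruction step only. DeltaCrystal uses a genetic
# algorithm; classical multidimensional scaling (MDS) stands in here to show
# that a pairwise distance matrix determines 3D coordinates up to rotation
# and translation.
import numpy as np

def coords_from_distance_matrix(D, dim=3):
    """Classical MDS: embed points whose pairwise Euclidean distances are D."""
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n      # double-centering matrix
    B = -0.5 * J @ (D ** 2) @ J              # Gram matrix of centered coords
    w, V = np.linalg.eigh(B)                 # eigenvalues in ascending order
    top = np.argsort(w)[::-1][:dim]          # keep the dim largest eigenpairs
    return V[:, top] * np.sqrt(np.maximum(w[top], 0.0))

# Sanity check on random 3D "atoms": the embedding reproduces the distances.
rng = np.random.default_rng(0)
X = rng.normal(size=(8, 3))
D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
Y = coords_from_distance_matrix(D)
D2 = np.linalg.norm(Y[:, None, :] - Y[None, :, :], axis=-1)
print(np.allclose(D, D2, atol=1e-6))  # True
```

In practice a predicted distance matrix is noisy and need not be exactly Euclidean, which is one reason a robust search procedure such as a genetic algorithm is used instead of a closed-form embedding.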

 

In the second topic, we shift our focus from predicting the positions of individual atoms in each structure to crystal structure prediction based on structural polyhedron motifs, motivated by the observation that these atom patterns recur across different crystal materials with high geometric conservation, which has the potential to significantly reduce search complexity. We extract a large set of structural motifs from a vast collection of material structures. Through comprehensive analysis of these motifs, we uncover common patterns that span different materials. Our work represents a preliminary step in exploring material structures from the motif point of view and exploiting such motifs for efficient crystal structure prediction.

 

In the third topic, we propose a machine learning based framework for discovering new hypothetical 2D materials. It first trains a deep learning generative model for material composition generation and trains a random forest-based 2D materials classifier to screen out potential 2D material compositions. Then, a template-based element substitution structure prediction approach is developed to predict the crystal structures for a subset of the newly predicted hypothetical 2D formulas, which allows us to confirm their structural stability using DFT calculations. So far, we have predicted 101 crystal structures and confirmed 92 2D/layered materials by DFT formation energy calculation.

 

In the last topic, we focus on machine learning models for predicting material properties, including the piezoelectric coefficients and the noncentrosymmetry of nonlinear optical materials, which play important roles in applications such as laser technology and X-ray shutters. We conduct a comprehensive study on developing advanced machine learning models and evaluating their performance for predicting the piezoelectric modulus from a material's composition/structure. We train several prediction models based on extensive feature engineering combined with machine learning models, as well as on automated feature learning with deep graph neural networks. We use the best model to predict the piezoelectric coefficients of 12,680 materials and report the top 20 potential high-performance piezoelectric materials. Similarly, we develop machine learning models to screen potential noncentrosymmetric materials from 2,000,000 hypothetical materials generated by our composition generative design model and report the top 80 candidate noncentrosymmetric nonlinear materials.

Extending The Convolution In Graph Neural Networks To Solve Materials Science And Node Classification Problems

Tuesday, May 2, 2023 - 01:00 pm
online

DISSERTATION DEFENSE 

Author : Steph-Yves Louis

Advisor : Dr. Jianjun Hu

Date : May 2nd

Time:  1 - 3 pm 

Place : Virtual

Meeting Link : https://us06web.zoom.us/j/83309917826?pwd=eFdBM2h3YWRmbjJUYUdJMzRUOEFmQ…

Abstract 

The use of graphs to represent data in machine learning has grown in popularity in both academia and industry due to its inherent benefits. With its flexible nature and direct correspondence to real-life objects, graph representation has contributed considerably to advancing the state-of-the-art performance of machine learning on materials.

In this dissertation, we discuss how machines can learn from graph-encoded data and deliver excellent results through graph neural networks (GNNs). Notably, we focus our adaptations of graph neural networks on three tasks: predicting crystal material properties, nullifying the negative impact of inferior graph nodes during learning, and generating crystal structures from material formulas. In the first topic, we propose and evaluate an adaptation of the original graph-attention (GAT) model for materials property prediction. By encoding the bonds formed by atomic elements and adding a final global-attention layer, our approach (GATGNN) achieves strong performance and provides an interpretable explanation of each atom's contribution.

For the second topic, we analyze the learning process of various well-known GNNs and identify a common issue: the propagation of noisy information. Aiming to reduce the spread of particularly harmful information, we propose a simple, memory-efficient, and highly scalable method called NODE-SELECT. Our results demonstrate that the combination of hard attention coefficients, a binary learnable selection parameter, and a parallel arrangement of the layers significantly reduces the negative impact of noisy data propagation within a GNN.
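A schematic sketch of the hard-selection idea (an illustration under assumed details, not the authors' implementation): a learned per-node score is thresholded into a binary mask, so low-scoring nodes contribute nothing when their neighbors aggregate features.

```python
# Schematic sketch of hard node selection (assumed details for illustration,
# not the NODE-SELECT implementation): a per-node score is thresholded into a
# 0/1 mask, and deselected nodes are excluded from neighborhood aggregation.
import numpy as np

def select_and_aggregate(H, A, scores, threshold=0.0):
    """One gated aggregation pass.
    H: (n, d) node features; A: (n, n) adjacency; scores: (n,) learned scores."""
    mask = (scores > threshold).astype(float)   # hard 0/1 attention per node
    gated = H * mask[:, None]                   # silence deselected nodes
    deg = A.sum(axis=1, keepdims=True).clip(min=1.0)
    return H + (A @ gated) / deg                # residual mean aggregation

H = np.array([[1.0, 0.0], [0.0, 1.0], [5.0, 5.0]])  # node 2 carries "noise"
A = np.array([[0.0, 1.0, 1.0], [1.0, 0.0, 1.0], [1.0, 1.0, 0.0]])
scores = np.array([1.0, 1.0, -1.0])                 # node 2 is deselected
out = select_and_aggregate(H, A, scores)
print(out[0])  # node 0 aggregates only node 1's features: [1.0, 0.5]
```

Because the gate is binary rather than a soft weight, a "noisy" node is removed from message passing entirely instead of merely being down-weighted.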

In the third topic, we extend our GATGNN method and apply it to simulating electrode reactions for predicting voltages. Finally, in the last topic, we propose a conditional generative method, named StructR-Diffusion, for generating crystal structures. In this approach, we combine GNNs, diffusion, and graph transformers to learn the 3-dimensional positioning of elements within a unit cell. Various statistical tests, physical attribute predictions, and visual inspections show that the proposed graph convolutional network model has good generative capability. Our efficient model can generate diverse structures that are well optimized even prior to DFT relaxation.

Restricted Eavesdropping Analysis in Quantum Cryptography

Friday, April 28, 2023 - 10:15 am
online

Abstract

Quantum computing is a fast-developing field, but it poses threats to modern cryptographic systems, so research in quantum cryptography is of great importance for near-term applications. Traditional security analysis, however, assumes that the eavesdropper is omnipotent, with her abilities limited only by the laws of quantum physics. In this research talk I will introduce my work on "Geometrical Optics Restricted Eavesdropping Analysis of Secret Key Distillation and its Applications to Practical Scenarios", which extends the traditional secret-key-distillation security analysis to a more realistic scenario in which the eavesdropper is assumed to have a limited power-collection ability. Such a restricted-eavesdropping scenario is highly applicable to wireless communication links such as wireless microwave or free-space optical communications.

We will start from a quantum wiretap channel to establish lower and upper bounds based on the hashing inequality and the relative entropy of entanglement. We will then apply this model to realistic channel conditions and analyze eavesdropping and defense strategies from both the eavesdropper's and the communicating parties' sides. The respective conclusions will be presented and discussed in detail during the presentation.
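For reference, the two bounds named in the abstract typically take the following standard forms in the literature (a sketch in standard notation; the talk's exact expressions may differ). Here S denotes the von Neumann entropy, D the quantum relative entropy, and SEP the set of separable states:

```latex
% Hashing-inequality lower bound (coherent information) and
% relative-entropy-of-entanglement upper bound on the distillable key rate K_D:
K_D(\rho_{AB}) \ge I(A\rangle B)_{\rho} = S(\rho_B) - S(\rho_{AB}),
\qquad
K_D(\rho_{AB}) \le E_R(\rho_{AB}) = \min_{\sigma_{AB} \in \mathrm{SEP}} D\!\left(\rho_{AB} \,\middle\|\, \sigma_{AB}\right)
```

The restricted-eavesdropping analysis tightens such bounds by constraining the states Eve can actually collect, rather than granting her the full purification of the channel.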

Ziwen Pan is currently a wireless systems applications engineer, working mainly on auto-testing solutions for Qualcomm chipsets across technologies such as WiFi, BT, and GPS. He obtained his Ph.D. from the Electrical & Computer Engineering Department at the University of Arizona in 2022. His major research work focuses on quantum communication and cryptography, including security analysis of generic secret-key-distillation schemes and protocol designs for quantum key distribution. He has also worked on other projects such as quantum computation simulation, experimental work on diamond oscillator arrays and microtoroids, FPGA-embedded LDPC channel coding, and entanglement-assisted communication protocol design. He has published in and served as a reviewer for multiple IEEE, Optica (OSA), and APS journals.

Click here to join the meeting

Meeting ID: 221 306 098 190
Passcode: yGeRKj