Wireless Sensing: Material Identification and Localization

Friday, September 25, 2020 - 02:20 pm
Online
You are invited to a talk by an external speaker from Georgia Tech this Friday, 09/25/2020, from 2:20 pm to 3:10 pm EDT. The talk is part of CSCE 791: Seminar in Advances in Computing. Please join the virtual lecture via the link below: https://us.bbcollab.com/guest/e51c76fda53042b790ea62ff2d7b2895

Abstract: Wireless communication has truly transformed the world. It has enabled us to connect the entire globe and made it simple to reach people separated by thousands of miles. What receives less attention, however, are other interesting properties of these wireless signals. The fact that wireless signals spread out in all directions and bounce off objects makes them a powerful lens through which to look at our world, enabling us to sense the world through wireless signals. In this talk I present two main ideas: wireless localization, which measures the time wireless signals take to travel between two devices, and wireless material identification, which analyzes the effect of a liquid on wireless signals in order to identify the liquid.

Bio: Dr. Ashutosh Dhekne is an assistant professor in the School of Computer Science at Georgia Tech. He received his Ph.D. from the University of Illinois at Urbana-Champaign, his M.Tech. from IIT Bombay, and his bachelor's degree from the University of Pune. His research interests include Mobile Computing, Wireless Networking, Wireless Sensing, and the Internet of Things. https://www.cc.gatech.edu/~dhekne/
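To make the time-of-flight idea behind wireless localization concrete, here is a minimal two-way ranging sketch, assuming a hypothetical setup in which the initiator timestamps its poll and the response and knows the responder's reply delay; the function name and the numbers are illustrative assumptions, not material from the talk.

SPEED_OF_LIGHT = 299_792_458.0  # meters per second

def two_way_ranging_distance(t_send, t_receive, reply_delay):
    # One round trip covers the distance twice, plus the responder's known
    # turnaround time; halve what remains to get the one-way time of flight.
    time_of_flight = (t_receive - t_send - reply_delay) / 2.0
    return SPEED_OF_LIGHT * time_of_flight

# Example: a 67 ns round trip with a 50 ns reply delay puts the responder
# roughly 2.5 meters away.
print(two_way_ranging_distance(0.0, 67e-9, 50e-9))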

ACM@UofSC

Thursday, September 17, 2020 - 07:00 pm
Discord
I want to invite everyone to the Association for Computing Machinery chapter at the University of South Carolina. We are a student organization that meets weekly to discuss a wide range of topics, usually presented by students and usually accompanied by pizza. This semester we may be unable to have our pizza, but we are continuing to hold our talks every Thursday at 7 pm. We welcome talks on anything related to computers or computer science, and we take talk proposals from any student with an interest in a related topic. Always feel free to join us; we are on Discord for the semester. Our organization is also well suited for networking, and in my opinion being able to present a topic to any audience is a great skill to have. ACM gives you a great platform for developing those important skills.

Our next two talks are a good example of the wide range of topics and people that present at our organization. On the 17th, Charles Daniels, a PhD student here, will give a talk, Introduction to Python. Next week, on the 24th, Dalton Craven will give a talk, Interactive React, an interactive activity covering the basics of writing web applications that are attractive and responsive.

Throughout the semester we have many other events. One of my personal favorites is ICPC, the ACM International Collegiate Programming Contest, a programming competition where we travel to Charleston, compete with other schools such as Clemson, the College of Charleston, and the University of Central Florida, and otherwise enjoy a day of traveling and competing. Another is our semesterly codeathon, an in-house programming competition for prizes and for slots on our ICPC team. Lastly, we attend events such as Open Source Columbia and Open Source 101, conventions that are great networking opportunities as well as places to learn the newest and shiniest material in the field. Hope to see everyone this semester!

————————————————————————————
Introduction to Python
Charles Daniels
17 September 2020
7:00 PM EDT (UTC-04)
Discord
————————————————————————————
Interactive React
Dalton Craven
24 September 2020
7:00 PM EDT (UTC-04)
Discord
————————————————————————————

Below are some of our important forums.
Discord: http://discord.gg/uRRAmYD
ACM Website: https://acm.cse.sc.edu

WiC Meeting

Tuesday, September 1, 2020 - 06:00 pm
online
Women in Computing is an organization at the University of South Carolina aimed at creating a space to meet others within your major. WiC has meetings twice a month, and we help students with their career needs. Whether that means helping you set up a GitHub account, taking your headshots, or editing your resume, WiC is there for you! WiC will have its first meeting on Tuesday, September 1st, from 6 to 7 pm. This will be a welcome meeting, and we will hold elections for board members. Please join us!

Topic: WiC 9/1 Meeting
Time: Sep 1, 2020 06:00 PM Eastern Time (US and Canada)
Join Zoom Meeting: https://us04web.zoom.us/j/75865675904?pwd=UHZmRE9tQ20rb002aXRIU2svWXU5U…

Smart Sensing Enabled Secure and Usable Pairing and Authentication

Friday, May 8, 2020 - 09:00 am
DISSERTATION DEFENSE
Department of Computer Science and Engineering
University of South Carolina
Author : Xiaopeng Li
Advisor : Dr. Lannon Luo
Date : May 8, 2020
Time : 9:00 am
Place : Virtual Defense

Abstract
Internet of Things (IoT) technologies have made our lives more convenient and better informed by sensing and monitoring our surroundings. Security applications, such as device pairing and user authentication, are the foundation for building a trustworthy smart environment. A secure and convenient pairing approach is critical to IoT-enabled applications, as pairing establishes a secure wireless communication channel between devices. In addition, since a smart environment usually has multiple people (e.g., kids and adults, patients and doctors), how to authenticate users operating densely deployed devices and sensitive objects (e.g., a cabinet storing medical records) is also an important problem. Existing security measures either rely on special hardware, have poor usability, or are vulnerable to attacks, and thus fail to protect resource-constrained IoT devices and dumb objects. This thesis aims to address the above shortcomings by implementing three security applications: (1) performing secure pairing for IoT devices that lack conventional user interfaces, such as keyboards and displays; (2) providing secure and practical authentication for IoT devices; and (3) validating uses of sensitive dumb objects that have no user input interfaces.

First, we propose a technique, Universal Operation Sensing, which allows an IoT device to sense the user's physical operations on it without requiring inertial sensors. Based on this technique, a user carrying a smartphone or wearing a wristband can finish pairing in seconds by performing a few very simple touch operations on the target device. We design a pairing protocol based on fuzzy commitment and build a prototype system named T2Pair. A comprehensive evaluation shows that it is secure and usable.

Second, we design three usable authentication gestures by asking the user to 'pet' the device (a few very simple touches lasting about 2 seconds). We build a secure and intuitive authentication method that authenticates device users by comparing the petting operations sensed by devices with those captured by the user's wristband. The authentication method is highly secure, as it requires physical operations rather than mere proximity. It is also intuitive, adopting very simple authentication operations, e.g., clicking buttons, twisting rotary knobs, and swiping touchscreens. Unlike state-of-the-art methods, our method does not require any hardware modifications of devices, and thus can be applied to commercial off-the-shelf (COTS) devices.

Finally, we present the first implicit and accurate authentication approach for dumb objects, named MoMatch. (1) It provides implicit and continuous authentication. (2) It makes a fast authentication decision based on a single object interaction, e.g., pushing a door. (3) It is accurate, with an average area under the curve (AUC) of 0.97 across 10 different objects. (4) It works with objects that have zero authentication interfaces. (5) It uses zero biometrics, so it does not need per-user profiling. (6) Rigorous security studies are performed, showing that MoMatch is resilient to attacks. The approach is built on a solid causal relationship: an object moves typically because a human hand moves it. Thus, the object's motion and the legitimate user's hand movement must correlate to validate the use. The main challenge is how to calculate the correlation, as conventional approaches, such as Dynamic Time Warping (DTW) and SVMs, fail to work. We propose an Imagified Curve Comparison (ICC) technique that converts the motion-data correlation evaluation problem into an image comparison problem and resolves it successfully using neural networks.
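To give a flavor of the "imagified" comparison idea, here is a rough, self-contained sketch (my own illustration, not code or the architecture from the dissertation): two motion traces, one from the object and one from the wristband, are rasterized into a small two-channel image and scored by a tiny convolutional network that outputs a match probability. The rasterization scheme and network shape are assumptions made for this example.

import numpy as np
import torch
import torch.nn as nn

def rasterize(curve, height=32, width=64):
    # Draw a 1-D motion trace into a height x width binary image.
    curve = np.interp(np.linspace(0, len(curve) - 1, width),
                      np.arange(len(curve)), curve)
    lo, hi = curve.min(), curve.max()
    rows = ((curve - lo) / (hi - lo + 1e-9) * (height - 1)).astype(int)
    img = np.zeros((height, width), dtype=np.float32)
    img[rows, np.arange(width)] = 1.0
    return img

class MatchNet(nn.Module):
    # Tiny CNN that scores whether two imagified curves belong together.
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(2, 8, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(8, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(16 * 8 * 16, 1)

    def forward(self, object_img, hand_img):
        x = torch.stack([object_img, hand_img], dim=1)  # two image channels
        x = self.features(x).flatten(1)
        return torch.sigmoid(self.classifier(x))        # match probability

# Example: score one (object motion, wristband motion) pair.
obj = torch.tensor(rasterize(np.sin(np.linspace(0, 6, 200))))
hand = torch.tensor(rasterize(np.sin(np.linspace(0, 6, 200)) + 0.1))
print(MatchNet()(obj.unsqueeze(0), hand.unsqueeze(0)).item())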

A Machine Learning Based Approach to Accelerate Catalyst Discovery

Wednesday, March 18, 2020 - 11:30 am
Meeting Room 2265, Innovation Center
DISSERTATION DEFENSE
Department of Computer Science and Engineering
University of South Carolina
Author : Asif Jamil Chowdhury
Advisor : Dr. Gabriel A. Terejanu
Date : Mar 18, 2020
Time : 11:30 am
Place : Meeting Room 2265, Innovation Center

Abstract
Computational catalysis, in contrast to experimental catalysis, uses approximations such as density functional theory (DFT) to compute properties of reaction intermediates. But DFT calculations for a large number of surface species on a variety of active-site models are resource intensive. In this work, we are building a machine learning based predictive framework for the adsorption energies of intermediate species, which can reduce the computational overhead significantly. Our work includes the study and development of appropriate machine learning models and of effective fingerprints, or descriptors, to predict energies accurately in different scenarios. Furthermore, the Bayesian inverse problem, which integrates experimental catalysis with its computational counterpart, uses Markov chain Monte Carlo (MCMC) methods to refine the uncertainties on quantities of interest such as turnover frequency. However, the large number of forward simulations required by MCMC can become a bottleneck, especially in computational catalysis, where evaluating the likelihood function involves solving microkinetic models. A novel and faster MCMC method is proposed to reduce the number of expensive target evaluations and to shorten the burn-in period by emulating the target and using a better-informed proposal distribution.
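As a rough illustration of how an emulator can cut down expensive target evaluations in MCMC, here is a minimal delayed-acceptance Metropolis-Hastings sketch. This is a standard textbook scheme with made-up one-dimensional targets, not the specific method proposed in the dissertation; the emulator stands in for a cheap surrogate of a costly microkinetic-model likelihood.

import numpy as np

rng = np.random.default_rng(0)

def expensive_log_target(x):
    # Stand-in for a costly likelihood evaluation (e.g., a microkinetic model).
    return -0.5 * (x - 2.0) ** 2

def cheap_log_emulator(x):
    # Stand-in for a fast, slightly biased emulator of the target.
    return -0.5 * (x - 1.8) ** 2

def delayed_acceptance_mh(n_steps=5000, step=1.0, x0=0.0):
    x, log_p_x = x0, expensive_log_target(x0)
    samples, expensive_calls = [], 1
    for _ in range(n_steps):
        y = x + step * rng.normal()  # symmetric random-walk proposal
        # Stage 1: screen the proposal using only the cheap emulator.
        if np.log(rng.uniform()) < cheap_log_emulator(y) - cheap_log_emulator(x):
            # Stage 2: correct with the expensive target so the chain still
            # samples the true posterior.
            log_p_y = expensive_log_target(y)
            expensive_calls += 1
            log_a2 = (log_p_y - log_p_x) + (cheap_log_emulator(x) - cheap_log_emulator(y))
            if np.log(rng.uniform()) < log_a2:
                x, log_p_x = y, log_p_y
        samples.append(x)
    return np.array(samples), expensive_calls

samples, calls = delayed_acceptance_mh()
# Posterior mean near 2.0; the number of expensive calls is below n_steps
# because emulator-rejected proposals never touch the costly target.
print(samples.mean(), calls)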

From Combination Puzzles to the Natural Sciences

Wednesday, March 11, 2020 - 10:15 am
Storey Innovation Center (Room 2277)
You are invited to the CSCE Colloquium on Wednesday 03/11/2020 at 10:15.

Abstract: Combination puzzles, such as the Rubik's cube, pose unique challenges for artificial intelligence. Furthermore, solutions to such puzzles are directly linked to problems in the natural sciences. In this talk, I will present DeepCubeA, a deep reinforcement learning and search algorithm that can solve the Rubik's cube, and six other puzzles, without domain-specific knowledge. Next, I will discuss how solving combination puzzles opens up new possibilities for solving problems in the natural sciences. In particular, I will describe how we are using DeepCubeA to tackle problems in chemistry. Finally, I will show how the problems we encounter in the natural sciences motivate future research directions. A demonstration of our work can be seen at http://deepcube.igb.uci.edu/.

Bio: Forest Agostinelli is a postdoctoral researcher at UC Irvine. He received his B.S. from the Ohio State University, his M.S. from the University of Michigan, and his Ph.D. from UC Irvine. His research interests include deep learning, reinforcement learning, search, bioinformatics, neuroscience, and chemistry. His homepage is located at https://www.ics.uci.edu/~fagostin/.
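For readers unfamiliar with how learning and search can combine in a system like DeepCubeA, here is a generic weighted A* sketch guided by a cost-to-go estimate. In DeepCubeA the estimate comes from a deep network trained with approximate value iteration; the toy state space and heuristic below are purely illustrative assumptions, not the actual system.

import heapq

def weighted_a_star(start, goal, neighbors, heuristic, weight=0.8):
    # Find a path from start to goal, trading optimality for speed via `weight`.
    frontier = [(weight * heuristic(start), 0, start, [start])]
    best_cost = {start: 0}
    while frontier:
        _, cost, state, path = heapq.heappop(frontier)
        if state == goal:
            return path
        for nxt in neighbors(state):
            new_cost = cost + 1
            if new_cost < best_cost.get(nxt, float("inf")):
                best_cost[nxt] = new_cost
                priority = (1 - weight) * new_cost + weight * heuristic(nxt)
                heapq.heappush(frontier, (priority, new_cost, nxt, path + [nxt]))
    return None

# Toy puzzle: rotate a 4-character string one position at a time; the
# heuristic (count of misplaced characters) stands in for a learned network.
def neighbors(s):
    return [s[1:] + s[0], s[-1] + s[:-1]]

def heuristic(s):
    return sum(a != b for a, b in zip(s, "abcd"))

print(weighted_a_star("cdab", "abcd", neighbors, heuristic))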

Generalized Task Learning for Human-Robot Collaboration

Monday, March 9, 2020 - 10:15 am
Storey Innovation Center (Room 2277)
Abstract: As human-robot collaboration in both industrial and household environments becomes more prevalent, several aspects need to be further developed to allow for natural and safe collaboration. First, a generalized task learning framework must be developed to allow robots to perform various manipulation and assembly tasks. Second, better communication between agents is necessary to allow humans and robots to work together effectively and to teach the robot to perform tasks. This instruction can be accomplished in several ways, including verbal instruction and human demonstration. Combining these two methods into a single multimodal system will provide a more seamless interaction between two or more agents. Additionally, natural language can provide a communication channel between the human and the robot, allowing the robot to inform the human of issues during learning and to ask for assistance in resolving them. Finally, the human's attention needs to be monitored and considered during decision making and task execution so that humans and robots can work alongside each other safely and reliably. This type of interaction will be the basis for seamless human-robot collaboration in both industrial and household tasks. In household environments, this research would allow users to employ robots for activities of daily living that they are unable to perform themselves. In an industrial setting, it would allow employees to train robots on new tasks themselves while ensuring they can work alongside the robots safely and reliably.

Bio: Janelle is a Ph.D. Candidate and Graduate Research Fellow in Computer Science at the University of Nevada, Reno. She will be finishing her Ph.D. in May 2020. Previously, she completed her M.S. in Computer Science and her B.S. in Applied Mathematics, both at the University of Nevada, Reno. She was awarded a fellowship from the Nevada Space Grant Consortium in 2016. Her recent work was a Best Paper Finalist at the International Conference on Social Robotics (ICSR) in 2019. Her research interests are generalized task learning, natural language processing, and machine learning for robotics applications. These interests are motivated by the desire to create a seamless workflow for collaborative multi-robot and human-robot teams in both industrial and household environments.

You are invited to the CSCE Colloquium on Monday 03/09/2020 at 10:15.

Trustworthy Multiagency

Friday, March 6, 2020 - 10:15 am
Storey Innovation Center (Room 2277)
Abstract: Intelligent decision making is at the heart of Artificial Intelligence (AI). A large number of real-world domains, such as autonomous vehicles, delivery robots, cyber security, and many others, involve multiple AI decision makers, or agents, that cooperate in a distributed manner, where each agent's decisions are based on its local information, often with limited communication with others. This distributed nature makes it challenging to design efficient and reliable multiagency: issues like failure to coordinate, unsafe interactions, and resource misallocation can easily arise. A promising approach to tackling these challenges is to explicitly build dependencies among cooperating agents, where one agent can be trusted to facilitate the execution of another. In this talk, I will present a framework that achieves trustworthy dependency by borrowing the notion of social commitments. Intuitively, a commitment regularizes an agent's behavior so that it can be well anticipated and exploited by another. This talk will build this intuition into a formalism and discuss how multiagent commitments can be efficiently identified and faithfully fulfilled. Finally, the talk will conclude with my future agenda, covering verification of trustworthy multiagency, discovery of safety-related dependencies, and the interpretability-performance tradeoff in multiagency.

Bio: Qi Zhang is a final-year Ph.D. student at the University of Michigan, advised by Edmund Durfee and Satinder Singh. His research interest is in artificial intelligence, with a focus on planning under uncertainty, reinforcement learning, and multiagent coordination. His long-term goal is to build safe, reliable, and trustworthy AI systems that retain the power and flexibility to handle complex, diverse contexts.

Friday, March 6, Storey Innovation Center (Room 2277), 10:15 am - 11:15 am.

Graph Neural Networks: A Feature and Structure Learning Approach

Monday, March 2, 2020 - 10:15 am
Storey Innovation Center (Room 2277)
In the real world, many kinds of data are naturally represented as graphs, such as social networks. Deep learning methods have been very successful in fields such as computer vision and natural language processing. However, developing deep learning methods for graph data is challenging due to the lack of locality information. In this talk, I will present my work on developing deep learning methods for graph data. My work addresses this challenge and significantly advances feature learning and structure learning on graphs in both accuracy and efficiency. Specifically, I will introduce our proposed learnable graph convolution layer and hard graph attention layer, which enable fully learnable convolution and hard attention operations on graph data while saving computational resources. Then I will discuss our efficient and effective graph pooling operators, which significantly advance state-of-the-art performance. Beyond layer-wise methods, I will talk about the first encoder-decoder network architecture for graph data. This line of research has resulted in a series of publications in top-tier journals and conferences.

Bio: Hongyang Gao is a Ph.D. Candidate in the Department of Computer Science & Engineering at Texas A&M University in College Station, Texas. His primary research interests are machine learning and artificial intelligence, with a special focus on deep learning. In particular, he focuses on the performance and efficiency of deep learning methods applied to various data types such as graphs. His research has been recognized with a series of publications in top-tier journals and conferences. Before his Ph.D. work, Hongyang received his M.S. in Computer Science from Tsinghua University in 2012 and his B.S. from Peking University in 2009.

Monday 3/2/20 at 10:15 am
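For context, here is what a single, standard GCN-style graph convolution computes; this is the common baseline formulation, not the learnable graph convolution layer or hard attention layer proposed in the talk, and the example graph and sizes are made up for illustration.

import numpy as np

def gcn_layer(adjacency, features, weights):
    # One graph convolution: H' = ReLU(D^-1/2 (A + I) D^-1/2 H W).
    a_hat = adjacency + np.eye(adjacency.shape[0])        # add self-loops
    d_inv_sqrt = np.diag(1.0 / np.sqrt(a_hat.sum(axis=1)))
    a_norm = d_inv_sqrt @ a_hat @ d_inv_sqrt              # degree-normalize
    return np.maximum(a_norm @ features @ weights, 0.0)   # aggregate + transform

# Example: 4 nodes on a path graph, 3 input features, 2 output features.
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
H = np.random.default_rng(0).normal(size=(4, 3))
W = np.random.default_rng(1).normal(size=(3, 2))
print(gcn_layer(A, H, W).shape)  # (4, 2): new feature vector per node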

Human Allied Artificial Intelligence

Friday, February 28, 2020 - 11:00 am
Room 2277 Storey Innovation Center
ABSTRACT: Historically, Artificial Intelligence has taken either a symbolic route, for representing and reasoning about objects at a higher level, or a statistical route, for learning complex models from large data. To achieve true AI, it is necessary to make these different paths meet and to enable seamless human interaction. First, I will introduce methods for learning from rich, structured, complex, and noisy data. One of the key attractive properties of the learned models is that they use a rich representation for modeling the domain, which potentially allows for seamless human interaction. I will present recent progress that allows for more reasonable human interaction, where the human input is taken as "advice" and the learning algorithm combines this advice with data. Finally, I will discuss more recent work on "closing the loop," where information is solicited from humans as needed, allowing for seamless interactions with the human expert. I will discuss these methods in the context of supervised learning, planning, reinforcement learning, and inverse reinforcement learning.

BIO: Dr. Sriraam Natarajan is an Associate Professor and the Director of the Center for ML in the Department of Computer Science at the University of Texas at Dallas. He was previously an Associate Professor and earlier an Assistant Professor at Indiana University and at the Wake Forest School of Medicine, and a post-doctoral research associate at the University of Wisconsin-Madison; he received his Ph.D. from Oregon State University. His research interests lie in the field of Artificial Intelligence, with an emphasis on Machine Learning, Statistical Relational Learning and AI, Reinforcement Learning, Graphical Models, and Biomedical Applications. He has received the Young Investigator award from the US Army Research Office, the Amazon Faculty Research Award, the Intel Faculty Award, the XEROX Faculty Award, the Verisk Faculty Award, and the IU Trustees Teaching Award from Indiana University. He is program co-chair of the SDM 2020 and ACM CoDS-COMAD 2020 conferences. He is the chief editor of the Frontiers in ML and AI journal, an editorial board member of the MLJ, JAIR, and DAMI journals, and the electronic publishing editor of JAIR.