Instructor: Ioannis Rekleitis
Semester: Spring 2018
Lecture Hours: Friday, 2:20 - 3:10 PM
Location: 2A31 Swearingen Engineering Center
Syllabus
Moodle Websites
CSCE 791 is a colloquium series consisting of talks and seminars given by invited speakers, both from our department and from outside the department and university. The primary goal of this course is to expose students to state-of-the-art research and development in a variety of computing-related disciplines. CSCE 791 is a great opportunity to see some of the brightest minds from academia and industry, hear their thoughts in person, and ask questions and interact with them.
Talks:
- Jan. 26 2018
- Speaker: Alberto Quattrini Li
- Affiliation: CSE, USC
- Location: SWGN 2A31
- Time: 2:20 - 3:10 PM
- Title: Multirobot systems for exploration: indoor and marine environments
- Bio:
After being a postdoctoral fellow in the Autonomous Field Robotics Laboratory (AFRL) at the Computer Science and Engineering Department of the University of South Carolina, Alberto is currently working as a Research Assistant Professor in the same department. He received an M.Sc. in Computer Science Engineering (2011) and a Ph.D. in Information Technology (2015) from Politecnico di Milano. From February to July 2014, he was a visiting Ph.D. student at the Research on Sensor Networks Lab in the Computer Science Department of the University of Minnesota. His main research interests include autonomous mobile robotics and underwater robotics, dealing with problems that span from multirobot exploration to visual-based state estimation.
- Extra Seminar Feb. 01 2018
- Speaker: Vijay Nair
- Affiliation: Department of Statistical Learning and Advanced Computing, Wells Fargo, Charlotte, NC
- Location: LeConte Room 210
- Time: 2:50 - 3:50 PM
- Title: Machine Learning Techniques in Banking
- Abstract: TBD
- Bio: TBD
- Feb. 02 2018
- Speaker: Ioannis Rekleitis
- Affiliation: CSE, USC
- Location: SWGN 2A31
- Time: 2:20 - 3:10 PM
- Title: Field Work in Robotics Research
- Abstract: TBD
- Bio: Ioannis Rekleitis is an Assistant Professor at the Computer Science and Engineering Department, University of South Carolina, and an Adjunct Professor at the School of Computer Science, McGill University. From 2004 to 2007 he was a visiting fellow at the Canadian Space Agency, working on planetary exploration and on-orbit servicing of satellites. During 2004 he was at McGill University as a Research Associate in the Centre for Intelligent Machines with Professor Gregory Dudek in the Mobile Robotics Lab (MRL). Between 2002 and 2003, he was a Postdoctoral Fellow at Carnegie Mellon University in the Sensor Based Planning Lab with Professor Howie Choset. He was granted his Ph.D. from the School of Computer Science, McGill University, Montreal, Quebec, Canada in 2002 under the supervision of Professors Gregory Dudek and Evangelos Milios, with the thesis "Cooperative Localization and Multi-Robot Exploration". He finished his M.Sc. at McGill University in the field of computer vision in 1995, and was granted his B.Sc. in 1991 from the Department of Informatics, University of Athens, Greece. His research has focused on mobile robotics, and in particular on cooperating intelligent agents, with applications to multi-robot cooperative localization, mapping, exploration, and coverage. His interests extend to computer vision and sensor networks.
- Feb. 09 2018
- Speaker: Dr. Yonghong Yan
- Affiliation: University of South Carolina
- Location: SWGN 2A31
- Time: 2:20 - 3:10 PM
- Title: Portable Parallel Programming in the Age of Architecture
- Abstract: Today's computer systems are becoming much more heterogeneous and complex in both their processor architectures and their memory systems. High performance computing systems and large-scale enterprise clusters are often built from a combination of architectures, including multicore CPUs, Nvidia manycore GPUs, Intel Xeon Phi vector manycores, and domain-specific processing units such as DSPs and deep-learning tensor units. The introduction of non-volatile memory and of 3D-stacked DRAM, known as high-bandwidth memory, further complicates computer systems by significantly increasing the depth of the memory hierarchy. For users, parallel programming for these systems has thus become more challenging than ever. In this talk, the speaker will highlight the latest developments in parallel programming models for existing and emerging high performance computing architectures. He will introduce the ongoing work in his research team (http://passlab.github.io) on improving the productivity and portability of parallel programming for heterogeneous systems that combine shared and discrete memory. The speaker will conclude that this is an exciting time for computer systems research, and will also share some of the unsuccessful experiences from his Ph.D. studies.
- Bio: Dr. Yonghong Yan joined the University of South Carolina as an Assistant Professor in Fall 2017, and he is a member of the OpenMP Architecture Review Board and the OpenMP Language Committee. Dr. Yan calls himself a nerd for parallel computing, compiler technology, and high-performance computer architecture and systems. He is an NSF CAREER awardee. His research team develops intra-/inter-node programming models, compilers, runtime systems, and performance tools based on OpenMP, MPI, and the LLVM compiler; explores conventional and advanced computer architectures including CPU, vector, GPU, MIC, FPGA, and dataflow systems; and supports applications ranging from classical HPC to big data analysis, machine learning, and computer imaging. The ongoing development can be found at https://github.com/passlab. Dr. Yan received his Ph.D. in computer science from the University of Houston and holds a bachelor's degree in mechanical engineering.
- Feb. 16 2018
- Speaker: Lannan (Lisa) Luo
- Affiliation: University of South Carolina
- Location: SWGN 2A31
- Time: 2:20 - 3:10 PM
- Title: Semantics-Based Obfuscation-Resilient Binary Code Similarity Comparison with Applications to Software Plagiarism Detection
- Abstract: Existing code similarity comparison methods, whether source or binary code based, are mostly not resilient to obfuscations. In the case of software plagiarism, emerging obfuscation techniques have made automated detection increasingly difficult. In this talk, I will present a binary-oriented, obfuscation-resilient method based on a new concept, longest common subsequence of semantically equivalent basic blocks, which combines rigorous program semantics with longest common subsequence based fuzzy matching. We model the semantics of a basic block by a set of symbolic formulas representing the input-output relations of the block. This way, the semantics equivalence (and similarity) of two blocks can be checked by a theorem prover. We then model the semantics similarity of two paths using the longest common subsequence with basic blocks as elements. This novel combination has resulted in strong resiliency to code obfuscation. We have developed a prototype and our experimental results show that our method is effective and practical when applied to real-world software.
- Bio: Lannan (Lisa) Luo is an Assistant Professor in the Department of Computer Science and Engineering at the University of South Carolina. She received her B.S. in Telecommunications Engineering from Xidian University, Xi’an, China in 2009, and her M.S. in Communications and Information Systems from the University of Electronic Science and Technology of China in 2012. Her research mainly focuses on software and systems security, including mobile security, IoT security, malware analysis, vulnerability analysis, programming languages, software engineering, and deep learning. Her research approaches are mainly empirical in tandem with formal methods, combining symbolic execution, theorem proving, taint analysis, control flow analysis, data flow analysis, reverse engineering, data mining, and deep learning.
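The matching scheme described in the abstract, a longest common subsequence over basic blocks where element equality is semantic equivalence rather than syntactic identity, can be sketched in a few lines of Python. The `equivalent` predicate below is a trivial stand-in for the theorem-prover check on symbolic input-output formulas; all names here are illustrative, not from the actual system.

```python
def lcs_length(path_a, path_b, equivalent):
    """Longest common subsequence of two basic-block paths,
    using a pluggable semantic-equivalence predicate."""
    m, n = len(path_a), len(path_b)
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            if equivalent(path_a[i - 1], path_b[j - 1]):
                dp[i][j] = dp[i - 1][j - 1] + 1
            else:
                dp[i][j] = max(dp[i - 1][j], dp[i][j - 1])
    return dp[m][n]

def similarity(path_a, path_b, equivalent):
    """Normalized fuzzy path similarity in [0, 1]."""
    if not path_a or not path_b:
        return 0.0
    return lcs_length(path_a, path_b, equivalent) / max(len(path_a), len(path_b))

# Toy stand-in: in the real method each block is a set of symbolic
# input-output formulas and equivalence is decided by a theorem prover.
blocks_equivalent = lambda a, b: a == b

print(similarity(list("ABCDE"), list("AXCYE"), blocks_equivalent))  # 0.6
```

Because the equivalence test is a parameter, syntactic obfuscations that preserve a block's input-output behavior do not change the score, which is the source of the resiliency claimed in the abstract.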
- Extra Seminar, CSE Colloquium Feb. 19 2018
- Speaker: Nirupam Roy
- Affiliation: University of Illinois, Urbana-Champaign (UIUC)
- Location: Innovation Center, Room 2277
- Time: 10:15 - 11:15 AM
- Title: Internet of Acoustic Things (IoAT): Challenges, Opportunities, and Threats
- Abstract: The recent proliferation of acoustic devices, ranging from voice assistants to wearable health monitors, is leading to a sensing ecosystem around us -- referred to as the Internet of Acoustic Things or IoAT. My research focuses on developing hardware-software building blocks that enable new capabilities for this emerging future. In this talk, I will sample some of my projects. For instance, (1) I will demonstrate carefully designed sounds that are completely inaudible to humans but recordable by all microphones. (2) I will discuss our work with physical vibrations from mobile devices, and how they conduct through finger bones to enable new modalities of short range, human-centric communication. (3) Finally, I will draw attention to various acoustic leakages and threats that arrive with sensor-rich environments. I will conclude this talk with a glimpse of my ongoing and future projects targeting a stronger convergence of sensing, computing, and communications in tomorrow’s IoT, cyber-physical systems, and healthcare technologies.
- Bio: Nirupam Roy is a Ph.D. candidate in Electrical and Computer Engineering at the University of Illinois, Urbana-Champaign (UIUC). His research interests are in mobile sensing, wireless networking, and embedded systems with applications to IoT, cyber-physical-systems, and security. Roy is the recipient of the Valkenburg graduate research award, the Lalit Bahl fellowship, and the outstanding thesis awards from both his Bachelor's and Master's institutes. His recent research on "Making Microphones Hear Inaudible Sounds" received the MobiSys'17 best paper award and was selected for the ACM SIGMOBILE research highlights of the year in 2017.
- Extra Seminar Feb. 20 2018
- Speaker: Miltos Alamaniotis
- Affiliation: Purdue University
- Location: SWGN 3D05
- Time: 9:00 - 10:00 AM
- Title: Machine Intelligence Solutions in Critical Energy Applications
- Abstract: Recent advancements in machine learning aim to address challenges in diverse but related critical areas of energy engineering, such as power system safety and nuclear energy security. The data generated in these applications increases exponentially due to the penetration of modern information technologies, as well as its sensitive nature and the need for accurate and fast data processing and analysis. For instance, power systems and distribution grids are monitored 24/7 by a variety of sensors aimed at predicting or diagnosing operational malfunctions. In such environments, the inability of human operators to follow and interpret the huge volume of data offers opportunities for machine learning to support effective and fast decision making. Machine intelligence tools such as learning kernel machines and fuzzy logic have been successfully applied in both areas. For example, in smart power systems Pareto-driven ensembles of kernel machines have been shown to accurately forecast future load demand, while fuzzy logic tools may be used to analyze gamma-ray signals and identify threats. Results from advanced power systems and nuclear energy security applications will be presented, and implications for future research will be discussed.
- Bio: Miltiadis "Miltos" Alamaniotis is a research assistant professor in the School of Nuclear Engineering at Purdue University, where he is a member of the Applied Intelligent Systems Laboratory. He received his Diploma in Electrical and Computer Engineering (ECE) from the University of Thessaly, Greece, in 2005, and his M.S. and Ph.D. in Nuclear Engineering with emphasis on Intelligent Systems from Purdue University in 2010 and 2012, respectively. His interdisciplinary research focuses on the development of intelligent systems and machine learning approaches applied to (1) energy systems and smart grids, and (2) sensor networks for nuclear security. His research has been funded by the National Science Foundation (NSF) and the Department of Energy (DOE). He has published over one hundred (100) papers in scientific journals, book chapters, and international conference proceedings, and is the principal author of the book "Smart Electric Power: Integrating Power with Information for Smarter Cities" (Wiley, 2018). He was invited to serve as a leading editor in the area of intelligent energy for the International Journal of Monitoring and Surveillance Technologies Research, a guest editor of the International Journal on Artificial Intelligence Tools, and a Program Co-Chair of the 2016 IEEE International Conference on Tools with Artificial Intelligence. He served as an external researcher at Argonne National Laboratory in 2010-2012 and as a visiting researcher in the Energy and Power Systems Group at Oak Ridge National Laboratory in 2016. He is the recipient of the 2017 Distinguished Alumni Award from the ECE Department of the University of Thessaly, and is an active member of the American Nuclear Society (ANS) and the Institute of Electrical and Electronics Engineers (IEEE).
- Feb. 23 2018
- Speaker: Marco Valtorta
- Affiliation: CSE
- Location: SWGN 2A31
- Time: 2:20 - 3:10 PM
- Title: Bayesian Networks
- Abstract: The main purpose of this talk is to introduce Bayesian networks. Bayesian networks are graph-based representations of probability distributions. They are used to model and reason efficiently in domains where naïve approaches are impossibly complex, by exploiting conditional and unconditional independence relationships. Bayesian networks were invented about 30 years ago, and they have since been applied in many fields, including medical diagnosis, troubleshooting of complex artifacts, intelligent and active user interfaces, image recognition, intelligence analysis, monitoring of power plants, coding, forensics, and genetics. Hidden Markov models and Kalman (Thiele) filters were shown to be special cases of Bayesian networks, an insight closely connected to the development of dynamic (time-repeating) Bayesian networks. From an algorithmic perspective, Bayesian networks have proven to be a fertile ground for the use of graph algorithms, non-serial dynamic programming, and other advanced techniques.
As time allows, I will present some concepts and results concerning causal modeling and identifiability of causal effects in the presence of unmeasured variables.
- Bio: Marco Valtorta (Ph.D., Duke University, 1987) is a professor of Computer Science and Engineering in the College of Engineering and Computing at the University of South Carolina. He received a laurea degree with highest honors in electrical engineering from the Politecnico di Milano, Milan, Italy, where he studied with Marco Somalvico, in 1980. His research interests are in Artificial Intelligence. His first research result, known as “Valtorta's theorem" and obtained in 1980, was recently (2011) described as “seminal" and “an important theoretical limit of usefulness" for heuristics computed by search in an abstracted problem space. Most of his later research has been in the area of uncertainty in artificial intelligence. Valtorta’s theoretical and methodological contributions include results on the complexity of theory revision, algorithms for learning Bayesian networks from large data sets, algorithms for the identification of conflicts in Bayesian networks, algorithms for probability update in the presence of uncertain information and results on the identifiability of parameters in causal Bayesian networks. His applied work includes the construction of Bayesian networks and influence diagrams in medicine, agriculture, computer security, and information analysis. Valtorta is chair of the Faculty Senate at the University of South Carolina for the 2017-2019 term.
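To make the abstract's "graph-based representation of probability distributions" concrete, here is a minimal Python sketch of the classic rain/sprinkler/wet-grass network. The numbers are illustrative, not from the talk; the query is answered by exact enumeration over the joint distribution, which factorizes along the graph as P(R, S, W) = P(R) P(S | R) P(W | S, R).

```python
# Classic rain -> sprinkler, (rain, sprinkler) -> wet-grass network.
P_rain = {True: 0.2, False: 0.8}                      # P(R)
P_sprinkler = {True: {True: 0.01, False: 0.99},       # P(S | R=T)
               False: {True: 0.40, False: 0.60}}      # P(S | R=F)
P_wet = {(True, True): 0.99, (False, True): 0.80,     # P(W=T | S, R)
         (True, False): 0.90, (False, False): 0.00}

def p_rain_given_wet():
    """P(Rain | grass is wet), by exact enumeration over the joint."""
    num = den = 0.0
    for r in (True, False):
        for s in (True, False):
            joint = P_rain[r] * P_sprinkler[r][s] * P_wet[(s, r)]
            den += joint          # accumulate P(W=T)
            if r:
                num += joint      # accumulate P(R=T, W=T)
    return num / den

print(f"P(Rain | wet grass) = {p_rain_given_wet():.3f}")  # 0.358
```

Enumeration is exponential in the number of variables; the junction-tree and related algorithms mentioned in the abstract exist precisely to exploit the independence structure and avoid this blow-up on larger networks.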
- Extra Seminar Feb. 26 2018
- Speaker: Pooyan Jamshidi
- Affiliation: Carnegie Mellon University
- Location: Innovation Center, Room 2277
- Time: 10:15 - 11:15 AM
- Title: Transfer Learning for Performance Analysis of Highly-Configurable Software Systems
- Abstract: A wide range of modern software-intensive systems (e.g., autonomous systems, big data analytics, robotics, deep neural architectures) are built to be configurable. These systems offer a rich space for adaptation to different domains and tasks. Developers and users often need to reason about the performance of such systems, making tradeoffs to change specific quality attributes or detecting performance anomalies. For instance, developers of image recognition mobile apps are interested not only in learning which deep neural architectures are accurate enough to classify their images correctly, but also in which architectures consume the least power on the mobile devices on which they are deployed. Recent research has focused on models built from performance measurements obtained by instrumenting the system. However, the fundamental problem is that the learning techniques for building a reliable performance model do not scale well, simply because the configuration space is exponentially large and impossible to explore exhaustively. For example, it would take over 60 years to explore the whole configuration space of a system with 25 binary options.
In this talk, I will start by motivating the configuration space explosion problem based on my previous experience with large-scale big data systems in industry. I will then present my transfer learning solution to tackle the scalability challenge: instead of taking the measurements from the real system, we learn the performance model using samples from cheap sources, such as simulators that approximate the performance of the real system, with a fair fidelity and at a low cost. Results show that despite the high cost of measurement on the real system, learning performance models can become surprisingly cheap as long as certain properties are reused across environments. In the second half of the talk, I will present empirical evidence, which lays a foundation for a theory explaining why and when transfer learning works, by showing the similarities of performance behavior across environments. I will present observations of the impacts of environmental changes (such as changes to hardware, workload, and software versions) for a selected set of configurable systems from different domains, to identify the key elements that can be exploited for transfer learning. These observations demonstrate a promising path for building efficient, reliable, and dependable software systems. Finally, I will share my research vision for the next five years and outline my immediate plans to further explore the opportunities of transfer learning.
- Bio: Pooyan Jamshidi is a postdoctoral researcher at Carnegie Mellon University, where he works on transfer learning for building performance models to enable dynamic adaptation of mobile robotics software as part of BRASS, a DARPA-sponsored project. Prior to his current position, he was a research associate at Imperial College London, where he worked on Bayesian optimization for automated performance tuning of big data systems. He holds a Ph.D. from Dublin City University, where he worked on self-learning fuzzy control for auto-scaling in the cloud. He has spent 7 years in industry as a developer and a software architect. His research interests are at the intersection of software engineering, systems, and machine learning, and his focus lies predominantly in the areas of highly-configurable and self-adaptive systems (more details: https://pooyanjamshidi.github.io/research/).
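The abstract's "over 60 years" figure is easy to reproduce with back-of-the-envelope arithmetic: 25 independent binary options give 2^25, about 33.6 million, distinct configurations, so even an assumed benchmark cost of one minute per configuration (an illustrative number, not from the talk) makes an exhaustive sweep infeasible:

```python
n_options = 25
configs = 2 ** n_options                     # 33,554,432 configurations
minutes_per_config = 1                       # assumed measurement cost per config
years = configs * minutes_per_config / (60 * 24 * 365)
print(f"{configs:,} configurations -> ~{years:.0f} years to sweep")  # ~64 years
```

Every additional binary option doubles the total, which is why sampling plus transfer from cheaper environments, rather than exhaustive measurement, is the approach the talk advocates.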
- Extra Seminar Feb. 28 2018
- Speaker: Zsolt Kira
- Affiliation: Georgia Institute of Technology
- Location: Innovation Center, Room 2277
- Time: 10:15 - 11:15 AM
- Title: Towards Continual and Fine-Grained Learning for Robot Perception
- Abstract: A large number of robot perception tasks have been revolutionized by machine learning and deep neural networks in particular. However, current learning methods are limited in several ways that hinder their large-scale use for critical robotics applications: They are often focused on individual sensor modalities, do not attempt to understand semantic information in a fine-grained temporal manner, and are beholden to strong assumptions about the data (e.g. that the data distribution is the same when deployed in the real world as when trained). In this talk, I will describe work on novel deep learning architectures for moving beyond current methods to develop a richer multi-modal and fine-grained scene understanding from raw sensor data. I will also discuss methods we have developed that can use transfer learning to deal with changes in the environment or the existence of entirely new, unknown categories in the data (e.g. unknown object types). I will focus especially on this latter work, where we use neural networks to learn how to compare objects and transfer such learning to new domains using one of the first deep-learning based clustering algorithms, which we developed. I will show examples of real-world robotic systems using these methods, and conclude by discussing future directions in this area, towards making robots able to continually learn and adapt to new situations as they arise.
- Bio: Dr. Zsolt Kira received his B.S. in ECE at the University of Miami in 2002 and his M.S. and Ph.D. in Computer Science from the Georgia Institute of Technology in 2010. He is currently a Senior Research Scientist and Branch Chief of the Machine Learning and Analytics group at the Georgia Tech Research Institute (GTRI). He is also an Adjunct at the School of Interactive Computing and Associate Director of Georgia Tech’s Machine Learning Center (ML@GT). He conducts research in the areas of machine learning for sensor processing and robot perception, with emphasis on feature learning for multi-modal object detection, video analysis, scene characterization, and transfer learning. He has over 25 publications in these areas, several best paper/student paper and other awards, and has been invited to speak at related workshops in both academic and government venues.
- Attention, Different Time and Location: Mar. 02 2018
- Speaker: Sanjib Sur
- Affiliation: University of Wisconsin-Madison
- Location: Innovation Center, Room 2277
- Time: 10:15 - 11:15 AM
- Title: Bringing Millimeter-Wave Wireless to the Masses
- Abstract: Many of the emerging IoT applications --- such as wireless virtual and augmented reality, autonomous vehicles, and the tactile internet --- demand multiple gigabits per second of wireless throughput with sub-millisecond latency guarantees. Today’s wireless infrastructure --- such as LTE or Wi-Fi --- is unlikely to handle such demand. Abundant opportunity, however, exists at millimeter-wave frequencies, but with two key barriers --- directional link alignment and link blockage --- that prevent the mass deployment of millimeter-wave in today’s networks. In the first part of the talk, I will present my approach to addressing these two challenges by designing solutions that span the wireless link, protocol, and system stack. Mass deployment of millimeter-wave devices also brings opportunities to enable new IoT applications, including designing new user-device interactions and ad-hoc imaging of objects hidden from the line-of-sight. In the second part of the talk, I will briefly go through my designs to address the challenges of such ad-hoc applications. Finally, I will conclude this talk with a glimpse of my future work, shaped by the emerging mass proliferation of cheap and ubiquitous wireless systems at millimeter-wave, sub-terahertz, and terahertz frequencies.
- Bio: Sanjib Sur is a Ph.D. candidate in the Electrical and Computer Engineering department at the University of Wisconsin-Madison. His research interests are in millimeter-wave networks, wireless and mobile systems, and IoT connectivity and sensing systems. His research has appeared at multiple flagship conferences on wireless and mobile systems. Sanjib was recently nominated for the Wisconsin Distinguished Graduate Fellowship for outstanding graduate research. He received a Bachelor’s degree with the highest distinction in Computer Science and Engineering from the Indian Institute of Engineering Science and Technology, where he was awarded the President of India Gold Medal for outstanding academic achievement.
- Extra Seminar Mar. 05 2018
- Speaker: Justin Zhan
- Affiliation: University of Nevada
- Location: Innovation Center, Room 2277
- Time: 10:15 - 11:15 AM
- Title: Big Data Bridge
- Abstract: Data has become the central driving force behind new discoveries in science, informed governance, insight into society, and economic growth in the 21st century. Abundant data is a direct result of innovations including the Internet, faster computer processors, cheap storage, and the proliferation of sensors, and it has the potential to increase business productivity and enable scientific discovery. However, while data is abundant and everywhere, people do not have a fundamental understanding of it. Traditional approaches to decision making under uncertainty are not adequate for dealing with massive amounts of data, especially when such data is dynamically changing or becomes available over time. These challenges require novel techniques in data analytics, data-driven optimization, systems modeling, and data mining. In this seminar, a number of recently funded data analytics projects will be presented to address various data analytics, mining, modeling, and optimization challenges. In particular, DataBridge, a novel data analytics system, will be illustrated.
- Bio: Dr. Justin Zhan is a professor in the Department of Computer Science, College of Engineering, and the Department of Radiology, School of Medicine, as well as the Nevada Institute of Personalized Medicine. His research interests include big data, information assurance, social computing, biomedical computing, and health informatics. He has been a steering chair of the International Conference on Social Computing (SocialCom) and the International Conference on Privacy, Security, Risk and Trust (PASSAT). He has been the editor-in-chief of the International Journal of Privacy, Security and Integrity and the International Journal of Social Computing and Cyber-Physical Systems. He has served as a conference general chair, a program chair, a publicity chair, a workshop chair, or a program committee member for over one hundred and fifty international conferences, and as an editor-in-chief, an editor, an associate editor, a guest editor, an editorial advisory board member, or an editorial board member for about thirty journals. He has published more than two hundred articles in peer-reviewed journals and conferences and delivered thirty keynote speeches and invited talks. His research has been extensively funded by the National Science Foundation, the Department of Defense, and the National Institutes of Health.
- Extra Seminar Mar. 07 2018
- Speaker: Anuj Karpatne
- Affiliation: University of Minnesota
- Location: Innovation Center, Room 2277
- Time: 10:15 - 11:15 AM
- Title: Theory-guided Data Science: A New Paradigm for Scientific Discovery from Data
- Abstract: This talk will introduce theory-guided data science, a novel paradigm of scientific discovery that leverages the unique ability of data science methods to automatically extract patterns and models from data, but without ignoring the treasure of knowledge accumulated in scientific theories. Theory-guided data science aims to fully capitalize the power of machine learning and data mining methods in scientific disciplines by deeply coupling them with models based on scientific theories. This talk will describe several ways in which scientific knowledge can be combined with data science methods in various scientific disciplines such as hydrology, climate science, aerospace, and chemistry. To demonstrate the value in combining physics with data science, the talk will also introduce a novel framework for combining deep learning methods with physics-based models, termed as physics-guided neural networks, and present some preliminary results of this framework for an application in lake temperature modeling. The talk will conclude with a discussion of future prospects in exploiting latest advances in deep learning for building the next generation of scientific models for dynamical systems, where theory-based and data science methods are used at an equal footing.
- Bio: Anuj Karpatne is a Postdoctoral Associate at the University of Minnesota, where he develops data mining methods for solving scientific and socially relevant problems in Prof. Vipin Kumar's research group. He has published more than 25 peer-reviewed articles at top-tier conferences and journals (e.g., KDD, ICDM, SDM, TKDE, and ACM Computing Surveys), given multiple invited talks, and served on panels at leading venues (e.g., SDM and SSDBM). His research has resulted in a system to monitor the dynamics of surface water bodies on a global scale, which was featured in an NSF news story. He is also a co-author of the second edition of the textbook "Introduction to Data Mining." Anuj received his Ph.D. in September 2017 from the University of Minnesota under the guidance of Prof. Kumar. Before joining the University of Minnesota, Anuj received his bachelor's and master's degrees from the Indian Institute of Technology Delhi.
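As a toy illustration of the physics-guided idea from the abstract (a sketch, not the speaker's actual framework): for lake temperature modeling, one can add a loss term that penalizes predictions whose implied water density decreases with depth, since a stably stratified lake is denser toward the bottom. The density polynomial below is a standard empirical approximation for fresh water; the loss weighting `lam` and the function names are illustrative assumptions.

```python
def density(temp_c):
    """Approximate fresh-water density (kg/m^3) as a function of
    temperature (deg C), via a common empirical polynomial (max near 4 C)."""
    return 1000 * (1 - (temp_c + 288.9414) * (temp_c - 3.9863) ** 2
                   / (508929.2 * (temp_c + 68.12963)))

def physics_guided_loss(pred, obs, lam=1.0):
    """Data misfit (MSE) plus a physics penalty that fires whenever the
    density implied by `pred` decreases with depth.
    `pred` and `obs` are temperature profiles ordered surface -> bottom."""
    mse = sum((p - o) ** 2 for p, o in zip(pred, obs)) / len(obs)
    dens = [density(t) for t in pred]
    penalty = sum(max(0.0, d1 - d2) for d1, d2 in zip(dens, dens[1:]))
    return mse + lam * penalty
```

A physically plausible profile (warm surface, cold bottom) incurs no penalty, while an inverted profile is penalized even if it fits the observations, which is how the physics term steers learning toward consistent models.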
- Attention, Different Time and Location: Mar. 09 2018
- Speaker: An Wang
- Affiliation: George Mason University
- Location: Innovation Center, Room 2277
- Time: 10:15 - 11:15 AM
- Title: Elastic and Adaptive SDN-based Defenses in Cloud with Programmable Measurement
- Abstract: The past decade has witnessed a dramatic change in the way organizations and enterprises manage their cloud and data center systems. The main driver of this transition is network virtualization techniques, which have been promoted to a new level by the Software-Defined Networking (SDN) paradigm. Along with the programmability and flexibility offered by SDN, there are fundamental challenges in defending SDN-based cloud systems against prevalent large-scale network attacks, such as DDoS attacks.
This talk presents efficient and flexible solutions to address such challenges in both the reactive and proactive modes of SDN. I will first discuss the vulnerabilities in the architecture of SDN, which create a risk of congestion on the control path under the reactive mode. For the solution, I will show how the control path capacity can be elastically scaled up by taking advantage of software switches' abundant processing power to handle control messages. Then, for the proactive mode, I will discuss how traffic measurement and monitoring mechanisms are necessary yet inadequate in existing SDN solutions. To fix this issue, I will present the design and implementation of a separate monitoring plane in SDN that enables flexible and fine-grained data collection for security purposes.
- Bio: An Wang is a Ph.D. candidate in the Department of Computer Science at George Mason University. She received her B.S. from the Department of Computer Science and Technology at Jilin University in 2012. Her research interests lie in the areas of security for networked systems and network virtualization, mainly focusing on Software-Defined Networking (SDN), cloud systems, and large-scale network attacks.
- Extra Seminar Mar. 12 2018
- Speaker: Antonios Argyriou
- Affiliation: University of Thessaly, Greece
- Location: Innovation Center, Room 2277
- Time: 10:15 - 11:15 AM
- Title: The Role of Applications in Wireless Communication System Design and Optimization
- Abstract: The next generation of cellular wireless communication systems (WCS) aspires to be a paradigm shift, not just an incremental version of existing systems. These systems will come with several technical and conceptual advances, resulting in an ecosystem that aims to deliver orders-of-magnitude higher performance (throughput, delay, energy). They will essentially serve as conduits between service/content providers and users, and are expected to support a significantly enlarged and diversified bouquet of applications.
In this talk, we will first introduce the audience to the fundamental concepts of WCS that brought us to this day. Subsequently, we will identify the application trends that drive specific design choices of future WCS. Then, we will present a new idea for designing and optimizing future WCS that places the specific application at the center of our design choices. The discussion will be based on two key application categories, namely wireless monitoring and video delivery. In the last part of the talk, we will discuss how this paradigm, which elevates the role of applications, opens up new directions for understanding, operating, and designing future WCS.
- Bio: Dr. Antonios Argyriou received the Diploma in electrical and computer engineering from Democritus University of Thrace, Greece, in 2001, and the M.S. and Ph.D. degrees in electrical and computer engineering, as a Fulbright scholar, from the Georgia Institute of Technology, Atlanta, USA, in 2003 and 2005, respectively. Currently, he is an Assistant Professor in the Department of Electrical and Computer Engineering, University of Thessaly, Greece. From 2007 until 2010, he was a Senior Research Scientist at Philips Research, Eindhoven, The Netherlands, where he led the research efforts on wireless body area networks. From 2004 until 2005, he was a Senior Engineer at Soft.Networks, Atlanta, GA. Dr. Argyriou currently serves on the editorial board of the Journal of Communications. He has also served as guest editor for the IEEE Transactions on Multimedia Special Issue on Quality-Driven Cross-Layer Design, and as lead guest editor for the Journal of Communications Special Issue on Network Coding and Applications. Dr. Argyriou serves on the TPC of several international conferences and workshops in the areas of wireless communications, networking, and signal processing. His current research interests are in wireless communications, cross-layer wireless system design (with applications in video delivery, sensing, and vehicular systems), statistical signal processing theory and applications, optimization, and machine learning. He is a Senior Member of IEEE.
- Extra Seminar Mar. 14 2018
- Speaker: Soteris Demetriou
- Affiliation: University of Illinois at Urbana-Champaign
- Location: Innovation Center, Room 2277
- Time: 10:15 - 11:15 AM
- Title: Security and Privacy Challenges in User-Facing, Complex, Interconnected Environments
- Abstract: In contrast with traditional ubiquitous computing, mobile devices are now user-facing, more complex, and interconnected. Thus, they introduce new attack surfaces that can result in severe leakage of private information. Given the rapid adoption of smart devices, there is an urgent need to address emerging security and privacy challenges to help realize the vision of a secure, smarter, and personalized world.
In this talk, I will focus on the smartphone and its role in smart environments. First, I will show how the smartphone's complex architecture allows third-party applications and advertising networks to perform inference attacks and compromise user confidentiality. Further, I will demonstrate how combining techniques from both systems and data sciences can help us build tools to detect such leakage. Second, I will show how a weak mobile-application adversary can exploit vulnerabilities hidden in the interplay between smartphones and smart devices. I will then describe how we can leverage both strong mandatory access control and flexible user-driven access control to design practical and robust systems to mitigate such threats. I will conclude by discussing how, in the future, I want to enable a trustworthy Internet of Things, focusing not only on strengthening smartphones but also on emerging intelligent platforms and environments (e.g., automobiles, smart buildings/cities) and new user interaction modalities in IoT (acoustic signals).
- Bio: Soteris Demetriou is a Ph.D. candidate in Computer Science at the University of Illinois at Urbana-Champaign. His research interests lie at the intersection of mobile systems and security and privacy, with a current focus on smartphones and IoT environments. He discovered side channels in the virtual process filesystem (procfs) of the Linux kernel that can be exploited by malicious applications running on Android devices; he built Pluto, an open-source tool for detecting sensitive user information collected by mobile apps; and he designed security enhancements for the Android OS that enable mandatory and discretionary access control for external devices. His work prompted security additions to the popular Android operating system, received a distinguished paper award at NDSS, and has been recognized by awards from Samsung Research America and Hewlett-Packard Enterprise. Soteris is a recipient of the Fulbright Scholarship, and in 2017 he was selected by the Heidelberg Laureate Forum as one of the 200 most promising young researchers in the fields of mathematics and computer science.
- Extra Seminar Mar. 16 2018
- Speaker: Heewook Lee
- Affiliation: Carnegie Mellon University
- Location: Innovation Center, Room 2277
- Time: 10:15 - 11:15 AM
- Title: Graph-centric approaches for understanding the mutational landscape of life
- Abstract: Genetic diversity is necessary for the survival and adaptability of all forms of life, and its importance is observed universally, from humans to bacteria. A central challenge, therefore, is to improve our ability to identify and characterize genetic variants in order to understand the mutational landscape of life. In this talk, I will focus on two important instances of genetic diversity, found in (1) human genomes (particularly the human leukocyte antigens, HLA) and (2) bacterial genomes (rearrangement of insertion sequence [IS] elements). I will first show that specific graph data structures can naturally encode high levels of genetic variation, and I will describe our novel, efficient graph-based computational approaches to identifying genetic variants for both HLA and bacterial rearrangements. Each method is tailored to its own problem, making it possible to achieve state-of-the-art performance. For example, our method is the first able to reconstruct full-length HLA sequences from short-read sequence data, making it possible to discover novel alleles in individuals. For IS-element rearrangement, our new approach provides the first estimate of the genome-wide rate of IS-induced rearrangements, including recombination. I will also show the spatial patterns and biases that we find by analyzing E. coli mutation accumulation data spanning over 2.2 million generations. These graph-centric computational ideas provide a foundation for analyzing genetically heterogeneous populations of genes and genomes, and suggest ways to investigate other instances of genetic diversity found in life.
- Bio: Dr. Heewook Lee is currently a Lane Fellow in the Computational Biology Department of the School of Computer Science at Carnegie Mellon University, where he works on developing novel assembly algorithms for reconstructing highly diverse immune-related genes, including the human leukocyte antigens. He received a B.S. in computer science from Columbia University and his M.S. and Ph.D. in computer science from Indiana University. Prior to his graduate studies, he worked as a bioinformatics scientist at a sequencing center/genomics company, where he was in charge of the computational unit responsible for carrying out various microbial genome projects and the Korean Human Genome Project.
- Extra Seminar Mar. 19 2018
- Speaker: Sayed Ahmad Salehi
- Affiliation: Utah Valley University
- Location: Innovation Center, Room 2277
- Time: 10:15 - 11:15 AM
- Title: Signal processing and other forms of computation with biomolecular (DNA) reactions
- Abstract: With the recent advances in the field of synthetic biology, molecular computing has emerged as a non-conventional computing technology and a broad range of computational processes has been considered for molecular implementation. In contrast to electronic systems where signals are represented by time-varying voltage values, in molecular computing systems signals are represented by time-varying concentrations of molecular types. The field aims for the design of custom, embedded biological "controllers" - DNA molecules that are engineered to perform useful tasks in situ, such as cancer detection and smart drug therapy.
The past few decades have seen remarkable progress in the design of integrated electronic circuits for digital signal processing and other forms of computation. Nowadays, in terms of circuit complexity, the pace of progress in biotechnology is similar to, or even faster than, that of integrated circuits; it is something of a golden age for molecular circuit design. This seminar presents how knowledge and expertise in circuit design can be applied and extended to the domain of molecular computing. The talk has two main parts. First, frameworks are explained for developing molecular systems that compute signal-processing operations such as frequency filters. Second, a new molecular encoding, called fractional coding, is introduced to map digital stochastic computing circuits into molecular circuits. Based on this approach, the molecular computation of complex mathematical functions and of a single-layer neural network (perceptron) is described.
- Bio: Dr. Sayed Ahmad Salehi received his M.Sc. and Ph.D. in Electrical and Computer Engineering (with a minor in Computer Science) from the University of Minnesota in 2015 and 2017, respectively. He is currently an assistant professor in the Computer Science and Engineering Department at Utah Valley University. His research interests include low-power VLSI architectures for signal processing, embedded systems, stochastic computing, and molecular (DNA) computing. He received the University of Minnesota Doctoral Dissertation Fellowship in 2016 and was a finalist for the best paper award at the DSP2015 conference.
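For readers unfamiliar with the digital encoding that fractional coding builds on, here is a minimal, illustrative sketch (not from the talk) of unipolar stochastic computing: a value in [0, 1] is represented by the fraction of 1s in a random bitstream, and a single AND gate multiplies two independently encoded values.

```python
import random

def to_stream(p, n, rng):
    """Encode a value p in [0, 1] as a unipolar stochastic bitstream:
    each bit is 1 with probability p."""
    return [1 if rng.random() < p else 0 for _ in range(n)]

def stream_value(bits):
    """Decode a bitstream back to a value: the fraction of 1s."""
    return sum(bits) / len(bits)

def multiply(a_bits, b_bits):
    """Bitwise AND of two independent unipolar streams: the result's
    fraction of 1s approximates the product of the encoded values."""
    return [a & b for a, b in zip(a_bits, b_bits)]

rng = random.Random(0)
n = 100_000
a = to_stream(0.5, n, rng)
b = to_stream(0.4, n, rng)
prod = stream_value(multiply(a, b))  # approximately 0.5 * 0.4 = 0.2
```

The accuracy of the decoded product improves with stream length, which is why stochastic (and, by extension, molecular) circuits trade computation time for extremely simple logic.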
- Extra Seminar Mar. 21 2018
- Speaker: Qiang Zeng
- Affiliation: Temple University
- Location: Innovation Center, Room 2277
- Time: 10:15 - 11:15 AM
- Title: Cross-Area Approaches to Innovative Security Solutions
- Abstract: By applying out-of-the-box thinking and cross-area approaches, novel solutions can be innovated to solve challenging problems. In this talk, I will share my experiences in applying cross-area approaches, and present creative designs to solve two difficult security problems.
Problem 1: Decentralized Android Application Repackaging Detection. An unethical developer can download a mobile application, make arbitrary modifications (e.g., inserting malicious code or replacing the advertisement library), repackage the app, and then distribute it; this attack is called application repackaging. Such attacks have posed severe threats, causing $14 billion in monetary losses annually and propagating over 80% of mobile malware. Existing countermeasures are mostly centralized and imprecise. We consider building repackaging-detection capability into apps themselves, so that user devices detect repackaging in a decentralized fashion. To protect the repackaging-detection code from attacks, we propose a creative use of logic bombs, which are commonly used in malware. This use of hacking techniques for benign purposes yields an innovative and effective defense.
Problem 2: Precise Binary Code Semantics Extraction. Binary code analysis allows one to analyze a piece of binary code without access to the corresponding source code. It is widely used for vulnerability discovery, dissecting malware, user-side crash analysis, and more. Today, binary code analysis is more important than ever. With the booming development of the Internet-of-Things industry, a vast number of firmware images for IoT devices can be downloaded from the Internet. This raises challenges for researchers, third-party companies, and government agencies that must analyze these images at scale, without access to source code, to identify malicious programs, detect software plagiarism, and find vulnerabilities. I will introduce a brand-new binary code analysis technique that borrows from Natural Language Processing, an area remote from code analysis, to extract useful semantic information from binary code.
- Bio: Dr. Qiang Zeng is an Assistant Professor in the Department of Computer & Information Sciences at Temple University. He received his Ph.D. in Computer Science and Engineering from the Pennsylvania State University, and his B.E. and M.E. degrees in Computer Science and Engineering from Beihang University, China. He has rich industry experience, having worked at the IBM T.J. Watson Research Center, NEC Labs America, Symantec, and Yahoo.
Dr. Zeng's main research interest is Systems and Software Security. He currently works on IoT Security, Mobile Security, and deep learning for solving security problems. He has published papers in PLDI, NDSS, MobiSys, CGO, DSN and TKDE.
- Attention: Different Time and Location, Mar. 23 2018
- Speaker: Kamal Al Nasr
- Affiliation: Tennessee State University
- Location: Innovation Center, Room 2277
- Time: 10:15 - 11:15 AM
- Title: Protein Structure Determination as a Computational Problem
- Abstract: The development of a new drug can cost on average $4 billion. It is therefore strategic to apply robust computational approaches such as computer-aided drug design, which cover a broader chemical space while reducing the number of compounds that must be synthesized and tested in vitro, keeping costs low. Protein structural information is a crucial input for computer-aided drug design. Conventional determination techniques such as X-ray crystallography are time consuming and fail with proteins that are hard to crystallize. Likewise, traditional computational techniques are unsuccessful with many types of proteins, such as membrane and macromolecular proteins, which make up more than half of contemporary drug targets. In contrast, cryo-electron microscopy (cryo-EM) is a biophysics technique that generates volumetric images to determine the structures of macromolecular complexes and assemblies. However, it is challenging to determine atomic structures from images generated at sub-nanometer resolution using cryo-EM, and the volume of prospective sub-nanometer EM images to be analyzed has grown rapidly. Powerful computational methods such as de novo modeling are therefore needed to make use of the available cryo-EM data. In this presentation, I will present some challenging problems and computational approaches for using the non-atomic images of cryo-EM to model protein structures. Novel algorithms, image-processing techniques, and data analysis will be presented to overcome the resolution limitations of cryo-EM.
- Bio: Dr. Kamal Al Nasr has been an assistant professor of Computer Science at Tennessee State University since August 2013. He received his Bachelor's and Master's degrees in Computer Science from Yarmouk University, Jordan, in 2003 and 2005, respectively. Dr. Al Nasr received another Master's degree in Computer Science from New Mexico State University, Las Cruces, NM in 2011, and his Ph.D. in Computer Science from Old Dominion University, Norfolk, VA in 2012. During his Ph.D. study, he was awarded the College of Science's university fellowship in July 2010. He joined the Department of Systems and Computer Science at Howard University, Washington, D.C. as a postdoctoral research scientist in 2012. His research centers on developing efficient computational methods for protein structure prediction in de novo modeling. Specifically, he focuses on using electron cryo-microscopy (cryo-EM), high-performance computing, data analytics, and graph theory to design algorithms that efficiently predict the three-dimensional structure of proteins. During his structural bioinformatics research, Dr. Al Nasr has written several peer-reviewed papers in national and international journals and proceedings. Further, he has two active grants from national agencies (NSF and NIH) to support his research.
- Extra Seminar Mar. 26 2018
- Speaker: Guorong Wu
- Affiliation: University of North Carolina, Chapel Hill
- Location: Innovation Center, Room 2277
- Time: 10:15 - 11:15 AM
- Title: Computational Brain Connectome: From Reverse Engineering the Brain to Understand Brain Connectivity
- Abstract: Neuroimaging research has developed rapidly in the last decade, with various applications of brain-mapping technologies providing mechanisms for discovering neuropsychiatric disorders in vivo. The human brain is something of an enigma: much is known about its physical structure, but how it marshals its myriad components into a powerhouse capable of performing so many different tasks remains a mystery. In this talk, I will demonstrate that it is more important to understand how brain regions are connected than to study each brain region individually. I will introduce my recent research on the human brain connectome, with a focus on revealing the high-order brain connectome and functional dynamics using learning-based approaches, and on successful applications in identifying subjects with neurological disorders such as autism and Alzheimer's disease.
- Bio: Dr. Guorong Wu is an Assistant Professor in the Department of Radiology at the University of North Carolina, Chapel Hill. His primary research interests are medical image analysis, big data mining, scientific data visualization, and computer-assisted diagnosis. He has been working on medical image analysis since he started his Ph.D. study in 2003; in 2007, he received the Ph.D. degree in computer science and engineering from Shanghai Jiao Tong University, Shanghai, China. He has developed many image-processing methods for brain magnetic resonance imaging (MRI), diffusion tensor imaging, breast dynamic contrast-enhanced MRI (DCE-MRI), and computed tomography (CT) images. These cutting-edge computational methods have been successfully applied to the early diagnosis of Alzheimer's disease, infant brain development studies, and image-guided lung cancer radiotherapy. Meanwhile, Dr. Wu leads a multidisciplinary research team at UNC that aims to translate cutting-edge intelligent techniques into imaging-based biomedical applications, for the sake of boosting translational medicine. He has released more than ten image-analysis software packages to the medical imaging community, which together have been downloaded more than 15,000 times since 2009.
Dr. Wu is the recipient of an NIH Career Development Award (K01) and PI of an NIH Exploratory/Developmental Research Grant Award (R21). He also serves as Co-PI and Co-Investigator on other NSF and NIH grants.
- Attention: Different Time and Location, Mar. 30 2018
- Speaker: Rahmatollah Beheshti
- Affiliation: Johns Hopkins University
- Location: Innovation Center, Room 2277
- Time: 10:15 - 11:15 AM
- Title: Complex Models to Understand Complex Health Behaviors
- Abstract: Most health conditions result, directly or indirectly, from human decisions, and these decisions are affected by a wide range of personal and environmental factors. While understanding health decision-making processes can lead to significant breakthroughs in both the treatment and prevention of different diseases, their complex nature means that our knowledge of many of these processes is very limited. Computational and data-driven techniques are increasingly considered powerful options for fusing various types of data (such as biological and behavioral data) to understand these complexities. In this talk, Dr. Beheshti will present several projects from smoking and obesity research in which he has used complex-systems and AI methods to study health behaviors. Specifically, he will talk about one of his recent projects studying the role of price in food decision-making.
- Bio: Dr. Rahmatollah Beheshti is a fourth-year postdoctoral fellow at the Johns Hopkins Bloomberg School of Public Health, with a joint appointment in the Department of Applied Math & Statistics at Johns Hopkins. He holds a Ph.D. in Computer Science and a Master's in Artificial Intelligence, and he has worked in computational epidemiology and health data analytics for the past eight years, with close to 20 first-author full articles in these areas. He has worked extensively on two major public health epidemics, smoking and obesity, focusing on very different aspects of the two, including the social, economic, environmental, and, lately, biological factors that affect these epidemics.
- Apr. 06 2018
- Speaker: TBD
- Affiliation: TBD
- Location: SWGN 2A31
- Time: 2:20 - 3:10 PM
- Title: TBD
- Abstract: TBD
- Bio: TBD
- Apr. 13 2018
- Speaker: TBD
- Affiliation: TBD
- Location: SWGN 2A31
- Time: 2:20 - 3:10 PM
- Title: TBD
- Abstract: TBD
- Bio: TBD
- Apr. 20 2018
- Speaker: TBD
- Affiliation: TBD
- Location: SWGN 2A31
- Time: 2:20 - 3:10 PM
- Title: TBD
- Abstract: TBD
- Bio: TBD
- Apr. 27 2018
- Speaker: TBD
- Affiliation: TBD
- Location: SWGN 2A31
- Time: 2:20 - 3:10 PM
- Title: TBD
- Abstract: TBD
- Bio: TBD
- Extra Seminar TBA
- Speaker: TBD
- Affiliation: TBD
- Location: SWGN 2A31
- Time: 2:20 - 3:10 PM
- Title: TBD
- Abstract: TBD
- Bio: TBD