Assistant Professor - AI Institute - Computer Science and Engineering
University of South Carolina
Email: foresta@cse.sc.edu
I am an assistant professor at the University of South Carolina. My research goal is to create artificial intelligence (AI) algorithms that can solve any pathfinding problem, where the objective of a pathfinding problem is to find a sequence of actions that forms a path from a given start state to a given goal. The methods employed in my research group include deep learning, reinforcement learning, heuristic search, and formal logic.
Pathfinding problems include robot path planning, theorem proving, chemical synthesis, program synthesis, and quantum circuit synthesis. Automating finding solutions to these problems with AI can result in rapid advancement of these fields. Furthermore, my research group seeks to incorporate explainable AI to enable collaboration with humans and to automate the discovery of new knowledge.
More broadly, it is my opinion that AI is, at its core, the study of algorithms that write algorithms (i.e., meta-algorithms). Since writing an algorithm can be posed as a pathfinding problem, I believe that solving pathfinding is key to solving AI.
I earned my Ph.D. in computer science at the University of California, Irvine, my M.S. in computer science at the University of Michigan, and my B.S. in electrical and computer engineering at The Ohio State University.
Expressive, high-level goal specification is necessary when one knows what properties a goal must or must not have, but does not know what states meet these specifications. To accomplish this with deep neural networks, we incorporate formal logic with deep reinforcement learning to train heuristic functions that generalize over goals.
Selected Papers:
Specifying Goals for Deep Neural Networks with Answer Set Programming, ICAPS (2024)
A Conflict-Driven Approach for Reaching Goals Specified with Negation as Failure, ICAPS HAXP Workshop (2024)
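As a toy illustration of the idea (not the method from the papers above), a goal can be given as a set of properties that must hold and a set that must not hold, rather than as one explicit state; any state whose propositions satisfy the specification counts as a goal state. The proposition names below are hypothetical.

```python
# A goal specification as required and forbidden properties.
# States are sets of true propositions.

def satisfies(state, must_hold, must_not_hold):
    """Check a state against a goal specification: all required
    propositions hold and no forbidden proposition holds."""
    return must_hold <= state and not (must_not_hold & state)

# Goal spec: block a must be clear and must NOT be on the table.
must_hold = {"clear(a)"}
must_not_hold = {"on_table(a)"}

states = [
    {"clear(a)", "on_table(a)"},  # fails: forbidden property holds
    {"clear(a)", "on(a,b)"},      # satisfies the specification
    {"on(a,b)"},                  # fails: required property missing
]
goal_states = [s for s in states if satisfies(s, must_hold, must_not_hold)]
```

Many distinct states can satisfy one specification, which is why a heuristic trained to generalize over goal specifications, rather than over single goal states, is needed.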
Many real-world pathfinding problems come without a given world model and, thus, cannot directly make use of pathfinding algorithms. We seek to learn world models from real-world observations and use these learned world models to learn heuristic functions for pathfinding.
Selected Papers:
Learning Discrete World Models for Heuristic Search, Reinforcement Learning Conference (2024)
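A minimal sketch of the setup (a deliberately tiny tabular stand-in, not the paper's learned architecture): a discrete world model can be recorded as a transition table from observed (state, action, next state) triples and then queried to expand states during search, with no further access to the real environment. The corridor environment below is hypothetical.

```python
# Learn a discrete world model from observed transitions, then use it
# to enumerate successors during search.

def learn_model(transitions):
    """Build a transition table mapping (state, action) -> next state."""
    model = {}
    for s, a, s_next in transitions:
        model[(s, a)] = s_next
    return model

# Observations from a 3-position corridor with "left"/"right" actions;
# moving past an end leaves the agent in place.
observed = [
    (0, "right", 1), (1, "right", 2), (2, "right", 2),
    (2, "left", 1), (1, "left", 0), (0, "left", 0),
]
model = learn_model(observed)

def successors(model, s):
    """Expand a state using only the learned model, not the environment."""
    return [(a, s2) for (s1, a), s2 in model.items() if s1 == s]
```

The discrete representation matters: search algorithms need exact state identity to detect revisits, which is why the learned states are discrete rather than continuous vectors.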
While deep reinforcement learning can train informative heuristics for pathfinding, training can take hours to days for a single pathfinding domain. To address this problem, we seek to train heuristic functions that can generalize over domains. Accomplishing this will make AI algorithms accessible to practitioners across a wide variety of disciplines and enable rapid adaptation of AI algorithms to dynamic real-world problems.
Selected Papers:
Towards Learning Foundation Models for Heuristic Functions to Solve Pathfinding Problems, AAAI GenPlan Workshop (2025)
PDDLFuse: A Tool for Generating Diverse Planning Domains, AAAI GenPlan Workshop (2025)
We seek to create domain-independent machine learning methods that can learn domain-specific heuristic functions given only a description of a pathfinding domain. We have created DeepCubeA, a deep reinforcement learning and search algorithm capable of solving pathfinding problems, such as the Rubik's cube and other combinatorial puzzles, without human guidance.
DeepCubeA Webserver
Selected Papers:
Q* Search: Heuristic Search with Deep Q-Networks, ICAPS PRL Workshop (2024)
Solving the Rubik's Cube with Deep Reinforcement Learning and Search, Nature Machine Intelligence (2019)
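A hedged sketch of the search side of a DeepCubeA-style solver: A* search driven by a cost-to-go heuristic. Here Manhattan distance on the 8-puzzle stands in for the learned neural-network heuristic; the search code is agnostic to where the heuristic comes from.

```python
# A* search with a pluggable heuristic, demonstrated on the 8-puzzle.
import heapq

GOAL = (1, 2, 3, 4, 5, 6, 7, 8, 0)  # 0 is the blank

def manhattan(state):
    """Sum of tile distances to their goal positions (stand-in heuristic)."""
    dist = 0
    for idx, tile in enumerate(state):
        if tile == 0:
            continue
        goal_idx = tile - 1
        dist += abs(idx // 3 - goal_idx // 3) + abs(idx % 3 - goal_idx % 3)
    return dist

def neighbors(state):
    """Yield states reachable by sliding one tile into the blank."""
    b = state.index(0)
    r, c = divmod(b, 3)
    for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        nr, nc = r + dr, c + dc
        if 0 <= nr < 3 and 0 <= nc < 3:
            nb = nr * 3 + nc
            s = list(state)
            s[b], s[nb] = s[nb], s[b]
            yield tuple(s)

def astar(start, heuristic):
    """Return a start-to-goal state sequence, ordering nodes by f = g + h."""
    frontier = [(heuristic(start), 0, start, [start])]
    best_g = {start: 0}
    while frontier:
        f, g, state, path = heapq.heappop(frontier)
        if state == GOAL:
            return path
        for nxt in neighbors(state):
            ng = g + 1
            if ng < best_g.get(nxt, float("inf")):
                best_g[nxt] = ng
                heapq.heappush(frontier, (ng + heuristic(nxt), ng, nxt, path + [nxt]))
    return None

path = astar((1, 2, 3, 4, 5, 6, 0, 7, 8), manhattan)  # two moves from the goal
```

Swapping `manhattan` for a trained value network turns this into the learned-heuristic setting; DeepCubeA itself additionally batches node expansions and weights the heuristic, which this sketch omits.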
Pathfinding problems are found throughout computing, robotics, mathematics, and the natural sciences. We aim to apply DeepCubeA and its variants to these problems with the hope of advancing the state of the art in these fields.
Selected Papers:
Finding Reaction Mechanism Pathways with Deep Reinforcement Learning and Heuristic Search, ICAPS PRL Workshop (2024)
Artificial neural networks typically have a fixed, non-linear activation function at each neuron. We have designed a novel form of piecewise linear activation function that is learned through gradient descent. With this adaptive activation function, we are able to improve upon deep neural network architectures that use static activation functions.
Selected Papers:
SPLASH: Learnable Activation Functions for Improving Accuracy and Adversarial Robustness, Neural Networks (2021)
Learning Activation Functions to Improve Deep Neural Networks, ICLR Workshop (2015)
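A rough sketch of the general form (simplified, and not the exact parameterization from the papers above): a piecewise linear activation can be written as a weighted sum of shifted ReLU hinges, where the hinge coefficients are ordinary parameters, so gradient descent can reshape the activation alongside the network's weights.

```python
# A piecewise linear activation as a weighted sum of hinge functions.
import numpy as np

def piecewise_linear(x, coeffs_pos, coeffs_neg, knots):
    """Sum of hinges max(0, +/-x - knot), weighted by learnable coefficients."""
    y = np.zeros_like(x, dtype=float)
    for c, k in zip(coeffs_pos, knots):
        y += c * np.maximum(0.0, x - k)
    for c, k in zip(coeffs_neg, knots):
        y += c * np.maximum(0.0, -x - k)
    return y

knots = [0.0, 1.0, 2.0]
x = np.array([-1.0, 0.5, 3.0])
# With these coefficients the function reduces to a standard ReLU...
relu_like = piecewise_linear(x, [1, 0, 0], [0, 0, 0], knots)
# ...while a nonzero negative-side coefficient gives a leaky, asymmetric shape.
leaky = piecewise_linear(x, [1, 0, 0], [-0.1, 0, 0], knots)
```

Because the output is linear in the coefficients, gradients with respect to them are straightforward, and initializing the coefficients to match ReLU lets training start from a familiar baseline before adapting the shape.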