# Schedule

8:00-8:50 Breakfast & Registration, The Jeannie Auditorium

Opening Remarks and Plenary Session I

The Jeannie Auditorium

8:50-9:00 Welcome and Opening Remarks

9:00-9:40 Satyan Devadoss (USD)

Geometry and Mystery of Unfolding Regular Polytopes

Chair: Justin Marks (Biola U)

9:45-10:25 Uduak George (SDSU)

Decoding bio-mechanical cues in branching morphogenesis

Chair: Bo Li (UCSD)

9:00-9:40 Satyan Devadoss (USD)

Geometry and Mystery of Unfolding Regular Polytopes

We explore a puzzle whose origins date back 500 years to the Renaissance master Albrecht Dürer, who first recorded examples of unfolded polyhedra. Only a decade ago, it was shown that every unfolding of the Platonic solids is without self-overlap, yielding a valid net. Motivated by practical applications ranging from airbag designs to Burning Man sculptures, we use theoretical and computational methods to consider this property for all regular polytopes in higher dimensions, proving what works and puzzling over what doesn’t. This talk is heavily infused with visual imagery, with access to numerous unsolved problems.

9:45-10:25 Uduak George (SDSU)

Decoding bio-mechanical cues in branching morphogenesis

Many organs in mammals have complex branched structures that are similar to trees. The formation of these branched structures often begins during embryonic development, and they are important for various physiological functions in the body. The branching structures of the lung airways, pancreas ducts, mammary ducts, ureteric bud, and salivary ducts are vital for fluid transport in the body. They facilitate the secretion and distribution of crucial substances and the removal of waste, supporting diverse physiological functions. Branching morphogenesis governs the formation of these branched structures. Defects in branching morphogenesis can lead to rare syndromes or common conditions such as chronic kidney failure, poor lung function, and hypertension. How the rate of branching, branch orientation, and number of branches are regulated during branching morphogenesis in different organs is a fundamental question in developmental biology. The mechanical environment of tissues is believed to have a significant impact on the regulation of branching morphogenesis and the proper formation of branching organs. However, the impact of the mechanical environment on the formation of these branched organs is neither fully understood nor well characterized. In this talk, I will present how we have utilized a combination of laboratory experimentation and computational modeling to elucidate how mechanical signaling regulates branching morphogenesis in the lungs and mammary gland.

10:30-10:45 Coffee Break, The Jeannie Auditorium

Morning Contributed Sessions

(Contributed Session I)

Track 1

Ridge Walk Academic Complex 0121

Chair: Shuxia Tang (Texas Tech University)

10:45-11:05 Bo Zhao

Symmetries, Flat Minima, and the Conserved Quantities of Gradient Flow

11:10-11:30 Dohyeon Kim

Recent developments in Consensus Based Optimization

11:35-11:55 Shuxia Tang

Encirclement Control for 2D and 3D Multi-Agent Systems

10:45-11:05 Bo Zhao

Symmetries, Flat Minima, and the Conserved Quantities of Gradient Flow

Empirical studies have revealed that many minima of neural networks are connected through low-loss pathways. However, little is known about the theoretical origin of these flat regions. One source of flat directions is parameter transformations that keep the loss invariant, known as symmetries. In this work, we present a general framework based on equivariance for finding continuous symmetries in neural network parameter spaces. In particular, we introduce a new class of nonlinear and data-dependent group actions. These symmetries can transform a trained model such that it performs similarly on new samples, which allows ensemble building that improves robustness under adversarial attacks. We then derive the dimension of minima induced by symmetries, and show that conserved quantities associated with symmetries parametrize these minima. The conserved quantities help reveal that using common initialization methods, gradient flow only explores a small part of the global minimum. By relating conserved quantities to convergence rate and sharpness of the minimum, we provide insights on how initialization impacts convergence and model generalizability.
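The symmetry-and-conservation story above can be seen in a toy example. The following sketch is purely illustrative (the two-layer scalar model, target, and step size are my assumptions, not the authors' framework): the product network f(x) = w2·w1·x is invariant under the rescaling w1 → a·w1, w2 → w2/a, and gradient flow on the loss (w1·w2 − c)² conserves the associated quantity Q = w1² − w2², which parametrizes the hyperbola of global minima that the flow can reach.

```python
import numpy as np

# Toy conserved quantity of gradient flow (illustrative assumption, not the
# authors' general framework): for L = (w1*w2 - c)^2, the rescaling symmetry
# w1 -> a*w1, w2 -> w2/a yields the conserved quantity Q = w1^2 - w2^2.
def grads(w1, w2, c=1.0):
    r = w1 * w2 - c                       # residual
    return 2 * r * w2, 2 * r * w1         # dL/dw1, dL/dw2

w1, w2 = 2.0, 0.3
Q0 = w1**2 - w2**2
lr = 1e-4                                 # small steps approximate the flow
for _ in range(20000):
    g1, g2 = grads(w1, w2)
    w1, w2 = w1 - lr * g1, w2 - lr * g2

print(abs(w1 * w2 - 1.0) < 1e-6)          # the loss is minimized...
print(abs((w1**2 - w2**2) - Q0) < 1e-3)   # ...while Q is (nearly) conserved
```

Different initializations give different values of Q, so gradient descent only ever explores the single slice of the global minimum selected by Q0, echoing the abstract's point about initialization.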

11:10-11:30 Dohyeon Kim

Recent developments in Consensus Based Optimization

Consensus-based optimization (CBO) algorithms are a recent family of particle methods for solving complex non-convex optimization problems. In many application settings, the objective function is not available in closed form. Additionally, derivatives may be unavailable or very costly to obtain. CBO makes use of the Laplace principle to circumvent the use of gradients and is well suited for black-box objectives. Most of the available analysis for this recent family of algorithms studies the corresponding mean-field descriptions of the distribution of particles. Convergence analysis with explicit rates is of particular interest for assessing algorithm performance and has mostly been done at the level of the mean-field PDEs. However, all results currently in the literature connecting the discrete particle system to the mean-field regime are restricted to finite time domains. In this talk, we present recent advances regarding the CBO algorithm and its variants and discuss uniform-in-time mean-field limits. We focus on second-order variants of CBO as they have numerical advantages in terms of convergence and provide a conceptual bridge to Particle Swarm Optimization (PSO), one of the most widely used particle-based optimization methods.
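The basic (first-order) CBO update is short enough to sketch. The following is a minimal illustration with an Euler-Maruyama discretization and hand-picked parameters (all values are my assumptions; the talk's second-order variants are more elaborate): particles drift toward a Laplace-principle weighted average and carry noise proportional to their distance from it, so no gradients of the objective are ever needed.

```python
import numpy as np

# Minimal first-order CBO sketch (illustrative parameters, not the talk's
# second-order variants): gradient-free minimization of a non-convex function.
rng = np.random.default_rng(0)

def cbo_minimize(f, dim=2, n_particles=200, steps=400,
                 dt=0.05, lam=1.0, sigma=0.7, alpha=50.0):
    X = rng.uniform(-3, 3, size=(n_particles, dim))    # particle ensemble
    for _ in range(steps):
        fx = f(X)
        w = np.exp(-alpha * (fx - fx.min()))           # Laplace-principle weights
        m = (w[:, None] * X).sum(0) / w.sum()          # consensus point
        drift = -lam * (X - m) * dt                    # relax toward consensus
        noise = sigma * np.sqrt(dt) * (X - m) * rng.standard_normal(X.shape)
        X += drift + noise                             # no gradients needed
    return m

def rastrigin(X):                                      # non-convex test function
    return (X**2 - np.cos(2 * np.pi * X)).sum(axis=1)  # with global minimum at 0

x_star = cbo_minimize(rastrigin)
print(np.linalg.norm(x_star) < 0.5)
```

As alpha grows, the weighted average concentrates on the best particle, which is exactly the Laplace-principle mechanism the abstract refers to.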

11:35-11:55 Shuxia Tang

Encirclement Control for 2D and 3D Multi-Agent Systems

This presentation showcases recent advancements in encirclement control for multi-agent systems in both 2D and 3D spaces. (1) The model for 2D multi-agent systems is a system of first-order ODEs, specifically addressing the issue of communication delays. One key aspect of our approach is the utilization of two distributed estimators, affected by communication delay, to estimate the location of the geometric center of the targets and the maximum distance of the targets to that estimated center. By utilizing this estimated information, we obtain the relative desired positions of the agents. Employing the estimated profile of the targets together with the relative desired positions allows for improved tracking and coordination among the agents, enhancing the effectiveness of the proposed encirclement control. (2) In contrast, the model for 3D multi-agent systems is a system of reaction-advection-diffusion PDEs. The innovative rotating encirclement control strategy involves multi-step control. We initially apply a 3-step target-enclosing boundary control to the system. Once the enclosing is achieved, the agents begin rotating around the targets, adjusting their formation to maintain the encirclement. To address the challenges in the fourth step, in-domain control is applied to the agents, ensuring compliance with all requirements. For both cases, stability analysis of the closed-loop system is conducted using the Lyapunov technique.

Track 2

Ridge Walk Academic Complex 0115

Chair: Shu Liu (UCLA)

10:45-11:05 Nathan Schroeder

Locally Critical Shapes for Steklov Eigenvalue Problems

11:10-11:30 Shu Liu

A First-order computational algorithm for reaction-diffusion type equations via primal-dual hybrid gradient method

11:35-11:55 Zhaolong Han

Compactness results for a Dirichlet energy of nonlocal gradient with applications

10:45-11:05 Nathan Schroeder

Locally Critical Shapes for Steklov Eigenvalue Problems

We consider Steklov eigenvalues on nearly spherical and nearly annular domains in $d$ dimensions. By using the Green-Beltrami identity for spherical harmonic functions, the derivatives of Steklov eigenvalues with respect to the domain perturbation parameter can be determined by the eigenvalues of a matrix involving the integral of the product of three spherical harmonic functions. By using the addition theorem for spherical harmonic functions, we determine conditions under which the trace of this matrix becomes zero. These conditions can then be used to determine when spherical and annular regions are critical points when optimizing Steklov eigenvalues subject to a volume constraint. In addition, we develop numerical approaches based on particular solutions and show that numerical results in two and three dimensions are in agreement with our analytic results.

11:10-11:30 Shu Liu

A First-order computational algorithm for reaction-diffusion type equations via primal-dual hybrid gradient method

We propose an easy-to-implement iterative method for resolving the implicit (or semi-implicit) schemes arising in reaction-diffusion (RD) type equations. In our treatment, we formulate the nonlinear time implicit scheme on the space-time domain as a min-max saddle point problem and then apply the primal-dual hybrid gradient (PDHG) method. Suitable precondition matrices are applied to accelerate the convergence of our algorithm under different circumstances. Furthermore, we provide conditions that guarantee the convergence of our method for various types of RD-type equations. Several numerical examples as well as comparisons with commonly used numerical methods will also be demonstrated to verify the effectiveness and the accuracy of our method.
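The PDHG iteration itself is easy to state on a simpler model problem. The sketch below applies the generic Chambolle-Pock PDHG iteration to 1D total-variation denoising (my choice of model problem, not the speaker's reaction-diffusion solver; step sizes and the fidelity weight are hand-set assumptions): the dual prox is a projection onto the unit ball and the primal prox has a closed form.

```python
import numpy as np

# Generic primal-dual hybrid gradient (Chambolle-Pock) sketch for 1D
# total-variation denoising: min_x  lam/2 ||x - b||^2 + ||Dx||_1.
# (Illustrative model problem, not the talk's reaction-diffusion scheme.)
n, lam = 100, 8.0
t = np.linspace(0, 1, n)
clean = (t > 0.5).astype(float)                     # piecewise-constant signal
rng = np.random.default_rng(1)
b = clean + 0.1 * rng.standard_normal(n)            # noisy observation

D = np.diff(np.eye(n), axis=0)                      # forward-difference operator
tau = sigma = 0.4                                   # tau*sigma*||D||^2 < 1
x = b.copy(); x_bar = x.copy(); y = np.zeros(n - 1)
for _ in range(500):
    y = np.clip(y + sigma * D @ x_bar, -1, 1)       # dual prox: project to ball
    x_new = (x - tau * D.T @ y + tau * lam * b) / (1 + tau * lam)  # primal prox
    x_bar = 2 * x_new - x                           # extrapolation step
    x = x_new

print(np.abs(x - clean).mean() < np.abs(b - clean).mean())  # denoised
```

The step-size condition tau·sigma·||D||² ≤ 1 (here ||D||² ≤ 4) is the standard convergence requirement; preconditioning, as in the abstract, replaces these scalar steps with matrices.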

11:35-11:55 Zhaolong Han

Compactness results for a Dirichlet energy of nonlocal gradient with applications

We prove two compactness results for function spaces with finite Dirichlet energy of half-space nonlocal gradients. In each of these results, we provide sufficient conditions on a sequence of kernel functions that guarantee the asymptotic compact embedding of the associated nonlocal function spaces into the class of square-integrable functions. Moreover, we will demonstrate that the sequence of nonlocal function spaces converges in an appropriate sense to a limiting function space. As an application, we prove uniform Poincaré-type inequalities for a sequence of half-space gradient operators. We also apply the compactness result to demonstrate the convergence of appropriately parametrized nonlocal heterogeneous anisotropic diffusion problems, and we construct asymptotically compatible schemes for these types of problems. Another application concerns the convergence and robust discretization of a nonlocal optimal control problem.

Track 3

Ridge Walk Academic Complex 0104

Chair: Jeremy Budd (Caltech)

10:45-11:05 Jeremy Budd

Graph-based learning for image reconstruction-segmentation, and deep graph-based learning for image segmentation

11:10-11:30 Blaine Quackenbush

Graph Neural Operators for Learning Geometric Representations from Point Clouds

11:35-11:55 Zhichao Wang

Signal propagation and feature learning in neural networks

10:45-11:05 Jeremy Budd

Graph-based learning for image reconstruction-segmentation, and deep graph-based learning for image segmentation

This talk will be about doing things all at once. When one reconstructs an image, one tends to want to do something with the reconstructed image, such as segment it. Why not reconstruct and segment simultaneously, using each to guide the other? This talk will show how this can be done, and how sophisticated segmentation methods, such as graph Merriman--Bence--Osher (MBO) techniques and Bhattacharyya flow techniques, can be incorporated into this framework.

Remarkably, it also turns out that the graph MBO scheme can be viewed as a neural network. Why not, therefore, use deep learning and graph-based learning at once? This talk will present ongoing work using deep learning to learn how to build the best graph on an image and parameters for graph MBO, in order to achieve the best segmentations.
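The core graph MBO iteration alternates diffusion with thresholding, and a toy version fits in a few lines. The sketch below is purely illustrative (the Gaussian kernel, implicit heat step, and corruption pattern are my assumptions, not the talk's learned pipeline): noisy binary labels on a two-cluster similarity graph are repaired by repeatedly diffusing them through the graph Laplacian and snapping back to {0, 1}.

```python
import numpy as np

# Toy graph Merriman-Bence-Osher (MBO) iteration: diffuse noisy labels over a
# similarity graph, then threshold. (Illustrative sketch with hand-picked
# kernel and step size, not the talk's learned graphs or parameters.)
rng = np.random.default_rng(2)

# Two well-separated point clusters; edge weights from a Gaussian kernel.
pts = np.vstack([rng.normal(0, 0.3, (20, 2)), rng.normal(3, 0.3, (20, 2))])
d2 = ((pts[:, None] - pts[None]) ** 2).sum(-1)
W = np.exp(-d2)
np.fill_diagonal(W, 0)
L = np.diag(W.sum(1)) - W                   # unnormalized graph Laplacian

labels = np.repeat([1.0, 0.0], 20)          # ground-truth segmentation
u = labels.copy()
u[[1, 2, 3]] = 0.0                          # corrupt a few labels in each
u[[21, 22, 23]] = 1.0                       # cluster to simulate noise

A = np.eye(40) + 1.0 * L                    # one implicit heat step, dt = 1
for _ in range(5):
    u = np.linalg.solve(A, u)               # diffusion step on the graph
    u = (u > 0.5).astype(float)             # MBO thresholding step

print(bool(np.array_equal(u, labels)))      # corrupted labels are repaired
```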

11:10-11:30 Blaine Quackenbush

Graph Neural Operators for Learning Geometric Representations from Point Clouds

In this talk we show how neural operators can be developed for performing geometric tasks in the manifold setting. We develop neural operators with a network architecture designed to approximate operations in function space. This is accomplished by using layers that are generalized to linear integral operators with learnable kernels. For this purpose, and to deal with unstructured input data sets, we leverage Graph Neural Operator (GNO) architectures and develop them for the non-Euclidean setting. To demonstrate our methods, we show how our geometric neural operators can be used to perform tasks such as learning solutions to PDEs on manifolds or learning estimates of geometric quantities of interest. We further demonstrate our methods by presenting results for how they can be used in Bayesian inverse problems, shape identification, and other tasks.

11:35-11:55 Zhichao Wang

Signal propagation and feature learning in neural networks

In this talk, I will present some results on the eigenvalue spectrum of the Conjugate Kernel (CK) defined by the nonlinear feature map of a feedforward neural network, both at random initialization and in the early stages of training. In the first part, I will show the spectral properties of the CK matrices with random weights in a multiple-layer neural network under the proportional asymptotic limit. We will give a quantitative description of how spiked eigenstructure in the input data propagates through the hidden layers of a neural network. Secondly, we study a simple regime of representation learning where the weight matrix develops a rank-one signal component over gradient descent (GD) training and characterize the alignment of the target function with the spike eigenvector of the CK on test data. In this case, after finitely many GD steps, we will show the emergence of one outlier spike in the spectra of both weight and CK matrices. We will present two scalings of the learning rate. For a small learning rate, we compute the asymptotic risks for the ridge regression estimator on top of trained features, which does not outperform the best linear model. For a sufficiently large learning rate, we prove that the ridge estimator on the trained features can go beyond this "linear" regime. These are some recent joint works with Zhou Fan, Jimmy Ba, Murat A. Erdogdu, Taiji Suzuki, Denny Wu, and Greg Yang.

Track 4

Mosaic 0204

Chair: Rishi Sonthalia (UCSD)

10:45-11:05 Rebecca Gjini

Connecting Large-Eddy Simulations of Stratocumulus Clouds to Predator Prey Dynamics Via Feature Based Inversions

11:10-11:30 David Vishny

High-dimensional covariance estimation from a small number of samples

11:35-11:55 Rishi Sonthalia

From Classical Regression to the Modern Regime: Surprises for Linear Least Squares Problems

10:45-11:05 Rebecca Gjini

Connecting Large-Eddy Simulations of Stratocumulus Clouds to Predator Prey Dynamics Via Feature Based Inversions

How our climate changes is heavily influenced by Earth’s energy budget. Stratocumulus clouds have a great effect on Earth’s energy budget because (i) they can reflect sunlight; and (ii) they cover about thirty percent of our planet. It is thus important to understand and model stratocumulus clouds, and in this talk we make connections between two very different models. The first model is a large eddy simulation (LES), which is a cloud-resolving 3D atmospheric simulation that is computationally expensive to run. The second model is a scalar delay differential equation (DDE), which is trivial to run and interprets the interactions of precipitation and cloud as a predator (rain) and prey (cloud) system. We connect these two models by estimating parameters of the predator-prey model from LES that reflect a variety of meteorological conditions. We rely on a feature-based approach to parameter estimation and numerically solve the problem using an affine invariant ensemble sampler. The result of our computations is a map of meteorological conditions to the parameter space of the predator-prey model. Interestingly, we discover a strong relationship between the nondimensional parameters of the DDE when exposed to various meteorological conditions from LES. These findings support the model’s ability to represent aspects of stratocumulus clouds, which is needed if one were to use it to parameterize stratocumulus clouds in Earth system and climate models.
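To give a flavor of why a predator-prey DDE is "trivial to run", here is a fixed-step Euler integration of a generic delayed predator-prey system. The equations, delay, and parameters are my illustrative assumptions, not the talk's cloud-rain model; the only DDE-specific ingredient is the lagged index into the stored solution history.

```python
import numpy as np

# Generic delayed predator-prey system via fixed-step Euler with a lagged
# history lookup (illustrative equations and parameters; NOT the talk's
# cloud-rain DDE).
dt, delay, T = 0.01, 2.0, 60.0
lag, n = int(delay / dt), int(T / dt)
x = np.empty(n + 1)                  # prey ("cloud")
y = np.empty(n + 1)                  # predator ("rain")
x[0], y[0] = 0.8, 0.3                # constant pre-history assumed below

for k in range(n):
    xd = x[max(k - lag, 0)]          # delayed prey value x(t - delay)
    x[k + 1] = x[k] + dt * x[k] * (1.0 - x[k] - y[k])   # logistic prey, eaten
    y[k + 1] = y[k] + dt * y[k] * (xd - 0.5)            # predator fed with lag

print(bool(np.all(x > 0) and np.all(y > 0)))   # populations stay positive
```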

11:10-11:30 David Vishny

High-dimensional covariance estimation from a small number of samples

We synthesize knowledge from numerical weather prediction, inverse theory and statistics to address the problem of estimating a high-dimensional covariance matrix from a small number of samples. This problem is fundamental in statistics, machine learning/artificial intelligence, and in modern Earth science. We create several new adaptive methods for high-dimensional covariance estimation, but one method, which we call NICE (noise-informed covariance estimation), stands out because it has three important properties: (i) NICE is conceptually simple and computationally efficient; (ii) NICE enjoys favorable theoretical guarantees; and (iii) NICE is largely tuning-free. We illustrate the use of NICE on a large set of Earth-science-inspired numerical examples, including cycling data assimilation, geophysical inversion of field data, and training of feed-forward neural networks with time-averaged data from a chaotic dynamical system. Our theory, heuristics and numerical tests suggest that NICE may indeed be a viable option for high-dimensional covariance estimation in many Earth science problems.
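The basic failure mode and remedy can be demonstrated with generic linear shrinkage toward a scaled identity, in the spirit of Ledoit-Wolf. To be clear, this is not the NICE estimator from the talk (whose construction uses noise information), and the shrinkage weight below is hand-set rather than adaptive; the point is only that when the dimension exceeds the sample size, even crude shrinkage beats the raw sample covariance.

```python
import numpy as np

# Generic linear shrinkage toward a scaled identity (NOT the talk's NICE
# estimator; the shrinkage weight is hand-set, not adaptive), illustrating
# why raw sample covariances fail when dimension >> sample size.
rng = np.random.default_rng(3)
d, n = 100, 20                                   # dimension >> sample size
true_cov = np.diag(np.linspace(0.5, 2.0, d))     # known ground truth
X = rng.multivariate_normal(np.zeros(d), true_cov, size=n)

S = np.cov(X, rowvar=False)                      # noisy sample covariance
target = np.trace(S) / d * np.eye(d)             # shrink toward scaled identity
gamma = 0.3                                      # weight kept on S (hand-set)
S_shrunk = gamma * S + (1 - gamma) * target

err = lambda A: np.linalg.norm(A - true_cov)     # Frobenius error
print(bool(err(S_shrunk) < err(S)))              # shrinkage reduces error
```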

11:35-11:55 Rishi Sonthalia

From Classical Regression to the Modern Regime: Surprises for Linear Least Squares Problems

Linear regression is a problem that has been extensively studied. However, modern machine learning has brought to light many new and exciting phenomena due to overparameterization. In this talk, I introduce the field and the new phenomena observed in recent years. Then, building on this, I present recent theory work on linear denoising. Despite the importance of denoising in modern machine learning and ample empirical work on supervised denoising, its theoretical understanding is still relatively scarce.

For this setting, we derive general test error expressions for both denoising and noisy-input regression and study when overfitting the noise is benign, tempered, or catastrophic. We show that the test error exhibits double descent under general distribution shifts, providing insights for data augmentation and the role of noise as an implicit regularizer. We demonstrate that this setting has other surprising phenomena such as underparameterized double descent. Finally, we perform experiments using real-life data, matching the theoretical predictions with less than 1% error in MSE for low-rank data.

Track 5

The Jeannie Auditorium

Chair: Manuchehr Aminian (CPP)

10:45-11:05 Kyle Stark

Modeling the dynamics of furrow invagination during Drosophila cellularization

11:10-11:30 Xin Su

Partially Explicit Generalized Multiscale Finite Element Methods for Poroelasticity Problem

11:35-11:55 Manuchehr Aminian

Geometric Approaches to Feature Engineering and Anomaly Detection with Telemetry Time Series in Mice

10:45-11:05 Kyle Stark

Modeling the dynamics of furrow invagination during Drosophila cellularization

During the process of Drosophila cellularization, 6000 nuclei migrate to the plasma membrane and form new cells simultaneously. To do this, the embryo’s membrane is pulled between each of the nuclei in a process known as furrow invagination. Two reservoirs of membrane are used for furrow expansion: unfolding of pre-existing microvilli and exocytosis of Golgi-derived vesicles. Thirty minutes after initiating invagination, furrow velocity abruptly jumps from ~7 nm/sec to ~24 nm/sec. Concomitantly, apical surface elasticity and viscosity drop with an unchanged ratio – thus suggesting that viscous and elastic properties are structurally coupled, and contribute to the slow-to-fast transition. Despite ample experimental observations, the governing mechanisms are experimentally occluded by the redundancy of motor proteins, microtubules and F-actin. A mathematical model can disentangle the complexity of this process and shed light on possible mechanisms. In this work, we propose a model of furrow invagination by modeling the cortical-membrane cytoskeleton composite using viscoelastic models. Further, we coupled membrane area conservation and vesicle-modified viscoelastic properties to the force balance at the tip of the furrow. The cortical-membrane’s constitutive properties were represented with a Burger-type viscoelastic body. We find that the experimentally-verified, fluid-type body cannot achieve the slow-to-fast transition without modification of material or force parameters. Hence, we examine different velocity- and time-delay-dependencies, induced by actin polymerization, and observe that they are capable of governing the slow-to-fast transition. In line with previous findings, it is necessary to structurally couple elastic and viscous properties, but also incorporate dynamic changes to the mechanical properties. We expect our theoretical model will lead to new insights into biophysical invagination mechanisms. 
While our current focus is on Drosophila cellularization, this model may be extended to understand other multinucleated systems, such as the formation of T-tubules in muscle cells.

11:10-11:30 Xin Su

Partially Explicit Generalized Multiscale Finite Element Methods for Poroelasticity Problem

We develop a partially explicit time discretization based on the framework of constraint energy minimizing generalized multiscale finite element method (CEM-GMsFEM) for the problem of linear poroelasticity with high contrast. Firstly, dominant basis functions generated by the CEM-GMsFEM approach are used to capture important degrees of freedom, and this approach is known to give contrast-independent convergence that scales with the mesh size. In typical situations, one has very few degrees of freedom in the dominant basis functions; this part is treated implicitly. Secondly, we design and introduce an additional space in the complement space, and these degrees of freedom are treated explicitly. We also investigate the CFL-type stability restriction for this problem, and the restriction on the time step is contrast independent.

11:35-11:55 Manuchehr Aminian

Geometric Approaches to Feature Engineering and Anomaly Detection with Telemetry Time Series in Mice

I will present our ongoing work analyzing mouse time series to associate large-scale telemetry time series features with genetic variation in the Collaborative Cross in the context of infection. Among other things, we employ Radial Basis Functions to both learn per-mouse time series signatures for summarization as well as anomaly detection. Pictures will be shown.

Track 6

Ridge Walk Academic Complex 0103

Chair: Jiajie (Jerry) Luo (UCLA)

10:45-11:05 Jiajie (Jerry) Luo

Bounded-Confidence Models of Opinion Dynamics with Adaptive Confidence Bounds

11:10-11:30 Rose Yu

Automatic Integration for Neural Spatiotemporal Point Processes

11:35-11:55 Ram Dyuthi Sristi

Contextual Feature Selection with Conditional Stochastic Gates

10:45-11:05 Jiajie (Jerry) Luo

Bounded-Confidence Models of Opinion Dynamics with Adaptive Confidence Bounds

People’s opinions change with time as they interact with each other. In a bounded-confidence model (BCM) of opinion dynamics, individuals (which are represented by the nodes of a network) have continuous-valued opinions and are influenced by neighboring nodes whose opinions are sufficiently similar to theirs (i.e., are within a confidence bound). In this talk, we formulate and analyze discrete-time BCMs with heterogeneous and adaptive confidence bounds. We introduce two new models: (1) a BCM with synchronous opinion updates that generalizes the Hegselmann--Krause (HK) model and (2) a BCM with asynchronous opinion updates that generalizes the Deffuant--Weisbuch (DW) model. We analytically and numerically explore our adaptive BCMs' limiting behaviors, including the confidence-bound dynamics, the formation of clusters of nodes with similar opinions, and the time evolution of an "effective graph", which is a time-dependent subgraph of a network with edges between nodes that are currently receptive to each other. For a variety of networks and a wide range of values of the parameters that control the increase and decrease of confidence bounds, we demonstrate numerically that our adaptive BCMs result in fewer major opinion clusters and longer convergence times than the baseline (i.e., nonadaptive) BCMs. We also show that our adaptive BCMs can have adjacent nodes that converge to the same opinion but are not receptive to each other. This qualitative behavior does not occur in the associated baseline BCMs.
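For reference, the baseline synchronous HK update that the talk generalizes is a one-line averaging rule. The sketch below implements it with a fixed confidence bound c (the initial opinions and bound value are my illustrative choices; the talk's adaptive models let the bounds evolve per node).

```python
import numpy as np

# Baseline synchronous Hegselmann-Krause (HK) update with a FIXED confidence
# bound c (the talk's adaptive models let the bounds evolve over time).
def hk_step(x, c):
    # Each agent moves to the average opinion of all agents within distance c.
    close = np.abs(x[:, None] - x[None, :]) <= c
    return (close * x[None, :]).sum(1) / close.sum(1)

x = np.linspace(0, 1, 21)              # 21 evenly spread initial opinions
for _ in range(60):
    x = hk_step(x, 0.3)

clusters = np.unique(np.round(x, 6))   # surviving opinion clusters
print(clusters.size < 21 and bool(np.allclose(hk_step(x, 0.3), x)))
```

After finitely many steps the opinions freeze into clusters separated by more than c, which is the fixed-point structure whose cluster counts the abstract compares across models.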

11:10-11:30 Rose Yu

Automatic Integration for Neural Spatiotemporal Point Processes

Learning continuous-time point processes is essential to many discrete event forecasting tasks. However, integration poses a major challenge, particularly for spatiotemporal point processes (STPPs), as it involves calculating the likelihood through triple integrals over space and time. Existing methods for integrating STPP either assume a parametric form of the intensity function, which lacks flexibility; or approximating the intensity with Monte Carlo sampling, which introduces numerical errors. Recent work by Omi et al. [2019] proposes a dual network approach for efficient integration of flexible intensity function. However, their method only focuses on the 1D temporal point process. In this paper, we introduce a novel paradigm: AutoSTPP (Automatic Integration for Spatiotemporal Neural Point Processes) that extends the dual network approach to 3D STPP. While previous work provides a foundation, its direct extension overly restricts the intensity function and leads to computational challenges. In response, we introduce a decomposable parametrization for the integral network using ProdNet. This approach, leveraging the product of simplified univariate graphs, effectively sidesteps the computational complexities inherent in multivariate computational graphs. We prove the consistency of AutoSTPP and validate it on synthetic data and benchmark real-world datasets. AutoSTPP shows a significant advantage in recovering complex intensity functions from irregular spatiotemporal events, particularly when the intensity is sharply localized.

11:35-11:55 Ram Dyuthi Sristi

Contextual Feature Selection with Conditional Stochastic Gates

Feature selection is a crucial tool in machine learning and is widely applied across various scientific disciplines. Traditional supervised methods generally identify a universal set of informative features for the entire population. However, feature relevance often varies with context, while the context itself may not directly affect the outcome variable. Here, we propose a novel architecture for contextual feature selection where the subset of selected features is conditioned on the value of context variables. Our new approach, Conditional Stochastic Gates (c-STG), models the importance of features using conditional Bernoulli variables whose parameters are predicted based on contextual variables. We introduce a hypernetwork that maps context variables to feature selection parameters to learn the context-dependent gates along with a prediction model. We further present a theoretical analysis of our model, indicating that it can improve performance and flexibility over population-level methods in complex feature selection settings. Finally, we conduct an extensive benchmark using simulated and real-world datasets across multiple domains demonstrating that c-STG can lead to improved feature selection capabilities while enhancing prediction accuracy and interpretability.

12:00-13:30 Lunch and Poster Session, The Jeannie Auditorium and Sixth College East Lawn

Plenary Session II

The Jeannie Auditorium

13:30-14:10 Wilfrid Gangbo (UCLA)

Can computational math help settle down Morrey's and Iwaniec's conjectures?

Chair: Yuhua Zhu (UCSD)

14:15-14:55 Matthias Morzfeld (UCSD)

Markov chain Monte Carlo and high-dimensional, nonlinear inverse problems in Earth Science

Chair: Boris Kramer (UCSD)

13:30-14:10 Wilfrid Gangbo (UCLA)

Can computational math help settle down Morrey's and Iwaniec's conjectures?

In 1987, D. L. Burkholder proposed a very simple-looking and explicit energy functional $I_{p}$ defined on $\mathbb{S}$, the set of smooth functions on the complex plane. A question of great interest is whether or not $\sup_{\mathbb{S}} I_{p} \leq 0$. Since the functional $I_{p}$ is homogeneous of degree $p$, it is very surprising that proving or disproving $\sup_{\mathbb{S}} I_{p} \leq 0$ remains a challenge. If $\sup_{\mathbb{S}} I_{p} \leq 0$, then the so-called Iwaniec conjecture on the Beurling–Ahlfors transform in harmonic analysis would hold. If instead $\sup_{\mathbb{S}} I_{p} > 0$, then the so-called Morrey conjecture in elasticity theory would hold. Proving or disproving the inequality is therefore important either way. Since the computational capacity of computers has increased exponentially over the past decades, it is natural to hope that computational math could help settle both conjectures at once.

14:15-14:55 Matthias Morzfeld (UCSD)

Markov chain Monte Carlo and high-dimensional, nonlinear inverse problems in Earth Science

Earth science nearly always requires estimating models, or model parameters, from data. This could mean to infer the state of the southern ocean from ARGO floats, to compute the state of our atmosphere based on atmospheric observations of the past six hours, or to construct a resistivity model of the Earth’s subsurface from electromagnetic data. All these problems have in common that the number of unknowns is large (millions to hundreds of millions) and that the underlying processes are nonlinear. The problems also all have in common that they can be formulated as the problem of drawing samples from a high-dimensional Bayesian posterior distribution.

Due to the nonlinearity, Markov chain Monte Carlo (MCMC) is a good candidate for the numerical solution of geophysical inverse problems. But MCMC is known to be slow when the number of unknowns is large. In this talk, I will argue that an unbiased solution of nonlinear, high-dimensional problems remains difficult, but one can construct efficient and accurate biased estimators that are feasible to apply to high-dimensional problems. I will show examples of biased estimators in action and invert electromagnetic data using an approximate MCMC sampling algorithm called the RTO-TKO (randomize-then-optimize -- technical-knock-out).
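For readers unfamiliar with MCMC, the textbook baseline is random-walk Metropolis, sketched below on a toy two-dimensional Gaussian posterior (my illustrative target and proposal scale; the talk's RTO-TKO is a far more elaborate, optimization-based sampler designed for the high-dimensional problems where this baseline becomes hopelessly slow).

```python
import numpy as np

# Bare-bones random-walk Metropolis sampler on a toy 2D Gaussian posterior
# (textbook baseline only; the talk's RTO-TKO is an optimization-based
# approximate sampler built for much harder, high-dimensional targets).
rng = np.random.default_rng(5)

def log_post(x):
    return -0.5 * np.sum(x**2)          # toy posterior: standard normal

x = np.zeros(2); lp = log_post(x); chain = []
for _ in range(20000):
    prop = x + 0.8 * rng.standard_normal(2)      # symmetric proposal
    lp_prop = log_post(prop)
    if np.log(rng.uniform()) < lp_prop - lp:     # Metropolis accept/reject
        x, lp = prop, lp_prop
    chain.append(x)
chain = np.array(chain[2000:])                   # discard burn-in

print(np.abs(chain.mean(0)).max() < 0.1)         # posterior mean ~ 0
print(abs(chain.var(0).mean() - 1.0) < 0.15)     # posterior variance ~ 1
```

The cost issue the abstract raises is visible even here: tens of thousands of likelihood evaluations for a 2D target, which is why unbiased sampling in millions of dimensions is infeasible and biased-but-efficient estimators become attractive.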

15:00-15:20 Conference Picture & Coffee Break, The Jeannie Auditorium and Sixth College East Lawn

Afternoon Contributed Sessions

(Contributed Session II)

Track 1

Ridge Walk Academic Complex 0121

Chair: Justin Marks (Biola U)

15:20-15:40 Jesús Abraham Rodríguez Arellano

Experimental Robust control of wheeled mobile robots through prescribed time, PID, and H_infinity methodologies with kinematic uncertainties

15:45-16:05 Pau Batlle

Frequentist Confidence Intervals via optimization: Resolving the Burrus conjecture

16:10-16:30 Justin Marks

Maximizing Stable Matches in the Stable Marriage Problem

15:20-15:40 Jesús Abraham Rodríguez Arellano

Experimental Robust control of wheeled mobile robots through prescribed time, PID, and H_infinity methodologies with kinematic uncertainties

Wheeled Mobile Robots (WMRs) are complex systems with various applications in industry and society, such as autonomous driving, exploration, and surveillance. Because these systems are inherently subject to different disturbances, such as skidding, slipping, noisy measurements, and state-estimation errors, recent advances in this area involve trajectory tracking and trajectory generation. Therefore, we have proposed solutions to the trajectory tracking problem by employing prescribed time, PID, and H_infinity robust control strategies, advancing this research topic. A trajectory generation method was proposed by merging intelligent and computer vision techniques to generate a feasible trajectory for a WMR by employing an onboard camera. These methodologies were experimentally tested using Matlab-Simulink, a Gazebo simulator, and a scaled autonomous car. These methods were then compared against existing controllers found in the literature. Our methodologies outperform others by achieving lower tracking errors and superior system responses in disturbed and undisturbed conditions. In addition, the trajectory generation methodology was assessed by employing the Gazebo Simulator; then, the resulting trajectory was fed into the controllers. The results demonstrated that the proposed approach for generating trajectories is viable for tracking in disturbed and undisturbed conditions.

15:45-16:05 Pau Batlle

Frequentist Confidence Intervals via optimization: Resolving the Burrus conjecture

We introduce a method for creating confidence intervals with frequentist guarantees for functionals in constrained inverse problems. The endpoints of the intervals are obtained by solving optimization problems. This family of methods, with literature for the Gaussian-linear case dating back to the 1960s, allows for uncertainty quantification in ill-posed inverse problems without needing a prior. We show that previously proposed intervals can be understood as coming from a likelihood ratio test inversion, and we use this connection to disprove a long-standing conjecture proposed by Burrus in 1965 and to generalize the intervals beyond the original Gaussian-linear model.

16:10-16:30 Justin Marks

Maximizing Stable Matches in the Stable Marriage Problem

The Stable Marriage Problem of order n, whose study was recognized with a Nobel Prize, involves matching n men with n women. The central data structure is an n × n matrix of ordered pairs of integers in the range 1 to n, called a ranking table, representing the preferences of the participants. Using a Monte Carlo tree search with the software MiniZinc, we seek ranking tables that maximize the number of stable matchings for orders n = 6 and n = 7; the maximum number of stable matchings is known for orders n = 1, ..., 5. Practical applications include roommate matching, residency matching for graduating medical students, and more. This research sits at the intersection of combinatorics, statistics, and computational algorithm design, and extends the work of beloved Biola University mathematics professor emeritus Ed Thurber.
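
As a self-contained illustration of the underlying counting problem (a hypothetical example, not code from the talk), the stable matchings of a small ranking table can be counted by brute force; the order-3 cyclic instance below attains the known maximum of 3 stable matchings for n = 3.

```python
from itertools import permutations

def is_stable(match, men_pref, women_pref):
    """match[m] = woman matched to man m; preference lists run best-to-worst."""
    husband = {w: m for m, w in enumerate(match)}
    n = len(match)
    for m in range(n):
        for w in range(n):
            if w == match[m]:
                continue
            # (m, w) block the matching if each prefers the other to their partner
            if (men_pref[m].index(w) < men_pref[m].index(match[m])
                    and women_pref[w].index(m) < women_pref[w].index(husband[w])):
                return False
    return True

def count_stable_matchings(men_pref, women_pref):
    n = len(men_pref)
    return sum(is_stable(p, men_pref, women_pref) for p in permutations(range(n)))

# order-3 cyclic ranking table, which attains the maximum of 3 stable matchings
men_pref = [[0, 1, 2], [1, 2, 0], [2, 0, 1]]
women_pref = [[1, 2, 0], [2, 0, 1], [0, 1, 2]]
n_stable = count_stable_matchings(men_pref, women_pref)
```

Brute force is only viable for very small n, which is precisely why guided search such as Monte Carlo tree search is needed at n = 6 and n = 7.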

Track 2

Ridge Walk Academic Complex 0115

Chair: Yousaf Habib (UCSD)

15:20-15:40 Sankaran Ramanarayanan

Uncovering the physics of vibration-induced gaseous lubrication: A testament to the enduring utility of classical perturbation methods

15:45-16:05 Yousaf Habib

Unraveling the B-Series Tapestry: Group Theory, Graph Theory, and Numerical Analysis of Differential Equations

16:10-16:30 Cuncheng Zhu

Active nematic fluids on Riemannian 2-manifolds

15:20-15:40 Sankaran Ramanarayanan

Uncovering the physics of vibration-induced gaseous lubrication: A testament to the enduring utility of classical perturbation methods

This presentation outlines an exercise in the application of singular perturbation theory to a contemporary problem in continuum mechanics.

Consider a rigid plate vibrating along its normal axis near a parallel wall, inducing oscillatory airflow in the thin film of air separating the two. A time-averaged overpressure is generated within the film, providing a steady force that repels the plate from the wall. For decades, this “squeeze-film” effect (SFE) has been exploited as a means of bearing lubrication. Interestingly, recent studies report that carefully reducing the plate stiffness and oscillation frequency produces a transition to attractive forces, of great interest in the design of wall-climbing robots and contactless grippers.

SFE belongs to a family of oscillatory fluid flows that involve steady streaming, a constant time-averaged flow field generated due to non-sinusoidal motion of a bounding surface and/or the nonlinear effects of convective inertia and gaseous compressibility. It is well known that computation of the steady flow can be simplified substantially with use of asymptotic methods that exploit the limit of relatively small oscillations.

The problem of SFE is approached here with the initial assumption of time-sinusoidal, uniform-amplitude plate vibration. The method of matched asymptotic expansions is used to relate the gradual variation of pressure along the film with the rapid variation existing across a small region surrounding its perimeter. The reduced formulation allows efficient delineation of the parametric conditions required for a transition to attraction, and exhibits favorable agreement with high-fidelity computational simulations. The formulation is then generalized incrementally to account for (i) elastic deformations of the plate, (ii) non-sinusoidal deformation due to fluid–structure coupling, and (iii) laterally asymmetrical deformations that induce additionally a locomotive force. Results provide detailed insights into the physical mechanisms underlying the attractive squeeze-film force and may guide its practical implementation in the near future.

15:45-16:05 Yousaf Habib

Unraveling the B-Series Tapestry: Group Theory, Graph Theory, and Numerical Analysis of Differential Equations

The B-series method has long been an indispensable tool in the numerical analysis of differential equations, offering a systematic approach to approximating solutions through series expansions. In this talk, we delve deep into the intricate interplay between the B-series method, group theory, and graph theory. By examining the underlying structures and connections, we uncover fascinating insights into how these seemingly disparate areas of mathematics converge to enhance our understanding and computational capabilities in solving differential equations.
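
As a concrete taste of the rooted-tree connection (an illustration, not material from the talk), each rooted tree yields one algebraic order condition on a Runge-Kutta tableau; the sketch below checks the eight conditions through order 4 for the classical RK4 method.

```python
import numpy as np

# Butcher tableau of the classical 4th-order Runge-Kutta method
A = np.array([[0, 0, 0, 0],
              [1/2, 0, 0, 0],
              [0, 1/2, 0, 0],
              [0, 0, 1, 0]])
b = np.array([1/6, 1/3, 1/3, 1/6])
c = A.sum(axis=1)

# B-series order conditions up to order 4, one per rooted tree
conditions = {
    "tau (order 1)":         (b.sum(),           1),
    "[tau] (order 2)":       (b @ c,             1/2),
    "[tau,tau] (order 3)":   (b @ c**2,          1/3),
    "[[tau]] (order 3)":     (b @ (A @ c),       1/6),
    "[tau^3] (order 4)":     (b @ c**3,          1/4),
    "[tau,[tau]] (order 4)": (b @ (c * (A @ c)), 1/8),
    "[[tau,tau]] (order 4)": (b @ (A @ c**2),    1/12),
    "[[[tau]]] (order 4)":   (b @ (A @ (A @ c)), 1/24),
}
ok = all(np.isclose(lhs, rhs) for lhs, rhs in conditions.values())
```

The number of conditions grows with the number of rooted trees of each order, which is where the graph-theoretic and group-theoretic structure of B-series enters.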

16:10-16:30 Cuncheng Zhu

Active nematic fluids on Riemannian 2-manifolds

Recent advances in cell biology and experimental techniques using reconstituted cell extracts have generated significant interest in understanding how geometry and topology influence active fluid dynamics. In this work, we present a comprehensive continuous theory and computational method to explore the dynamics of active nematic fluids on arbitrary surfaces without topological constraints. The fluid velocity and nematic order parameter are represented as the sections of the complex line bundle of a 2-manifold. We introduce the Levi-Civita connection and surface curvature form within the framework of complex line bundles. By adopting this geometric approach, we introduce a gauge-invariant discretization method that preserves the continuous local-to-global theorems in differential geometry. We establish a nematic Laplacian on complex functions that can accommodate fractional topological charges through the covariant derivative on the complex nematic representation. We formulate advection of the nematic field based on a unifying definition of the Lie derivative, resulting in a stable geometric semi-Lagrangian discretization scheme for transport by the flow. In general, the proposed surface-based method offers an efficient and stable means to investigate the influence of local curvature and global topology on the 2D hydrodynamics of active nematic systems. Moreover, the complex line representation of the nematic field and the unifying Lie advection present a systematic approach for generalizing our method to active k-atic systems.

Track 3

Ridge Walk Academic Complex 0104

Chair: Ray Zirui Zhang (UCI)

15:20-15:40 Ray Zirui Zhang

BiLO: Bilevel Local Operator learning for PDE inverse problems

15:45-16:05 Yimeng Zhang

A neural network kernel decomposition for learning multiple steady states in parameterized dynamical systems

16:10-16:30 Johnny (Jingze) Li

Quantifying Emergence through Homological Algebra and Its Applications to Artificial and Biological Neural Networks

15:20-15:40 Ray Zirui Zhang

BiLO: Bilevel Local Operator learning for PDE inverse problems

We propose a new neural network based method for solving inverse problems for partial differential equations (PDEs) using Bi-level Local Operator (BiLO) Learning.

We formulate the PDE inverse problem as a bilevel optimization problem. At the upper level, we minimize the data loss with respect to the PDE parameters. At the lower level, we train a neural network to locally approximate the PDE solution operator at the given PDE parameters, which enables approximation of the descent direction for the upper-level optimization problem. The lower-level loss function includes the L2 norms of both the residual and its derivative with respect to the PDE parameters. We apply gradient descent simultaneously on both the upper- and lower-level optimization problems, leading to an effective and fast algorithm. The method is extended to infer unknown functions in the PDEs through an auxiliary variable. We demonstrate that our method enforces strong PDE constraints, is robust to sparse and noisy data, and eliminates the need to balance the residual and the data loss, which is required by soft PDE constraints in the Physics-Informed Neural Networks (PINNs) framework.
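
The simultaneous upper/lower descent idea can be illustrated on a toy scalar problem (a minimal sketch with invented quadratic losses, not the paper's PDE setup): the lower level tracks a local surrogate of the solution, and the upper level fits the data through that surrogate.

```python
# Toy bilevel sketch (illustration only): the lower level makes w track a
# "local solution" w ≈ u(theta); the upper level fits the datum u_data.
u_data = 1.0
w, theta = 0.0, -2.0     # lower-level surrogate and "PDE parameter"
lr = 0.1
for _ in range(500):
    grad_w = 2.0 * (w - theta)     # d/dw of the lower-level loss (w - theta)^2
    # upper-level gradient of (w - u_data)^2, using dw/dtheta ≈ 1 from w ≈ theta
    grad_theta = 2.0 * (w - u_data)
    w, theta = w - lr * grad_w, theta - lr * grad_theta
```

Both variables converge to 1, mimicking how the descent direction for the parameters is taken through the lower-level approximation rather than through a soft PDE penalty.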

15:45-16:05 Yimeng Zhang

A neural network kernel decomposition for learning multiple steady states in parameterized dynamical systems

We develop a machine learning approach to identifying parameters with steady-state solutions, locating such solutions, and determining their linear stability for systems of ordinary differential equations and dynamical systems with parameters. Our approach begins with the construction of target functions that can be used to identify parameters with steady-state solutions and the linear stability of such solutions. We design a parameter-solution neural network (PSNN) that couples a parameter neural network and a solution neural network to approximate the target function, and develop efficient algorithms to train the PSNN and to locate steady-state solutions. We also present a theory of approximation of the target function by our PSNN based on the neural network kernel decomposition. Numerical results are reported to show that our approach is robust in identifying the phase boundaries separating different regions in the parameter space corresponding to no solution or different numbers of solutions and in classifying the stability of solutions. These numerical results also validate our analysis. Although the primary focus in this study centers on steady states of parameterized dynamical systems, our approach is applicable generally to finding solutions for parameterized nonlinear systems of algebraic equations. Some potential improvements and future work are discussed.

16:10-16:30 Johnny (Jingze) Li

Quantifying Emergence through Homological Algebra and Its Applications to Artificial and Biological Neural Networks

Emergent effects are crucial to understanding the properties of complex systems that do not appear in their basic units, but theories to measure emergence and to explain its mechanisms have been lacking. In this paper, we established a framework based on Adam (2017) that uses homological algebra to encode emergence as cohomology, and then applied it to network models to develop a computational measure of emergence. This framework ties the emergence of a system to its network topology and local structures, paving the way to predicting and understanding the causes of emergent effects. We also show how this measure of emergence can guide the design of network architectures with better performance.

Track 4

Mosaic 0204

Chair: Yizhe Zhu (UCI)

15:20-15:40 Yizhe Zhu

Kernel Ridge Regression in the Quadratic Regime

15:45-16:05 Max Collins

On the Concentration and Variance of Randomized Iterative Methods

16:10-16:30 Nikki Kuang

Posterior sampling with delayed feedback for reinforcement learning

15:20-15:40 Yizhe Zhu

Kernel Ridge Regression in the Quadratic Regime

Kernel regressions are a popular class of machine learning models that have become an important tool for understanding deep learning. Random matrix theory has allowed us to understand the behavior of certain kernel matrices for high-dimensional data. Much of the focus has been on studying the proportional asymptotic regime, $n \asymp d$, where $n$ is the number of training samples and $d$ is the dimension of the dataset. In this regime, under certain conditions on the data distribution, the kernel matrix behaves like a linear kernel. In this work, we extend this study to the quadratic asymptotic regime, where $n\asymp d^2$. We show that in this regime, a large class of inner product kernels behave like a quadratic kernel. Specifically, we show an operator norm bound on the kernel matrix generated by the original kernel and the quadratic kernel on the training dataset under a moment-matching assumption on the data distribution compared to the Gaussian distribution with a covariance structure. The new approximation results are used to characterize the asymptotic training and test error for kernel ridge regression in the quadratic regime.
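
For readers who want to experiment, kernel ridge regression with an inner-product kernel of the kind studied here fits in a few lines (a generic sketch on synthetic Gaussian data, not the paper's experiments).

```python
import numpy as np

rng = np.random.default_rng(0)

def krr_fit_predict(K_train, y, K_test_train, lam):
    """Kernel ridge regression: predictions K_test_train @ (K + n*lam*I)^{-1} y."""
    n = K_train.shape[0]
    alpha = np.linalg.solve(K_train + n * lam * np.eye(n), y)
    return K_test_train @ alpha

# inner-product kernel k(x, x') = (1 + <x, x'>/d)^2 on Gaussian data
d, n = 10, 200
X = rng.standard_normal((n, d))
y = X[:, 0] * X[:, 1] + 0.1 * rng.standard_normal(n)   # a quadratic target
K = (1.0 + X @ X.T / d) ** 2
y_hat = krr_fit_predict(K, y, K, 1e-3)                 # in-sample predictions
```

Because the target is quadratic, it lies in the feature space of this kernel, and the in-sample fit is far better than predicting the mean; the talk's asymptotics concern the regime where n grows like d².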

15:45-16:05 Max Collins

On the Concentration and Variance of Randomized Iterative Methods

Stochastic iterative methods are useful in a variety of large-scale numerical linear algebraic, machine learning, and statistical problems, in part due to their low-memory footprint. They are frequently used in a variety of applications, and thus it is imperative to have a thorough theoretical understanding of their behavior. For stochastic methods, this motivates providing bounds on the variance and concentration of their error, which can be used to generate confidence intervals around the bounds on their expected error. In this talk, we provide both upper and lower bounds for the concentration of the error and an upper bound on the variance of the error of a general class of stochastic iterative methods, including the randomized Kaczmarz method and the randomized Gauss-Seidel method.
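
A minimal sketch of one method in this class, the randomized Kaczmarz method with the standard squared-row-norm sampling, is:

```python
import numpy as np

def randomized_kaczmarz(A, b, iters=5000, seed=0):
    """Randomized Kaczmarz: repeatedly project the iterate onto the solution
    hyperplane of one row, sampled with probability proportional to its
    squared norm (the Strohmer-Vershynin sampling scheme)."""
    rng = np.random.default_rng(seed)
    m, n = A.shape
    row_norms2 = np.einsum('ij,ij->i', A, A)
    rows = rng.choice(m, size=iters, p=row_norms2 / row_norms2.sum())
    x = np.zeros(n)
    for i in rows:
        x += (b[i] - A[i] @ x) / row_norms2[i] * A[i]
    return x

rng = np.random.default_rng(1)
A = rng.standard_normal((100, 20))
x_true = rng.standard_normal(20)
x_hat = randomized_kaczmarz(A, A @ x_true)   # consistent system
```

Each iterate is random, so the error is a random variable; the bounds discussed in the talk quantify how tightly it concentrates around its expected decay.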

16:10-16:30 Nikki Kuang

Posterior sampling with delayed feedback for reinforcement learning

Recent studies in reinforcement learning (RL) have made significant progress by leveraging function approximation to alleviate the sample complexity hurdle for better performance. Despite the success, existing provably efficient algorithms typically rely on the accessibility of immediate feedback upon taking actions. The failure to account for the impact of delay in observations can significantly degrade the performance of real-world systems due to the regret blow-up. In this work, we tackle the challenge of delayed feedback in RL with linear function approximation by employing posterior sampling, which has been shown to empirically outperform the popular UCB algorithms in a wide range of regimes. We first introduce \textit{Delayed-PSVI}, an optimistic value-based algorithm that effectively explores the value function space via noise perturbation with posterior sampling. We provide the first analysis for posterior sampling algorithms with delayed feedback in RL and show our algorithm achieves $\widetilde{O}(\sqrt{d^3H^3 T} + d^2H^2 \mathbb{E}[\tau])$ worst-case regret in the presence of unknown stochastic delays. Here $\mathbb{E}[\tau]$ is the expected delay. To further improve its computational efficiency and to expand its applicability in high-dimensional RL problems, we incorporate a gradient-based approximate sampling scheme via Langevin dynamics for \textit{Delayed-LPSVI}, which maintains the same order-optimal regret guarantee with $\widetilde{O}(dHK)$ computational cost. Empirical evaluations are performed to demonstrate the statistical and computational efficacy of our algorithms.

Track 5

The Jeannie Auditorium

Chair: Badal Joshi (CSUSM)

15:20-15:40 Siyang Wei

Age-structured models of opinion dynamics: using data to uncover the mechanisms underlying decadal trends in opinion spread

15:45-16:05 Badal Joshi

Chemical mass-action systems as analog computers: implementing arithmetic computations at specified speed

16:10-16:30 Mykhailo Potomkin

Traffic jams in motor protein transport along inhomogeneous microtubules

15:20-15:40 Siyang Wei

Age-structured models of opinion dynamics: using data to uncover the mechanisms underlying decadal trends in opinion spread

The distribution of opinions among people constantly evolves over time. This can be seen, for example, from the American National Election Studies (ANES) database, which contains results of surveys on public opinions that have been conducted regularly since 1948. While the statistics of the opinions can be determined from this longitudinal dataset, the laws that govern opinion spread and the driving forces of opinion change are largely unknown. In order to shed light on some of these questions, we built a class of discrete models for age-structured opinion dynamics, which depend on the interaction matrices among different age groups and the probability that individuals change their opinion state based on the attractiveness of the opinions. We fit different attractiveness models to the ANES survey data, focusing on 15 yes-or-no questions and 22 “thermometer” questions in the database. Interestingly, the most powerful model for a large majority of questions is based on an age-dependent opinion attractiveness, which reflects cohort (internal) effects, as opposed to time-dependent attractiveness, which would reflect period (external) forces. Further, we found that in all the questions investigated, there is a significant positive correlation between age and the polarization of opinions, with at least one polarizing transition rate—from a neutral opinion to either a positive or negative stance—increasing with age. This methodology provides a quantitative way to uncover the mechanisms underlying decadal trends in opinion dynamics.

15:45-16:05 Badal Joshi

Chemical mass-action systems as analog computers: implementing arithmetic computations at specified speed

The broad goal of the nascent field of chemistry-based computation is to implement computation in a wet cellular environment using the available materials inside a cell. Recent technological advances, such as DNA-strand displacement, enable implementing arbitrary nonlinear chemical reaction networks in a cell. This allows us to view chemical mass-action systems as a programming language for analog computation. The inputs to the computation are encoded as initial values of certain chemical species and the outputs are the limiting values of other chemical species.

There are numerous works that design reaction networks that carry out basic arithmetic. However, in general, these constructions have not accounted for speed of computation (i.e. the rate of convergence). This often results in computational speed depending on the inputs to the computation, making it unusably slow. In this talk, I will discuss how we designed a full suite of “elementary” chemical systems that carry out arithmetic computations (such as inversion, addition, roots, multiplication, rectified subtraction, absolute difference, etc.) over the real numbers, and that have speeds of computation that are independent of the inputs to the computations. Moreover, we proved that finite sequences of such elementary modules, running in parallel, can carry out composite arithmetic over real numbers, also at a rate that is independent of inputs. I will close with a number of open questions and directions for future work. The relevant paper can be found here: https://arxiv.org/abs/2404.04396
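
A standard textbook mass-action construction (an illustration, not one of the talk's input-independent-speed modules; see the linked paper for those) already computes a product at steady state, and a forward-Euler simulation confirms the limiting value.

```python
# Mass-action "multiplier" (textbook construction):
#   X1 + X2 -> X1 + X2 + Y   (rate constant k; the inputs act catalytically)
#   Y -> 0                   (rate constant k)
# Mass-action kinetics give dy/dt = k*(x1*x2 - y), so y converges to x1*x2.
k, x1, x2 = 1.0, 2.0, 3.0
y, dt = 0.0, 0.01
for _ in range(3000):          # forward-Euler integration to t = 30
    y += dt * k * (x1 * x2 - y)
```

The inputs are encoded as initial (here, held) concentrations and the output as the limiting concentration of Y, exactly the input/output convention described above.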

16:10-16:30 Mykhailo Potomkin

Traffic jams in motor protein transport along inhomogeneous microtubules

Microtubule networks are key to the transport of material inside a biological cell. It was hypothesized that defects of active transport along microtubules may be related to many neurodegenerative diseases such as Alzheimer’s disease and Amyotrophic Lateral Sclerosis. One area of immediate need is the scenario where the microtubule used by motor proteins becomes congested, obstructed, or defective. In this talk, I will present the agent-based model of motor protein transport with an inhomogeneity describing such defects. First, I will show how the mean-field partial differential equation description was derived from the agent-based model using multi-scale analysis. Next, an analytic approach to the solution of the derived boundary-value problem will be presented. Finally, I will compare the results of Monte-Carlo simulations with analytic solutions. This work was done jointly with Shawn D. Ryan (Cleveland State University), Zachary McCarthy (York University, Canada), and Chase Evans (UC Riverside).
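
The congestion phenomenon can be reproduced with a few lines of agent-based simulation (a generic TASEP-with-a-defect sketch, not the speaker's model): a single slow site creates a high-density traffic jam upstream and a depleted region downstream.

```python
import numpy as np

def tasep_with_defect(L=200, steps=500_000, p_slow=0.2, seed=0):
    """Open-boundary TASEP with one slow ("defective") site at L//2.
    Particles enter on the left when site 0 is empty, hop right into empty
    sites, and leave from the right; hops out of the defect succeed with
    probability p_slow instead of 1."""
    rng = np.random.default_rng(seed)
    lattice = np.zeros(L, dtype=bool)
    defect = L // 2
    sites = rng.integers(-1, L, size=steps)   # -1 encodes an injection attempt
    coins = rng.random(steps)
    for i, u in zip(sites, coins):
        if i == -1:
            lattice[0] = True                 # inject (no-op if occupied)
        elif i == L - 1:
            lattice[i] = False                # exit (no-op if empty)
        elif lattice[i] and not lattice[i + 1]:
            if i != defect or u < p_slow:
                lattice[i], lattice[i + 1] = False, True
    return lattice

lat = tasep_with_defect()
upstream, downstream = lat[:100].mean(), lat[100:].mean()
```

The mean-field PDE description mentioned in the talk is derived from exactly this kind of microscopic hopping model.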

Track 6

Ridge Walk Academic Complex 0103

Chair: Sam Shen (SDSU)

15:20-15:40 Theo Bourdais

Computational Hypergraph Discovery

15:45-16:05 Sam Shen

4D space-time data visualization tools and AI workforce development

16:10-16:30 Haixiao Wang

Unlocking Exact Recovery in Semi-Supervised Learning: Analysis of Spectral Method and Graph Convolution Network

15:20-15:40 Theo Bourdais

Computational Hypergraph Discovery

Most scientific challenges can be framed as one of the following three levels of complexity of function approximation. Type 1: Approximate an unknown function given input/output data. Type 2: Consider a collection of variables and functions, some of which are unknown, indexed by the nodes and hyperedges of a hypergraph (a generalized graph where edges can connect more than two vertices). Given partial observations of the variables of the hypergraph (satisfying the functional dependencies imposed by its structure), approximate all the unobserved variables and unknown functions. Type 3: Expanding on Type 2, if the hypergraph structure itself is unknown, use partial observations of the variables of the hypergraph to discover its structure and approximate its unknown functions. While most Computational Science and Engineering and Scientific Machine Learning challenges can be framed as Type 1 and Type 2 problems, many scientific problems can only be categorized as Type 3. Despite their prevalence, these Type 3 challenges have been largely overlooked due to their inherent complexity. Although Gaussian Process (GP) methods are sometimes perceived as well-founded but old technology limited to Type 1 curve fitting, their scope has recently been expanded to Type 2 problems.

We introduce an interpretable GP framework for Type 3 problems, targeting the data-driven discovery and completion of computational hypergraphs. Our approach is based on a kernel generalization of (1) Row Echelon Form reduction from linear systems to nonlinear ones and (2) variance-based analysis. Here, variables are linked via GPs, and those contributing to the highest data variance unveil the hypergraph’s structure. We illustrate the scope and efficiency of the proposed approach with applications to network discovery (gene pathways, chemical, and mechanical), and raw data analysis.

15:45-16:05 Sam Shen

4D space-time data visualization tools and AI workforce development

This presentation shows some products generated by the NSF ExpAI2ES project (NSF Award #2324008), which is part of the AI Institute for Research on Trustworthy AI in Weather, Climate, and Coastal Oceanography. It is a project of research, educational activities, and workforce development in basic AI research and AI applications to Environmental Science (AI2ES) at two Hispanic-Serving Institutions: San Diego State University and UC Irvine. The research develops 4-dimensional data visualization tools, such as the 4-dimensional visual delivery (4DVD) software, that can support a wider application of progressive education pedagogy for mathematics, atmospheric sciences, and AI. These research and educational advances are necessary to effectively address imminent challenges in climate change, environmental sustainability, and costly weather extremes and hazards, while engaging a diverse and untapped pool of talent. The Progressive Education for Atmospheric Science (PEAS) framework established in this project will help develop new course material and improve the educational offerings of AI courses. The engagement with real environmental data, combined with the modern video-based 4DVD AI tools, will inspire students toward STEM careers working on the pressing problems of climate science and environmental sustainability. We will demonstrate how to use 4DVD as a tool for teaching courses in mathematics, statistics, and other subjects, such as history and literature: www.4dvd.org.

16:10-16:30 Haixiao Wang

Unlocking Exact Recovery in Semi-Supervised Learning: Analysis of Spectral Method and Graph Convolution Network

We delve into the challenge of semi-supervised node classification on the Contextual Stochastic Block Model (CSBM) dataset. Here, nodes from the two-cluster stochastic block model (SBM) are coupled with feature vectors, which are derived from a Gaussian Mixture Model (GMM) that corresponds to their respective node labels. With only a subset of the CSBM node labels accessible for training, our primary objective becomes the accurate classification of the remaining nodes. Venturing into the transductive learning landscape, we, for the first time, pinpoint the information-theoretical threshold for the exact recovery of all test nodes in CSBM. Concurrently, we design an optimal spectral estimator inspired by Principal Component Analysis (PCA) with the training labels and essential data from both the adjacency matrix and feature vectors. We also evaluate the efficacy of graph ridge regression and Graph Convolutional Networks (GCN) on this synthetic dataset. Our findings underscore that graph ridge regression and GCN possess the ability to achieve the information threshold of exact recovery in a manner akin to the optimal estimator when using the optimal weighted self-loops. This highlights the potential role of feature learning in augmenting the proficiency of GCN, especially in the realm of semi-supervised learning.
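
A simplified, fully unsupervised version of the spectral idea (a sketch on a plain two-community SBM without feature vectors or training labels, unlike the talk's semi-supervised CSBM setting) recovers communities from the sign pattern of the second adjacency eigenvector.

```python
import numpy as np

rng = np.random.default_rng(0)
n, p, q = 400, 0.12, 0.02              # two communities of size n/2
labels = np.repeat([0, 1], n // 2)
P = np.where(labels[:, None] == labels[None, :], p, q)
upper = np.triu(rng.random((n, n)) < P, 1)
A = (upper | upper.T).astype(float)    # symmetric adjacency, no self-loops

# communities from the sign of the second-largest adjacency eigenvector
vals, vecs = np.linalg.eigh(A)
pred = (vecs[:, -2] > 0).astype(int)
acc = max(np.mean(pred == labels), np.mean(pred != labels))
```

The talk's optimal estimator additionally exploits the Gaussian features and the revealed training labels, which is what pushes partial recovery like this up to the exact-recovery threshold.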

16:35-16:45 Coffee Break, The Jeannie Auditorium

Afternoon Contributed Sessions

(Contributed Session III)

Track 1

Ridge Walk Academic Complex 0121

Chair: Xianjin Yang (Caltech)

16:45-17:05 Haoyu Zhang

An interacting particle consensus method for constrained global optimization

17:10-17:30 Lisang Ding

Efficient Algorithms for Sum-of-Minimum Optimization

17:35-17:55 Xianjin Yang

Decoding mean field games from population and environment observations by Gaussian processes

16:45-17:05 Haoyu Zhang

An interacting particle consensus method for constrained global optimization

This talk presents a particle-based optimization method designed for addressing minimization problems with equality constraints, particularly in cases where the loss function exhibits non-differentiability or non-convexity. A rigorous mean-field limit of the particle system is derived, and the convergence of the mean-field limit to the constrained minimizer is established.
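
A minimal unconstrained sketch of consensus-based dynamics of this flavor (with anisotropic noise and an invented rippled toy objective; the talk's equality-constrained method differs) is:

```python
import numpy as np

def cbo_minimize(f, dim=2, N=100, steps=400, dt=0.05, lam=1.0,
                 sigma=0.5, beta=50.0, seed=0):
    """Consensus-based optimization sketch: particles drift toward a
    Gibbs-weighted consensus point and carry anisotropic, shrinking noise."""
    rng = np.random.default_rng(seed)
    X = rng.uniform(-3.0, 3.0, size=(N, dim))
    for _ in range(steps):
        fx = f(X)
        w = np.exp(-beta * (fx - fx.min()))            # Gibbs weights
        x_bar = (w[:, None] * X).sum(0) / w.sum()      # consensus point
        xi = rng.standard_normal(X.shape)
        X += -lam * dt * (X - x_bar) + sigma * np.sqrt(dt) * (X - x_bar) * xi
    return x_bar

# non-differentiability-tolerant toy objective: rippled quadratic, minimum at (1, 1)
f = lambda X: np.sum((X - 1.0) ** 2, axis=1) \
    + 0.1 * np.sum(1.0 - np.cos(2.0 * np.pi * (X - 1.0)), axis=1)
x_star = cbo_minimize(f)
```

Only objective evaluations are used, no gradients, which is why such methods tolerate non-differentiable and non-convex losses; the mean-field limit mentioned in the talk describes the N-to-infinity behavior of exactly this particle system.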

17:10-17:30 Lisang Ding

Efficient Algorithms for Sum-of-Minimum Optimization

In this work, we propose a novel optimization model termed "sum-of-minimum" optimization. This model computes the sum (or average) of $N$ values where each is the minimum result obtained by applying a separate objective function to the same set of $k$ variables, and the model seeks to minimize the sum over those $k$ variables. This is a clustering model that unifies many applications in machine learning and related fields.

We develop efficient algorithms for sum-of-minimum optimization, motivated by a randomized initialization algorithm for classic $k$-means and Lloyd's algorithm. We establish a new tight bound for the generalized initialization algorithm and prove a gradient-descent-like convergence rate for the generalized Lloyd's algorithm.

The efficiency of our algorithms is numerically examined on multiple tasks including generalized principal component analysis, mixed linear regression, and small-scale neural network training. Our approach compares favorably to previous ones that are based on simpler but less-precise optimization reformulations.
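
The k-means special case, where each objective is f_i(theta) = ||x_i - theta||^2, illustrates the sum-of-minimum structure and both algorithmic ingredients (a generic sketch, not the paper's generalized algorithms):

```python
import numpy as np

def lloyd_sum_of_min(X, k=3, iters=20, seed=0):
    """Lloyd-style alternation for the sum-of-minimum objective
    (1/N) * sum_i min_j ||x_i - theta_j||^2, i.e. k-means, with a
    k-means++-style distance-biased random initialization."""
    rng = np.random.default_rng(seed)
    N = X.shape[0]
    centers = [X[rng.integers(N)]]
    for _ in range(k - 1):                        # distance-biased seeding
        d2 = np.min([np.sum((X - c) ** 2, axis=1) for c in centers], axis=0)
        centers.append(X[rng.choice(N, p=d2 / d2.sum())])
    theta = np.array(centers)
    obj = []
    for _ in range(iters):
        d2 = ((X[:, None, :] - theta[None, :, :]) ** 2).sum(-1)
        assign = d2.argmin(axis=1)                # the inner "min" step
        obj.append(d2.min(axis=1).mean())         # sum-of-minimum objective
        for j in range(k):                        # per-group minimization
            if np.any(assign == j):
                theta[j] = X[assign == j].mean(axis=0)
    return theta, obj

rng = np.random.default_rng(1)
X = np.concatenate([rng.normal(m, 0.3, size=(50, 2)) for m in (0.0, 3.0, 6.0)])
theta, obj = lloyd_sum_of_min(X)
```

Replacing the squared-distance f_i with arbitrary per-datum objectives gives the general sum-of-minimum model; the monotone decrease of the objective is the Lloyd-type property whose convergence rate the work quantifies.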

17:35-17:55 Xianjin Yang

Decoding mean field games from population and environment observations by Gaussian processes

This talk presents a Gaussian Process (GP) framework, a non-parametric technique widely acknowledged for regression and classification tasks, to address inverse problems in mean field games (MFGs). By leveraging GPs, we aim to recover agents' strategic actions and the environment's configurations from partial and noisy observations of the population of agents and the setup of the environment. Our method is a probabilistic tool to infer the behaviors of agents in MFGs from data in scenarios where the comprehensive dataset is either inaccessible or contaminated by noises.
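
The GP regression machinery underlying such frameworks can be sketched in a few lines (standard posterior-mean formulas on a toy 1D problem, not the MFG inverse-problem setup itself):

```python
import numpy as np

def gp_posterior_mean(X, y, X_test, ell=0.5, noise=0.1):
    """Standard GP regression posterior mean with an RBF kernel."""
    def k(A, B):
        return np.exp(-(A[:, None] - B[None, :]) ** 2 / (2.0 * ell ** 2))
    K = k(X, X) + noise ** 2 * np.eye(len(X))
    return k(X_test, X) @ np.linalg.solve(K, y)

rng = np.random.default_rng(0)
X = np.linspace(0.0, 2.0 * np.pi, 30)
y = np.sin(X) + 0.05 * rng.standard_normal(30)    # partial, noisy observations
X_test = np.linspace(0.0, 2.0 * np.pi, 100)
m = gp_posterior_mean(X, y, X_test)
```

The same conditioning logic, applied to observations of the population and the environment, is what lets the framework infer unobserved MFG quantities probabilistically from noisy data.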

Track 2

Ridge Walk Academic Complex 0115

Chair: Scott Little (CPP)

16:45-17:05 Scott Little

Koopman Operator KAM Torus for D2-Brane DBI Action

17:10-17:30 Matthieu Darcy

Kernel methods for rough partial differential equations

17:35-17:55 Nhat Thanh

Fourier-Mixed Window Attention: An Application to Long Sequence Time Series

16:45-17:05 Scott Little

Koopman Operator KAM Torus for D2-Brane DBI Action

Koopman Operator Theory (KOT) is an ergodic theory and application of nonlinear dynamics based on the elegant theorem developed by Koopman and von Neumann in the 1930s. More recently, KOT has been incorporated into dynamic and chaotic systems theory by Mezić. Linear Koopman Operator Theory uses an infinite-dimensional state space to control a finite-dimensional nonlinear dynamic system. Stochastic string theory is referred to as “postmodern” string theory: the strings are treated not as discrete objects but as probabilistic spaces, to account for quantum uncertainties and nonlinear effects.

The first and second papers in this series contained a proof relating the Anti-de-Sitter Spacetime Conformal Field Theory Correspondence or AdS/CFT Duality to a Feynman-Kac stochastic string solution in Mellin Transform Space.

The third paper focused on a proof correlating the Stochastic Feynman-Kac AdS/CFT solution to the Boltzmann Machine. The fourth paper focused on a proof of the KOT KvN Integral coupled to the previous Feynman-Kac stochastic string solutions. This paper is a continuation of my work presented at the Southern California Applied Mathematics Symposium, UCI, April 23, 2023. We correlate the Koopman Operator to the AdS/CFT and Boltzmann Machine analogy, mapped to KAM-torus-wrapped D2-branes of the Dirac-Born-Infeld magnetic-field string action.

Applications included in this and previous papers are stochastic PDEs, Koopman nonlinear chaotic complexity, quantum gravity/stochastic strings, plasma/fusion, fluid dynamics, cloaking/electromagnetic black holes, cosmic strings/gravity waves, gamma rays/pulsars, machine learning-AI neural networks/Boltzmann Machines, AdS/CFT, the Feynman-Kac path integral, the Schrödinger equation, and Black-Scholes economics/finance.

17:10-17:30 Matthieu Darcy

Kernel methods for rough partial differential equations

Following the promising success of kernel methods in solving non-linear partial differential equations (PDEs), we investigate the application of Gaussian process methods to solve PDEs with rough right-hand side. We introduce an optimal recovery scheme defined by a Reproducing Kernel Hilbert Space (RKHS) of functions of greater regularity than that of the PDE’s solution.

We present the resulting theoretical framework and its convergence guarantees for the recovery of solutions to the PDE. We illustrate its application to problems arising from stochastic partial differential equations through numerical experiments.

17:35-17:55 Nhat Thanh

Fourier-Mixed Window Attention: An Application to Long Sequence Time Series

The attention mechanism is an essential component of the Transformer architecture, forming the backbone of numerous large language models such as ChatGPT. However, its quadratic complexity poses a hindrance to long-sequence time-series forecasting. In this talk, we introduce a fast local-global window-based attention method named FWin, which leverages window attention to mitigate the complexity, while employing the Fourier transform to incorporate global token information. Moreover, we provide a mathematical definition of FWin attention and demonstrate its equivalence to canonical full attention under certain conditions. Additional experiments illustrate the effectiveness of our model when applied to non-stationary data such as power-grid and dengue time series.
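
The local half of such a scheme, softmax attention restricted to non-overlapping windows (the Fourier-based global mixing is omitted; this is an illustration, not the FWin reference implementation), can be sketched as:

```python
import numpy as np

def window_attention(Q, K, V, w=4):
    """Softmax attention restricted to non-overlapping length-w windows,
    reducing the O(L^2) cost of full attention to O(L*w)."""
    L, d = Q.shape
    out = np.zeros_like(V)
    for s in range(0, L, w):
        q, k, v = Q[s:s + w], K[s:s + w], V[s:s + w]
        scores = q @ k.T / np.sqrt(d)
        scores -= scores.max(axis=1, keepdims=True)   # numerical stability
        weights = np.exp(scores)
        weights /= weights.sum(axis=1, keepdims=True) # rows sum to 1
        out[s:s + w] = weights @ v
    return out

rng = np.random.default_rng(0)
Q, K, V = (rng.standard_normal((16, 8)) for _ in range(3))
Y = window_attention(Q, K, V)
```

Each output position attends only within its window, which is what breaks the quadratic cost; the Fourier mixing then restores communication between windows.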

Track 3

Ridge Walk Academic Complex 0104

Chair: Yiwei Wang (UCR)

16:45-17:05 Jocelyn Ornelas-Munoz

From Observations to Theoretical Consistency: Decoder Recovery in Coded Aperture Imaging Using Convolutional Neural Networks

17:10-17:30 Yilan Chen

Analyzing Neural Networks through Equivalent Kernels

17:35-17:55 Yiwei Wang

Energetic Variational Neural Network Discretizations of Gradient Flows

16:45-17:05 Jocelyn Ornelas-Munoz

From Observations to Theoretical Consistency: Decoder Recovery in Coded Aperture Imaging Using Convolutional Neural Networks

Coded aperture imaging, crucial for low-light imaging in challenging conditions, requires specific decoders for image reconstruction. Traditional image reconstruction methods can be complex and focus on reconstruction rather than on learning the underlying decoder. Our work introduces a one-layer CNN for interpretable decoder recovery, without prior knowledge of the encoding or decoding arrays. From observed detector images, the network produces reconstructed images through a learned decoder. We train our network on the MNIST dataset and report high accuracy in image reconstruction even for images with a high signal-to-noise ratio. To validate the generalizability of the method, we show that the MNIST-trained, CNN-learned decoder accurately reconstructs images from the grayscale FashionMNIST dataset.

17:10-17:30 Yilan Chen

Analyzing Neural Networks through Equivalent Kernels

Recent research on deep learning theory has shown that ultra-wide neural networks (NNs) trained by gradient descent with squared loss are equivalent to kernel regression using the Neural Tangent Kernel (NTK). However, the current understanding is still limited in several ways: 1) The equivalence is only known for kernel regression, while the equivalence with other kernel machines, such as Support Vector Machines (SVMs), remains unknown. 2) Existing theoretical analysis focuses only on infinite-width or ultra-wide NNs, which deviate from practical neural network architectures. 3) The NTK, as a fixed kernel, lacks the ability for feature learning and has a performance gap compared to practical NNs.

In this talk, we present our work addressing these challenges: 1) We establish the equivalence between ultra-wide NNs and a family of L2-regularized kernel machines, including SVMs, going beyond the previously known kernel regression equivalence. 2) For finite-width practical NNs, we establish a new equivalence between the loss dynamics of their gradient flow and general kernel machines by proposing a new kernel, called the Loss Path Kernel (NN-LPK equivalence). 3) Based on the NN-LPK equivalence, we derive a new generalization upper bound that applies to general neural network architectures and can guide the design of neural architecture search (NAS) methods. Our findings represent significant advancements in the understanding of neural network dynamics and their connections to kernel machines, with important implications for both theoretical analysis and practical neural network design.
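
The empirical NTK behind these equivalences can be computed directly for a deliberately small network (the exact NN-kernel equivalence holds only in the ultra-wide limit; the width `H`, the numerical gradients, and the jitter below are illustrative choices, not the talk's construction):

```python
import numpy as np

rng = np.random.default_rng(0)
H = 16  # hidden width (tiny on purpose; the theory concerns the wide limit)

def f_flat(x, theta):
    """One-hidden-layer ReLU network, parameters packed into one vector."""
    W1 = theta[:H].reshape(H, 1)
    b1 = theta[H:2 * H]
    w2 = theta[2 * H:]
    return float(np.maximum(W1 @ x + b1, 0.0) @ w2)

def grad_theta(x, theta, eps=1e-5):
    """Central-difference gradient of the network output w.r.t. parameters."""
    g = np.zeros_like(theta)
    for i in range(theta.size):
        tp, tm = theta.copy(), theta.copy()
        tp[i] += eps
        tm[i] -= eps
        g[i] = (f_flat(x, tp) - f_flat(x, tm)) / (2 * eps)
    return g

theta = rng.standard_normal(3 * H) / np.sqrt(H)
X = np.linspace(-1.0, 1.0, 10)

# Empirical NTK Gram matrix: K[i, j] = <grad f(x_i), grad f(x_j)>.
G = np.stack([grad_theta(np.array([x]), theta) for x in X])
K = G @ G.T

# Kernel regression with this kernel (small jitter for stability).
y = np.sin(np.pi * X)
alpha = np.linalg.solve(K + 1e-6 * np.eye(len(X)), y)
pred = lambda x: grad_theta(np.array([x]), theta) @ G.T @ alpha
```

The Gram matrix is positive semidefinite by construction; the talk's point is that, at infinite width, training the network itself is equivalent to regression (and, in their results, other kernel machines) with this kernel held fixed.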

17:35-17:55 Yiwei Wang

Energetic Variational Neural Network Discretizations of Gradient Flows

Numerous applications in physics, material science, biology, and machine learning can be modeled as gradient flows. In this talk, we present a structure-preserving Eulerian algorithm for solving L2-gradient flows and a structure-preserving Lagrangian algorithm for solving generalized diffusions by employing neural networks as tools for spatial discretization. Unlike most existing methods that construct numerical discretizations based on the strong or weak form of the underlying PDE, the proposed schemes are constructed based on the energy-dissipation law directly. This guarantees the monotonic decay of the system's energy, which avoids unphysical states of solutions and is crucial for the long-term stability of numerical computations. To address challenges arising from nonlinear neural-network discretization, we adopt a temporal-then-spatial discretization approach on these variational systems. The proposed neural-network-based schemes are mesh-free, allowing us to solve gradient flows in high dimensions.

Track 4

Mosaic 0204

Chair: Matheus B Guerrero (CSUF)

16:45-17:05 Yiyun He

Online Differentially Private Synthetic Data Generation

17:10-17:30 Robert Webber

Novelty sampling for fast, effective data reduction

17:35-17:55 Matheus B Guerrero

Statistics of Extremes for Neuroscience: A New Lens for EEG Analysis and Brain Connectivity

16:45-17:05 Yiyun He

Online Differentially Private Synthetic Data Generation

We present a polynomial-time algorithm for online differentially private synthetic data generation. For a data stream within the hypercube $[0,1]^d$ and an infinite time horizon, we develop an online algorithm that generates a differentially private synthetic dataset at each time $t$. This algorithm achieves a near-optimal accuracy bound of $O(t^{-1/d}\log(t))$ for $d\geq 2$ and $O(t^{-1}\log^{4.5}(t))$ for $d=1$ in the 1-Wasserstein distance. This result generalizes the previous work on the continual release model for counting queries to include Lipschitz queries. Compared to the offline case, where the entire dataset is available at once, our approach requires only an extra polylog factor in the accuracy bound.
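
The continual-release counting setting that this work generalizes can be sketched with the classic binary (tree) mechanism; the implementation below is a standard textbook version, not the authors' algorithm:

```python
import numpy as np

def binary_mechanism(stream, epsilon, rng):
    """Continual-release counting via the binary (tree) mechanism: maintain
    noisy sums over dyadic blocks. Each stream element lands in at most
    L = ceil(log2 T) + 1 blocks, so Laplace noise of scale L/epsilon per
    released block gives epsilon-differential privacy for the whole output,
    with only polylog(T) error per prefix sum."""
    T = len(stream)
    L = int(np.ceil(np.log2(T))) + 1
    scale = L / epsilon
    alpha = np.zeros(L)   # exact sums of the currently open dyadic blocks
    noisy = np.zeros(L)   # their noisy published versions
    out = np.empty(T)
    for t, x in enumerate(stream, start=1):
        i = (t & -t).bit_length() - 1      # index of the lowest set bit of t
        alpha[i] = alpha[:i].sum() + x     # close smaller blocks into block i
        alpha[:i] = 0.0
        noisy[i] = alpha[i] + rng.laplace(scale=scale)
        # prefix count at time t = sum of noisy blocks for the set bits of t
        out[t - 1] = sum(noisy[j] for j in range(L) if (t >> j) & 1)
    return out

rng = np.random.default_rng(0)
counts = binary_mechanism(np.ones(1024), epsilon=1.0, rng=rng)
max_err = np.max(np.abs(counts - np.arange(1, 1025)))
```

The talk's contribution is to move from counting queries to Lipschitz queries and to release a full synthetic dataset with near-optimal 1-Wasserstein accuracy, not just noisy counts.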

17:10-17:30 Robert Webber

Novelty sampling for fast, effective data reduction

We present a new algorithm for reducing a large data set to a small number of landmark data points. The landmarks are randomly selected, yet they account for nearly all the "novelty" in the data. To generate landmarks, we randomly propose data points and accept/reject with probabilities depending on the previous selections. After the generation step, the landmarks can be used to quickly make predictions and find clusters in the data. Landmark-based learning has a memory footprint that is independent of the data size, so the approach is suitable for distilling large data sets with $N \geq 10^9$ data points.
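
A hedged sketch of the propose/accept-reject idea described above, in the spirit of randomly pivoted Cholesky-type selection (the Gaussian kernel, length scale, and acceptance rule are illustrative assumptions, not necessarily the speaker's exact algorithm):

```python
import numpy as np

def gaussian_gram_column(X, i, ell):
    """Column i of the Gaussian kernel Gram matrix of X."""
    d2 = np.sum((X - X[i]) ** 2, axis=1)
    return np.exp(-d2 / (2 * ell ** 2))

def select_landmarks(X, k, ell=0.5, rng=None):
    """Propose a point uniformly at random and accept it with probability
    equal to its residual kernel 'novelty' (the diagonal of the kernel
    matrix after projecting out the landmarks chosen so far). Points nearly
    duplicated by earlier landmarks have novelty ~0 and are rejected."""
    rng = rng or np.random.default_rng()
    n = len(X)
    d = np.ones(n)        # residual diagonal; k(x, x) = 1 for a Gaussian kernel
    F = np.zeros((n, 0))  # partial Cholesky-style factor of selected columns
    chosen = []
    while len(chosen) < k:
        i = int(rng.integers(n))
        if rng.random() < d[i]:  # accept with probability = current novelty
            col = gaussian_gram_column(X, i, ell) - F @ F[i]
            col = col / np.sqrt(col[i])
            F = np.column_stack([F, col])
            d = np.maximum(d - col ** 2, 0.0)
            chosen.append(i)
    return chosen, F

rng = np.random.default_rng(0)
X = rng.standard_normal((300, 2))
chosen, F = select_landmarks(X, k=8, rng=rng)
```

Only kernel columns for accepted points are ever formed, which is what keeps the memory footprint independent of the data size.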

17:35-17:55 Matheus B Guerrero

Statistics of Extremes for Neuroscience: A New Lens for EEG Analysis and Brain Connectivity

Neuroscience has long sought to unravel the complexities of brain dynamics, particularly during instances of extreme cognitive tasks or neurological disturbances, such as epileptic seizures. Traditional statistical approaches have primarily focused on the central tendencies of brain electrical signals, often neglecting the hidden information in the tails of the signal's distribution. This talk introduces a novel perspective by employing Extreme Value Theory (EVT) to analyze electroencephalography (EEG) data, significantly augmenting our understanding of brain activity and connectivity. We delve into the methodology and findings from an exploratory analysis that utilizes univariate and multivariate statistics of extremes, demonstrating the capability of EVT to offer nuanced insights into the tail behavior of brain signals. By modeling univariate extremes of EEG channels, assessing extremal dependence between channels, and exploring conditional extremal modeling, we reveal patterns of extremal brain activity that classical methods may overlook. Our analysis showcases the practical relevance of EVT in identifying abnormal brain activity patterns, particularly in the context of epileptic seizures, thereby opening avenues for further research in applying EVT within neuroscience. This approach complements existing methodologies and paves the way for innovative diagnostic and predictive tools in understanding complex neurological phenomena.
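
To illustrate the kind of tail quantity EVT targets, the sketch below applies the classical Hill estimator to synthetic heavy-tailed "amplitudes" (a Student-t stand-in, not real EEG data; the talk's analysis uses much richer univariate and multivariate extremes machinery):

```python
import numpy as np

def hill_estimator(x, k):
    """Hill estimator of the tail index xi from the k largest observations:
    the mean of log(X_(i) / X_(k+1)) over the top-k order statistics."""
    xs = np.sort(x)[::-1]
    return float(np.mean(np.log(xs[:k] / xs[k])))

# Heavy-tailed stand-in for EEG amplitudes: |t_4| noise has tail index 1/4,
# so the estimator should land near 0.25.
rng = np.random.default_rng(1)
amplitudes = np.abs(rng.standard_t(df=4, size=50000))
xi = hill_estimator(amplitudes, k=500)
```

A mean-and-variance summary of the same signal would be blind to this tail index, which is exactly the information the talk argues classical EEG analyses overlook.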

Track 5

The Jeannie Auditorium

Chair: Kristin Kurianski (CSUF)

16:45-17:05 Kristin Kurianski

Exploring the influence of vaccine ideology on infectious disease dynamics using compartment models

17:10-17:30 Yuyao Wang

Learning treatment effects under covariate dependent left truncation and right censoring

17:35-17:55 Yanxiang Zhao

Phase Field Modeling of Dictyostelium Discoideum Chemotaxis

16:45-17:05 Kristin Kurianski

Exploring the influence of vaccine ideology on infectious disease dynamics using compartment models

The population's willingness to receive vaccines during the COVID-19 pandemic greatly impacted the dynamics of the disease spread. Ledder (2022) introduced the PUIRU model, which incorporated vaccine ideology by partitioning the susceptible population of the standard SIR model into two subpopulations: Pre-vaccinated (willing to obtain the vaccine but not yet vaccinated) and Unvaccinated (unable or unwilling to receive a vaccine). The PUIRU model assumes that vaccine ideologies are fixed, i.e., those in the Pre-vaccinated compartment will always receive the vaccine and those in the Unvaccinated compartment never will. This talk presents a modification of the PUIRU model in which there is a nonlinear transition between the Pre-vaccinated and Unvaccinated populations, allowing individuals to change their perception of the vaccine based on the disease prevalence. We present a stability analysis of the disease-free and endemic disease equilibria for two forms of the ideological transition function and highlight the presence of stable limit cycles in the disease dynamics.
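
A minimal forward-Euler sketch of such a split-susceptible model with a prevalence-dependent ideological transition (the rate k*I and all parameter values below are hypothetical illustrations, not the talk's exact transition function or analysis):

```python
import numpy as np

def simulate(T=400.0, dt=0.05, beta=0.4, gamma=0.1, nu=0.05, k=5.0):
    """Sketch of a PUIRU-style model: susceptibles split into pre-vaccinated
    P and unvaccinated U; P is vaccinated at rate nu (moving to R), and U
    switches ideology to P at the prevalence-dependent rate k*I."""
    P, U, I, R = 0.5, 0.4, 0.1, 0.0
    traj = []
    for _ in range(int(T / dt)):
        dP = -beta * P * I - nu * P + k * I * U   # infection, vaccination, converts
        dU = -beta * U * I - k * I * U            # infection, ideology switch
        dI = beta * (P + U) * I - gamma * I       # new infections, recovery
        dR = gamma * I + nu * P                   # recovered + vaccinated
        P, U, I, R = P + dt * dP, U + dt * dU, I + dt * dI, R + dt * dR
        traj.append((P, U, I, R))
    return np.array(traj)

traj = simulate()
```

The derivatives sum to zero, so total population is conserved; the talk's analysis concerns how the shape of the ideological transition term determines stability of the equilibria and the emergence of limit cycles.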

17:10-17:30 Yuyao Wang

Learning treatment effects under covariate dependent left truncation and right censoring

In aging studies or prevalent cohort studies with follow-up, causal inference for time-to-event outcomes can be challenging. The challenges arise because, in addition to the potential confounding bias from observational data, the collected data often also suffer from selection bias due to left truncation, where only subjects with time to event (such as death) greater than the enrollment times are included, as well as bias from informative right censoring. To assess treatment effects on time-to-event outcomes in such settings, inverse probability weighting (IPW) is often employed. However, IPW is sensitive to model misspecification, which makes it vulnerable, especially when all three sources of bias are present. Moreover, IPW is inefficient. To overcome these issues, we propose a general framework to handle dependent left truncation and informative right censoring in causal inference problems. The proposed approach enjoys model double robustness and rate double robustness. Our work represents the first attempt to construct doubly robust estimators that account for all three sources of bias: confounding bias, selection bias from covariate-induced dependent left truncation, and bias from informative right censoring.
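
The basic IPW idea, stripped of truncation and censoring, can be sketched as follows (the propensity score is assumed known here, an oracle simplification that the talk's doubly robust estimators are designed to avoid relying on):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 20000
x = rng.normal(size=n)                        # confounder
p = 1.0 / (1.0 + np.exp(-x))                  # true propensity P(A = 1 | x)
a = rng.binomial(1, p)                        # treatment assignment
y = 1.0 * a + 2.0 * x + rng.normal(size=n)    # outcome; true effect = 1

# The naive contrast is confounded by x; IPW reweights each arm by the
# inverse of its (here, known) assignment probability.
naive = y[a == 1].mean() - y[a == 0].mean()
ipw = np.mean(a * y / p) - np.mean((1 - a) * y / (1 - p))
```

With a misspecified propensity model the IPW estimate would drift, which is the sensitivity the abstract refers to; double robustness buys protection when either of two nuisance models is correct.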

17:35-17:55 Yanxiang Zhao

Phase Field Modeling of Dictyostelium Discoideum Chemotaxis

A phase field approach is proposed to model the chemotaxis of Dictyostelium discoideum. In this framework, motion is controlled by active forces as determined by the Meinhardt model of chemical dynamics which is used to simulate directional sensing during chemotaxis. Then, the movement of the cell is achieved by the phase field dynamics, while the reaction-diffusion equations of the Meinhardt model are solved on an evolving cell boundary. This task requires the extension of the usual phase-field formulation to allow for components that are restricted to the membrane. The coupled system is numerically solved by an efficient spectral method under periodic boundary conditions. Numerical experiments show that our model system can successfully mimic the typically observed pseudopodia patterns during chemotaxis.

Track 6

Ridge Walk Academic Complex 0103

Chair: Bohan Zhou (UCSB)

16:45-17:05 Djordje Nikolic

Multispecies Optimal Transport

17:10-17:30 Xiangyi Zhu

Non-backtracking eigenvector delocalization for random regular graphs

17:35-17:55 Bohan Zhou

Acceleration for MCMC methods on discrete states

16:45-17:05 Djordje Nikolic

Multispecies Optimal Transport

The discovery of linear optimal transport by Wang et al. in 2013 improved the computational efficiency of optimal transport algorithms for grayscale image classification. Our main goal is to classify special kinds of multicolor images arising in collider events. We will introduce the basics of optimal transport theory, linear optimal transport, and the multispecies distance. This is work in progress with Katy Craig and Nicolás García Trillos.

17:10-17:30 Xiangyi Zhu

Non-backtracking eigenvector delocalization for random regular graphs

The non-backtracking operator of a graph is a powerful tool in spectral graph theory and random matrix theory. Most existing results for the non-backtracking operator of a random graph concern only eigenvalues or top eigenvectors. In this paper, we take the first step in analyzing its bulk eigenvector behaviors. We demonstrate that for the non-backtracking operator \( B \) of a random \( d \)-regular graph, its eigenvectors corresponding to nontrivial eigenvalues are completely delocalized with high probability. Additionally, we show complete delocalization for a reduced \( 2n \times 2n \) non-backtracking matrix \( \tilde{B} \). By projecting all eigenvalues of \( \tilde{B} \) onto the real line, we obtain an empirical measure that converges weakly in probability to the Kesten-McKay law for fixed \( d\geq 3 \) and to a semicircle law as \( d \to\infty \) with \( n \to\infty \).

17:35-17:55 Bohan Zhou

Acceleration for MCMC methods on discrete states

In this research, we propose a Nesterov-type method to accelerate the Markov Chain Monte Carlo (MCMC) algorithm on finite graphs. The MCMC method on a finite graph can be viewed as the gradient flow of a divergence functional. By leveraging the idea of the Nesterov acceleration method, we introduce "momentum" into the MCMC algorithm and propose a second-order ODE in the probability space, which can be treated as an accelerated version of the MCMC process. We provide a Lyapunov analysis to justify the convergence of the algorithm. Some numerical examples are also provided to verify the efficiency of the method.
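
The classical finite-dimensional Nesterov scheme that supplies the "momentum" idea can be sketched as follows (this is the standard optimization method on a quadratic, not the probability-space second-order ODE of the talk):

```python
import numpy as np

def gradient_descent(grad, x0, step, iters):
    x = x0.copy()
    for _ in range(iters):
        x = x - step * grad(x)
    return x

def nesterov(grad, x0, step, iters):
    """Nesterov acceleration: a gradient step from an extrapolated point y,
    plus a 'momentum' extrapolation with weight (k - 1) / (k + 2)."""
    x, y = x0.copy(), x0.copy()
    for k in range(1, iters + 1):
        x_new = y - step * grad(y)
        y = x_new + (k - 1) / (k + 2) * (x_new - x)
        x = x_new
    return x

# Ill-conditioned quadratic f(x) = 0.5 * x^T A x, minimizer at the origin.
A = np.diag([1.0, 100.0])
grad = lambda x: A @ x
x0 = np.ones(2)
xg = gradient_descent(grad, x0, step=0.01, iters=300)
xn = nesterov(grad, x0, step=0.01, iters=300)
```

In the small-step limit the accelerated iterates track a second-order ODE with vanishing damping, which is the continuous object the talk transplants to the space of probability distributions on a graph.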

Poster Awards and Closing Remarks

The Jeannie Auditorium

18:00-18:10 Poster Awards and Closing Remarks