Kevin You's Undergraduate Research Projects

[Back to main page]

This is a high-level overview of some research I did during my undergraduate studies. Also check out my CV!
  1. Multiplication in fractional Sobolev spaces with Giovanni Leoni at CMU
  2. Zeros and a-values of approximations for a class of L-functions with Arindam Roy at UNC Charlotte
  3. Intersection of doubling measures with Theresa Anderson at CMU
  4. Next event estimation for walk on spheres with Ioannis Gkioulekas at CMU
  5. Panel methods for computer graphics with Bo Zhu, Takeo Igarashi, and Haoran Xie at the University of Tokyo
  6. Design for soft body swimmers with Tao Du at Tsinghua University

For some projects, I have included both a simplified and a technical description. Others may have only one description.




Multiplication in fractional Sobolev spaces

Fractional Sobolev spaces are important due to their role in measuring regularity and arise naturally, for example, via the trace operator (restriction of a function to the boundary of its domain). There are multiple ways of defining the fractional Sobolev space, such as via the Fourier transform and Littlewood-Paley theory, or as interpolation spaces between integer Sobolev spaces. We instead consider a more elementary approach using the intrinsic (Gagliardo) seminorm. For \( 0 < s < 1 \), the fractional Sobolev space \(W^{s,p} (\mathbb{R}^n)\) is the space of all \( u \in L^p(\mathbb{R}^n) \) such that \[ \vert u \vert_{W^{s,p}} := \left( \int_{\mathbb{R}^n} \int_{\mathbb{R}^n} \frac{ \vert u(x) - u(y) \vert^p }{\vert x - y \vert^{n+sp}} \, dx \, dy \right)^{1/p} < \infty, \] with norm \( \Vert u \Vert_{W^{s,p}} = \Vert u \Vert_{L^p} + \vert u \vert_{W^{s,p}} \). Working with this definition, in the case \(n = 1\), we derive necessary and sufficient conditions on the parameters \(s_1,s_2,s,p_1,p_2,p\) for the continuous embedding \[ W^{s_1,p_1} \times W^{s_2, p_2} \hookrightarrow W^{s,p} \] under pointwise multiplication of fractional Sobolev functions, improving on the results known in the general case.
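As a toy illustration of the intrinsic seminorm (my own numerical sketch, not from the paper), the double integral above can be approximated in the one-dimensional case \( n = 1 \) by a midpoint rule on a truncated square, skipping the diagonal where the integrand has a removable singularity for smooth functions:

```python
import math

def gagliardo_seminorm(u, s, p, a=-5.0, b=5.0, n=300):
    """Midpoint-rule approximation of the 1D Gagliardo seminorm
    |u|_{W^{s,p}} = ( double integral over [a,b]^2 of
    |u(x)-u(y)|^p / |x-y|^(1+s*p) dx dy )^(1/p),
    skipping the diagonal cells, where the singularity is removable
    for smooth u (|u(x)-u(y)| ~ |x-y| near the diagonal)."""
    h = (b - a) / n
    xs = [a + (i + 0.5) * h for i in range(n)]
    us = [u(x) for x in xs]
    total = 0.0
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            total += abs(us[i] - us[j]) ** p / abs(xs[i] - xs[j]) ** (1 + s * p)
    return (total * h * h) ** (1.0 / p)

# A smooth, rapidly decaying test function has finite seminorm for any 0 < s < 1.
val = gagliardo_seminorm(lambda x: math.exp(-x * x), s=0.5, p=2)
```

The truncation of the domain and the grid resolution are arbitrary choices here; the point is only that the seminorm is a computable double integral.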

A major development in 20th-century analysis was extending calculus concepts like differentiability to larger spaces of functions called Sobolev spaces. Functions in Sobolev spaces are examined only through their average behavior, which allows certain kinds of singularities. Sobolev spaces have proven effective both in pure mathematics and in applications like the finite element method.

Consider the absolute value function \(f(x) = \vert x \vert\). Though the function fails to be classically differentiable at \( x = 0\), as a function in a Sobolev space its derivative is the sign function \(f'(x) = \operatorname{sgn}(x)\), and the fundamental theorem of calculus holds. We can then differentiate \(\operatorname{sgn}(x)\) again, and intuitively the result is the Dirac delta \( f''(x) = \delta_0 \), which is no longer a function and must be interpreted as a so-called distribution. Hence, \( f \) is once differentiable and lives in the Sobolev space \( H^1 \).
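This weak derivative can be verified directly from the definition: for any smooth, compactly supported test function \( \varphi \), integrating by parts on each half-line (the boundary terms vanish since \( \varphi \) has compact support) gives \[ \int_{\mathbb{R}} \vert x \vert \, \varphi'(x) \, dx = \int_{-\infty}^0 (-x) \varphi'(x) \, dx + \int_0^{\infty} x \, \varphi'(x) \, dx = \int_{-\infty}^0 \varphi(x) \, dx - \int_0^{\infty} \varphi(x) \, dx = - \int_{\mathbb{R}} \operatorname{sgn}(x) \, \varphi(x) \, dx, \] which is exactly the statement that \( \operatorname{sgn} \) is the weak derivative of \( \vert x \vert \).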

One may then ask if it is possible to define a fractional number of derivatives. It turns out that classical differentiability can be interpreted as a bound on the amount of oscillation a function has at small scales, as seen through the Fourier transform, and moreover this notion of differentiability can be extended to the fractional case. These are the fractional Sobolev spaces \( H^s \), \( 0 < s < 1 \), where larger \( s \) gives smaller spaces containing only the more regular functions that are \( s \) times differentiable. For example, it turns out that although \( \operatorname{sgn}(x) \) doesn't have a full derivative, it has almost half a derivative: it lives in \( H^s \) for every \( s < 1/2 \). Correspondingly, our original \( \vert x \vert \) has almost one and a half derivatives.
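A quick heuristic for the half-derivative claim, assuming the Fourier characterization \( \Vert u \Vert_{H^s}^2 \sim \int (1+\vert \xi \vert^2)^s \vert \hat u(\xi) \vert^2 \, d\xi \): the Fourier transform of \( \operatorname{sgn} \) decays like \( 1/\xi \), so the high-frequency part of the \( H^s \) norm behaves like \[ \int_{\vert \xi \vert \geq 1} \vert \xi \vert^{2s} \cdot \frac{1}{\vert \xi \vert^{2}} \, d\xi < \infty \iff 2s - 2 < -1 \iff s < \tfrac{1}{2}. \] (Since \( \operatorname{sgn} \) itself is not in \( L^2(\mathbb{R}) \), this should be read as a statement about local regularity.)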

Finally, we may ask: given two functions, how much of this regularity does their product retain? Unlike ordinary calculus, where a product is as differentiable as its least differentiable factor thanks to the product rule, for fractional differentiability the product tends to be less differentiable. In my work, I study exactly how much less in the one-dimensional case.


Zeros and a-values of approximations for a class of L-functions

The famous Riemann zeta function is important in number theory due to its connection with prime numbers \[ \zeta(s) = \sum_{n=1}^\infty \frac{1}{n^s} = \prod_{p \text{ prime}} \left( 1 - \frac{1}{p^s} \right)^{-1}. \] The well-known Riemann hypothesis states that all non-trivial zeros of the Riemann zeta function \(\zeta(s)\) lie on the critical line \(\operatorname{Re}(s) = \frac{1}{2}\). A weaker form of the hypothesis says that in the limit \( T \rightarrow \infty \), one hundred percent of zeros with \( 0 < \operatorname{Im}(s) < T \) lie on the critical line. Currently it is known that asymptotically at least two-fifths of such zeros lie on the critical line. Instead of asking where \( \zeta(s) = 0 \), we may fix a non-zero complex number \( a \) and ask where \( \zeta(s) = a \); such points are called a-values. Unlike zeros, we know that these a-values lie very close to the critical line, but it is conjectured by Selberg that zero percent of them lie exactly on the line. Currently it is known that asymptotically at most half of a-values lie on the critical line. Instead of working with \( \zeta \) itself, we work with approximations of the zeta function \[ \zeta_N(s) = \sum_{n=1}^N n^{-s} + \chi(s) \sum_{n=1}^N n^{s-1}, \] which arise from the Hardy-Littlewood approximate functional equation. We prove that these approximations satisfy the property that zero percent of all a-values lie on the critical line. Our central tool is to bound various analytic functions from above and below and use Jensen's formula to control the number of zeros.
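The Euler product identity above can be sanity-checked numerically at the real point \( s = 2 \), where \( \zeta(2) = \pi^2/6 \). This is only an illustration of the identity itself, not of the methods in the paper:

```python
import math

def zeta_partial_sum(s, N):
    """Truncated Dirichlet series: sum of n^{-s} for n = 1..N."""
    return sum(n ** -s for n in range(1, N + 1))

def zeta_euler_product(s, primes):
    """Truncated Euler product of (1 - p^{-s})^{-1} over the listed primes."""
    result = 1.0
    for p in primes:
        result *= 1.0 / (1.0 - p ** -s)
    return result

PRIMES = [2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37, 41, 43, 47]
EXACT = math.pi ** 2 / 6  # zeta(2), by Euler's solution of the Basel problem
```

Both truncations converge to the same value; the truncated Euler product always underestimates, since every omitted factor exceeds 1.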


Intersection of doubling measures

Check out this poster!

A measure \( \mu \) defined on \( [0,1] \) is said to be doubling if there exists \( C \geq 1 \) such that for any adjacent intervals \( I_1, I_2 \) of equal length, \[ \mu(I_1) \leq C \mu(I_2). \] Being doubling limits how much \( \mu \) can oscillate or spike at all scales. It is often difficult to consider all intervals, and in harmonic analysis it is desirable to instead consider n-adic intervals of the form \( [ \frac{k-1}{n^m}, \frac{k}{n^m} ) \) for \(m, k \in \mathbb{Z} \). This leads to the definition of n-adic doubling measures: measures for which the above inequality holds for n-adic intervals \( I_1, I_2 \). A folkloric question asks whether being n-adic doubling for all \( n \geq 2 \) implies doubling. We show that the answer is no by constructing a counterexample. To do this, we use number theory to find n-adic intervals over multiple \( n \) that align closely, construct spikes near the endpoints of these intervals, and demonstrate that the resulting measure fails the doubling condition.

Our techniques also adapt to the space of functions of bounded mean oscillation, BMO, that is, functions \( u \) whose norm \[ \Vert u \Vert_{BMO} = \sup_{I} \frac{1}{\vert I \vert} \int_{I} \left\vert u(y) - \frac{1}{\vert I \vert} \int_I u(x) \, dx \right\vert \, dy \] is finite, where the supremum runs over all intervals \( I \). If we restrict \( I \) to n-adic intervals, we get the spaces \( BMO_n \). Our proof shows that \[ \bigcap_{n \geq 2} BMO_n \neq BMO. \]

Once again, we want to ask how nice a function is. We want to make sure that our function doesn't change too much locally, or doesn't have any large spikes. We say that a function is doubling if there is a constant C such that, for any two adjacent intervals of equal length, the weights of the function (area under the curve, or integral) on the two intervals are similar, that is, their ratio is bounded by C. The adjacency requirement ensures that we are looking at a local feature, and the equal-length requirement is needed to make a fair comparison. With this definition, a constant function is doubling with constant 1. The exponential function \(f(x) = 2^x \) is doubling with constant 2. However, \( f(x) = x^x \) is not doubling since it spikes too quickly.
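The claim about \( 2^x \) can be checked numerically (my own sketch, not from the paper): for density \( 2^x \), the ratio of weights of adjacent intervals of length \( h \) is exactly \( 2^h \), which on \( [0,1] \) never exceeds 2.

```python
import math

def weight(a, b):
    """Weight of [a,b] under density 2^x: the closed form (2^b - 2^a)/ln 2."""
    return (2.0 ** b - 2.0 ** a) / math.log(2.0)

def max_doubling_ratio(lengths=50, positions=200):
    """Worst weight ratio over a grid of adjacent equal-length interval
    pairs [a, a+h], [a+h, a+2h] inside [0,1]. For density 2^x the exact
    ratio of the right interval to the left one is 2^h."""
    worst = 1.0
    for i in range(1, lengths + 1):
        h = 0.5 * i / lengths  # the pair needs 2h <= 1 to fit in [0,1]
        for j in range(positions + 1):
            a = (1.0 - 2.0 * h) * j / positions
            w1, w2 = weight(a, a + h), weight(a + h, a + 2 * h)
            worst = max(worst, w1 / w2, w2 / w1)
    return worst
```

The worst ratio found is \( 2^{1/2} \approx 1.414 \), attained by the longest admissible pair, consistent with the doubling constant 2 quoted above.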

It is hard to check whether a function is doubling or not, since there are too many intervals to possibly check. In harmonic analysis we instead prefer n-adic intervals. Start with the interval \( [0,1) \). Divide it into n pieces. Then divide each of the n pieces into n more pieces. Repeat ad infinitum. All these intervals are n-adic intervals. Would it be enough if we only tested doubling on these nice n-adic intervals? We prove that the answer is no, even if we test all natural numbers n, by constructing a counterexample that is doubling on all n-adic intervals but not generally doubling. To do this we use number theory to find n-adic intervals over multiple n that align closely, and then construct spikes near the endpoints of these intervals. See poster for a picture.


Next event estimation for walk on spheres

Among the most important of all PDEs in mathematics and physics is Laplace's equation \[ \begin{aligned} \Delta u &= 0 \text{ if } x \in \Omega \\ u &= g \text{ if } x \in \partial \Omega. \end{aligned} \] Numerically, Laplace's equation is usually solved with finite element methods, but these methods depend heavily on the quality of the volumetric mesh. Monte Carlo methods such as the walk on spheres method may be preferable since they require only a surface mesh. Harmonic functions (solutions to Laplace's equation) satisfy a mean-value property: \( u(x_0) \) is equal to the average of \( u(y) \) over all \( y \in \partial B(x_0,r_0) \), assuming \( B(x_0,r_0) \subseteq \Omega \). Thus, an unbiased estimator of \(u(x_0)\) is \( u(x_1) \) if we pick \(x_1 \in \partial B(x_0,r_0) \) uniformly at random. We can repeat this scheme and estimate \( u(x_1) \) with some \( x_2 \in \partial B(x_1,r_1) \), continuing ad infinitum. The points \( x_n \) chosen this way form a random walk, and the sequence converges almost surely to some point \( z \in \partial \Omega \), so an estimator of \( u(x_0) \) is \( u(z) = g(z) \). In fact, any \( x_0 \) induces an associated Poisson kernel or harmonic measure on \( \partial \Omega \), and the above random walk process samples that measure.
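A minimal sketch of the basic walk (my own toy implementation for the unit disk, where the distance to the boundary is available in closed form; the improvements from the paper are not included):

```python
import math
import random

def walk_on_spheres_disk(x0, y0, g, eps=1e-3, walks=4000, seed=0):
    """Estimate u(x0, y0) for Laplace's equation on the unit disk with
    boundary data g(theta), by averaging walk-on-spheres samples."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(walks):
        x, y = x0, y0
        while True:
            r = 1.0 - math.hypot(x, y)  # distance to the circle boundary
            if r < eps:                 # close enough: absorb at the boundary
                total += g(math.atan2(y, x))
                break
            theta = rng.uniform(0.0, 2.0 * math.pi)
            x += r * math.cos(theta)    # jump to a uniform point on the
            y += r * math.sin(theta)    # largest inscribed sphere
    return total / walks

# Boundary data g(theta) = cos(theta) is the function x on the circle;
# its harmonic extension is u(x, y) = x, so the estimate at (0.3, 0.4)
# should land near 0.3 up to Monte Carlo noise.
est = walk_on_spheres_disk(0.3, 0.4, math.cos)
```

The `eps` shell and the number of walks trade off bias and variance; both values here are arbitrary choices for the demo.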

While the walk on spheres algorithm is very elegant, it has much unused potential. Firstly, a single random walk is costly due to the many geometric queries performed to determine the radii \( r_0,r_1,\ldots \), yet it returns only one estimate of the solution. Secondly, the walk has no knowledge of the boundary conditions, which is problematic when the boundary conditions are irregular. Interpreting the equation as steady-state heat diffusion, if there is only one very hot and small heater and the rest of the walls are cold, it is unlikely for a random walk to encounter the heater, which leads to noisy estimates. Drawing inspiration from physics-based rendering, we devised new mechanisms that utilize next event estimation and incorporate boundary data via multiple importance sampling to address these two issues.

A central problem in the computational sciences is solving partial differential equations, which are ubiquitous in modeling physical phenomena in all areas of science and engineering. We consider one of the most important partial differential equations, Laplace's equation \( \Delta u = 0 \), which models various systems at equilibrium, for example a steady-state heat distribution \( u \) in a room due to known temperatures at the walls and uniform heat diffusion in the room. Laplace's equation is linear, meaning that separate heat sources contribute independently to the heat. Moreover, the equation is averaging in the sense that \( u(x_0) \) cannot be larger or smaller than all neighboring values, and in fact must be equal to the average of the neighboring \( u(x_1) \), where \( x_1 \) belongs to a sphere centered at \( x_0 \). We may pick \( x_1 \) at random, and then \( u(x_1) \) is an unbiased estimator of \( u(x_0) \).

If we knew \( u(x_1) \) on the sphere, then we would be done. But we don't. Nonetheless, \( u(x_1) \) itself must be equal to the average of \( u(x_2) \), where \( x_2 \) belongs to a sphere centered at \( x_1 \). We can repeat this scheme ad infinitum, and by randomness, we eventually get some \( x_n \) that lands on the wall, where we do know \( u(x_n) \). This gives an estimator of our original \( u(x_0) \). As a sanity check, does this scheme still depend on where we started, \( x_0 \)? Yes, because the probability of hitting various locations on the wall depends on our original point \( x_0 \). The walk on spheres algorithm runs the above procedure many times and averages the many resulting \( u(x_n) \) to estimate \( u(x_0) \).

While the walk on spheres algorithm is very elegant, it has much unused potential. Firstly, determining a valid random walk that does not leave the room requires many geometric queries about the room's shape. Secondly, the walk has no knowledge of the heat sources. If there is only one very hot and small heater and the rest of the walls are cold, it is unlikely for a random walk to encounter the heater, which leads to noisy estimates. Drawing inspiration from physics-based rendering, we devised new mechanisms that utilize next event estimation and incorporate boundary data via multiple importance sampling to address these two issues.


Panel methods for computer graphics

The vorticity form of the incompressible Navier-Stokes equations states that \[ \frac{D \omega}{Dt} = (\omega \cdot \nabla) u + \nu \nabla^2 \omega, \] where \( \nu \) is the kinematic viscosity. It is sometimes desirable to simulate the vorticity \( \omega = \nabla \times u \) directly since it captures the complexity of the fluid flow. From the boundary, vorticity diffuses into the domain with length scale \( \sim (\nu t)^{1/2} \). At very large Reynolds numbers, a very thin sheet of vorticity forms in the so-called boundary layer. Instabilities and singularities occur in the boundary layer, which causes fluid material, and consequently vorticity, to be ejected from the boundary layer in spikes. This phenomenon of vorticity separation is not yet well understood.
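For orientation, a standard simplification (a textbook fact, not specific to this project): in two dimensions the vorticity is a scalar and the stretching term \( (\omega \cdot \nabla) u \) vanishes, so the equation reduces to an advection-diffusion equation, \[ \frac{D \omega}{Dt} = \nu \nabla^2 \omega. \] In the inviscid limit \( \nu \to 0 \), vorticity is simply transported with the flow, so all new vorticity must enter from the boundary, which is why the boundary layer dominates the dynamics at large Reynolds numbers.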

Panel methods in aerodynamics model the vorticity separation at the sharp trailing edge of a wing as a flat sheet of wake, and the forces due to the wake correctly account for the drag forces on the wing and resolve d'Alembert's paradox. These traditional methods work for cusped edges in 2D, and only recently has the vorticity separation model been extended to non-cusped edges. In our work, we revisit the vorticity separation model for non-cusped edges and also investigate whether it can be extended to 3D for arbitrary geometric meshes. If successful, this would allow for efficient simulation of solid-fluid coupling at vanishing viscosities.


Design for soft body swimmers

We are interested in designing robotic swimmers using reinforcement learning. There has been much work in the recent decade on using reinforcement learning for robot locomotion, and typically this work examines terrestrial locomotion via rigid limbs. The dynamics of a soft-body swimmer in fluid are vastly different, since the swimmer must engage its entire body in a continuous deformation for propulsion, and the simulation of fluids is also more difficult than that of rigid bodies.

While controller design with reinforcement learning is well studied, it is typically assumed that the shape or morphology of the robot is given. A significantly more difficult question asks whether we can also design and optimize the shape of the robot with evolutionary algorithms or reinforcement learning techniques, and unify the inner loop of controller design with the outer loop of morphology optimization. In this research, we attempt to combine the two loops with techniques from differential geometry. Some preliminary experiments have been conducted for simple jellyfish-like swimmers.




Last updated April 2026