❓ Questions and Help
Hi Theseus team,
I am setting up an inverse mesh fairing optimization where I need to work with sparse matrices (the Laplacian and mass matrices of an input triangle mesh). The problem involves minimizing:
min_{V'} (1/2) ||V - H_t(V')||²_M
where V' is the optimization variable, V is an auxiliary variable, H_t = (M + tL)^{-1} is the heat diffusion operator, M is the mass matrix, and L is the Laplacian. Both L and M are sparse.
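Expanding the M-weighted norm, this is:
min_{V'} (1/2) (V - H_t(V'))^T M (V - H_t(V'))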
Issue: I need to solve a linear system (M + tL) x = M V' at every optimization iteration to compute H_t(V'). Since M and L do not change and (M + tL) is symmetric positive definite, I would prefer to:
- Factorize it once using Cholesky factorization before optimization
- Reuse the factorization for multiple solves during optimization
This would be much more efficient than calling torch.linalg.solve() at each iteration.
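Roughly, this is the pattern I have in mind (a minimal dense sketch with a stand-in SPD matrix just so it runs; for real meshes the goal would be a sparse Cholesky, e.g. CHOLMOD):

import torch

n_verts = 1000
t = 0.01
mass = torch.eye(n_verts)                   # M (diagonal in practice)
lap = torch.randn(n_verts, n_verts)
laplacian = lap @ lap.T                     # stand-in symmetric PSD Laplacian
heat_op = mass + t * laplacian              # (M + tL) is SPD

chol = torch.linalg.cholesky(heat_op)       # factorize ONCE, before optimization
rhs = mass @ torch.randn(n_verts, 3)        # M @ V' for some V'
smoothed = torch.cholesky_solve(rhs, chol)  # each solve just reuses the factor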
Current inefficient approach:
import torch
import theseus as th
# Simulate geometry processing matrices (sparse in practice)
n_verts = 1000
dim = 3
mass = torch.eye(n_verts).to_sparse() # Typically sparse diagonal
lap = torch.randn(n_verts, n_verts)
laplacian = (lap @ lap.T).to_sparse()  # Stand-in for the sparse, symmetric PSD Laplacian
t = 0.01
heat_operator = (mass + t * laplacian).to_dense() # <<<<< Have to densify
mass = mass.to_dense()
def fairing_error(optim_vars, aux_vars):
    verts_var, = optim_vars  # V' (shape: 1, n_verts * 3)
    verts_target, heat_op, mass_mat = aux_vars
    n_verts = mass_mat.tensor.shape[1]
    # Compute M @ V'
    rhs = mass_mat.tensor @ verts_var.tensor.reshape(n_verts, -1)
    # Solve (M + tL) @ smoothed = rhs
    # This factorizes from scratch at EVERY iteration - very inefficient!
    smoothed = torch.linalg.solve(heat_op.tensor, rhs)
    diff = verts_target.tensor - smoothed.reshape(1, -1)
    return diff
# Setup optimization
verts_var = th.Vector(dof=n_verts * dim, name="verts_var")
verts_target = th.Variable(tensor=torch.randn(1, n_verts * dim), name="verts_target")
heat_var = th.Variable(tensor=heat_operator.unsqueeze(0), name="heat_op")
mass_var = th.Variable(tensor=mass.unsqueeze(0), name="mass")
cost_fn = th.AutoDiffCostFunction(
    optim_vars=[verts_var],
    err_fn=fairing_error,
    dim=n_verts * dim,
    aux_vars=[verts_target, heat_var, mass_var],
    name="fairing",
)
objective = th.Objective()
objective.add(cost_fn)
optimizer = th.GaussNewton(objective, max_iterations=50)
layer = th.TheseusLayer(optimizer)
inputs = {
    "verts_var": torch.randn(1, n_verts * dim),
    "verts_target": torch.randn(1, n_verts * dim),
    "heat_op": heat_operator.unsqueeze(0),
    "mass": mass.unsqueeze(0),
}
with torch.no_grad():
    solution, info = layer.forward(inputs, optimizer_kwargs={"verbose": True})

Questions:
- What's the recommended way to use pre-factorized matrices in Theseus cost functions?
- Does Theseus have built-in support for sparse linear solvers with factorization caching?
- Is there a recommended way to work with sparse matrices in Theseus?
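One workaround I can think of is to factorize once outside the cost function and close over the factor, but this still densifies everything, so it only sidesteps the repeated factorization, not the sparsity issue:

# Factor (M + tL) once, before building the objective
chol = torch.linalg.cholesky(heat_operator)

def fairing_error_prefactored(optim_vars, aux_vars):
    verts_var, = optim_vars
    verts_target, mass_mat = aux_vars
    n_verts = mass_mat.tensor.shape[1]
    rhs = mass_mat.tensor @ verts_var.tensor.reshape(n_verts, -1)
    # Back-substitution only; no re-factorization per iteration
    smoothed = torch.cholesky_solve(rhs, chol)
    return verts_target.tensor - smoothed.reshape(1, -1)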
Thank you!