
With neural diffusion models, we can generate samples from the unconditional distribution $p(x_{\tau_0}) = p(x)$. To solve inverse problems, however, we need to sample from the posterior $p(x_{\tau_0} \mid y)$.

There are lots of ways we might try to condition, differing sometimes only in emphasis.

1 Notation

First, let us fix notation. I’ll use a slight variant of the notation from the denoising diffusion SDE notebook. Because I need $t$ for other things, I use $\tau$ for pseudo-time, with the discrete grid $\tau_0 = 0 < \tau_1 < \cdots < \tau_T = 1$. We write $x_{\tau_i}$ for the state at time $\tau_i$.

For simplicity, we’ll assume a variance-preserving (VP) diffusion. We corrupt data $x_{\tau_0} = x_0 \sim p_{\mathrm{data}}(x)$ by the VP SDE

$$dx_\tau = -\tfrac{1}{2}\beta(\tau)\, x_\tau \, d\tau + \sqrt{\beta(\tau)}\, dW_\tau,$$

or in discrete form for each step $\tau_i \to \tau_{i+1}$:

$$x_{\tau_{i+1}} = \sqrt{1-\beta_{i+1}}\, x_{\tau_i} + \sqrt{\beta_{i+1}}\, \varepsilon, \qquad \varepsilon \sim \mathcal{N}(0, I).$$ Write $\Delta\tau := \tau_i - \tau_{i-1}$ (uniform unless stated otherwise).

We also define the convenience terms $\bar\alpha(\tau) = \exp\!\left(-\int_0^\tau \beta(s)\, ds\right)$, so $\sigma(\tau)^2 = 1 - \bar\alpha(\tau)$, and $\beta_i := 1 - \dfrac{\bar\alpha(\tau_i)}{\bar\alpha(\tau_{i-1})}$.
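
As a sanity check on these definitions, here is a minimal sketch of the schedule quantities in JAX. The linear $\beta(\tau)$ schedule and its endpoints are illustrative assumptions, not something fixed above.

```python
import jax.numpy as jnp

# Assumed for illustration: a linear noise schedule beta(tau) on [0, 1].
beta_min, beta_max = 0.1, 20.0
beta = lambda tau: beta_min + (beta_max - beta_min) * tau

T = 1000
taus = jnp.linspace(0.0, 1.0, T + 1)      # tau_0 = 0, ..., tau_T = 1
d_tau = taus[1] - taus[0]

# alpha_bar(tau) = exp(-int_0^tau beta(s) ds); the integral is analytic here.
int_beta = beta_min * taus + 0.5 * (beta_max - beta_min) * taus**2
alpha_bar = jnp.exp(-int_beta)
sigma2 = 1.0 - alpha_bar                  # sigma(tau)^2

# Discrete per-step beta_i = 1 - alpha_bar(tau_i) / alpha_bar(tau_{i-1}).
beta_i = 1.0 - alpha_bar[1:] / alpha_bar[:-1]

# On a fine grid, beta_i is close to beta(tau_i) * d_tau.
print(jnp.max(jnp.abs(beta_i - beta(taus[1:]) * d_tau)))
```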

1.1 Score Network & Training

We train $s_\theta(x, \tau)$ to approximate the time-indexed score $\nabla_{x_\tau}\log p_\tau(x_\tau)$ by minimizing the denoising loss

$$\mathcal{L}(\theta) = \mathbb{E}\,\big\|\nabla_{x_\tau}\log p_\tau(x_\tau) - s_\theta(x_\tau, \tau)\big\|^2,$$

where $x_\tau = \sqrt{\bar\alpha(\tau)}\, x_0 + \sqrt{1 - \bar\alpha(\tau)}\, \varepsilon$ with $\varepsilon \sim \mathcal{N}(0, I)$.

noise vs score

Equivalently, we can parametrize the network to predict the noise rather than the score. In a VP diffusion, the conditional mean of the noise $\varepsilon$ given the noisy point $x_\tau$ satisfies
$$\mathbb{E}[\varepsilon \mid x_\tau] = -\sigma(\tau)\, \nabla_{x_\tau}\log p_\tau(x_\tau), \qquad \text{equivalently} \qquad \nabla_{x_\tau}\log p_\tau(x_\tau) = -\frac{\mathbb{E}[\varepsilon \mid x_\tau]}{\sigma(\tau)}.$$
Hence predicting the noise (the “$\varepsilon_\theta$” parametrisation) and predicting the score (the “$s_\theta$” parametrisation) carry the same information, up to the known factor $-1/\sigma(\tau)$.
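
As a concrete sketch of the training objective: the tiny stand-in MLP, the linear $\beta$ schedule, and the uniform sampling of $\tau$ below are illustrative choices, not anything prescribed above. It regresses onto the conditional score $-\varepsilon/\sigma(\tau)$, the tractable denoising form of the loss.

```python
import jax
import jax.numpy as jnp

def alpha_bar(tau, beta_min=0.1, beta_max=20.0):
    # VP schedule: alpha_bar(tau) = exp(-int_0^tau beta(s) ds), linear beta assumed.
    return jnp.exp(-(beta_min * tau + 0.5 * (beta_max - beta_min) * tau**2))

def score_net(params, x, tau):
    # Crude illustrative stand-in for s_theta: one hidden layer, tau fed in additively.
    h = jnp.tanh(x @ params["W1"] + tau[:, None] * params["Wt"])
    return h @ params["W2"]

def dsm_loss(params, key, x0):
    """Denoising score matching: regress s_theta(x_tau, tau) onto the
    conditional score grad_x log p(x_tau | x0) = -eps / sigma(tau)."""
    k1, k2 = jax.random.split(key)
    tau = jax.random.uniform(k1, (x0.shape[0],), minval=1e-3, maxval=1.0)
    eps = jax.random.normal(k2, x0.shape)
    ab = alpha_bar(tau)[:, None]
    x_tau = jnp.sqrt(ab) * x0 + jnp.sqrt(1.0 - ab) * eps
    target = -eps / jnp.sqrt(1.0 - ab)          # conditional score
    return jnp.mean(jnp.sum((score_net(params, x_tau, tau) - target) ** 2, axis=-1))

# Usage sketch: d = 2 toy data, one loss-and-gradient evaluation.
key = jax.random.PRNGKey(0)
d, hidden = 2, 64
k1, k2, k3 = jax.random.split(key, 3)
params = {"W1": 0.1 * jax.random.normal(k1, (d, hidden)),
          "Wt": jnp.ones(hidden),
          "W2": 0.1 * jax.random.normal(k2, (hidden, d))}
x0 = jax.random.normal(k3, (128, d))
loss, grads = jax.value_and_grad(dsm_loss)(params, key, x0)
```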

1.2 Reverse-Time Sampling

To sample, we integrate the reverse SDE from $\tau_T = 1$ down to $\tau_0 = 0$:

$$dx_\tau = \left[-\tfrac{1}{2}\beta(\tau)\, x_\tau - \beta(\tau)\, \nabla_{x_\tau}\log p_\tau(x_\tau)\right] d\tau + \sqrt{\beta(\tau)}\, d\bar W_\tau,$$ using $s_\theta(x, \tau) \approx \nabla_x \log p_\tau(x)$. ($\bar W_\tau$ is a Wiener process running in reverse time, independent of $W_\tau$.)

Alternatively, we can use the deterministic / probability-flow ODE:

$$dx_\tau = \left[-\tfrac{1}{2}\beta(\tau)\, x_\tau - \tfrac{1}{2}\beta(\tau)\, \nabla_{x_\tau}\log p_\tau(x_\tau)\right] d\tau,$$

with initial draw $x_{\tau_T} \sim \mathcal{N}(0, I)$. This yields the same marginals $p_\tau$ without injecting extra noise at each step (which is insane and not at all obvious to me).

On our $\tau$ grid, the discrete-time DDPM reverse update becomes, for $i = T, \dots, 1$:

$$x_{\tau_{i-1}} = \frac{1}{\sqrt{1-\beta_i}}\Big(x_{\tau_i} + \beta_i\, s_\theta(x_{\tau_i}, \tau_i)\Big) + \sqrt{\tilde\beta_i}\,\zeta, \qquad \zeta \sim \mathcal{N}(0, I).$$ We introduced here $\tilde\beta_i := \beta_i \dfrac{1-\bar\alpha(\tau_{i-1})}{1-\bar\alpha(\tau_i)}$ (the “posterior” variance).

The DDIM variant removes $\zeta$ for a deterministic two-step inversion.
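
The reverse update as a loop, in a hedged sketch: `score_fn` stands in for a trained $s_\theta(x, \tau)$ and `alpha_bar` for the schedule values $\bar\alpha(\tau_i)$ on the grid.

```python
import jax
import jax.numpy as jnp

def ddpm_reverse_sample(key, score_fn, taus, alpha_bar, shape):
    """Ancestral DDPM sampling on the grid tau_0 < ... < tau_T.

    score_fn(x, tau) is assumed to approximate grad_x log p_tau(x);
    alpha_bar[i] = alpha_bar(tau_i); shape is the sample shape, e.g. (n, d).
    """
    T = len(taus) - 1
    beta = 1.0 - alpha_bar[1:] / alpha_bar[:-1]                  # beta_i, i = 1..T
    beta_tilde = beta * (1.0 - alpha_bar[:-1]) / (1.0 - alpha_bar[1:])
    key, sub = jax.random.split(key)
    x = jax.random.normal(sub, shape)                            # x_{tau_T} ~ N(0, I)
    for i in range(T, 0, -1):
        key, sub = jax.random.split(key)
        s = score_fn(x, taus[i])
        mean = (x + beta[i - 1] * s) / jnp.sqrt(1.0 - beta[i - 1])
        zeta = jax.random.normal(sub, shape)
        # beta_tilde_1 = 0 since alpha_bar(tau_0) = 1, so the last step is noiseless.
        x = mean + jnp.sqrt(beta_tilde[i - 1]) * zeta
    return x
```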

2 Generic conditioning

Here is a quick rewrite of Rozet and Louppe (). Note I have updated the notation to match the rest of this notebook.

We could train a conditional score network $s_\phi(x_{\tau_i}, \tau_i \mid y)$ to approximate the posterior score $\nabla_{x_{\tau_i}}\log p(x_{\tau_i} \mid y)$ and plug it into the reverse SDE. But this requires $(x, y)$ pairs during training and re-training whenever the observation model $p(y \mid x)$ changes.

Instead, many have observed (; ; ; ; ) that by Bayes’ rule the posterior score decomposes as
$$\nabla_{x_{\tau_i}}\log p(x_{\tau_i} \mid y) = \nabla_{x_{\tau_i}}\log p(x_{\tau_i}) + \nabla_{x_{\tau_i}}\log p(y \mid x_{\tau_i}).$$
Since the prior score $\nabla_{x_{\tau_i}}\log p(x_{\tau_i})$ is well-approximated by the unconditional score network $s_\theta(x_{\tau_i}, \tau_i)$, the remaining task is to estimate the likelihood score $\nabla_{x_{\tau_i}}\log p(y \mid x_{\tau_i})$.

Assuming a differentiable measurement operator $\mathcal{A}$ and Gaussian observations $p(y \mid x) = \mathcal{N}(y; \mathcal{A}(x), \Sigma_y)$, Chung et al. () propose approximating
$$p(y \mid x_{\tau_i}) = \int p(y \mid x)\, p(x \mid x_{\tau_i})\, dx \approx \mathcal{N}\!\big(y;\ \mathcal{A}(\hat x(x_{\tau_i})),\ \Sigma_y\big),$$
where the denoised mean $\hat x(x_{\tau_i}) = \mathbb{E}[x_0 \mid x_{\tau_i}]$ is given by Tweedie’s formula (; ):
$$\mathbb{E}[x_0 \mid x_\tau] = \frac{x_\tau + (1 - \bar\alpha(\tau))\, \nabla_{x_\tau}\log p_\tau(x_\tau)}{\sqrt{\bar\alpha(\tau)}} \approx \frac{x_\tau + (1 - \bar\alpha(\tau))\, s_\theta(x_\tau, \tau)}{\sqrt{\bar\alpha(\tau)}}.$$
Because the log-likelihood of a multivariate Gaussian is analytic and $s_\theta(x_{\tau_i}, \tau_i)$ is differentiable, we can compute $\nabla_{x_{\tau_i}}\log p(y \mid x_{\tau_i})$ in a zero-shot fashion, without training any network beyond the unconditional score model $s_\theta$.

Note that this last assumption is strong; probably too strong for the models I would bother using diffusions on. Don’t worry, we can get fancier and more effective.
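
To make the zero-shot recipe concrete, here is a minimal sketch for the special case of a linear measurement operator $\mathcal{A}(x) = Ax$ with isotropic noise $\Sigma_y = \sigma_y^2 I$ (both simplifying assumptions); `score_fn` is again an assumed trained score network, and autodiff supplies the Jacobian of the Tweedie estimate through it.

```python
import jax
import jax.numpy as jnp

def tweedie_denoise(score_fn, x_tau, tau, alpha_bar_tau):
    # E[x_0 | x_tau] = (x_tau + (1 - alpha_bar) * score) / sqrt(alpha_bar)
    return (x_tau + (1.0 - alpha_bar_tau) * score_fn(x_tau, tau)) / jnp.sqrt(alpha_bar_tau)

def likelihood_score(score_fn, x_tau, tau, alpha_bar_tau, A, y, sigma_y):
    """Approximate grad_{x_tau} log p(y | x_tau): evaluate the Gaussian
    log-likelihood at the Tweedie estimate and differentiate through it
    (including through the score network)."""
    def log_lik(x):
        x0_hat = tweedie_denoise(score_fn, x, tau, alpha_bar_tau)
        resid = y - A @ x0_hat
        return -0.5 * jnp.sum(resid**2) / sigma_y**2
    return jax.grad(log_lik)(x_tau)

# Toy usage with a stand-in "score network" for a standard-normal prior.
score_fn = lambda x, tau: -x
x_tau = jnp.array([0.5, -1.0])
A = jnp.array([[1.0, 0.0]])
y = jnp.array([2.0])
g = likelihood_score(score_fn, x_tau, tau=0.5, alpha_bar_tau=0.4, A=A, y=y, sigma_y=0.1)
# The guided (posterior) score is then score_fn(x_tau, tau) + g.
```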

3 Ensemble Score Conditioning

A simple trick that sometimes works () but is biased. TBC.

4 Sequential Monte Carlo

This seems to be SOTA?

LLM-aided summary of Wu et al. ():

We recall standard SMC / Particle Filtering:

  1. Goal: sample from a sequence of distributions $\{\nu_i\}_{i=0}^{T}$, ending in some target $\nu_0$.

  2. Particles: maintain $K$ samples (particles) $\{x_i^k\}_{k=1}^{K}$ with weights $\{w_i^k\}$.

  3. Iterate for $i = T, \dots, 0$:

    • Resample particles according to $w_{i+1}^k$ to focus on high-probability regions (see the resampling sketch after this list).

    • Propose $x_i^k \sim r_i(x_i \mid x_{i+1}^k)$.

    • Weight each by

      $$w_i^k = \underbrace{\frac{\text{target density at } x_i^k}{\text{proposal density at } x_i^k}}_{\text{importance weight}}.$$

  4. Convergence: as $K \to \infty$, the weighted ensemble approximates the true target exactly.
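
The only non-obvious machinery in that recipe is the resampling step. Here is a minimal systematic resampler (one common choice; multinomial resampling also works), offered as a sketch rather than anything the references prescribe.

```python
import jax
import jax.numpy as jnp

def systematic_resample(key, log_w, particles):
    """Resample particles in proportion to their (log-)weights.

    log_w: (K,) unnormalized log-weights; particles: (K, ...) array.
    Returns an equally-weighted, resampled particle set.
    """
    K = log_w.shape[0]
    w = jax.nn.softmax(log_w)                         # normalize weights
    cdf = jnp.cumsum(w)
    # One uniform offset plus K evenly spaced points: lower variance than multinomial.
    u = (jax.random.uniform(key) + jnp.arange(K)) / K
    idx = jnp.searchsorted(cdf, u)
    idx = jnp.minimum(idx, K - 1)                     # guard against float edge cases
    return particles[idx]

# Usage: keep mostly the particles near the high-weight region.
key = jax.random.PRNGKey(0)
particles = jnp.linspace(-2.0, 2.0, 5)[:, None]       # K = 5 one-dimensional particles
log_w = -0.5 * (particles[:, 0] - 1.0) ** 2           # favour particles near 1
resampled = systematic_resample(key, log_w, particles)
```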

In a diffusion model, we can view the reverse noising chain

$$p_\theta(x_{0:T}) = p(x_{\tau_T}) \prod_{i=1}^{T} p_\theta(x_{\tau_{i-1}} \mid x_{\tau_i})$$

as exactly such a sequential model over $x_{\tau_T} \to \dots \to x_{\tau_0}$, where $\nu_i$ is the marginal of the joint $p_\theta(x_{0:T})$ at pseudo-time $\tau_i$.

To sample conditionally from $p_\theta(x_0 \mid y)$, we treat the conditioning as part of the final target and apply SMC. However, if we naïvely run SMC with the unconditional transition kernels

$$r_i(x_{\tau_{i-1}} \mid x_{\tau_i}) = p_\theta(x_{\tau_{i-1}} \mid x_{\tau_i})$$

and only tack on a final weight $w_0 \propto p(y \mid x_0)$, we need an astronomical number of particles, since most will get near-zero weight whenever the prior $p_\theta(x_0)$ places little mass where the conditional $p_\theta(x_0 \mid y)$ concentrates.

Twisting is a classic SMC technique which addresses this problem: it introduces a sequence of auxiliary functions $\{\tilde p_\theta(y \mid x_{\tau_i})\}_{i=0}^{T}$ to re-weight proposals at every time step, not just at the end. The optimal choice at step $i$ would be

$$r_i(x_{\tau_{i-1}} \mid x_{\tau_i}) \propto p_\theta(x_{\tau_{i-1}} \mid x_{\tau_i})\, p_\theta(y \mid x_{\tau_{i-1}}),$$

which, if we could sample from it, would make SMC exact with a single particle. However, $p_\theta(y \mid x_{\tau_{i-1}})$ is itself intractable.

TDS (the Twisted Diffusion Sampler) replaces the optimal twisting $p_\theta(y \mid x_{\tau_i})$ with a tractable surrogate based on the denoising network $\hat x_0(x_{\tau_i})$, which estimates the denoised $x_0$ from the noisy $x_{\tau_i}$:

$$\tilde p_\theta(y \mid x_{\tau_i}) = p\big(y \mid \hat x_0(x_{\tau_i})\big),$$

i.e. we evaluate the observation likelihood at the diffusion denoiser’s one-step posterior-mean estimate $\hat x_0$. Since $\hat x_0(x_\tau) \approx \mathbb{E}[x_0 \mid x_\tau]$, this becomes increasingly accurate as $\tau \to 0$. Define $\sigma_i^2 := \sigma(\tau_i)^2 = 1 - \bar\alpha(\tau_i)$.

  1. Twisted proposal from $\tau_i \to \tau_{i-1}$:

    $$\tilde r_i(x_{\tau_{i-1}} \mid x_{\tau_i}, y) = \mathcal{N}\Big(x_{\tau_{i-1}};\ \underbrace{x_{\tau_i} + \sigma_i^2\, s_i(x_{\tau_i}, y)}_{\text{“guided” drift mean}},\ \sigma_i^2 I\Big),$$

    where

    $$s_i(x_{\tau_i}, y) = s_\theta(x_{\tau_i}, \tau_i) + \nabla_{x_{\tau_i}}\log \tilde p_\theta(y \mid x_{\tau_i}).$$

  2. Twisted weight for each particle:

    $$w_{\tau_{i-1}} = \frac{p_\theta(x_{\tau_{i-1}} \mid x_{\tau_i})\ \tilde p_\theta(y \mid x_{\tau_{i-1}})}{\tilde p_\theta(y \mid x_{\tau_i})\ \tilde r_i(x_{\tau_{i-1}} \mid x_{\tau_i}, y)}.$$

  3. Resample (“Twisted Sister”): resample particles in proportion to their twisted weights $w_{\tau_{i-1}}$.

This corrects for using the surrogate twisting and ensures asymptotic exactness as $K \to \infty$.

In early steps ($i \approx T$) the surrogate $\tilde p_\theta(y \mid x_{\tau_i})$ may be very broad, so the twisting is mild. In late steps ($i \approx 0$), $\hat x_0(x_{\tau_i})$ is accurate, so $\tilde p_\theta(y \mid x_{\tau_i}) \approx p_\theta(y \mid x_{\tau_i})$ and the proposals are nearly optimal. Resampling in between keeps the particle cloud focused on regions consistent with both the diffusion prior and the conditioning $y$.

In practice, a surprisingly small number of particles suffices; even 2–8 often outperform heuristic conditional samplers (like plain classifier guidance or “replacement” inpainting).
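
Here is a hedged sketch of one twisted step (propose, then weight) for the same linear-Gaussian observation model as in the generic-conditioning section; `score_fn`, the linear `A`, and the isotropic observation noise are illustrative assumptions, and I take the unconditional reverse kernel to have the same Gaussian form as the proposal but with the unguided score.

```python
import jax
import jax.numpy as jnp

def gaussian_logpdf(x, mean, var):
    # Isotropic Gaussian log-density, summed over the last axis.
    return -0.5 * jnp.sum((x - mean) ** 2 / var + jnp.log(2 * jnp.pi * var), axis=-1)

def make_log_twist(score_fn, A, y, sigma_y, alpha_bar_i, tau_i):
    """log p~(y | x_tau) = log N(y; A x0_hat(x_tau), sigma_y^2 I), x0_hat by Tweedie."""
    def log_twist(x):
        x0_hat = (x + (1 - alpha_bar_i) * score_fn(x, tau_i)) / jnp.sqrt(alpha_bar_i)
        return gaussian_logpdf(y, A @ x0_hat, sigma_y**2)
    return log_twist

def twisted_step(key, particles, log_twist_i, log_twist_im1, score_fn, tau_i, sigma2_i):
    """One twisted step tau_i -> tau_{i-1}: guided proposal, then incremental log-weights."""
    # Guided score s_i = s_theta + grad log p~(y | x_tau_i), per particle.
    guided = jax.vmap(lambda x: score_fn(x, tau_i) + jax.grad(log_twist_i)(x))(particles)
    mean_prop = particles + sigma2_i * guided
    new = mean_prop + jnp.sqrt(sigma2_i) * jax.random.normal(key, particles.shape)
    # Unconditional reverse kernel: same Gaussian form, unguided score.
    mean_uncond = particles + sigma2_i * jax.vmap(lambda x: score_fn(x, tau_i))(particles)
    log_w = (gaussian_logpdf(new, mean_uncond, sigma2_i)       # log p_theta(x_{i-1} | x_i)
             + jax.vmap(log_twist_im1)(new)                    # log p~(y | x_{i-1})
             - jax.vmap(log_twist_i)(particles)                # log p~(y | x_i)
             - gaussian_logpdf(new, mean_prop, sigma2_i))      # log r~(x_{i-1} | x_i, y)
    return new, log_w

# Toy usage with a stand-in score network (standard-normal prior):
score_fn = lambda x, tau: -x
A, y, sigma_y = jnp.array([[1.0, 0.0]]), jnp.array([2.0]), 0.1
lt_i = make_log_twist(score_fn, A, y, sigma_y, alpha_bar_i=0.5, tau_i=0.6)
lt_im1 = make_log_twist(score_fn, A, y, sigma_y, alpha_bar_i=0.6, tau_i=0.5)
key = jax.random.PRNGKey(0)
particles = jax.random.normal(key, (8, 2))                      # K = 8, d = 2
new, log_w = twisted_step(key, particles, lt_i, lt_im1, score_fn, tau_i=0.6, sigma2_i=0.05)
# Resample according to log_w (e.g. with the systematic resampler above), then iterate.
```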

5 (Conditional) Schrödinger Bridge

Shi et al. () introduced the Conditional Schrödinger Bridge (CSB), which is a natural extension of the Schrödinger Bridge.

We seek a path measure $\pi(x_{0:T} \mid y)$ minimising

$$\mathrm{KL}\big(\pi(\cdot \mid y)\ \big\|\ p(x_{0:T})\big)$$

subject to

  • Start at $\tau_T$: $\pi_{\tau_T}(x_{\tau_T} \mid y) = \mathcal{N}(x_{\tau_T}; 0, I)$,
  • End at $\tau_0$: $\pi_{\tau_0}(x_0 \mid y) = p(x_0 \mid y)$.

Here $p(x_{0:T}) = p(x_{\tau_T}) \prod_i p(x_{\tau_{i-1}} \mid x_{\tau_i})$ is the unconditional forward noising chain.

5.1 Amortized IPF Algorithm

We parameterize two families of drift networks that take y as input:

$$B_i^n(x, y) \quad\text{and}\quad F_i^n(x, y) \qquad \text{for } i = 1, \dots, T,\ n = 0, \dots, L.$$

We alternate two KL-projection steps:

  1. Backward half-step ($\tau_T \to \tau_0$, enforce the prior):

    $$\pi^{2n+1}(\cdot \mid y) = \arg\min_\pi\ \mathrm{KL}\big(\pi(\cdot \mid y)\ \big\|\ \pi^{2n}(\cdot \mid y)\big) \quad \text{s.t.} \quad \pi_{\tau_T}(x_{\tau_T} \mid y) = \mathcal{N}(0, I).$$

    Fit $B_i^{n+1}(x, y)$ by matching the backward SDE induced by $\pi^{2n+1}$.

  2. Forward half-step ($\tau_0 \to \tau_T$, enforce the posterior):

    $$\pi^{2n+2}(\cdot \mid y) = \arg\min_\pi\ \mathrm{KL}\big(\pi(\cdot \mid y)\ \big\|\ \pi^{2n+1}(\cdot \mid y)\big) \quad \text{s.t.} \quad \pi_{\tau_0}(x_0 \mid y) = p(x_0 \mid y).$$

    Fit $F_i^{n+1}(x, y)$ by matching the forward SDE induced by $\pi^{2n+2}$.

After $L$ IPF iterations we obtain networks $B_i^L(x, y), F_i^L(x, y)$ whose composed bridge $\pi(\cdot \mid y)$ matches both endpoint constraints for any $y$ (exactly, in the limit of converged IPF).

We can imagine that the backward step “pins” the Gaussian prior, while the forward step “pins” the conditional $p(x_0 \mid y)$. We now define the composed conditional bridge as the concatenation of the two:

  1. Initial draw at τT:

    $$x_{\tau_T} \sim p(x_{\tau_T}) = \mathcal{N}(0, I).$$

  2. Backward transitions for $i = T, T-1, \dots, 1$:

    $$x_{\tau_{i-1}} \sim \mathcal{N}\big(x_{\tau_i} + \Delta\tau\, B_i^L(x_{\tau_i}, y),\ \Delta\tau\, I\big) =: \pi_{\mathrm{back},i}(x_{\tau_{i-1}} \mid x_{\tau_i}, y).$$

  3. Forward transitions for $i = 1, 2, \dots, T$:

    $$x_{\tau_i} \sim \mathcal{N}\big(x_{\tau_{i-1}} + \Delta\tau\, F_i^L(x_{\tau_{i-1}}, y),\ \Delta\tau\, I\big) =: \pi_{\mathrm{for},i}(x_{\tau_i} \mid x_{\tau_{i-1}}, y).$$

Hence the full path-measure is

$$\pi(x_{0:T} \mid y) = p(x_{\tau_T}) \prod_{i=T}^{1} \pi_{\mathrm{back},i}(x_{\tau_{i-1}} \mid x_{\tau_i}, y) \times \prod_{i=1}^{T} \pi_{\mathrm{for},i}(x_{\tau_i} \mid x_{\tau_{i-1}}, y),$$

which by construction satisfies both endpoint constraints for any y. CSB finds the “smoothest” (in the sense of being entropy-regularized) stochastic flow between noise and data-posterior. This seems intuitively fine, but my reasoning here is vibes-based. I need to read the paper better to get a proper grip on it.

5.2 Sampling with the Learned Conditional Bridge

To draw $x_0 \sim p(x_0 \mid y)$:

  1. Initialize $x_{\tau_T} \sim \mathcal{N}(0, I)$.

  2. Backward integrate the learned SDE (or its probability-flow ODE) from $\tau_T$ down to $\tau_0$:

    $$dx_\tau = B_i^L(x_\tau, y)\, d\tau + dW_\tau \qquad \text{for } \tau \in [\tau_{i-1}, \tau_i],$$
    whose Euler–Maruyama discretization is the backward transition $\pi_{\mathrm{back},i}$ above.

  3. (Optionally) Forward integrate with $F_i^L(x, y)$ to refine or to compute likelihoods.

Because $B_i^L$ and $F_i^L$ both depend on $y$, the same trained model applies to arbitrary observations.
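
The backward pass of this sampler is just Euler–Maruyama with the learned, $y$-dependent drift. A sketch, where `B(x, y, tau)` stands in for the trained $B_i^L(x, y)$ (a single network evaluated at $\tau_i$, rather than one network per step):

```python
import jax
import jax.numpy as jnp

def csb_backward_sample(key, B, y, taus, shape):
    """Integrate the learned backward bridge from tau_T down to tau_0.

    B(x, y, tau) stands in for the trained drift B_i^L(x, y); each step is the
    Gaussian transition N(x + d_tau * B, d_tau * I) from the composed bridge.
    """
    T = len(taus) - 1
    key, sub = jax.random.split(key)
    x = jax.random.normal(sub, shape)                  # x_{tau_T} ~ N(0, I)
    for i in range(T, 0, -1):
        d_tau = taus[i] - taus[i - 1]
        key, sub = jax.random.split(key)
        noise = jax.random.normal(sub, shape)
        x = x + d_tau * B(x, y, taus[i]) + jnp.sqrt(d_tau) * noise
    return x                                           # approximately ~ p(x_0 | y)

# Toy usage with a stand-in drift (a real B would be a trained, y-conditioned net):
B = lambda x, y, tau: -(x - y)                          # pulls samples toward y
key = jax.random.PRNGKey(0)
taus = jnp.linspace(0.0, 1.0, 101)
x0 = csb_backward_sample(key, B, y=jnp.ones(2), taus=taus, shape=(16, 2))
```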

Amortized CSB encodes the observation $y$ directly into every drift net $B_i(x, y)$ and $F_i(x, y)$. There is no per-instance retraining and no importance weighting: once the joint IPF training over $(x_0, y)$ pairs is done, we can plug in any new $y$ and run the sampler.

6 Computational Trade-offs Of Those Last Two

| Aspect | Twisted SMC (TDS) | Amortized CSB |
|---|---|---|
| Training | Only train the denoiser $\hat x_0$ | Train $2 \times L$ drift nets on $(x, y)$ |
| Inference cost | $K$ particles $\times$ $T$ steps | Single trajectory over $T$ steps |
| Exactness | Exact as $K \to \infty$ | Exact if IPF is perfectly trained |

7 Consistency models

Song et al. ()

8 Inpainting

If we want coherence with part of an existing image, we call that inpainting and there are specialized methods for it (; ; ; ; ; ; ).

9 Reconstruction/inversion

Perturbed and partial observations; misc methods therefor (; ; ; ; ; ; ; ; ; ).

10 References

Adam, Coogan, Malkin, et al. 2022. “Posterior Samples of Source Galaxies in Strong Gravitational Lenses with Score-Based Priors.”
Ajay, Du, Gupta, et al. 2023. “Is Conditional Generative Modeling All You Need for Decision-Making?” In.
Albergo, Goldstein, Boffi, et al. 2023. “Stochastic Interpolants with Data-Dependent Couplings.”
Bao, Feng, Cao, Meir, et al. 2016. “A First Order Scheme for Backward Doubly Stochastic Differential Equations.” SIAM/ASA Journal on Uncertainty Quantification.
Bao, Tianshu, Chen, Johnson, et al. 2022. “Physics Guided Neural Networks for Spatio-Temporal Super-Resolution of Turbulent Flows.” In Proceedings of the Thirty-Eighth Conference on Uncertainty in Artificial Intelligence.
Bao, Feng, Chipilski, Liang, et al. 2024. “Nonlinear Ensemble Filtering with Diffusion Models: Application to the Surface Quasi-Geostrophic Dynamics.”
Bao, Feng, Zhang, and Zhang. 2024a. “An Ensemble Score Filter for Tracking High-Dimensional Nonlinear Dynamical Systems.”
———. 2024b. “A Score-Based Filter for Nonlinear Data Assimilation.” Journal of Computational Physics.
Choi, Kim, Jeong, et al. 2021. “ILVR: Conditioning Method for Denoising Diffusion Probabilistic Models.” In.
Chou, Bahat, and Heide. 2023. “Diffusion-SDF: Conditional Generative Modeling of Signed Distance Functions.”
Chung, Kim, Mccann, et al. 2023. “Diffusion Posterior Sampling for General Noisy Inverse Problems.” In.
Efron. 2011. “Tweedie’s Formula and Selection Bias.” Journal of the American Statistical Association.
Grechka, Couairon, and Cord. 2024. “GradPaint: Gradient-Guided Inpainting with Diffusion Models.” Computer Vision and Image Understanding.
Haitsiukevich, Poyraz, Marttinen, et al. 2024. “Diffusion Models as Probabilistic Neural Operators for Recovering Unobserved States of Dynamical Systems.”
Heng, De Bortoli, Doucet, et al. 2022. “Simulating Diffusion Bridges with Score Matching.”
Kawar, Elad, Ermon, et al. 2022. “Denoising Diffusion Restoration Models.” Advances in Neural Information Processing Systems.
Kawar, Vaksman, and Elad. 2021. “SNIPS: Solving Noisy Inverse Problems Stochastically.” In.
Kim, and Ye. 2021. “Noise2Score: Tweedie’s Approach to Self-Supervised Image Denoising Without Clean Images.” In.
Liang, Tran, Bao, et al. 2025. “Ensemble Score Filter with Image Inpainting for Data Assimilation in Tracking Surface Quasi-Geostrophic Dynamics with Partial Observations.”
Lipman, Chen, Ben-Hamu, et al. 2023. “Flow Matching for Generative Modeling.” In.
Liu, Niepert, and Broeck. 2023. “Image Inpainting via Tractable Steering of Diffusion Models.”
Lugmayr, Danelljan, Romero, et al. 2022. “RePaint: Inpainting Using Denoising Diffusion Probabilistic Models.” In 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR).
Nair, Mei, and Patel. 2023. “AT-DDPM: Restoring Faces Degraded by Atmospheric Turbulence Using Denoising Diffusion Probabilistic Models.” In.
Peng, Qiu, Wynne, et al. 2024. “CBCT-Based Synthetic CT Image Generation Using Conditional Denoising Diffusion Probabilistic Model.” Medical Physics.
Rozet, and Louppe. 2023a. “Score-Based Data Assimilation.”
———. 2023b. “Score-Based Data Assimilation for a Two-Layer Quasi-Geostrophic Model.”
Sharrock, Simons, Liu, et al. 2022. “Sequential Neural Score Estimation: Likelihood-Free Inference with Conditional Score Based Diffusion Models.”
Shi, Bortoli, Deligiannidis, et al. 2022. “Conditional Simulation Using Diffusion Schrödinger Bridges.”
Song, Dhariwal, Chen, et al. 2023. “Consistency Models.”
Song, Shen, Xing, et al. 2022. “Solving Inverse Problems in Medical Imaging with Score-Based Generative Models.” In.
Song, Sohl-Dickstein, Kingma, et al. 2022. “Score-Based Generative Modeling Through Stochastic Differential Equations.” In.
Sui, Ma, Zhang, et al. 2024. “Adaptive Semantic-Enhanced Denoising Diffusion Probabilistic Model for Remote Sensing Image Super-Resolution.”
Tzen, and Raginsky. 2019. “Theoretical Guarantees for Sampling and Inference in Generative Models with Latent Diffusions.” In Proceedings of the Thirty-Second Conference on Learning Theory.
Wu, Trippe, Naesseth, et al. 2024. “Practical and Asymptotically Exact Conditional Sampling in Diffusion Models.” In.
Xie, and Li. 2022. “Measurement-Conditioned Denoising Diffusion Probabilistic Model for Under-Sampled Medical Image Reconstruction.” In Medical Image Computing and Computer Assisted Intervention – MICCAI 2022.
Xu, Ma, and Zhu. 2023. “Dual-Diffusion: Dual Conditional Denoising Diffusion Probabilistic Models for Blind Super-Resolution Reconstruction in RSIs.” IEEE Geoscience and Remote Sensing Letters.
Zamir, Arora, Khan, et al. 2021. “Multi-Stage Progressive Image Restoration.”
Zhang, Ji, Zhang, et al. 2023. “Towards Coherent Image Inpainting Using Denoising Diffusion Implicit Models.” In Proceedings of the 40th International Conference on Machine Learning. ICML’23.
Zhao, Bai, Zhu, et al. 2023. “DDFM: Denoising Diffusion Model for Multi-Modality Image Fusion.” In.