We can generalize Gaussian belief propagation to more general elliptical laws by working with Mahalanobis distance without presuming a Gaussian distribution (Agarwal et al. 2013; Davison and Ortiz 2019), giving a kind of elliptical belief propagation.
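As a minimal sketch of the quantity involved (function names are mine, not from the cited papers): the Mahalanobis distance of a residual under a covariance is all an elliptical law needs, since the density depends on the residual only through this scalar.

```python
import numpy as np

def mahalanobis(r, Sigma):
    """Mahalanobis distance sqrt(r^T Sigma^{-1} r) of residual r
    under covariance Sigma; an elliptical density depends on r
    only through this scalar."""
    return float(np.sqrt(r @ np.linalg.solve(Sigma, r)))

r = np.array([1.0, 2.0])
Sigma = np.array([[2.0, 0.0],
                  [0.0, 0.5]])
d = mahalanobis(r, Sigma)  # sqrt(1/2 + 4*2) = sqrt(8.5)
```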
If we use a robust Huber loss in place of the Gaussian log-likelihood, the resulting construction is usually referred to as a robust factor, or as dynamic covariance scaling (Agarwal et al. 2013; Davison and Ortiz 2019). The nice thing here is that the transition from quadratic to linear loss gives us an estimate of which observations are outliers.
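A hedged sketch of that outlier-flagging idea: the Huber loss induces an IRLS weight that is 1 in the quadratic region and shrinks in the linear region, while dynamic covariance scaling uses the closed-form factor $s = \min(1, 2\Phi/(\Phi + \chi^2))$ from Agarwal et al. (2013). In either case, a weight below 1 marks a residual as a likely outlier. Function names and the threshold defaults here are mine.

```python
def huber_weight(r, delta=1.345):
    """IRLS weight for the Huber loss: 1 in the quadratic region
    (|r| <= delta), delta/|r| in the linear (outlier) region."""
    a = abs(r)
    return 1.0 if a <= delta else delta / a

def dcs_scale(chi2, phi=1.0):
    """Dynamic covariance scaling factor from Agarwal et al. (2013):
    s = min(1, 2*phi / (phi + chi2)). A value below 1 flags the
    measurement as a likely outlier."""
    return min(1.0, 2.0 * phi / (phi + chi2))

# weights below 1 mark the quadratic-to-linear transition
residuals = [0.5, 1.0, 10.0]
weights = [huber_weight(r) for r in residuals]
```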
Surely this has been done already? Certainly there is a special case in the t-process. It is mentioned, I think, in Lan et al. (2006) and possibly Proudler et al. (2007), although the latter seems to be something more ad hoc.