I want to use Gibbs sampling in Turing.jl (Julia) and, as an example, sample a model where the variance is updated with HMC() and the mean with PG(). But if I instead sample the whole model with PG() alone or with HMC() alone, the outcomes do not agree: the posterior for s under Gibbs sampling does not match the one from plain HMC, and the posterior for m under Gibbs sampling does not match the one from plain PG. Why is that?
My code:
using StatsPlots, Turing, Random
# set a seed
Random.seed!(153)
# Define a strange model.
@model function gdemo(x)
    s ~ InverseGamma(2, 3)      # variance
    m ~ Normal(0, sqrt(s))      # latent mean
    bumps = sin(m) + cos(m)
    m = m + 4 * bumps           # shift the mean before it is used for x
    for i in eachindex(x)
        x[i] ~ Normal(m, sqrt(s))
    end
    return s, m
end;
# define data points
x = [1.2, 0.9, 9.4, 2.7, 0.6]
model = gdemo(x)
# sampling
c_gibbsPG = sample(model, Gibbs(HMC(0.01, 5, :s), PG(20, :m)), 1000)
c_PG = sample(model, PG(20), 1000)
c_HMC = sample(model, HMC(0.01, 5), 1000)
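Note that the chains store the sampled latent m, not the shifted mean m + 4*bumps that the model returns. A hedged sketch of how one could recover the returned (s, m) pairs per draw, assuming the chains above exist and that generated_quantities is available in your Turing version:

```julia
# Rerun the model body for every stored draw to get its return value,
# i.e. (s, transformed m) rather than the raw latent m in the chain.
gq = generated_quantities(model, c_gibbsPG)
transformed_m = [t[2] for t in gq]
```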
# plotting
density(c_gibbsPG; legend=:best, label="Gibbs(HMC :s, PG :m)", c=:blue)
density!(c_PG; legend=:best, label="PG()", c=:green)
density!(c_HMC; legend=:best, label="HMC()", c=:red)
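Since density on a whole chain overlays every parameter at once, a per-parameter comparison may show the mismatch more clearly. A sketch, assuming the chains above have been drawn and indexing them by parameter name:

```julia
# Compare the marginal posteriors parameter by parameter:
# s: Gibbs (HMC component) vs. plain HMC
density(c_gibbsPG[:s]; label="Gibbs :s", c=:blue)
density!(c_HMC[:s]; label="HMC :s", c=:red)

# m: Gibbs (PG component) vs. plain PG
density(c_gibbsPG[:m]; label="Gibbs :m", c=:blue)
density!(c_PG[:m]; label="PG :m", c=:green)
```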