GaussianEP.Term — Type

This type represents an interaction term in the energy function of the form

$β_i \left(\frac12 x' A_i x + x' y_i + c_i\right) + M_i \log β_i$

The complete energy function is given by

$∑_i β_i (\frac12 x' A_i x + x' y_i + c_i) + M_i \log β_i$

and is represented by a Vector{Term}. Note that $c$ and $M$ are only needed for parameter learning.
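To make the energy assembly concrete, here is an illustrative sketch with a minimal stand-in for the package's Term type. The struct and field names (`ToyTerm`, `A`, `y`, `c`, `β`, `M`) are hypothetical, chosen only to mirror the formula above; they are not GaussianEP's actual internals.

```julia
# Hypothetical stand-in for GaussianEP's Term, illustrating how the total
# energy E(x) = Σᵢ βᵢ(½ x'Aᵢx + x'yᵢ + cᵢ) + Mᵢ log βᵢ is assembled.
struct ToyTerm
    A::Matrix{Float64}
    y::Vector{Float64}
    c::Float64
    β::Float64
    M::Float64
end

# Energy contribution of a single term.
energy(t::ToyTerm, x) = t.β * (0.5 * x' * t.A * x + x' * t.y + t.c) + t.M * log(t.β)

# Complete energy: sum over all interaction terms.
energy(terms::Vector{ToyTerm}, x) = sum(energy(t, x) for t in terms)
```

For example, two identity-matrix terms with β = 2 and x = [1, 1] each contribute β · ½ x'x = 2, for a total energy of 4.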

GaussianEP.expectation_propagation — Method
expectation_propagation(H::Vector{Term{T}}, P0::Vector{Prior}, F::AbstractMatrix{T} = zeros(0,length(P0)), d::Vector{T} = zeros(size(F,1));
    maxiter::Int = 2000,
    callback = (x...)->nothing,
    # state::EPState{T} = EPState{T}(sum(size(F)), size(F)[2]),
    damp::T = 0.9,
    epsconv::T = 1e-6,
    maxvar::T = 1e50,
    minvar::T = 1e-50,
    inverter::Function = inv) where {T <: Real, P <: Prior}

EP for approximate inference of

$P(\mathbf{x}) = \frac{1}{Z} \exp\left(-\frac12 \mathbf{x}' A \mathbf{x} + \mathbf{x}' \mathbf{y}\right) \times \prod_i p_i(x_i)$

Arguments:

  • H::Vector{Term{T}}: Gaussian terms (involving only $\mathbf{x}$)
  • P0::Vector{Prior}: prior terms (involving $\mathbf{x}$ and $\mathbf{y}$)
  • F::AbstractMatrix{T}, d::Vector{T}: if supplied, the unknown becomes $(\mathbf{x}, \mathbf{y})^T$ and a term $\delta(F \mathbf{x} + \mathbf{d} - \mathbf{y})$ is added.

Optional named arguments:

  • maxiter::Int = 2000: maximum number of iterations
  • callback = (x...)->nothing: your own function to report progress, see ProgressReporter
  • state::EPState{T} = EPState{T}(sum(size(F)), size(F)[2]): If supplied, all internal state is updated here
  • damp::T = 0.9: damping parameter
  • epsconv::T = 1e-6: convergence criterion
  • maxvar::T = 1e50: maximum variance
  • minvar::T = 1e-50: minimum variance
  • inverter = inv: matrix inversion method

Example

julia> t=Term(zeros(2,2),zeros(2),1.0)
Term{Float64}([0.0 0.0; 0.0 0.0], [0.0, 0.0], 0.0, 1.0, 0.0, 0)

julia> P=[IntervalPrior(i...) for i in [(0,1),(0,1),(-2,2)]]
3-element Array{IntervalPrior{Int64},1}:
 IntervalPrior{Int64}(0, 1)
 IntervalPrior{Int64}(0, 1)
 IntervalPrior{Int64}(-2, 2)

julia> F=[1.0 -1.0];

julia> res = expectation_propagation([t], P, F)
GaussianEP.EPOut{Float64}([0.499997, 0.499997, 3.66527e-15], [0.083325, 0.083325, 0.204301], [0.489862, 0.489862, 3.66599e-15], [334.018, 334.018, 0.204341], :converged, EPState{Float64}([9.79055 -0.00299477; -0.00299477 9.79055], [0.0, 0.0], [0.102139 3.12427e-5; 3.12427e-5 0.102139], [0.489862, 0.489862], [0.499997, 0.499997, 3.66527e-15], [0.083325, 0.083325, 0.204301], [0.490876, 0.490876, -1.86785e-17], [0.489862, 0.489862, 3.66599e-15], [0.100288, 0.100288, 403.599], [334.018, 334.018, 0.204341]))
GaussianEP.gradient — Method
gradient(p0::T, μ, σ) -> nothing

Update the parameters of p0 with a single gradient learning step (the learning rate is stored in p0).
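As a conceptual toy (not GaussianEP's actual update rule), a single gradient step of this shape can be sketched as follows. The `ToyPrior` type, its `θ`/`η` fields, and the squared moment-mismatch objective are all assumptions made for illustration; only the `gradient(p0, μ, σ) -> nothing` calling convention is taken from the signature above.

```julia
# Hypothetical prior with one learnable location parameter θ and a stored
# learning rate η, mirroring the convention that the rate lives in the prior.
mutable struct ToyPrior
    θ::Float64   # learnable parameter
    η::Float64   # learning rate
end

# One gradient step on the mismatch ½(θ - μ)² between the prior parameter and
# the EP marginal mean μ; mutates p0 in place and returns nothing, matching
# the gradient(p0, μ, σ) -> nothing signature.
function toy_gradient!(p0::ToyPrior, μ, σ)
    p0.θ -= p0.η * (p0.θ - μ)
    return nothing
end
```

With η = 0.5, a step from θ = 0 toward μ = 1 moves θ halfway, to 0.5.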
GaussianEP.moments — Method

$p = \frac1{(ℓ+1)((1/ρ-1) e^{-\frac12 (μ/σ)^2 (2-\frac1{1+ℓ})}\sqrt{1+\frac1{ℓ}}+1)}$

GaussianEP.moments — Method
moments(p0::T, μ, σ) where T <: Prior -> (mean, variance)

input: ``p_0, μ, σ``

output: mean and variance of

`` p(x) ∝ p_0(x) \mathcal{N}(x;μ,σ) ``

Gaussian EP Documentation
