GaussianEP.BinaryPrior
— Type
Binary prior
$p_0(x) ∝ ρ\, δ(x-x_0) + (1-ρ)\, δ(x-x_1)$
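To illustrate how such a prior enters an EP update, here is a minimal standalone numerical sketch (plain Python, not the package's implementation) of the moments of the tilted distribution $p(x) ∝ p_0(x)\,\mathcal{N}(x;μ,σ^2)$ for a binary prior; it assumes σ denotes a standard deviation:

```python
import math

def binary_tilted_moments(rho, x0, x1, mu, sigma):
    """Mean and variance of p(x) ∝ [rho δ(x-x0) + (1-rho) δ(x-x1)] N(x; mu, sigma^2).

    Because the prior is a mixture of two delta functions, the tilted
    distribution is a two-point distribution with reweighted masses.
    """
    def normpdf(x):
        return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

    w0 = rho * normpdf(x0)        # posterior weight on x0 (unnormalized)
    w1 = (1 - rho) * normpdf(x1)  # posterior weight on x1 (unnormalized)
    p0 = w0 / (w0 + w1)
    mean = p0 * x0 + (1 - p0) * x1
    var = p0 * x0 ** 2 + (1 - p0) * x1 ** 2 - mean ** 2
    return mean, var
```

For a symmetric case (ρ = 1/2, x_0 = -1, x_1 = 1, μ = 0) the tilted mean is 0 and the variance is 1, as expected.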
GaussianEP.EPOut
— Type
Output of the EP algorithm
GaussianEP.EPState
— Type
Instantaneous state of an expectation propagation run.
GaussianEP.IntervalPrior
— Type
Interval prior
Parameters: l, u
$p_0(x) = \frac{1}{u-l}\mathbb{I}[l\leq x\leq u]$
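With an interval prior, the tilted distribution $p(x) ∝ p_0(x)\,\mathcal{N}(x;μ,σ^2)$ is a truncated Gaussian, whose moments have a closed form. A standalone sketch (plain Python, not the package's implementation; σ is taken as a standard deviation):

```python
import math

def _Phi(z):  # standard normal CDF
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def _phi(z):  # standard normal PDF
    return math.exp(-0.5 * z * z) / math.sqrt(2.0 * math.pi)

def interval_tilted_moments(l, u, mu, sigma):
    """Mean and variance of a Gaussian N(mu, sigma^2) truncated to [l, u]."""
    a, b = (l - mu) / sigma, (u - mu) / sigma
    Z = _Phi(b) - _Phi(a)  # mass of the Gaussian inside [l, u]
    mean = mu + sigma * (_phi(a) - _phi(b)) / Z
    var = sigma ** 2 * (1.0 + (a * _phi(a) - b * _phi(b)) / Z
                        - ((_phi(a) - _phi(b)) / Z) ** 2)
    return mean, var
```

Truncating a standard Gaussian to [-1, 1] keeps the mean at 0 and shrinks the variance below 1.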
GaussianEP.Prior
— Type
Abstract univariate prior type
GaussianEP.SpikeSlabPrior
— Type
Spike-and-slab prior
Parameters: ρ, λ
$p_0(x) ∝ (1-ρ)\, δ(x) + ρ\, \mathcal{N}(x;0,λ^{-1})$
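For a spike-and-slab prior, the tilted distribution $p(x) ∝ p_0(x)\,\mathcal{N}(x;μ,σ^2)$ is a mixture of a point mass at zero and a Gaussian (product of slab and message), so its moments are also closed-form. A standalone sketch (plain Python, not the package's implementation; σ is a standard deviation):

```python
import math

def _npdf(x, m, v):  # Gaussian density with mean m, variance v
    return math.exp(-0.5 * (x - m) ** 2 / v) / math.sqrt(2.0 * math.pi * v)

def spike_slab_tilted_moments(rho, lam, mu, sigma):
    """Mean and variance of p(x) ∝ [(1-rho) δ(x) + rho N(x;0,1/lam)] N(x; mu, sigma^2)."""
    v = sigma ** 2
    # Unnormalized mixture weights after multiplying by the Gaussian message:
    w_spike = (1.0 - rho) * _npdf(0.0, mu, v)          # spike: evaluate message at 0
    w_slab = rho * _npdf(0.0, mu, v + 1.0 / lam)       # slab: convolution of the two Gaussians
    p_slab = w_slab / (w_spike + w_slab)
    # Slab component posterior: product of two Gaussians.
    v_slab = 1.0 / (lam + 1.0 / v)
    m_slab = v_slab * mu / v
    mean = p_slab * m_slab                              # spike contributes 0 to the mean
    second = p_slab * (v_slab + m_slab ** 2)
    return mean, second - mean ** 2
```

With ρ = 1 the result reduces to the plain Gaussian product, and with μ = 0 the tilted mean is exactly zero.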
GaussianEP.Term
— Type
This type represents an interaction term in the energy function of the form
$β_i (\frac12 x' A_i x + x' y_i + c_i) + M_i \log β_i$
The complete energy function is given by
$∑_i β_i (\frac12 x' A_i x + x' y_i + c_i) + M_i \log β_i$
and is represented by a Vector{Term}. Note that c and M are only needed for parameter learning.
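The total energy above is a plain sum over terms, which a small standalone sketch makes concrete (plain Python with hypothetical `(A, y, c, beta, M)` tuples, not the package's Term struct):

```python
import math
import numpy as np

def total_energy(terms, x):
    """Sum over terms of beta_i*(0.5*x'A_i x + x'y_i + c_i) + M_i*log(beta_i).

    Each term is a hypothetical tuple (A, y, c, beta, M); the real package
    stores these as fields of a Term{T} struct.
    """
    total = 0.0
    for A, y, c, beta, M in terms:
        total += beta * (0.5 * x @ A @ x + x @ y + c) + M * math.log(beta)
    return total
```

For a single term with A = I, y = 0, c = 0, β = 1, M = 0 and x = (1, 1), the energy is 0.5·x'x = 1.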
GaussianEP.ThetaPrior
— TypeA θ(x) prior
GaussianEP.expectation_propagation
— Method
expectation_propagation(H::Vector{Term{T}}, P0::Vector{Prior}, F::AbstractMatrix{T} = zeros(0,length(P0)), d::Vector{T} = zeros(size(F,1));
maxiter::Int = 2000,
callback = (x...)->nothing,
# state::EPState{T} = EPState{T}(sum(size(F)), size(F)[2]),
damp::T = 0.9,
epsconv::T = 1e-6,
maxvar::T = 1e50,
minvar::T = 1e-50,
inverter::Function = inv) where {T <: Real, P <: Prior}
EP for approximate inference of
$P(\mathbf{x}) = \frac1Z \exp\left(-\frac12 \mathbf{x}' A \mathbf{x} + \mathbf{x}' \mathbf{y}\right) × \prod_i p_i(x_i)$
Arguments:
H::Vector{Term{T}} : Gaussian terms (involving only x)
P0::Vector{Prior} : prior terms (involving x and y)
F::AbstractMatrix{T} : if included, the unknown becomes $(\mathbf{x}, \mathbf{y})^T$ and a term $δ(F \mathbf{x} + \mathbf{d} - \mathbf{y})$ is added.
Optional named arguments:
maxiter::Int = 2000 : maximum number of iterations
callback = (x...)->nothing : your own function to report progress, see ProgressReporter
state::EPState{T} = EPState{T}(sum(size(F)), size(F)[2]) : if supplied, all internal state is updated here
damp::T = 0.9 : damping parameter
epsconv::T = 1e-6 : convergence criterion
maxvar::T = 1e50 : maximum variance
minvar::T = 1e-50 : minimum variance
inverter = inv : inverter method
Example
julia> t=Term(zeros(2,2),zeros(2),1.0)
Term{Float64}([0.0 0.0; 0.0 0.0], [0.0, 0.0], 0.0, 1.0, 0.0, 0)
julia> P=[IntervalPrior(i...) for i in [(0,1),(0,1),(-2,2)]]
3-element Array{IntervalPrior{Int64},1}:
IntervalPrior{Int64}(0, 1)
IntervalPrior{Int64}(0, 1)
IntervalPrior{Int64}(-2, 2)
julia> F=[1.0 -1.0];
julia> res = expectation_propagation([t], P, F)
GaussianEP.EPOut{Float64}([0.499997, 0.499997, 3.66527e-15], [0.083325, 0.083325, 0.204301], [0.489862, 0.489862, 3.66599e-15], [334.018, 334.018, 0.204341], :converged, EPState{Float64}([9.79055 -0.00299477; -0.00299477 9.79055], [0.0, 0.0], [0.102139 3.12427e-5; 3.12427e-5 0.102139], [0.489862, 0.489862], [0.499997, 0.499997, 3.66527e-15], [0.083325, 0.083325, 0.204301], [0.490876, 0.490876, -1.86785e-17], [0.489862, 0.489862, 3.66599e-15], [0.100288, 0.100288, 403.599], [334.018, 334.018, 0.204341]))
GaussianEP.Posterior2Prior
— Type
A fake Prior that can be used to fix experimental moments.
Parameters: μ, v (variance, not standard deviation)
GaussianEP.ProgressReporter
— Type
A function object to report on a running expectation_propagation.
GaussianEP.gradient
— Method
gradient(p0::T, μ, σ) -> nothing
Update parameters with a single learning-gradient step (the learning rate is stored in p0).
GaussianEP.moments
— Method
$p = \frac{1}{(ℓ+1)\left((1/ρ-1)\, e^{-\frac12 (μ/σ)^2 \left(2-\frac{1}{1+ℓ}\right)} \sqrt{1+\frac{1}{ℓ}} + 1\right)}$
GaussianEP.moments
— Method
moments(p0::T, μ, σ) where T <: Prior -> (mean, variance)
Input: $p_0, μ, σ$
Output: mean and variance of
$p(x) ∝ p_0(x)\, \mathcal{N}(x;μ,σ)$
Gaussian EP Documentation