next up previous
Next: Today's Methods of Texture Up: Markov-Gibbs Random Field Models Previous: Auto-normal Models

FRAME model

FRAME (short for Filters, Random Fields, and Maximum Entropy) [113] is an MGRF model constructed from the empirical marginal distributions of filter responses using the MaxEnt principle. The FRAME model is derived from Theorem 5.2.1, which asserts that the joint probability distribution of an image is decomposable into the marginal distributions of related filter responses. The proof of the theorem is given in [113].

Theorem 5.2.1   Let $ Pr({g})$ be the $ \vert\mathbf{R}\vert$-dimensional continuous joint probability distribution of a texture. Then $ Pr({g})$ is a linear combination of $ Pr^{(\xi)}$, the latter being the marginal distributions of the linear filter responses $ F^{(\xi)}*{{g}}$.

The theorem suggests that the joint probability distribution $ Pr({g})$ can be inferred by constructing a distribution $ Pr^{\ast}({g})$ whose marginal distributions $ Pr^{(\xi)}$ match those of $ Pr({g})$. Although, in theory, an infinite number of empirical distributions (filters) are involved in the decomposition of a joint distribution, the FRAME model assumes that a relatively small number of important filters (a filter bank), $ \mathbf{F}=\{F^{(\alpha)}:\alpha= 1,\dots,k\}$, suffices to model the distribution $ Pr({g})$. Within a MaxEnt framework, by Eq. (5.2.8), the modelling is formulated as the following optimisation problem,

$\displaystyle Pr^{*}({g})= \operatornamewithlimits{arg max}_{Pr} \left \{- \int Pr({g})\log Pr({g})d{g}\right\}$ (5.2.21)

subject to constraints:

$\displaystyle {\mathcal{E}}_{Pr}[\delta(z-{g}^{(\alpha)}(v))]=Pr^{(\alpha)}(z);\quad \forall z\in \mathbf{Q},\quad \forall \alpha = 1,\dots,k,\quad \forall v \in \mathbf{R}$ (5.2.22)

and

$\displaystyle \int Pr({g})d{g}=1$ (5.2.23)

In Eq. (5.2.22), $ Pr^{(\alpha)}(z)$ denotes the marginal distribution of $ Pr({g})$ with respect to the filter $ F^{(\alpha)}$ at location $ v$, and by definition,

$\displaystyle Pr^{(\alpha)}_{v}(z)=\int_{{g}^{(\alpha)}(v)=z}{Pr({g})\:d{g}}={\mathcal{E}}_{Pr}[\delta(z-{g}^{(\alpha)}(v))]$ (5.2.24)

where $ \delta(\cdot)$ is the Dirac delta function and $ {\mathcal{E}}_{Pr}[\cdot]$ denotes the mathematical expectation of a particular function under the distribution $ Pr({g})$. By the assumption of translation invariance, the marginal distribution $ Pr^{(\alpha)}_{v}(z)$ is independent of the location $ v$, so the first set of constraints is obtained by replacing $ Pr^{(\alpha)}_{v}(z)$ with $ Pr^{(\alpha)}(z)$ in Eq. (5.2.24).

The second constraint in Eq. (5.2.23) is the normalising condition of the joint probability distribution $ Pr({g})$.
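For a quantised (discrete-valued) signal, the expectation in constraint (5.2.22) reduces to the frequency of a grey level, since the Dirac delta becomes an indicator. The following sketch checks this numerically by Monte Carlo; the marginal distribution and grey-level count are hypothetical values chosen for the example, not quantities from the text:

```python
import numpy as np

# Discrete analogue of constraint (5.2.22): the expectation of the
# indicator [g(v) = z] equals the marginal probability of level z
# at site v.  All numbers below are hypothetical.
rng = np.random.default_rng(0)
Q = 4
marginal = np.array([0.1, 0.2, 0.3, 0.4])           # assumed Pr^(alpha)(z)
samples = rng.choice(Q, size=100_000, p=marginal)    # draws of g^(alpha)(v)

z = 2
# Monte-Carlo estimate of E_Pr[delta(z - g^(alpha)(v))]
indicator_mean = (samples == z).mean()
print(indicator_mean)
```

With enough samples the estimate converges to the marginal probability of level $ z$, which is exactly what the constraint asserts.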

The MaxEnt distribution $ Pr^*({g})$ is found by maximising the entropy using Lagrange multipliers,

  1. Construct a Lagrange function $ \mathcal{L}( Pr({g}))$ as follows:
    $\displaystyle \mathcal{L}( Pr({g})) = -\int Pr({g}) \log Pr({g})\:d{{g}} + \sum_{\alpha =1}^{k} \lambda^{(\alpha)} \cdot\left( {\mathcal{E}}_{Pr}[\delta(z-{g}^{(\alpha)}(v))]-Pr^{(\alpha)}(z)\right) + \lambda^{(k+1)} \cdot \left(\int Pr({g})\:d{{g}} -1\right)$

  2. Differentiate $ \mathcal{L}( Pr({g}))$ with respect to $ Pr({g})$; the functional derivative of the entropy term is $ -\log Pr({g})-1$, with the constant $ -1$ absorbed into $ \lambda^{(k+1)}$:
    $\displaystyle \frac {\partial \mathcal{L}(Pr({g}))}{\partial Pr({g})} = - \log Pr({g}) + \sum_{\alpha =1}^{k} \lambda^{(\alpha)} \cdot \delta(z-{g}^{(\alpha)}(v))+ \lambda^{(k+1)}$

  3. By solving the equation $ \frac {\partial \mathcal{L}(Pr({g}))}{\partial Pr({g})}=0$ with respect to $ Pr({g})$, the resulting MaxEnt distribution is:
    $\displaystyle Pr^*({g}) = \exp\left\{ \sum_{\alpha =1}^{k} \lambda^{(\alpha)} \cdot \delta(z-{g}^{(\alpha)}(v))+ \lambda^{(k+1)}\right\} = \exp\left\{ \sum_{\alpha =1}^{k} \lambda^{(\alpha)} \cdot \delta(z-{g}^{(\alpha)}(v))\right\} \cdot \exp \left\{\lambda^{(k+1)}\right\} = \frac{1}{Z} \cdot \exp\left\{ \sum_{\alpha =1}^{k} \lambda^{(\alpha)} \cdot \delta(z-{g}^{(\alpha)}(v))\right\}$


    where $ \Lambda_k=\{\lambda^{(1)},\lambda^{(2)},\dots,\lambda^{(k)}\}$ are the Lagrange multipliers. The partition function $ Z$ is derived from the normalising constraint in Eq. (5.2.23) as follows:

    $\displaystyle \int {Pr^*({g})\:d{g}}=1 \;\Rightarrow\; \frac{1}{Z} \cdot \int {\exp\left\{ \sum_{\alpha =1}^{k} \lambda^{(\alpha)} \cdot \delta(z-{g}^{(\alpha)}(v))\right\} \:d{g}}=1 \;\Rightarrow\; Z(\Lambda_k)= \int \exp\left\{ \sum_{\alpha =1}^{k} \lambda^{(\alpha)} \cdot \delta\left(z-{g}^{(\alpha)}(v)\right)\right\}\:d{{g}}$
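The partition function integrates (or, for quantised images, sums) over every possible image, which is intractable in general, but on a toy lattice it can be computed exactly by enumeration. A minimal sketch, assuming a 2x2 lattice, $ Q=2$ grey levels, and a single hypothetical "identity" filter whose response at a site is the pixel value itself, written with the discrete energy $ -\langle\lambda, H\rangle$ of the form derived below:

```python
import itertools
import numpy as np

# Exact Z(Lambda_k) by brute force on a 2x2 lattice with Q = 2 levels
# and one "identity" filter (response = pixel value).
Q, n_sites = 2, 4
lam = np.array([0.0, 1.0])             # hypothetical lambda_z values, z = 0, 1

Z = 0.0
for g in itertools.product(range(Q), repeat=n_sites):
    H = np.bincount(g, minlength=Q)    # histogram H_z of this image
    Z += np.exp(-float(lam @ H))       # exp{-sum_z lambda_z H_z}

# With this single-site energy the sites are independent, so Z factorises:
# Z = (exp(-lam[0]) + exp(-lam[1])) ** n_sites.
closed_form = (np.exp(-lam[0]) + np.exp(-lam[1])) ** n_sites
print(abs(Z - closed_form) < 1e-9)
```

The factorised closed form holds only because this toy filter couples no neighbouring sites; with genuine filters the sum no longer factorises, which is why sampling-based estimation is needed in practice.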

The discrete form of $ Pr^*({g}\mid \Lambda_k)$ is derived by the following transformations,


$\displaystyle Pr^*({g}\mid \Lambda_k) = \frac{1}{Z(\Lambda_k)} \cdot \exp\left\{- \sum_{\vec{v}} \sum_{\alpha =1}^{k} \int \lambda^{(\alpha)}(z) \cdot \delta(z-{g}^{(\alpha)}(\vec{v}))\:dz \right\}$
$\displaystyle = \frac{1}{Z(\Lambda_k)} \cdot \exp\left\{- \sum_{\vec{v}} \sum_{\alpha =1}^{k} \sum_{z=0}^{Q-1} \lambda_z^{(\alpha)} \cdot \delta(z-{g}^{(\alpha)}(\vec{v}))\right\}$
$\displaystyle = \frac{1}{Z(\Lambda_k)} \cdot \exp\left\{- \sum_{\alpha =1}^{k} \sum_{z=0}^{Q-1} \lambda_z^{(\alpha)} \cdot \sum_{\vec{v}} \delta(z-{g}^{(\alpha)}(\vec{v}))\right\}$
$\displaystyle = \frac{1}{Z(\Lambda_k)} \cdot \exp\left\{- \sum_{\alpha =1}^{k} \sum_{z=0}^{Q-1} \lambda_z^{(\alpha)} \cdot H_z^{(\alpha)}\right\}$
$\displaystyle = \frac{1}{Z(\Lambda_k)} \cdot \exp\left\{- \sum_{\alpha =1}^{k} \langle \lambda^{(\alpha)}, H^{(\alpha)}\rangle\right\}$ (5.2.25)

Here, $ H^{(\alpha)}=\{H_0^{(\alpha)},H_1^{(\alpha)},\dots,H_{Q-1}^{(\alpha)}\}$ denotes the histogram of the filtered image $ {g}^{(\alpha)}$, and $ \lambda^{(\alpha)}=\{\lambda_0^{(\alpha)},\lambda_1^{(\alpha)},\dots,\lambda_{Q-1}^{(\alpha)}\}$ the corresponding vector of Lagrange parameters, obtained by discretising the potential function $ \lambda^{(\alpha)}(z)$ into $ Q$ piecewise-constant pieces.

As shown in Eq. (5.2.25), the FRAME model is specified by a Gibbs distribution with the empirical marginal distributions (histograms) of filter responses as its sufficient statistics. The Lagrange multipliers $ \Lambda_k$ are the model parameters to be estimated for each particular texture. Typically, the parameters are learnt via stochastic approximation, which updates the estimates iteratively according to the following equation,

$\displaystyle \lambda_{[t]}^{(\alpha)}=\lambda_{[t-1]}^{(\alpha)}+c \left(H^{(\alpha)}({g}^{[t]})-H^{(\alpha)}({g}^{[0]})\right)$

where $ {g}^{[t]}$ is an image drawn at random from the distribution $ Pr({g};\Lambda_{[t-1]})$ using, e.g., a Gibbs sampler, and $ {g}^{[0]}$ is the observed training image. Since the FRAME model involves $ k \times Q$ parameters, the computational complexity of parameter estimation depends on both the number of selected filters and the number of signal levels in the image.
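The update rule above can be sketched as follows. A full Gibbs sampler is beyond a short example, so the sampled and observed histograms are given as hypothetical inputs, and the step size $ c$ is an arbitrary choice:

```python
import numpy as np

def update_lambdas(lams, H_sampled, H_observed, c=0.1):
    """One stochastic-approximation step per filter alpha:
    lambda_[t]^(alpha) = lambda_[t-1]^(alpha) + c * (H(g^[t]) - H(g^[0]))."""
    return [lam + c * (hs - ho)
            for lam, hs, ho in zip(lams, H_sampled, H_observed)]

# Toy check with k = 1 filter and Q = 3 bins (all numbers hypothetical).
lams = [np.zeros(3)]
H_obs = [np.array([0.2, 0.5, 0.3])]   # histogram of the training image g^[0]
H_syn = [np.array([0.4, 0.4, 0.2])]   # histogram of a sampled image g^[t]
lams = update_lambdas(lams, H_syn, H_obs, c=0.5)
print(lams[0])
```

Note the fixed point: when the sampled histograms match the observed ones, the update leaves $ \Lambda_k$ unchanged, which is exactly the moment the model distribution reproduces the training statistics.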


dzho002 2006-02-22