Weighted average

Given $N$ independent samples $V_{1},\ldots,V_{N}$, the weighted average is defined as:

$\hat{V}=\sum_{i=1}^{N}\alpha_{i}V_{i}$, where the weights $\alpha_{i}$ satisfy
$\alpha_{i}\geq 0$ and $\sum_{i=1}^{N}\alpha_{i}=1$.
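As a small illustration (not part of the original notes), the definition can be sketched in Python; the function name `weighted_average` and the sample values are ours:

```python
def weighted_average(values, weights):
    """Combine N sample values V_1..V_N into a single estimate
    using nonnegative weights alpha_i that sum to 1."""
    assert all(w >= 0 for w in weights), "weights must be nonnegative"
    assert abs(sum(weights) - 1.0) < 1e-9, "weights must sum to 1"
    return sum(w * v for w, v in zip(weights, values))

# Uniform weights alpha_i = 1/N recover the ordinary sample mean.
values = [2.0, 4.0, 6.0, 8.0]
uniform = [1.0 / len(values)] * len(values)
print(weighted_average(values, uniform))  # 5.0
```

Uniform weights are the simplest choice satisfying the two conditions; non-uniform weights let recent or more reliable samples count for more.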

Since each sample has expectation $\bar{v}$ and the weights sum to 1, the weighted average is an unbiased estimator:

$E[\hat{V}]=E[\sum_{i=1}^{N}\alpha_{i}V_{i}]=
\sum_{i=1}^{N}\alpha_{i}E[V_{i}]=\bar{v}$

and the variance is:

$Var(\hat{V})=E[(\sum_{i}\alpha_{i}V_{i}-\bar{v})^{2}] = E[(\sum_{i}\alpha_{i}(V_{i}-\bar{v}))^{2}]$
$=\sum_{i}\alpha_{i}^{2}E[(V_{i}-\bar{v})^{2}]+\sum_{i\neq j}\alpha_{i}\alpha_{j}E[(V_{i}-\bar{v})(V_{j}-\bar{v})]$

For $i \neq j$:
$E[(V_{i}-\bar{v})(V_{j}-\bar{v})] = E[V_{i}-\bar{v}]\,E[V_{j}-\bar{v}]$,
since sample $i$ is independent of sample $j$. Because $E[V_{i}-\bar{v}] = 0$ for all $i$, the cross terms vanish, and:

$Var(\hat{V})=\sum_{i}\alpha_{i}^{2} E[(V_{i}-\bar{v})^{2}]=\sum_{i}\alpha_{i}^{2}\sigma^{2}=\sigma^{2}\sum_{i}\alpha_{i}^{2}$,

where $\sigma^{2}$ is the variance of a single sample.

If $\lim_{N \rightarrow \infty} \sum_{i=1}^{N}\alpha_{i}^{2} = 0$ then $\lim_{N \rightarrow \infty} Var(\hat{V}_{N}) = 0$. For example, uniform weights $\alpha_{i}=1/N$ give $\sum_{i}\alpha_{i}^{2}=1/N \rightarrow 0$, so the variance decays as $\sigma^{2}/N$.
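The convergence condition can be checked empirically with a Monte Carlo sketch (our own illustration, not from the notes); with uniform weights $\alpha_{i}=1/N$ and Uniform(0,1) samples ($\sigma^{2}=1/12$), the empirical variance of $\hat{V}_{N}$ should shrink roughly like $1/(12N)$:

```python
import random

def estimate_variance(N, trials=20000, seed=0):
    """Empirical variance of the uniformly weighted average
    of N i.i.d. Uniform(0,1) samples (sigma^2 = 1/12)."""
    rng = random.Random(seed)
    # Each trial draws N samples and forms the uniform weighted average.
    estimates = [sum(rng.random() for _ in range(N)) / N
                 for _ in range(trials)]
    mean = sum(estimates) / trials
    return sum((v - mean) ** 2 for v in estimates) / trials

# Variance should drop by roughly a factor of 10 at each step.
for N in (1, 10, 100):
    print(N, estimate_variance(N))
```

The printed variances decrease by about an order of magnitude per line, matching the $\sigma^{2}\sum_{i}\alpha_{i}^{2}$ formula derived above.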



Yishay Mansour
1999-12-16