In some cases, one-way clustering may be adequate: with errors clustered by firm and by year, the latter error correlations might be completely due to common shocks. In that case, the i…


By robust T² control charting we mean that the location and covariance matrix estimates in Phase I are obtained by a robust method, while process monitoring in Phase II is conducted in the usual manner. Consequently, if we compare this control charting method with the classical (non-robust) T² control charting [6], the former is more effective at detecting a shift in the mean vector than the latter [7]. In this paper, we use the Fast Minimum Covariance Determinant (FMCD) estimator, since it ensures that the estimates have a high breakdown point. Since the stopping rule in FMCD leads to high computational complexity in the data concentration process, we introduce a new stopping rule that reduces this complexity.
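As an illustration of the concentration step that dominates FMCD's cost, the sketch below implements the basic C-step iteration in NumPy with the classical stopping rule (stop when the determinant of the subset covariance no longer decreases). The subset size h and the data are illustrative; this is not the full FMCD algorithm (no multiple random starts or reweighting).

```python
import numpy as np

def c_step_mcd(X, h, n_iter=100, seed=0):
    """Simplified MCD concentration: iterate C-steps until the
    determinant of the subset covariance stops decreasing."""
    rng = np.random.default_rng(seed)
    n, p = X.shape
    idx = rng.choice(n, size=h, replace=False)
    prev_det = np.inf
    for _ in range(n_iter):
        mu = X[idx].mean(axis=0)
        S = np.cov(X[idx], rowvar=False)
        det = np.linalg.det(S)
        if det >= prev_det:          # classical stopping rule
            break
        prev_det = det
        # Mahalanobis distances of all points to the current estimates
        D = X - mu
        d = np.einsum('ij,jk,ik->i', D, np.linalg.inv(S), D)
        idx = np.argsort(d)[:h]      # concentrate on the h closest points
    return mu, S
```

The resulting robust Phase I location and covariance estimates would then be plugged into the usual Phase II T² statistic.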

penalty on the entries of the inverse covariance matrix when maximizing the normal log-likelihood, and therefore encourages some of the entries of the estimated inverse covariance matrix to be exactly zero. Similar approaches are also taken by Banerjee, El Ghaoui and d’Aspremont (2008). One of the main challenges for this type of method is computation, which has recently been addressed by d’Aspremont, Banerjee and El Ghaoui (2008), Friedman, Hastie and Tibshirani (2008), Rocha, Zhao and Yu (2008), Rothman et al. (2008) and Yuan (2008). Some theoretical properties of this type of method have also been developed by Yuan and Lin (2007), Ravikumar et al. (2008), Rothman et al. (2008) and Lam and Fan (2009), among others. In particular, the results from Ravikumar et al. (2008) and Rothman et al. (2008) suggest that, although better than the sample covariance matrix, these methods may not perform well when p is larger than the sample size n. It remains unclear to what extent the sparsity of the inverse covariance matrix entails well-behaved covariance matrix estimates.
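To make the penalized-likelihood idea concrete, here is a minimal proximal-gradient sketch of the estimator (minimize tr(SΘ) − log det Θ + λ‖Θ‖₁ over off-diagonal entries). The step size, penalty level and positive-definiteness projection are illustrative choices of ours; the coordinate-descent algorithms of Friedman, Hastie and Tibshirani (2008) are what one would use in practice.

```python
import numpy as np

def soft(x, t):
    """Elementwise soft-thresholding, the prox of the l1 penalty."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def sparse_precision(S, lam, step=0.05, n_iter=500):
    """Proximal-gradient sketch for the l1-penalized normal likelihood.
    The gradient of the smooth part is S - inv(Theta); the prox zeroes
    small off-diagonal entries, encouraging exact zeros in the estimate."""
    p = S.shape[0]
    Theta = np.eye(p)
    off = ~np.eye(p, dtype=bool)
    for _ in range(n_iter):
        Theta = Theta - step * (S - np.linalg.inv(Theta))
        # keep the iterate safely positive definite before thresholding
        w, V = np.linalg.eigh((Theta + Theta.T) / 2)
        Theta = (V * np.maximum(w, 1e-3)) @ V.T
        Theta[off] = soft(Theta[off], step * lam)
    return Theta
```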


on the performance of the interval estimation; that is, bigger sample sizes lead to smaller widths of the confidence intervals and larger coverage probabilities. This simply indicates that the estimators of the parameters are consistent. The coverage probabilities do not exhibit any pattern with respect to changes in the true parameter values. However, it is good to see that the coverage probabilities for all the confidence intervals are greater than 0.95 (i.e., greater than the nominal confidence coefficient), which indicates the reliability of the interval estimation. The confidence intervals for parameter ( ) are skewed to the right, while the intervals for parameter ( ) are left aligned. As a natural consequence, an increased censoring rate results in slower convergence of estimates, inflated MSEs, wider confidence intervals and smaller coverage probabilities. However, it has been observed that the effects of the left-censored observations are not as severe for bigger sample sizes. Further, for fixed sample size and censoring rate, higher actual values of the parameters have a negative impact on the performance (in terms of MSEs, convergence rate and widths of confidence intervals) of the estimates. This leads to the conclusion that the estimation of extremely large values of the parameters of the Burr type III distribution may become difficult and that the Fisher information matrix may be a decreasing function of the parameters, but moderate to large sample sizes can offset this problem.


Decomposable Regularizers: Recent works have considered decomposing a model, based on observed samples, into desired parts through convex relaxation approaches. Typically, each part is represented as an algebraic variety, which is based on semi-algebraic sets, and conditions for recovery of each component are characterized. For instance, decomposition of the inverse covariance matrix into sparse and low-rank varieties is considered in Chandrasekaran et al. (2009, 2010a); Candès et al. (2009) and is relevant for latent Gaussian graphical models. The work in Silva et al. (2011) considers finding a sparse approximation using a small number of positive semi-definite (PSD) matrices, where the “basis”, or the set of PSD matrices, is specified a priori. In Negahban et al. (2010), a unified framework is provided for high-dimensional analysis of the so-called M-estimators, which optimize the sum of a convex loss function with decomposable regularizers. A general framework for decomposition into a specified set of algebraic varieties was studied in Chandrasekaran et al. (2010b).


In this paper, we estimate the GMVP for high-dimensional data by the spectral corrected methodology. Here we propose the spectral corrected covariance as the population covariance estimator and plug it into (2). We compare the spectral corrected estimation with the classic estimation, the linear shrinkage estimation and the nonlinear shrinkage estimation, and find that the spectral corrected estimation performs best in the simulation study.
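The plug-in step into (2) is the standard GMVP formula w = Σ̂⁻¹1 / (1ᵀΣ̂⁻¹1). A minimal sketch, with the spectral corrected estimator replaced by an arbitrary covariance estimate since its construction is not reproduced here:

```python
import numpy as np

def gmvp_weights(Sigma_hat):
    """Plug-in global minimum variance portfolio:
    w = Sigma^{-1} 1 / (1' Sigma^{-1} 1).
    Any covariance estimator (sample, shrinkage, spectral corrected)
    can be supplied as Sigma_hat."""
    ones = np.ones(Sigma_hat.shape[0])
    w = np.linalg.solve(Sigma_hat, ones)  # Sigma^{-1} 1 without an explicit inverse
    return w / w.sum()
```

For example, `gmvp_weights(np.diag([1.0, 2.0, 4.0]))` puts the largest weight on the least volatile asset.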


been a focus in high-dimensional covariance estimation. Wu and Pourahmadi (2003) considered banding the Cholesky factor matrix via kernel smoothing estimation, which was further developed by Rothman, Levina and Zhu (2010). Bickel and Levina (2008a) proposed banding the sample covariance matrix directly for estimating Σ and banding the Cholesky factor matrix for estimating Σ−1. They demonstrated that the two estimators are consistent for Σ and Σ−1, respectively, over certain “bandable” classes of covariance matrices. Cai, Zhang and Zhou (2010) proposed a tapering estimator, which can be viewed as a soft banding of the sample covariance and was designed to improve the banding estimator of Bickel and Levina. They demonstrated that the tapering estimator attains the optimal minimax rates of convergence for estimating the covariance matrix. Wagaman and Levina (2009) developed a method for discovering meaningful orderings of variables such that banding and tapering can be applied. Both the banding and tapering methods for covariance estimation are closely connected to the regularization methods considered in Huang et al. (2006), Bickel and Levina (2008b), Fan, Fan and Lv (2008) and Rothman, Levina and Zhu (2009).
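The banding and tapering operations themselves are simple masks on the sample covariance. A sketch follows; the bandwidth k and the piecewise-linear taper profile are the commonly stated forms, but treat the exact weights as illustrative:

```python
import numpy as np

def band(S, k):
    """Banding: zero all entries more than k positions off the diagonal."""
    i, j = np.indices(S.shape)
    return np.where(np.abs(i - j) <= k, S, 0.0)

def taper(S, k):
    """Tapering ('soft banding'): weights equal to 1 near the diagonal,
    decaying linearly to 0 at lag k."""
    i, j = np.indices(S.shape)
    w = np.clip(2.0 - 2.0 * np.abs(i - j) / k, 0.0, 1.0)
    return w * S
```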


The behavior of the power function of autocorrelation tests such as the Durbin-Watson test in time series regressions or the Cliff-Ord test in spatial regression models has been intensively studied in the literature. When the correlation becomes strong, Krämer (1985) (for the Durbin-Watson test) and Krämer (2005) (for the Cliff-Ord test) have shown that the power can be very low, and in fact can converge to zero, under certain circumstances. Motivated by these results, Martellosio (2010) set out to build a general theory that would explain these findings. Unfortunately, Martellosio (2010) does not achieve this goal, as a substantial portion of his results and proofs suffer from serious flaws. The present paper now builds a theory as envisioned in Martellosio (2010) in a fairly general framework, covering general invariant tests of a hypothesis on the disturbance covariance matrix in a linear regression model. The general results are then specialized to testing for spatial correlation and to autocorrelation testing in time series regression models. We also characterize the situation where the null and the alternative hypothesis are indistinguishable by invariant tests.


To address the robustness of face recognition based on depth image sets, we propose treating multiple Kinect images as an image set, with the captured depth data used to automatically estimate pose and crop the face area. First, we divide each image set into c subsets, and divide the images in all subsets into 4×4 image blocks. Then we model the images in each set as collections of image blocks, partitioned according to pose. Each set is represented by a covariance matrix. Finally, the images in the subsets are modeled on a Riemannian manifold. For classification, SVM models are learnt separately for each image subset on the Lie group of the Riemannian manifold, and a fusion strategy combines the results from all image subsets. We have verified the effectiveness of the proposed method on the three largest public Kinect face data sets: Curtin Faces, Biwi Kinect and UWA Kinect. Compared to other state-of-the-art methods, the recognition rate improves considerably, the standard deviation remains low, and the method is robust to the number of image sets, the number of image subsets and the spatial resolution.
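The covariance-matrix representation of an image set, and the log map that lets Euclidean classifiers such as SVMs operate on the manifold of SPD matrices, can be sketched as follows. The feature dimension and the regularization constant are illustrative, and the paper's block and pose partitioning is not reproduced:

```python
import numpy as np

def set_covariance(F, eps=1e-6):
    """Represent a set of feature vectors (rows of F) by its covariance
    matrix, regularized so it is strictly positive definite."""
    C = np.cov(F, rowvar=False)
    return C + eps * np.eye(C.shape[0])

def log_map(C):
    """Matrix logarithm of an SPD matrix via its eigen-decomposition.
    Under the log-Euclidean framework, the mapped matrices can be
    compared with ordinary Euclidean distances (e.g. inside an SVM)."""
    w, V = np.linalg.eigh(C)
    return (V * np.log(w)) @ V.T
```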

By using both height detection and a face recognition system together, criminal detection can be made much easier. This work provides a new method for height detection through a calibrated camera. In video, body height estimation of people has many important applications, as body height can be used to identify individuals, either uniquely or partially. Height has long been used as a forensic measure for identifying suspects; it is, however, not distinctive enough to be used on its own for biometric identification. Hence, detecting the height of a tracked person using any camera and distance can provide an important additional feature. Here, we focus on solving the patterns of criminal identity based on records and suggest an algorithmic approach to revealing proper identities. We introduce our system for single moving object detection and tracking using a static webcam inside a building, such as a corridor or a parking area. The background is extracted from the video scene by learning a statistical model of the background and subtracting it from the original frame. The system presented in this paper can detect and track moving objects in a video sequence. For face recognition we use the PCA method. It is based on the approach of breaking the face images into a small set of characteristic feature images. All images in the data set are represented as linear combinations of weighted eigenvectors called eigenfaces. These eigenvectors are obtained from the covariance matrix of the images in the data set. The weights are found after selecting a set of the most relevant eigenfaces. Recognition is done by projecting a selected image onto the subspace spanned by the eigenfaces, and classification is then done by finding the minimum Euclidean distance.
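A minimal NumPy sketch of the eigenface pipeline just described: eigenvectors of the data covariance, projection weights, and nearest-neighbour classification by Euclidean distance. The toy vectors stand in for vectorized face images:

```python
import numpy as np

def eigenfaces(X, k):
    """Rows of X are vectorized (flattened) images. Returns the mean
    image and the top-k eigenvectors of the data covariance matrix."""
    mu = X.mean(axis=0)
    w, V = np.linalg.eigh(np.cov(X - mu, rowvar=False))
    return mu, V[:, np.argsort(w)[::-1][:k]]

def project(x, mu, U):
    """Weights of image x in the eigenface subspace."""
    return U.T @ (x - mu)

def classify(x, gallery, labels, mu, U):
    """Nearest gallery image by Euclidean distance between weight vectors."""
    wx = project(x, mu, U)
    d = [np.linalg.norm(wx - project(g, mu, U)) for g in gallery]
    return labels[int(np.argmin(d))]
```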

Stochastic point kinetics was first introduced using the SPCA (Stochastic Piecewise Constant Approximation) and MC (Monte Carlo) methods [1]; in that publication there is a matrix formulation consisting of the product of the square root of the variance matrix and a vector of independent Brownian motions. Later works used the same covariance matrix but with the EM (Euler-Maruyama) method and the T 1.5 (Taylor 1.5) method [2,3]; in a subsequent work, without calculating the covariance matrix, a Markov process is assumed to obtain a form called SSPK (Simplified Stochastic Point Kinetics Equations) [4]. Subsequently, other methods were considered that take different approaches to the covariance matrix: AEM (Analytical Exponential Model) [5], Double DDM (Double Diagonalization-Decomposition Method) [6], ESM (Efficient Stochastic Model) [7], IEM (Implicit Euler-Maruyama) [8], and the recently published Milstein method derived from Itô's lemma [9].
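As background, the Euler-Maruyama scheme referenced in [2,3] discretizes an SDE dX = f(X)dt + g(X)dW as below. This generic scalar sketch is not the point-kinetics system itself, where g would be the square root of the covariance matrix acting on a vector of independent Brownian increments:

```python
import numpy as np

def euler_maruyama(f, g, x0, T, n, seed=0):
    """Euler-Maruyama scheme: x_{k+1} = x_k + f(x_k) dt + g(x_k) dW_k,
    with Brownian increments dW_k ~ N(0, dt)."""
    rng = np.random.default_rng(seed)
    dt = T / n
    x = np.empty(n + 1)
    x[0] = x0
    for k in range(n):
        dW = rng.normal(0.0, np.sqrt(dt))
        x[k + 1] = x[k] + f(x[k]) * dt + g(x[k]) * dW
    return x
```

With g ≡ 0 the scheme reduces to the explicit Euler method for the deterministic part.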

diagonal matrix with all the eigenvalues on the diagonal, and E is the matrix consisting of all the eigenvectors. 4. Multiply the training data by the whitening matrix. Obviously, the whitening requires the eigen-decomposition of the covariance matrix of the training sample. This process is even more computationally demanding than the ICA itself. According to experiment, the 2-tag collision resolution result is satisfactory even without data whitening, because of the simplicity of the RFID signal formation. However, that is not true for the 3-tag collision situation: the algorithm cannot converge even after a large number of iterations. In the current design, this problem is circumvented by using a universal whitening matrix instead of calculating the instantaneous whitening matrix at every reading. The eigenvalues and eigenvectors of the covariance matrix of the collision signal represent the characteristics of the mixing matrix; thus, if the mixing matrix is approximately fixed each time and the tags are from the same manufacturer, the covariance matrix of the acquired collision signals from the receiving channels is statistically stable. Therefore, the experiment can be carried out by fixing the position of the reader and the three tags each time and measuring a group of covariance matrices of the data, in order to calculate their expectation. This expectation matrix is then decomposed into its eigenvalues and eigenvectors, and used thereafter to calculate a universal whitening matrix. The obtained universal whitening matrix can be used to whiten subsequent collision signals from target tags at the fixed positions. This process is equivalent to a calibration before ICA processing. Future work can also incorporate direct calculation of the whitening matrix in a more advanced hardware device.
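The whitening-matrix construction described above (eigen-decompose the covariance, then scale by the inverse square roots of the eigenvalues) can be sketched as follows; the demo data are synthetic stand-ins for the acquired collision signals:

```python
import numpy as np

def whitening_matrix(C):
    """W = D^{-1/2} E^T from C = E D E^T, so that W x has identity
    covariance when x has covariance C."""
    w, E = np.linalg.eigh(C)
    return np.diag(1.0 / np.sqrt(w)) @ E.T

# demo: whiten synthetic correlated data
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3)) @ np.array([[1.0, 0.5, 0.0],
                                          [0.0, 1.0, 0.3],
                                          [0.0, 0.0, 1.0]])
C = np.cov(X, rowvar=False)
W = whitening_matrix(C)
Z = (X - X.mean(axis=0)) @ W.T
# np.cov(Z, rowvar=False) is the identity up to numerical error
```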

The additive genetic covariance function plays the same role in the evolution of growth trajectories that the additive genetic covariance matrix does in the stan…


Based on a previous exact analysis of the variance of sample space-time covariance matrix estimation, this paper has presented an empirical approach to the estimation of the support for such matrices. Inaccessible quantities such as the exact space-time covariance matrix, on which an optimum support selection would be based, are replaced by estimated quantities as approximations. A drawback of the proposed approach is that it requires the computation of estimates of the space-time covariance matrix over substantially more lags than will ultimately be selected as support. Also, a statistically motivated determination of the threshold has not yet been derived.

Abstract— Spectrum sensing is a key task for cognitive radio. Our motivation is to increase the probability of detection in spectrum sensing for cognitive radio. The proposed spectrum-sensing algorithms are based on statistical methods such as the EVD and CVD of a covariance matrix. Two test statistics are extracted from the sample covariance matrix, and the decision on signal presence is made by comparing the two test statistics. The detection probability and the associated threshold are found based on statistical theory. In this paper, we study collaborative sensing as a means to improve the performance of the proposed spectrum sensing technique and show its effect on a cooperative cognitive radio network. Simulation results and performance evaluations are done in Matlab, and the results are tabulated.
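The abstract does not spell out its two test statistics, but a representative covariance-based statistic of this family is the maximum-to-minimum eigenvalue ratio of the sample covariance, sketched here with a synthetic received-signal matrix (the threshold and dimensions are illustrative):

```python
import numpy as np

def eig_ratio_statistic(Y):
    """Y: sensors x samples. Ratio of the largest to smallest eigenvalue
    of the sample covariance; near 1 under noise only, larger when a
    correlated signal is present across the sensors."""
    w = np.linalg.eigvalsh(np.cov(Y))
    return w[-1] / w[0]

rng = np.random.default_rng(0)
noise = rng.normal(size=(4, 2000))
s = rng.normal(size=2000)                  # common primary-user signal
signal = noise + np.outer(np.ones(4), s)   # same signal on every sensor
# decision: compare the statistic against a threshold, e.g. 2.0
```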

ization on the off-diagonal matrix only. This is referred to as the graphical Lasso (GLasso) due to the connection of the precision matrix to Gaussian Markov graphical models. In this GLasso framework, Ravikumar et al. (2008) provide sufficient conditions for model selection consistency, while Rothman et al. (2008) provide the convergence rate {((p + s)/n) log p}^{1/2} in the Frobenius norm and {(s/n) log p}^{1/2} in the spectrum norm, where s is the number of nonzero off-diagonal entries in the precision matrix. Concave penalties have been studied to reduce the bias of the GLasso (Lam and Fan, 2009). Similar convergence rates have been studied under the Frobenius norm in a unified framework for penalized estimation in Negahban et al. (2012). Since the spectrum norm can be controlled via the Frobenius norm, this provides a sufficient condition, (s/n) log p → 0, for convergence to the unknown precision matrix under the spectrum norm. However, in the case of p ≥ n, this condition does not hold for banded precision matrices, where s is of the order of the product of p and the width of the band.


This work has presented several advances in the field of data assimilation and predictive model calibration, and has illustrated the significance and applicability of these advances by using the experimental results from the Lady Godiva, Jezebel and LCT critical assemblies to calibrate cross-sections within the neutron transport code Denovo and obtain best-estimate predictions for these reactor physics problems. An important aspect of the novel contributions presented in this work is the development of highly parallel and scalable algorithms for applying data adjustment and assimilation to large (peta-)scale systems, thereby significantly extending the practical feasibility and applicability of predictive model calibration activities. As shown in Chapter 2, these new algorithms also include mathematical verification procedures for identifying non-physical covariance matrices, as well as for quantifying the consistency of computational and experimental information. Very importantly, the new consistency verification criteria introduced in Chapter 2 have identified unphysical deficiencies in the 44-group evaluated covariance files of ORNL's widely used SCALE code package.


MIMO (multiple-input multiple-output) radar systems have attracted the interest of the research community due to their capability to significantly increase performance compared to traditional monostatic and multistatic systems. While by the general definition MIMO can be viewed as a type of multistatic radar, in this work the distinctive difference between the two arises from the distinct waveforms attributed to each transmitter and the joint processing that MIMO relies on [1]. Following this definition, MIMO radars can be classified by the spatial allocation of their antennas, with the two extremes of collocated and widely distributed configurations posing different advantages, discussed in [2] and [3] respectively. Additionally, as described in [2] and [4], the systems can also be categorised by the coherency of their operating waveforms, with the special cases of fully orthogonal and coherent signals. Moreover, the importance of the target model in MIMO systems was discussed in [1] and [5], where it was shown how the correlation of the transmitter-target-receiver channel matrix depends on the geometry of the system and the dimensions of the target.

We forecast the one-day-ahead VaR and CVaR of equally weighted long-only and short-only Phelix Baseload and Peakload portfolios. The competing models are specified in Table 3.6.2. We also employ the RiskMetrics (RM) model with smoothing constant λ = 0.94 as a naive benchmark. To backtest the models we use a period of 250 trading days, which corresponds approximately to one year of trading. The forecasting period for the Baseload portfolio corresponds to the period from January 25, 2011 to February 07, 2012, and that of the Peakload portfolio spans the period from February 10, 2011 to February 22, 2012. We evaluate all risk estimates at the 1% and 5% confidence levels, since these are the levels most commonly used for model evaluation both in the literature and in financial markets. Figure 3.7.1 displays the Baseload portfolio return series and the 95%-VaR forecasts for the mixed C-vine-EVT and A-C-vine-EVT models. For a backtesting period of 250 observations and confidence levels of 1% and 5% we expect 2.5 and 12.5 exceedances, respectively. According to Figure 3.7.1, both models seem to respond well to volatility changes and produce an acceptable number of failures. However, the VaR performance of each single model is hard to assess visually, and hence a two-stage selection procedure, similar to Sarma et al. (2003), is followed. In the first stage, all models are tested for statistical accuracy and, if they survive rejection, a second-stage filtering of the surviving models is employed using subjective loss functions.
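The exceedance counts quoted above feed directly into standard first-stage accuracy tests. Here is a sketch of the Kupiec proportion-of-failures likelihood-ratio test; the function name and the illustrative data are ours, not from the text:

```python
import numpy as np

def kupiec_pof(returns, var_forecasts, alpha):
    """Count VaR exceedances and form the Kupiec LR statistic, to be
    compared against the chi-square(1) critical value (3.841 at 95%).
    alpha * len(returns) is the expected number of exceedances."""
    x = int(np.sum(returns < -var_forecasts))   # losses beyond the VaR
    T = len(returns)
    pi = min(max(x / T, 1e-12), 1.0 - 1e-12)    # guard the logs
    lr = -2.0 * (x * np.log(alpha) + (T - x) * np.log(1.0 - alpha)
                 - x * np.log(pi) - (T - x) * np.log(1.0 - pi))
    return x, lr
```

For T = 250 and alpha = 0.05 the expected count is 12.5, matching the text.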


In applications, the space-time covariance or the CSD matrix generally has to be estimated from data. While the accuracy of the decomposition itself has been investigated in [19, 20], and limiting factors due to algorithm-internal order reductions [8, 21–23] and the conditioning of the underlying source model [24] are known, it is only recently that the effect of estimating the space-time covariance matrix from a finite data set has been addressed [25]. While [25] linked the