# Signal subspace
In signal processing, **signal subspace** methods are empirical linear methods for dimensionality reduction and noise reduction. These approaches have attracted significant interest and investigation in the context of speech enhancement, speech modeling and speech classification research. Essentially, the methods apply a principal components analysis (PCA) approach to ensembles of observed time series obtained by sampling, for example by sampling an audio signal. Such samples can be viewed as vectors in a high-dimensional vector space over the real numbers. PCA is used to identify a set of orthogonal basis vectors (basis signals) which capture as much as possible of the energy in the ensemble of observed samples. The vector space spanned by the basis vectors identified by the analysis is then the "signal subspace". The underlying assumption is that the information in speech signals is almost completely contained in a small linear subspace of the overall space of possible sample vectors, whereas additive noise is typically distributed through the larger space isotropically (for example when it is white noise). Projecting a sample onto the signal subspace, that is, keeping only the component of the sample that lies in the span of the first few most energetic basis vectors and discarding the remainder, which lies in the orthogonal complement of this subspace, then achieves a certain amount of noise filtering.
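The projection described above can be sketched in a few lines of NumPy. This is a minimal illustration, not an implementation from the source: the function name, the frame layout (one observed sample vector per row), and the choice of SVD for the PCA step are all assumptions.

```python
import numpy as np

def subspace_denoise(frames, k):
    """Project each observed frame onto the k-dimensional signal
    subspace spanned by the top principal components of the ensemble.

    frames : (n_frames, frame_len) array of noisy sample vectors
    k      : assumed dimension of the signal subspace (illustrative)
    """
    mean = frames.mean(axis=0)
    centered = frames - mean
    # PCA via SVD: the rows of Vt are orthonormal basis vectors,
    # ordered by how much of the ensemble's energy they capture.
    _, _, Vt = np.linalg.svd(centered, full_matrices=False)
    basis = Vt[:k]  # basis of the empirical "signal subspace"
    # Keep only the component of each frame that lies in the subspace;
    # the orthogonal remainder (mostly noise) is discarded.
    return centered @ basis.T @ basis + mean
```

On an ensemble of frames sharing a low-rank structure plus isotropic noise, the projection retains the signal while rejecting the fraction of the noise energy that falls outside the subspace.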

Signal subspace noise reduction can be compared to Wiener filter methods. There are two main differences:

* The basis signals used in Wiener filtering are usually harmonic sine waves, into which a signal can be decomposed by the Fourier transform. In contrast, the basis signals used to construct the signal subspace are identified empirically, and may for example be chirps, or the particular characteristic shapes of transients that follow particular triggering events, rather than pure sinusoids.
* The Wiener filter grades smoothly between linear components that are dominated by signal and those that are dominated by noise. The noise components are filtered out, but not quite completely; the signal components are retained, but not quite completely; and there is a transition zone which is partly accepted. In contrast, the signal subspace approach imposes a sharp cut-off: a component either lies within the signal subspace, in which case it is fully accepted, or orthogonal to it, in which case it is fully rejected. This reduction in dimensionality, abstracting the signal into a much shorter vector, can be a particularly desired feature of the method.
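The contrast between the two weighting rules can be made concrete by writing down the per-component gains each approach applies. This is an illustrative sketch, assuming components are sorted by decreasing energy and using a standard Wiener-style gain; the function names and the noise-power parameter are hypothetical, not from the source.

```python
import numpy as np

def subspace_gains(energies, k):
    # Sharp cut-off: the first k components pass unchanged,
    # everything else is rejected entirely.
    gains = np.zeros_like(energies, dtype=float)
    gains[:k] = 1.0
    return gains

def wiener_gains(energies, noise_power):
    # Smooth trade-off: each component's gain grows with its
    # estimated signal-to-noise ratio, never reaching exactly 0 or 1
    # except in the limiting cases.
    signal_power = np.maximum(energies - noise_power, 0.0)
    return signal_power / (signal_power + noise_power)
```

For components with energies `[10, 5, 1, 0.3, 0.1]`, the subspace rule with `k = 2` yields the binary gains `[1, 1, 0, 0, 0]`, while the Wiener rule yields smoothly graded gains between 0 and 1.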

In the simplest case, signal subspace methods assume white noise, but extensions of the approach to coloured-noise removal, and evaluations of subspace-based speech enhancement for robust speech recognition, have also been reported.

