
Compressed sensing in speech signals

From Wikipedia, the free encyclopedia

Compressed sensing (CS) can be used to reconstruct a sparse vector from a small number of measurements, provided the signal can be represented in a sparse domain. A sparse domain is a domain in which only a few coefficients have non-zero values. If a signal can be represented in a domain where only $K$ coefficients out of $N$ (where $K \ll N$) are non-zero, the signal is said to be sparse in that domain. The reconstructed sparse vector can then be used to recover the original signal, provided the sparse domain of the signal is known. CS can therefore be applied to a speech signal only if its sparse domain is known.
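As a minimal numerical illustration of this definition (the length, positions and values below are arbitrary choices, not taken from any reference), a vector is $K$-sparse when only $K$ of its $N$ entries are non-zero:

    import numpy as np

    # Hypothetical K-sparse vector: N = 16 entries, only K = 3 are non-zero.
    N = 16
    alpha = np.zeros(N)
    alpha[[2, 7, 11]] = [1.5, -0.8, 2.1]   # arbitrary positions and values

    K = np.count_nonzero(alpha)            # K = 3, with K << N
    print(f"K = {K} non-zero coefficients out of N = {N}")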

Consider a speech signal $x$ which can be represented in a domain $\Psi$ such that $x = \Psi \alpha$, where the speech signal $x \in \mathbb{R}^{N}$, the dictionary matrix $\Psi \in \mathbb{R}^{N \times N}$, and the sparse coefficient vector $\alpha \in \mathbb{R}^{N}$. This speech signal is said to be sparse in the domain $\Psi$ if the number of significant (non-zero) coefficients in the sparse vector $\alpha$ is $K$, where $K \ll N$.
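A sketch of this model in Python, assuming an inverse-DCT dictionary (one common choice for $\Psi$; the article does not fix a particular dictionary):

    import numpy as np
    from scipy.fftpack import idct

    # Columns of Psi are inverse-DCT atoms, so x = Psi @ alpha synthesizes
    # a signal from a handful of non-zero coefficients.
    N = 64
    Psi = idct(np.eye(N), norm='ortho', axis=0)

    alpha = np.zeros(N)
    alpha[[3, 10]] = [1.0, 0.5]    # K = 2 sparse coefficients
    x = Psi @ alpha                # signal that is 2-sparse in the DCT domain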

The observed signal $x$ is of dimension $N \times 1$. To reduce the complexity of solving for $\alpha$ using CS, the speech signal is observed through a measurement matrix $\Phi$ such that

    $y = \Phi x = \Phi \Psi \alpha \qquad (1)$

where $y \in \mathbb{R}^{M}$, and the measurement matrix $\Phi \in \mathbb{R}^{M \times N}$ with $M \ll N$.
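A sketch of eq. 1 with a random Gaussian measurement matrix (a common but not mandated choice; the sizes are illustrative):

    import numpy as np

    rng = np.random.default_rng(0)
    N, M = 64, 16                                    # M << N
    Phi = rng.standard_normal((M, N)) / np.sqrt(M)   # random measurement matrix

    x = rng.standard_normal(N)    # stand-in for a length-N speech frame
    y = Phi @ x                   # M compressed measurements (eq. 1)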

The sparse decomposition problem for eq. 1 can be solved as a standard $\ell_1$ minimization[1]:

    $\hat{\alpha} = \arg\min_{\alpha} \lVert \alpha \rVert_1 \quad \text{subject to} \quad y = \Phi \Psi \alpha$
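A minimal sketch of this $\ell_1$ minimization (basis pursuit) via its standard linear-programming reformulation, using scipy; the function name and the reformulation details are illustrative, not from the cited work:

    import numpy as np
    from scipy.optimize import linprog

    def basis_pursuit(A, y):
        """Solve min ||alpha||_1 subject to A @ alpha = y.

        Standard LP reformulation: write alpha = u - v with u, v >= 0;
        then ||alpha||_1 = sum(u + v) at the optimum.
        """
        M, N = A.shape
        c = np.ones(2 * N)                 # objective: sum(u) + sum(v)
        A_eq = np.hstack([A, -A])          # enforces A @ (u - v) = y
        res = linprog(c, A_eq=A_eq, b_eq=y, bounds=(0, None))
        u, v = res.x[:N], res.x[N:]
        return u - v                       # recovered alpha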

If the measurement matrix $\Phi$ satisfies the restricted isometry property (RIP) and is incoherent with the dictionary matrix $\Psi$,[2] then the reconstructed signal is close to the original speech signal.
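Verifying RIP directly is computationally intractable, but the mutual coherence between $\Phi$ and $\Psi$ is easy to compute and serves as a practical proxy. A sketch (scaling conventions vary; the $\sqrt{N}$ factor some authors include is omitted here):

    import numpy as np

    def mutual_coherence(Phi, Psi):
        """Largest absolute inner product between unit-normalized rows of
        Phi and columns of Psi; smaller values mean less coherence."""
        rows = Phi / np.linalg.norm(Phi, axis=1, keepdims=True)
        cols = Psi / np.linalg.norm(Psi, axis=0, keepdims=True)
        return float(np.max(np.abs(rows @ cols)))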

Different types of measurement matrices, such as random matrices, can be used for speech signals.[3][4] Estimating the sparsity of a speech signal is difficult because speech varies strongly over time, and so its sparsity also varies strongly over time. Ideally, the sparsity of the speech signal would be tracked over time without much computational cost; if this is not possible, a worst-case sparsity can be assumed for a given speech signal. A crude frame-wise estimate is sketched below.
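One simple way to form such an estimate, assuming a DCT domain and a relative magnitude threshold (the frame length and threshold are illustrative assumptions):

    import numpy as np
    from scipy.fftpack import dct

    def framewise_sparsity(signal, frame_len=256, rel_thresh=0.01):
        """Per-frame count of DCT coefficients above a fraction of the
        frame's peak magnitude; a rough, time-varying sparsity estimate."""
        ks = []
        for start in range(0, len(signal) - frame_len + 1, frame_len):
            coeffs = dct(signal[start:start + frame_len], norm='ortho')
            peak = np.abs(coeffs).max()
            ks.append(int(np.sum(np.abs(coeffs) > rel_thresh * peak)))
        return ks   # a worst-case estimate is max(ks)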

The sparse vector $\hat{\alpha}$ for a given speech signal is reconstructed from a small number of measurements $y$ (with $M \ll N$) using $\ell_1$ minimization.[1] The original speech signal is then recovered from the calculated sparse vector using the fixed dictionary matrix as $\hat{x} = \Psi \hat{\alpha}$.[5] An end-to-end sketch of this pipeline follows.
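The sketch below combines the pieces above; the dictionary, measurement matrix, solver and sizes are all illustrative choices, not prescribed by the cited works:

    import numpy as np
    from scipy.fftpack import idct
    from scipy.optimize import linprog

    rng = np.random.default_rng(1)
    N, M, K = 64, 24, 3

    Psi = idct(np.eye(N), norm='ortho', axis=0)        # fixed dictionary
    alpha = np.zeros(N)
    alpha[rng.choice(N, K, replace=False)] = rng.standard_normal(K)
    x = Psi @ alpha                                    # K-sparse "speech" frame

    Phi = rng.standard_normal((M, N)) / np.sqrt(M)     # random measurements
    y = Phi @ x                                        # eq. 1

    A = Phi @ Psi                                      # effective sensing matrix
    res = linprog(np.ones(2 * N), A_eq=np.hstack([A, -A]), b_eq=y,
                  bounds=(0, None))                    # l1 minimization
    alpha_hat = res.x[:N] - res.x[N:]

    x_hat = Psi @ alpha_hat                            # reconstruction
    print("relative error:", np.linalg.norm(x - x_hat) / np.linalg.norm(x))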

Estimation of both the dictionary matrix and the sparse vector from random measurements alone has been done iteratively.[6] The speech signal reconstructed from the estimated sparse vector and dictionary matrix is close to the original signal. Further iterative approaches for computing both the dictionary matrix and the speech signal from random measurements of the speech signal have also been proposed.[7] The application of structured sparsity to joint speech localization and separation in reverberant acoustics has been investigated for multiparty speech recognition.[8] Further applications of the concept of sparsity are yet to be studied in the field of speech processing. The idea behind CS for speech signals is to develop algorithms or methods that use only the random measurements $y$ to perform application-based processing such as speaker recognition and speech enhancement.[9]

References

  1. Donoho D. (2006). "Compressed sensing". IEEE Transactions on Information Theory. 52 (4): 1289. doi:10.1109/TIT.2006.871582. CiteSeerX 10.1.1.212.6447.
  2. Candès E., Romberg J. and Tao T. (2006). "Robust uncertainty principles: exact signal reconstruction from highly incomplete frequency information". IEEE Transactions on Information Theory. 52 (2): 489. doi:10.1109/TIT.2005.862083.
  3. Zhang G., Jiao S., Xu X. and Wang L. (2010). The 2010 IEEE International Conference on Information and Automation: 455. doi:10.1109/ICINFA.2010.5512379. ISBN 978-1-4244-5701-4.
  4. Li K., Ling C. and Gan L. (2011). 2011 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP): 3748. doi:10.1109/ICASSP.2011.5947166. ISBN 978-1-4577-0538-0.
  5. Christensen M., Østergaard J. and Jensen S. (2009). 2009 Conference Record of the Forty-Third Asilomar Conference on Signals, Systems and Computers: 356. doi:10.1109/ACSSC.2009.5469828. ISBN 978-1-4244-5825-7.
  6. Raj C. S. and Sreenivas T. V. (2011). "Time-varying signal adaptive transform and IHT recovery of compressive sensed speech". Interspeech: 73–76.
  7. Chetupally S. R. and Sreenivas T. V. (2012). "Joint pitch-analysis formant-synthesis framework for CS recovery of speech". Interspeech: 946–949.
  8. Asaei A., Bourlard H. and Cevher V. (2011). "Model-based Compressive Sensing for Multiparty Distant Speech Recognition". ICASSP: 4600–4603.
  9. Abrol V. and Sharma P. (2013). Interspeech: 3274–3278. http://www.isca-speech.org/archive/interspeech_2013/i13_3274.html.