
Determining speech from facial skin movements using a housing supported by ear or associated with an earphone

Title:
Determining speech from facial skin movements using a housing supported by ear or associated with an earphone
Author / Contributor: Q (CUE) LTD.
Published: 2024
Media type: Patent
Additional details:
  • Indexed in: USPTO Patent Grants
  • Languages: English
  • Patent Number: 11,908,478
  • Publication Date: February 20, 2024
  • Appl. No: 18/179632
  • Application Filed: March 07, 2023
  • Assignees: Q (Cue) Ltd. (Ramat Gan, IL)
  • Claim: 1. A wearable sensing system for determining speech based on minute facial skin surface movements, the wearable sensing system comprising: a housing configured to be worn on a head of a user and to be supported by an ear of the user; at least one coherent light source associated with the housing and configured to direct light towards a facial region of the head; at least one sensor associated with the housing and configured to receive light source reflections from the facial region and to output associated reflection signals; at least one processor configured to: receive the reflection signals from the at least one sensor in a time interval; analyze the reflection signals to identify minute facial movements in the time interval; decipher the minute facial movements to determine associated speech; and generate output of the associated speech.
  • Claim: 2. The wearable sensing system of claim 1, wherein the at least one coherent light source is configured to direct a plurality of beams of coherent light toward different locations on the facial region of the head thus creating an array of spots.
  • Claim: 3. The wearable sensing system of claim 2, wherein the coherent light source is configured to cause an area of the array of spots to be at least 1 cm².
  • Claim: 4. The wearable sensing system of claim 2, wherein the coherent light source is configured to cause the area of the array of spots to be at least 4 cm².
  • Claim: 5. The wearable sensing system of claim 1, wherein the housing is supported by the ear of the user via association with a pair of spectacles.
  • Claim: 6. The wearable sensing system of claim 5, wherein the at least one coherent light source is integrated with a frame of the pair of spectacles.
  • Claim: 7. The wearable sensing system of claim 1, wherein the housing is configured such that when worn, the at least one sensor is held at a distance from a skin surface.
  • Claim: 8. The wearable sensing system of claim 7, wherein the housing is configured such that the at least one sensor is held at least 5 mm from the skin surface.
  • Claim: 9. The wearable sensing system of claim 1, wherein the output is audio, and wherein the system further comprises a speaker configured to output the speech in audio form.
  • Claim: 10. The wearable sensing system of claim 9, wherein the audio output is a synthetization of the determined associated speech.
  • Claim: 11. The wearable sensing system of claim 1, further comprising a microphone configured to sense sounds uttered by the user.
  • Claim: 12. The wearable sensing system of claim 11, wherein the at least one processor is configured to calibrate spoken words detected by the microphone with the identified minute facial movements.
  • Claim: 13. The wearable sensing system of claim 1, wherein the housing is configured, when worn, to assume an aiming direction of the at least one sensor for illuminating a portion of a cheek of the user.
  • Claim: 14. The wearable sensing system of claim 1, wherein the generated output is textual output corresponding to the determined associated speech.
  • Claim: 15. The wearable sensing system of claim 1, wherein the generated output is audible output of the determined associated speech.
  • Claim: 16. The wearable sensing system of claim 1, wherein in deciphering the minute facial movements, the at least one processor is configured to determine activation of facial muscles associated with the minute facial movements.
  • Claim: 17. The wearable sensing system of claim 1, wherein deciphering the minute facial movements is used to determine silent speech.
  • Claim: 18. The wearable sensing system of claim 1, wherein deciphering the minute facial movements is used to determine vocalized speech.
  • Claim: 19. A wearable sensing system for determining speech based on minute facial skin surface movements, the wearable sensing system comprising: a housing configured to be worn on a head of a user, wherein the housing is associated with an earphone; at least one coherent light source associated with the housing and configured to direct light towards a facial region of the head; at least one sensor associated with the housing and configured to receive light source reflections from the facial region and to output associated reflection signals; at least one processor configured to: receive the reflection signals from the at least one sensor in a time interval; analyze the reflection signals to identify minute facial movements in the time interval; decipher the minute facial movements to determine associated speech; and generate output of the associated speech.
  • Claim: 20. The wearable sensing system of claim 19, wherein the earphone has an arm extending therefrom and wherein the at least one coherent light source is located in the arm.
  • Claim: 21. A wearable sensing system for determining speech based on minute facial skin surface movements, the wearable sensing system comprising: a housing configured to be worn on a head of a user; at least one coherent light source associated with the housing and configured to direct light towards a facial region of the head; at least one sensor associated with the housing and configured to receive light source reflections from the facial region and to output associated reflection signals, wherein the housing is configured such that when worn, the at least one sensor is held at a distance of at least 2 cm from the skin surface; and at least one processor configured to: receive the reflection signals from the at least one sensor in a time interval; analyze the reflection signals to identify minute facial movements in the time interval; decipher the minute facial movements to determine associated speech; and generate output of the associated speech.
  • Claim: 22. A method for determining speech based on minute facial skin movements, the method comprising: controlling at least one coherent light source to generate a plurality of beams of coherent light for direction towards different locations on a facial region of a head of a user thus creating an array of spots of at least 1 cm²; receiving via at least one sensor reflections of the light from the facial region in a time interval and outputting associated reflection signals; analyzing the reflection signals to identify minute facial movements captured during the time interval; deciphering the minute facial movements to determine associated speech; and generating output of the associated speech. (An illustrative sketch of these processing steps appears after the record details below.)
  • Patent References Cited: 5826234 October 1998 Lyberg ; 7859654 December 2010 Hartog ; 8638991 January 2014 Zalevsky et al. ; 8792159 July 2014 Zalevsky et al. ; 8860948 October 2014 Abdulhalim et al. ; 9129595 September 2015 Russell et al. ; 9199081 December 2015 Zalevsky et al. ; 9288045 March 2016 Sadot et al. ; 9668672 June 2017 Zalevsky et al. ; 10335041 July 2019 Fixler et al. ; 10679644 June 2020 Rakshit et al. ; 10838139 November 2020 Zalevsky et al. ; 11169176 November 2021 Zalevsky et al. ; 11341222 May 2022 Caffey ; 11343596 May 2022 Chappell, III et al. ; 11605376 March 2023 Hoover ; 20040249510 December 2004 Hanson ; 20060287608 December 2006 Dellacorna ; 20080103769 May 2008 Schultz et al. ; 20090233072 September 2009 Harvey et al. ; 20140375571 December 2014 Hirata ; 20150253502 September 2015 Fish et al. ; 20160004059 January 2016 Menon et al. ; 20160011063 January 2016 Zhang et al. ; 20160027441 January 2016 Liu et al. ; 20160093284 March 2016 Begum et al. ; 20160116356 April 2016 Goldstein ; 20160379638 December 2016 Basye et al. ; 20170084266 March 2017 Bronakowski et al. ; 20170222729 August 2017 Sadot et al. ; 20170263237 September 2017 Green et al. ; 20180020285 January 2018 Zass ; 20180107275 April 2018 Chen ; 20180232511 August 2018 Bakish ; 20190074012 March 2019 Kapur et al. ; 20190074028 March 2019 Howard ; 20190189145 June 2019 Rakshit et al. ; 20190277694 September 2019 Sadot et al. ; 20200020352 January 2020 Ito et al. ; 20200075007 March 2020 Kawahara et al. ; 20210027154 January 2021 Zalevsky et al. ; 20210035585 February 2021 Gupta ; 20210052368 February 2021 Smadja et al. ; 20210063563 March 2021 Zalevsky et al. ; 20210072153 March 2021 Zalevsky et al. ; 20210169333 June 2021 Zalevsky et al. ; 20220084196 March 2022 Ogawa et al. ; 20220125286 April 2022 Zalevsky et al. ; 20220163444 May 2022 Zalevsky ; 20230215437 July 2023 Maizels et al. ; 20230230574 July 2023 Maizels et al. ; 20230230575 July 2023 Maizels et al. ; 20230230594 July 2023 Maizels et al. ; 20230267914 August 2023 Maizels et al. ; 105488524 April 2016 ; 2012040747 March 2021 ; 2002077972 October 2022 ; 2023012527 February 2023 ; 2023012546 February 2023
  • Other References: Chandrashekhar, V., “The Classification of EMG Signals Using Machine Learning for the Construction of a Silent Speech Interface,” The Young Researcher-RSGC Royal St. George's College, 2021, vol. 5, Issue 1, pp. 266-283. cited by applicant ; International Search Report and Written Opinion from International Application No. PCT/IB2022/054527 dated Aug. 30, 2022 (7 pages). cited by applicant ; International Search Report and Written Opinion from International Application No. PCT/IB2022/056418 dated Oct. 31, 2022 (11 pages). cited by applicant
  • Primary Examiner: Kazeminezhad, Farzad
  • Attorney, Agent or Firm: Finnegan, Henderson, Farabow, Garrett & Dunner LLP
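The independent claims (1, 19, 21, and 22) all describe the same processing loop: reflection signals from the illuminated facial region are collected over a time interval, analyzed to identify minute facial movements, deciphered into associated speech, and output is generated. The Python sketch below illustrates only that flow; every name, data format, threshold, and the simple frame-difference movement measure is an assumption made for illustration and is not disclosed in the patent record above.

"""Minimal sketch (assumptions only, not the patented implementation) of the
claim-22 processing steps: collect reflection signals for a time interval,
identify minute facial movements, decipher them into speech, generate output."""

from dataclasses import dataclass
from typing import List

import numpy as np


@dataclass
class ReflectionFrame:
    """One sensor readout of the reflected spot array (hypothetical format)."""
    timestamp: float
    intensities: np.ndarray  # per-spot reflection intensities


def identify_minute_movements(frames: List[ReflectionFrame],
                              threshold: float = 0.05) -> np.ndarray:
    """Flag frames whose spot intensities deviate from the previous frame.

    A real system would use a richer model of the coherent-light reflections;
    here a simple frame-to-frame intensity difference stands in for 'analyzing
    the reflection signals to identify minute facial movements'."""
    movements = []
    for prev, curr in zip(frames, frames[1:]):
        delta = np.mean(np.abs(curr.intensities - prev.intensities))
        movements.append(delta > threshold)
    return np.array(movements, dtype=bool)


def decipher_speech(movement_flags: np.ndarray) -> str:
    """Placeholder for the deciphering step: map a movement pattern to text.

    The record does not disclose a specific decoder; any classifier or
    sequence model could sit behind this interface."""
    return "hello" if movement_flags.any() else ""


def run_pipeline(frames: List[ReflectionFrame]) -> str:
    """Receive reflection signals for a time interval and generate output."""
    movements = identify_minute_movements(frames)
    return decipher_speech(movements)


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Simulated reflection signals: a 1-second interval at 100 Hz, with
    # 16 illuminated spots per frame (cf. the 'array of spots' of claim 2).
    frames = [ReflectionFrame(t / 100.0, rng.random(16)) for t in range(100)]
    print(run_pipeline(frames) or "<no speech detected>")

In a real device the decipher_speech placeholder would be backed by a trained decoder, and the generated output could be rendered as text (claim 14) or as synthesized audio through a speaker (claims 9, 10, and 15).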
