Contextual Computer-Generated Reality (CGR) Digital Assistants

2020
Online Patent

Title:
Contextual Computer-Generated Reality (CGR) Digital Assistants
Link:
Published: 2020
Media type: Patent
Other:
  • Indexed in: USPTO Patent Applications
  • Languages: English
  • Document Number: 20200098188
  • Publication Date: March 26, 2020
  • Appl. No: 16/577310
  • Application Filed: September 20, 2019
  • Claim: 1. A method comprising: at a device including one or more processors, non-transitory memory, and one or more displays: obtaining image data characterizing a field of view captured using an image sensor; identifying in the image data a contextual trigger for one of a plurality of contextual computer-generated reality (CGR) digital assistants; in response to identifying the contextual trigger, selecting a visual representation of the one of the plurality of contextual CGR digital assistants based on context; and presenting a CGR scene by displaying the visual representation of the one of the plurality of contextual CGR digital assistants, wherein the visual representation provides information associated with the contextual trigger.
  • Claim: 2. The method of claim 1, wherein obtaining the image data characterizing the field of view captured by the image sensor includes receiving the image data characterizing the field of view captured using the image sensor on a second device, distinct from the device.
  • Claim: 3. The method of claim 1, wherein obtaining the image data characterizing the field of view captured using the image sensor includes obtaining the image data characterizing the field of view captured by the image sensor integrated into the device.
  • Claim: 4. The method of claim 1, wherein the contextual trigger for the one of the plurality of contextual CGR digital assistants is identified in response to receiving an input from a user through an input device connected to or integrated into the device.
  • Claim: 5. The method of claim 4, wherein: the input device includes a gaze sensor configured to detect eye gaze; and identifying in the image data the contextual trigger for the one of the plurality of contextual CGR digital assistants includes: detecting the eye gaze of the user proximate to an object in the field of view; and activating the contextual trigger associated with the object.
  • Claim: 6. The method of claim 4, wherein: the input device includes an audio input device; and identifying in the image data the contextual trigger for the one of the plurality of contextual CGR digital assistants includes activating the contextual trigger according to input received using the audio input device.
  • Claim: 7. The method of claim 4, wherein: the input device includes an inertial measurement unit (IMU) to obtain body poses of the user; and identifying in the image data the contextual trigger for the one of the plurality of contextual CGR digital assistants includes: deriving from the body poses positions of body portions of the user, wherein the positions of body portions indicate an interest in a subject in the image data; and activating the contextual trigger associated with the subject in the image data.
  • Claim: 8. The method of claim 4, wherein: the input device includes one or more cameras associated with an HMD worn by the user to obtain the field of view associated with the user; and identifying in the image data the contextual trigger for the one of the plurality of contextual CGR digital assistants includes: generating pose information for the user based on the field of view; deriving from the pose information positions of body portions of the user, wherein the positions of body portions indicate an interest in a subject in the image data; and activating the contextual trigger associated with the subject in the image data.
  • Claim: 9. The method of claim 1, wherein the visual representation of the one of the plurality of contextual CGR digital assistants has at least one contextual meaning.
  • Claim: 10. The method of claim 1, wherein the context includes at least one of a calendar event associated with a user or a location of the device.
  • Claim: 11. The method of claim 1, further comprising displaying on the one or more displays an animation of the visual representation of the one of the plurality of contextual CGR digital assistants providing the information to a user, including adjusting the animation according to the context.
  • Claim: 12. The method of claim 11, further comprising: determining that the user has received the information; and ceasing to display at least one of the animation or the visual representation of the one of the plurality of contextual CGR digital assistants in the CGR scene.
  • Claim: 13. The method of claim 1, further comprising providing audio output using a plurality of speakers, the audio output spatially corresponding to a location associated with the one of the plurality of contextual CGR digital assistants in the field of view.
  • Claim: 14. The method of claim 1, wherein the device includes at least one of a head-mounted device, a mobile phone, a tablet, or a drone.
  • Claim: 15. A device comprising: one or more displays; non-transitory memory; and one or more processors to: obtain image data characterizing a field of view captured by an image sensor; identify in the image data a contextual trigger for one of a plurality of contextual CGR digital assistants; in response to identifying the contextual trigger, select a visual representation of the one of the plurality of contextual CGR digital assistants; and present a CGR scene by displaying the visual representation of the one of the plurality of contextual CGR digital assistants, wherein the visual representation provides information associated with the contextual trigger.
  • Claim: 16. The device of claim 15, wherein obtaining the image data characterizing the field of view captured by the image sensor includes receiving the image data characterizing the field of view captured using the image sensor on a second device, distinct from the device.
  • Claim: 17. The device of claim 15, wherein obtaining the image data characterizing the field of view captured using the image sensor includes obtaining the image data characterizing the field of view captured by the image sensor integrated into the device.
  • Claim: 18. A non-transitory memory storing one or more programs, which, when executed by one or more processors of a device with one or more displays, cause the device to perform operations comprising: obtaining image data characterizing a field of view captured by an image sensor; identifying in the image data a contextual trigger for one of a plurality of contextual CGR digital assistants; in response to identifying the contextual trigger, selecting a visual representation of the one of the plurality of contextual CGR digital assistants; and presenting a CGR scene by displaying the visual representation of the one of the plurality of contextual CGR digital assistants, wherein the visual representation provides information associated with the contextual trigger.
  • Claim: 19. The non-transitory memory of claim 18, wherein obtaining the image data characterizing the field of view captured by the image sensor includes receiving the image data characterizing the field of view captured using the image sensor on a second device, distinct from the device.
  • Claim: 20. The non-transitory memory of claim 18, wherein obtaining the image data characterizing the field of view captured using the image sensor includes obtaining the image data characterizing the field of view captured by the image sensor integrated into the device.
  • Current International Class: 06; 06; 02
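The method of claim 1 follows a simple pipeline: obtain image data, identify a contextual trigger in it, select a context-dependent visual representation of a CGR digital assistant, and present that representation. A minimal sketch of that flow is below; all names here (`ASSISTANTS`, `identify_trigger`, the label-based detection) are illustrative assumptions for exposition, not the patent's actual implementation.

```python
# Hypothetical sketch of the claim-1 pipeline. Object detection is
# abstracted as pre-computed labels for the field of view; a real
# system would run a detector over the image sensor's frames.

from dataclasses import dataclass


@dataclass
class ContextualTrigger:
    object_label: str  # object in the field of view that fired the trigger


# Assumed registry: detected object -> {context: visual representation}.
# The claims say selection is "based on context" (e.g. a calendar event
# or device location, per claim 10).
ASSISTANTS = {
    "restaurant": {"business": "concierge avatar", "leisure": "food-critic bird"},
    "bus_stop": {"business": "schedule board", "leisure": "tour-guide dog"},
}


def identify_trigger(image_labels):
    """Identify a contextual trigger in the (pre-labeled) image data."""
    for label in image_labels:
        if label in ASSISTANTS:
            return ContextualTrigger(object_label=label)
    return None


def select_representation(trigger, context):
    """Select a visual representation of the assistant based on context."""
    return ASSISTANTS[trigger.object_label].get(context, "generic assistant")


def present_cgr_scene(image_labels, context):
    """Return the representation to display in the CGR scene, or None."""
    trigger = identify_trigger(image_labels)
    if trigger is None:
        return None  # no trigger in view; nothing to present
    return select_representation(trigger, context)
```

Under these assumptions, `present_cgr_scene(["tree", "bus_stop"], "leisure")` selects the leisure-context representation for the bus stop, while a frame with no registered objects yields no assistant, matching the claim's condition that presentation occurs "in response to identifying the contextual trigger".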

