
DEVICE, METHOD, AND GRAPHICAL USER INTERFACE FOR COMPOSING CGR FILES


Title:
DEVICE, METHOD, AND GRAPHICAL USER INTERFACE FOR COMPOSING CGR FILES
Link:
Published: 2020
Media type: Patent
Other:
  • Indexed in: USPTO Patent Applications
  • Languages: English
  • Document Number: 20200387289
  • Publication Date: December 10, 2020
  • Appl. No: 16/892045
  • Application Filed: June 03, 2020
  • Claim: 1. A method comprising: receiving, via one or more input devices, a user input generating a computer-generated scene; receiving, via the one or more input devices, a user input associating an anchor with the computer-generated scene, wherein the anchor is associated with a visual characteristic; receiving, via the one or more input devices, a user input associating one or more objects with the computer-generated scene; receiving, via the one or more input devices, a user input associating a behavior with the computer-generated scene, wherein the user input defines one or more triggers and one or more actions; and while displaying, on a display, the computer-generated scene including the one or more objects in association with the visual characteristic associated with the anchor: receiving, via the one or more input devices, a user input satisfying the one or more triggers; and in response to receiving the user input satisfying the one or more triggers, performing the one or more actions, including modifying the computer-generated scene on the display.
  • Claim: 2. The method of claim 1, wherein the anchor is an anchor image.
  • Claim: 3. The method of claim 1, wherein the anchor includes a first anchor and a second anchor, and wherein displaying the one or more objects includes displaying the one or more objects in association with the second anchor in response to determining that an image of a physical environment lacks a portion corresponding to the first anchor.
  • Claim: 4. The method of claim 1, wherein receiving, via the one or more input devices, the user input associating the one or more objects with the computer-generated scene includes: displaying, on the display, a representation of an object associated with a first parameter and a second parameter, wherein the first parameter has a first one of a plurality of first parameter values and the second parameter has a first one of a plurality of second parameter values; displaying, on the display, a first user interface element for selection of a second one of the plurality of first parameter values; and displaying, on the display, a second user interface element for selection of a second one of the plurality of second parameter values, wherein, based on the first one of the plurality of first parameter values and one or more selection rules, a subset of the plurality of second parameter values is presented for selection via the second user interface element.
  • Claim: 5. The method of claim 4, wherein the second user interface element includes a set of selectable affordances corresponding to the subset of the plurality of second parameter values and a set of non-selectable affordances corresponding to the others of the plurality of second parameter values.
  • Claim: 6. The method of claim 1, wherein receiving, via the one or more input devices, the user input associating the one or more objects with the computer-generated scene includes: displaying, on the display, a particular representation of an object associated with the computer-generated scene; receiving, via the one or more input devices, a user input directed to the particular representation of the object; in response to receiving the user input directed to the particular representation of the object, providing a first manipulation mode associated with a first set of shape-preserving spatial manipulations of the corresponding object; receiving, via the one or more input devices, a user input switching from the first manipulation mode to a second manipulation mode associated with a second set of shape-preserving spatial manipulations of the corresponding object; and in response to receiving the user input switching from the first manipulation mode to the second manipulation mode, providing the second manipulation mode.
  • Claim: 7. The method of claim 6, wherein the first set of shape-preserving spatial manipulations of the corresponding object includes translation of the corresponding object in a plane without including translation of the corresponding object perpendicular to the plane, and wherein the second set of shape-preserving spatial manipulations of the corresponding object includes translation of the corresponding object perpendicular to the plane.
  • Claim: 8. The method of claim 6, wherein the first set of shape-preserving spatial manipulations of the corresponding object includes rotation of the corresponding object about an axis without including rotation of the corresponding object about other axes, and wherein the second set of shape-preserving spatial manipulations of the corresponding object includes rotation of the corresponding object about other axes.
  • Claim: 9. The method of claim 1, wherein receiving, via the one or more input devices, the user input associating the one or more objects with the computer-generated scene includes: displaying, on the display, a representation of a particular object associated with the computer-generated scene; receiving, via the one or more input devices, a user input spatially manipulating the particular object, wherein the particular object is associated with a spatial manipulation point; and changing a spatial property of the particular object based on the user input and the spatial manipulation point.
  • Claim: 10. The method of claim 9, wherein the spatial manipulation point is neither an edge of the particular object, a midpoint of a bounding box surrounding the particular object, nor an unweighted center-of-mass of the particular object.
  • Claim: 11. The method of claim 1, wherein receiving, via the one or more input devices, the user input associating the one or more objects with the computer-generated scene includes: displaying, on the display, a representation of a particular object associated with the computer-generated scene; receiving, via the one or more input devices, a user input directed to the representation of the particular object, the particular object associated with a parameter; displaying, on the display from a first perspective, a plurality of representations of the particular object, wherein each of the plurality of representations of the particular object is associated with a different respective value of the parameter; receiving, via the one or more input devices, a user input changing the first perspective to a second perspective; and in response to receiving the user input changing the first perspective to a second perspective, displaying, on the display, the plurality of representations of the particular object from the second perspective.
  • Claim: 12. The method of claim 11, wherein receiving, via the one or more input devices, the user input associating the one or more objects with the computer-generated scene further includes: receiving a user input selecting a particular representation of the plurality of representations of the particular object; and setting the parameter of the particular object to the respective value of the parameter of the particular representation.
  • Claim: 13. The method of claim 1, wherein a first object of the one or more objects is associated with a display mesh and a physics mesh different than the display mesh, and wherein displaying the first object is based on the display mesh and an interaction of the first object with a second object of the one or more objects is to be determined based on the physics mesh.
  • Claim: 14. The method of claim 13, wherein the physics mesh includes fewer polygons than the display mesh.
  • Claim: 15. The method of claim 1, wherein the behavior includes a trigger associated with a first object of the one or more objects, wherein the behavior includes an action associated with a second object of the one or more objects, the method further comprising displaying, on the display, a representation of the first object with a first highlighting and displaying, on the display, the representation of the second object with a second highlighting, different than the first highlighting.
  • Claim: 16. An electronic device comprising: one or more input devices; a non-transitory memory; a display; and one or more processors to: receive, via the one or more input devices, a user input generating a computer-generated scene; receive, via the one or more input devices, a user input associating an anchor with the computer-generated scene, wherein the anchor is associated with a visual characteristic; receive, via the one or more input devices, a user input associating one or more objects with the computer-generated scene; receive, via the one or more input devices, a user input associating a behavior with the computer-generated scene, wherein the user input defines one or more triggers and one or more actions; and while displaying, on the display, the computer-generated scene including the one or more objects in association with the visual characteristic associated with the anchor: receive, via the one or more input devices, a user input satisfying the one or more triggers; and in response to receiving the user input satisfying the one or more triggers, perform the one or more actions, including modifying the computer-generated scene on the display.
  • Claim: 17. The electronic device of claim 16, wherein the one or more processors are to receive, via the one or more input devices, the user input associating the one or more objects with the computer-generated scene by: displaying, on the display, a particular representation of an object associated with the computer-generated scene; receiving, via the one or more input devices, a user input directed to the particular representation of the object; in response to receiving the user input directed to the particular representation of the object, providing a first manipulation mode associated with a first set of shape-preserving spatial manipulations of the corresponding object; receiving, via the one or more input devices, a user input switching from the first manipulation mode to a second manipulation mode associated with a second set of shape-preserving spatial manipulations of the corresponding object; and in response to receiving the user input switching from the first manipulation mode to the second manipulation mode, providing the second manipulation mode.
  • Claim: 18. The electronic device of claim 16, wherein the one or more processors are to receive, via the one or more input devices, the user input associating the one or more objects with the computer-generated scene by: displaying, on the display, a representation of a particular object associated with the computer-generated scene; receiving, via the one or more input devices, a user input spatially manipulating the particular object, wherein the particular object is associated with a spatial manipulation point; and changing a spatial property of the particular object based on the user input and the spatial manipulation point.
  • Claim: 19. The electronic device of claim 16, wherein the one or more processors are to receive, via the one or more input devices, the user input associating the one or more objects with the computer-generated scene by: displaying, on the display, a representation of a particular object associated with the computer-generated scene; receiving, via the one or more input devices, a user input directed to the representation of the particular object, the particular object associated with a parameter; displaying, on the display from a first perspective, a plurality of representations of the particular object, wherein each of the plurality of representations of the particular object is associated with a different respective value of the parameter; receiving, via the one or more input devices, a user input changing the first perspective to a second perspective; and in response to receiving the user input changing the first perspective to a second perspective, displaying, on the display, the plurality of representations of the particular object from the second perspective.
  • Claim: 20. The electronic device of claim 16, wherein the behavior includes a trigger associated with a first object of the one or more objects, and wherein the behavior includes an action associated with a second object of the one or more objects, the one or more processors further to display, on the display, a representation of the first object with a first highlighting and display, on the display, the representation of the second object with a second highlighting, different than the first highlighting.
  • Claim: 21. A non-transitory computer-readable medium having instructions encoded thereon which, when executed by an electronic device including one or more input devices, one or more processors, and a display, cause the electronic device to: receive, via the one or more input devices, a user input generating a computer-generated scene; receive, via the one or more input devices, a user input associating an anchor with the computer-generated scene, wherein the anchor is associated with a visual characteristic; receive, via the one or more input devices, a user input associating one or more objects with the computer-generated scene; receive, via the one or more input devices, a user input associating a behavior with the computer-generated scene, wherein the user input defines one or more triggers and one or more actions; and while displaying, on the display, the computer-generated scene including the one or more objects in association with the visual characteristic associated with the anchor: receive, via the one or more input devices, a user input satisfying the one or more triggers; and in response to receiving the user input satisfying the one or more triggers, perform the one or more actions, including modifying the computer-generated scene on the display.
  • Claim: 22. The non-transitory computer-readable medium of claim 21, wherein the instructions, when executed, cause the electronic device to receive, via the one or more input devices, the user input associating the one or more objects with the computer-generated scene by: displaying, on the display, a particular representation of an object associated with the computer-generated scene; receiving, via the one or more input devices, a user input directed to the particular representation of the object; in response to receiving the user input directed to the particular representation of the object, providing a first manipulation mode associated with a first set of shape-preserving spatial manipulations of the corresponding object; receiving, via the one or more input devices, a user input switching from the first manipulation mode to a second manipulation mode associated with a second set of shape-preserving spatial manipulations of the corresponding object; and in response to receiving the user input switching from the first manipulation mode to the second manipulation mode, providing the second manipulation mode.
  • Claim: 23. The non-transitory computer-readable medium of claim 21, wherein the instructions, when executed, cause the electronic device to receive, via the one or more input devices, the user input associating the one or more objects with the computer-generated scene by: displaying, on the display, a representation of a particular object associated with the computer-generated scene; receiving, via the one or more input devices, a user input spatially manipulating the particular object, wherein the particular object is associated with a spatial manipulation point; and changing a spatial property of the particular object based on the user input and the spatial manipulation point.
  • Claim: 24. The non-transitory computer-readable medium of claim 21, wherein the instructions, when executed, cause the electronic device to receive, via the one or more input devices, the user input associating the one or more objects with the computer-generated scene by: displaying, on the display, a representation of a particular object associated with the computer-generated scene; receiving, via the one or more input devices, a user input directed to the representation of the particular object, the particular object associated with a parameter; displaying, on the display from a first perspective, a plurality of representations of the particular object, wherein each of the plurality of representations of the particular object is associated with a different respective value of the parameter; receiving, via the one or more input devices, a user input changing the first perspective to a second perspective; and in response to receiving the user input changing the first perspective to a second perspective, displaying, on the display, the plurality of representations of the particular object from the second perspective.
  • Claim: 25. The non-transitory computer-readable medium of claim 21, wherein the behavior includes a trigger associated with a first object of the one or more objects, wherein the behavior includes an action associated with a second object of the one or more objects, and wherein the instructions, when executed, further cause the electronic device to display, on the display, a representation of the first object with a first highlighting and display, on the display, the representation of the second object with a second highlighting, different than the first highlighting.
  • Current International Class: 06; 06
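Claim 1 above describes a scene-composition model: a scene carries an anchor, one or more objects, and behaviors, where each behavior pairs triggers with actions that modify the scene when a matching user input arrives. As an illustrative sketch only, the names `Scene`, `Behavior`, and `handle_input` are hypothetical and not taken from the patent; this shows the trigger/action dispatch the claim recites, not Apple's implementation:

```python
from dataclasses import dataclass, field
from typing import Callable, List

@dataclass
class Behavior:
    """A behavior pairs triggers with the actions they fire (claim 1)."""
    triggers: List[str]                       # e.g. user-input event names
    actions: List[Callable[["Scene"], None]]  # each action may modify the scene

@dataclass
class Scene:
    """A computer-generated scene: an anchor, objects, and behaviors."""
    anchor: str                               # e.g. an anchor image (claim 2)
    objects: List[str] = field(default_factory=list)
    behaviors: List[Behavior] = field(default_factory=list)

    def handle_input(self, event: str) -> None:
        # If a user input satisfies a behavior's trigger, perform its
        # actions, each of which may modify the displayed scene.
        for behavior in self.behaviors:
            if event in behavior.triggers:
                for action in behavior.actions:
                    action(self)

# Hypothetical usage: tapping the cup adds a saucer to the scene.
scene = Scene(anchor="table-image", objects=["cup"])
scene.behaviors.append(
    Behavior(triggers=["tap:cup"],
             actions=[lambda s: s.objects.append("saucer")])
)
scene.handle_input("tap:cup")
print(scene.objects)  # prints ['cup', 'saucer']
```

Keeping triggers and actions as plain data on the scene mirrors the claim's separation between defining a behavior (authoring time) and satisfying its triggers (display time).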
