METHOD AND DEVICE FOR PRESENTING A CGR ENVIRONMENT BASED ON AUDIO DATA AND LYRIC DATA

Title:
METHOD AND DEVICE FOR PRESENTING A CGR ENVIRONMENT BASED ON AUDIO DATA AND LYRIC DATA
Link:
Publication: 2024
Media type: Patent
Additional information:
  • Indexed in: USPTO Patent Applications
  • Languages: English
  • Document Number: 20240071377
  • Publication Date: February 29, 2024
  • Appl. No: 18/385540
  • Application Filed: October 31, 2023
  • Claim: 1. A method comprising: at an electronic device including a processor, non-transitory memory, a speaker, and a display: obtaining a first audio file and a second audio file; parsing the first audio file into a plurality of first segments; parsing the second audio file into a plurality of second segments; generating, for each of the plurality of first segments and each of the plurality of second segments, segment metadata; determining a relationship between first segment metadata of one of the plurality of first segments and second segment metadata of one of the plurality of second segments; and generating computer-generated reality (CGR) content associated with the one of the plurality of first segments and the one of the plurality of second segments based on the relationship, the first segment metadata, and the second segment metadata.
  • Claim: 2. The method of claim 1, wherein parsing the first audio file into the plurality of first segments includes parsing the first audio file into a plurality of segments, each segment indicating a line of lyrics.
  • Claim: 3. The method of claim 1, wherein the first segment metadata indicates a tempo or a key of the one of the plurality of first segments.
  • Claim: 4. The method of claim 1, wherein the first segment metadata indicates a mood of the one of the plurality of first segments.
  • Claim: 5. The method of claim 1, wherein the first segment metadata indicates a meaning of the one of the plurality of first segments derived by semantic analysis of audio data and lyric data of the one of the plurality of first segments.
  • Claim: 6. The method of claim 1, wherein determining the relationship includes determining a matching relationship between the first segment metadata and the second segment metadata.
  • Claim: 7. The method of claim 6, wherein generating the CGR content is based on matched metadata of the first segment metadata and the second segment metadata.
  • Claim: 8. The method of claim 1, wherein determining the relationship includes determining a complementary relationship between the first segment metadata and the second segment metadata.
  • Claim: 9. The method of claim 8, wherein generating the CGR content is based on complementary metadata of the first segment metadata and the second segment metadata and the CGR content includes a plurality of CGR content with one emphasized.
  • Claim: 10. The method of claim 1, wherein determining the relationship includes determining a contrasting relationship between the first segment metadata and the second segment metadata.
  • Claim: 11. The method of claim 10, wherein generating the CGR content is based on contrasting metadata of the first segment metadata and the second segment metadata and the CGR content includes two opposite CGR content interacting.
  • Claim: 11. A device comprising: one or more processors; a non-transitory memory; a speaker; a display; and one or more programs stored in the non-transitory memory, which, when executed by the one or more processors, cause the device to: obtain a first audio file and a second audio file; parse the first audio file into a plurality of first segments; parse the second audio file into a plurality of second segments; generate, for each of the plurality of first segments and each of the plurality of second segments, segment metadata; determine a relationship between first segment metadata of one of the plurality of first segments and second segment metadata of one of the plurality of second segments; and generate computer-generated reality (CGR) content associated with the one of the plurality of first segments and the one of the plurality of second segments based on the relationship, the first segment metadata, and the second segment metadata.
  • Claim: 12. The method of claim 1, further comprising concurrently: playing, via the speaker, the one of the plurality of first segments; playing, via the speaker, the one of the plurality of second segments; and displaying, on a display, the CGR content.
  • Claim: 12. The device of claim 11, wherein determining the relationship includes determining a matching relationship between the first segment metadata and the second segment metadata.
  • Claim: 13. The method of claim 12, wherein playing the one of the plurality of first segments includes adjusting a key and/or tempo of the one of the plurality of first segments to better match the one of the plurality of second segments.
  • Claim: 13. The device of claim 12, wherein generating the CGR content is based on matched metadata of the first segment metadata and the second segment metadata.
  • Claim: 14. The device of claim 11, wherein determining the relationship includes determining a complementary relationship between the first segment metadata and the second segment metadata.
  • Claim: 15. The device of claim 14, wherein generating the CGR content is based on complementary metadata of the first segment metadata and the second segment metadata and the CGR content includes a plurality of CGR content with one emphasized.
  • Claim: 16. The device of claim 11, wherein determining the relationship includes determining a contrasting relationship between the first segment metadata and the second segment metadata.
  • Claim: 17. The device of claim 16, wherein generating the CGR content is based on contrasting metadata of the first segment metadata and the second segment metadata and the CGR content includes two opposite CGR content interacting.
  • Claim: 18. A non-transitory memory storing one or more programs, which, when executed by one or more processors of a device with a speaker and a display, cause the device to: obtain a first audio file and a second audio file; parse the first audio file into a plurality of first segments; parse the second audio file into a plurality of second segments; generate, for each of the plurality of first segments and each of the plurality of second segments, segment metadata; determine a relationship between first segment metadata of one of the plurality of first segments and second segment metadata of one of the plurality of second segments; and generate computer-generated reality (CGR) content associated with the one of the plurality of first segments and the one of the plurality of second segments based on the relationship, the first segment metadata, and the second segment metadata.
  • Claim: 19. The non-transitory memory of claim 18, wherein determining the relationship includes determining a matching relationship between the first segment metadata and the second segment metadata.
  • Claim: 20. The non-transitory memory of claim 19, wherein generating the CGR content is based on matched metadata of the first segment metadata and the second segment metadata.
  • Claim: 21. The non-transitory memory of claim 18, wherein determining the relationship includes determining a complementary relationship between the first segment metadata and the second segment metadata.
  • Claim: 22. The non-transitory memory of claim 21, wherein generating the CGR content is based on complementary metadata of the first segment metadata and the second segment metadata and the CGR content includes a plurality of CGR content with one emphasized.
  • Claim: 23. The non-transitory memory of claim 18, wherein determining the relationship includes determining a contrasting relationship between the first segment metadata and the second segment metadata.
  • Claim: 24. The non-transitory memory of claim 23, wherein generating the CGR content is based on contrasting metadata of the first segment metadata and the second segment metadata and the CGR content includes two opposite CGR content interacting.
  • Current International Class: 10; 06; 06; 06; 10; 10
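The independent claims (1, 11, and 18) share one pipeline: parse two audio files into segments, generate per-segment metadata (tempo, key, mood, meaning), classify the relationship between a segment pair as matching, complementary, or contrasting, and generate CGR content from that relationship and both metadata sets. The following is a minimal, runnable sketch of that flow; the data structures, threshold values, classification rules, and content strings are illustrative assumptions, not anything the claims specify.

    from dataclasses import dataclass

    @dataclass
    class SegmentMetadata:
        tempo: float   # beats per minute (claim 3)
        key: str       # musical key (claim 3)
        mood: str      # mood label (claim 4)
        meaning: str   # semantic summary of audio and lyric data (claim 5)

    def determine_relationship(a: SegmentMetadata, b: SegmentMetadata) -> str:
        """Toy classifier for the three claimed relationship types."""
        if a.key == b.key and abs(a.tempo - b.tempo) < 5:
            return "matching"        # claim 6 (assumed thresholds)
        if a.mood == b.mood:
            return "complementary"   # claim 8 (assumed rule)
        return "contrasting"         # claim 10

    def generate_cgr_content(a: SegmentMetadata, b: SegmentMetadata,
                             relationship: str) -> list[str]:
        """Select CGR content from the relationship and both metadata sets."""
        if relationship == "matching":
            # Claim 7: content based on the matched metadata.
            return [f"scene pulsing at {a.tempo:.0f} BPM in {a.key}"]
        if relationship == "complementary":
            # Claim 9: several CGR items, one emphasized.
            return [f"EMPHASIZED: {a.meaning}", f"background: {b.meaning}"]
        # Claim 11 (method): two opposite CGR items interacting.
        return [f"'{a.mood}' figure interacting with '{b.mood}' figure"]

    first = SegmentMetadata(tempo=120.0, key="C major", mood="joyful",
                            meaning="a celebration at sunrise")
    second = SegmentMetadata(tempo=80.0, key="A minor", mood="melancholy",
                             meaning="a memory of loss")
    rel = determine_relationship(first, second)
    print(rel, "->", generate_cgr_content(first, second, rel))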
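Claims 3 through 5 leave open how the segment metadata is derived. One conventional way to estimate a segment's tempo and a rough tonic is sketched below using the librosa library; librosa itself, the file name, and the argmax-over-chroma key heuristic are all assumptions for illustration, not methods the patent names.

    import librosa
    import numpy as np

    # Hypothetical input: one parsed segment saved as a WAV file.
    y, sr = librosa.load("segment.wav")

    # Tempo estimate for the segment metadata (claim 3).
    tempo, _ = librosa.beat.beat_track(y=y, sr=sr)

    # Crude tonic guess: the pitch class with the most chroma energy.
    chroma = librosa.feature.chroma_cqt(y=y, sr=sr)
    pitch_classes = ["C", "C#", "D", "D#", "E", "F",
                     "F#", "G", "G#", "A", "A#", "B"]
    tonic = pitch_classes[int(np.argmax(chroma.mean(axis=1)))]

    print(f"tempo ~ {float(tempo):.0f} BPM, tonic ~ {tonic}")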
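Method claim 13 adjusts the key and/or tempo of one segment so it better matches the other before concurrent playback. A sketch of one standard way to do that, assuming librosa and soundfile; the patent names no library, and the file names, BPM values, and semitone offset below are placeholders.

    import librosa
    import soundfile as sf

    # Hypothetical inputs: a segment of the first audio file, plus target
    # tempo and key distance taken from the second segment's metadata.
    y, sr = librosa.load("first_segment.wav")
    source_tempo, target_tempo = 120.0, 100.0   # assumed BPM values
    semitone_shift = -3                         # assumed key distance

    # Time-stretch so the first segment's tempo matches the second's,
    # then pitch-shift it into the second segment's key (claim 13).
    y = librosa.effects.time_stretch(y, rate=target_tempo / source_tempo)
    y = librosa.effects.pitch_shift(y, sr=sr, n_steps=semitone_shift)

    sf.write("first_segment_matched.wav", y, sr)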
