
Gaming environment tracking optimization

LNW Gaming, Inc.
2024
Online Patent

Title:
Gaming environment tracking optimization
Author / Contributor: LNW Gaming, Inc.
Published: 2024
Media type: Patent
Other:
  • Indexed in: USPTO Patent Grants
  • Languages: English
  • Patent Number: 11,861,975
  • Publication Date: January 02, 2024
  • Appl. No: 17/217,090
  • Application Filed: March 30, 2021
  • Assignees: LNW Gaming, Inc. (Las Vegas, NV, US)
  • Claim: 1. A method of operating a wagering game system comprising a gaming table and a camera, said method comprising: detecting, by an electronic processor in response to electronic communication with the wagering game system, a game state from a plurality of game states associated with a wagering game presented at the gaming table; dynamically generating, by the electronic processor based, at least in part, on the detected game state, a set of digital images from portions of a frame of image data captured by the camera at the gaming table; in response to determining, by the electronic processor via electronic analysis of the set of digital images, that the set of images fails a maximum resolution target input requirement for a neural network model that corresponds to the game state, iteratively modifying, by the electronic processor, an image resolution property of a subset of images from the set of images until the iteratively modifying causes the set of digital images to collectively meet the maximum resolution target input requirement, wherein the iteratively modifying comprises, in an iterative manner, until the set of digital images meets the maximum resolution target input requirement, selecting, as the subset of images, one or more images from the set of digital images that possess an image resolution width that is largest in size amongst all the members of the set of digital images, scaling the image resolution width of each of the one or more images by one pixel and scaling an image resolution height of each of the one or more images by one pixel divided by an aspect ratio at which the frame of image data was captured, and wherein in response to each iteration of the scaling the image resolution width and image resolution height of the one or more images, determining, by the electronic processor via a rectangle packing algorithm, whether the set of images collectively fit into a rectangle that represents a maximum resolution limit for the neural network model; and in response to determining that said iteratively modifying causes the set of digital images to meet the maximum resolution target input requirement, providing, by the electronic processor, the set of digital images to the neural network model.
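The iterative downscaling loop of claim 1 can be sketched compactly. The sketch below is illustrative, not the patent's implementation: the function and parameter names are invented, and the `packs` predicate stands in for whatever rectangle-packing algorithm decides whether the set fits the model's maximum-resolution rectangle.

```python
def shrink_to_fit(sizes, aspect, packs):
    """Shrink the widest image(s) by one pixel of width (and by
    1 px / aspect of height) per iteration until `packs` reports that
    the whole set fits the neural network model's input rectangle.
    `sizes` is a list of mutable [width, height] pairs."""
    while not packs(sizes):
        widest = max(w for w, _ in sizes)
        if widest <= 1:
            raise ValueError("images cannot be shrunk any further")
        for size in sizes:
            if size[0] == widest:        # every image tied for largest width
                size[0] -= 1             # scale width by one pixel
                size[1] -= 1 / aspect    # scale height by one pixel / aspect
    return sizes
```

With a 1:1 aspect ratio and a simple total-area predicate as a stand-in packer, a set of [[5, 5], [4, 4]] shrinks until both widths are 4 and the summed area drops to 32.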
  • Claim: 2. The method of claim 1 , said generating comprising: determining, by the electronic processor, a level of image detail needed by the neural network model to identify physical objects within an area of interest at the gaming table; automatically selecting, by the electronic processor in response to determining the level of detail, one or more image capture settings associated with the camera based, at least in part, on the detected game state and the level of image detail; and capturing the frame of image data using the one or more image capture settings.
  • Claim: 3. The method of claim 2 , wherein the image capture settings comprise one or more of an image resolution setting, an aspect ratio setting, a shutter speed setting, an aperture size setting, or a zoom setting.
  • Claim: 4. The method of claim 1 , wherein said generating comprises: determining, by the electronic processor based on the detected game state, portions of interest at the gaming table required for analysis by the neural network model; automatically superimposing, by the electronic processor, a set of graphical rectangles over the portions of the frame of image data, wherein the set of graphical rectangles represent locations in the frame that correspond to respective ones of the portions of interest at the gaming table required for analysis by the neural network model; copying, by the electronic processor, the portions of the frame of image data that correspond to the set of graphical rectangles; and storing, by the electronic processor, the copied portions as the set of images, wherein each of the copied portions has a respective image width and image height according to an image resolution setting of the camera.
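The copy step of claim 4 amounts to cutting axis-aligned rectangles out of the captured frame as independent images. A minimal stand-alone sketch, with a row-major frame as nested lists and hypothetical rectangle coordinates (a production system would more likely slice a NumPy/OpenCV array):

```python
def crop_regions(frame, rects):
    """Copy each (x, y, w, h) rectangle out of a row-major frame
    (a list of pixel rows) as its own image, as in claim 4's copying
    of the portions that correspond to the superimposed rectangles."""
    return [[row[x:x + w] for row in frame[y:y + h]]
            for (x, y, w, h) in rects]

# Stand-in 640x480 frame whose "pixels" record their own coordinates.
frame = [[(r, c) for c in range(640)] for r in range(480)]
crops = crop_regions(frame, [(100, 50, 64, 48)])
len(crops[0]), len(crops[0][0])   # (48, 64): 48 rows high, 64 px wide
```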
  • Claim: 5. The method of claim 4 , further comprising: associating, by the electronic processor, a unique identifier from each of the set of rectangles to a corresponding one of the set of digital images, wherein the unique identifier associates each one of the set of digital images to a corresponding one of the locations in the frame that corresponds to one of the respective portions of interest at the gaming table; compositing, by the electronic processor via a rectangle packing algorithm, the set of digital images into a sprite sheet using the unique identifier to store a position of the corresponding one of the set of digital images on the sprite sheet; and analyzing, by the electronic processor via the neural network model, the set of digital images as a unit, wherein the analyzing uses the unique identifier to identify where each of the set of digital images are in relation to each other on the sprite sheet.
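Claim 5 composites the images onto a sprite sheet via a rectangle packing algorithm, keyed by each rectangle's unique identifier. The claim does not name a particular packer; the sketch below uses naive shelf (row) packing purely to illustrate the id-to-position bookkeeping:

```python
def pack_sprite_sheet(images, sheet_w, sheet_h):
    """Naive shelf packing: place images left-to-right in rows and
    record each image's sheet position under its unique identifier,
    as the sprite-sheet step of claim 5 requires.
    `images` maps unique id -> (width, height)."""
    positions, x, y, row_h = {}, 0, 0, 0
    for uid, (w, h) in images.items():
        if x + w > sheet_w:                  # start a new shelf (row)
            x, y, row_h = 0, y + row_h, 0
        if y + h > sheet_h:
            raise ValueError("images do not fit the sheet")
        positions[uid] = (x, y)              # where this region sits
        x += w
        row_h = max(row_h, h)
    return positions
```

The returned mapping is what lets the downstream analysis relate each detection back to its location on the sheet, and hence to its region of interest on the table. The region names below are invented for illustration.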
  • Claim: 6. The method of claim 4 , further comprising: receiving, in response to analysis by the neural network model, one or more data objects associated with characteristics of physical objects identified by the neural network model; and using the one or more data objects to update the set of rectangles.
  • Claim: 7. The method of claim 1 , wherein the determining whether the set of images collectively fit into the rectangle comprises: multiplying the image resolution width by itself and by a total number of members of the set of digital images; and determining whether a product of the multiplying is less than, or equal to, an area of the rectangle, wherein the area of the rectangle is a product of multiplying a resolution height limit for the neural network model with a resolution width limit for the neural network model.
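Claim 7's fit test is plain arithmetic: because the iteratively scaled images share the largest width, the claim approximates the set's total area as width² × image count and compares it to the model's input area. A one-function sketch (names are illustrative):

```python
def fits_area_limit(width_px, num_images, limit_w, limit_h):
    """Claim 7's coarse fit test: treat each image as a
    width_px x width_px square and check whether the summed area is
    at most the model's limit_w x limit_h input area."""
    return width_px * width_px * num_images <= limit_w * limit_h

# 40 images at 128 px wide against a 1024x768 input limit:
fits_area_limit(128, 40, 1024, 768)   # 655360 <= 786432 -> True
```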
  • Claim: 8. The method of claim 1 , wherein said providing comprises: compositing the set of digital images into a file; and transmitting the file to the neural network model to analyze the composited set of digital images concurrently as a unit.
  • Claim: 9. A gaming system comprising a network communication interface; and a processor configured to perform one or more operations to generate, based, at least in part on a game state of a wagering game, a set of images from one or more portions of a frame of image data captured by a camera at a gaming table, in response to determination that the set of images fails a target input requirement for a neural network model that corresponds to the game state, incrementally modify an image property of a subset of images from the set of images until the set of images meets the target input requirement, wherein the processor being configured to incrementally modify the image property of the subset of images is configured to perform one or more operations to, in an iterative manner until the set of images meets the target input requirement, select, as the subset of images, one or more images from the set of images that has an image resolution width that is largest in size amongst all members of the set of images, scale an image resolution width of each of the one or more images in the subset by one pixel, scale an image resolution height of each of the one or more images in the subset by one pixel divided by an aspect ratio at which the frame of image data was captured, and determine whether the set of images collectively fit into a rectangle that represents a maximum resolution limit for the neural network model, and in response to determination that the set of images meets the target input requirement, provide the set of images to the neural network model.
  • Claim: 10. The gaming system of claim 9 , wherein the processor being configured to generate the set of images is configured to perform one or more operations to: automatically select, based at least in part on the game state, one or more image capture settings associated with the camera; and capture the frame of image data based on the one or more image capture settings, wherein the image capture settings comprise one or more of an image resolution setting, an aspect ratio setting, a shutter speed setting, an aperture size setting, or a zoom setting.
  • Claim: 11. The gaming system of claim 9 , wherein the processor being configured to generate the set of images is configured to perform one or more operations to: automatically superimpose a set of rectangles over the portions of the frame of image data, wherein the set of rectangles represent locations that require analysis by the neural network model; crop the portions of the frame of image data that correspond to the set of rectangles; and store the cropped portions as the set of images, wherein each of the set of images has a respective image width and image height according to an image resolution setting of the camera.
  • Claim: 12. The gaming system of claim 9 , wherein the processor being configured to provide the set of images to the neural network model is configured to perform one or more operations to: composite the set of images into a single image; and transmit, via the network communication interface, the single image to a device having one or more additional processors that operate the neural network model.
  • Claim: 13. The gaming system of claim 12 , wherein the processor being configured to composite the set of images into the single image is configured to perform one or more operations to associate a unique identifier from each of the set of rectangles to a corresponding one of the set of images.
  • Claim: 14. The gaming system of claim 12 , wherein the single image comprises at least one of a sprite sheet or a texture atlas.
  • Claim: 15. The gaming system of claim 9 , wherein the processor being configured to determine whether the set of images collectively fit into the rectangle is configured to perform one or more operations to run a packing algorithm on the set of images.
  • Claim: 16. The gaming system of claim 9 , wherein the processor being configured to determine whether the set of images collectively fit into the rectangle is configured to perform one or more operations to: multiply the image resolution width by itself and by a total number of members of the set of images; and determine whether a product of the multiplying is less than, or equal to, an area of the rectangle, wherein the area of the rectangle is a product of multiplying a resolution height limit for the neural network model with a resolution width limit for the neural network model.
  • Claim: 17. One or more non-transitory computer-readable storage media having instructions stored thereon, which, when executed by a set of one or more processors of a gaming system, cause the set of one or more processors to perform operations comprising: generating, based at least in part on a game state of a wagering game, a set of images from portions of a frame of image data captured by a camera at a gaming table; determining that the set of images fails to collectively fit into an area of a rectangle that represents a maximum resolution limit for a neural network model that corresponds to the game state; in response to determining that the set of images fails to collectively fit into the rectangle, in an iterative manner, until the set of images meets the target input requirement, selecting, as the subset of images, one or more images from the set of images that have an image resolution width that is largest in size amongst all members of the set of images, scaling an image resolution width of each of the one or more images in the subset by one pixel; scaling an image resolution height of each of the one or more images in the subset by one pixel divided by an aspect ratio at which the frame of image data was captured; and running a packing algorithm to determine whether the set of images collectively fit into a rectangle that represents a maximum resolution limit for the neural network model; in response to determination that the set of images collectively fit into the rectangle, compositing the set of images into a single image file; and transmitting, via a communications network, the single image file to the neural network model for concurrent analysis of the set of images contained within the single image file.
  • Patent References Cited: 5103081 April 1992 Fisher et al. ; 5451054 September 1995 Orenstein ; 5757876 May 1998 Dam et al. ; 6460848 October 2002 Soltys et al. ; 6514140 February 2003 Storch ; 6517435 February 2003 Soltys et al. ; 6517436 February 2003 Soltys et al. ; 6520857 February 2003 Soltys et al. ; 6527271 March 2003 Soltys et al. ; 6530836 March 2003 Soltys et al. ; 6530837 March 2003 Soltys et al. ; 6533276 March 2003 Soltys et al. ; 6533662 March 2003 Soltys et al. ; 6579180 June 2003 Soltys et al. ; 6579181 June 2003 Soltys et al. ; 6595857 July 2003 Soltys et al. ; 6663490 December 2003 Soltys et al. ; 6688979 February 2004 Soltys et al. ; 6712696 March 2004 Soltys et al. ; 6758751 July 2004 Soltys et al. ; 7011309 March 2006 Soltys et al. ; 7124947 October 2006 Storch ; 7316615 January 2008 Soltys et al. ; 7753781 July 2010 Storch ; 7771272 August 2010 Soltys et al. ; 8130097 March 2012 Knust et al. ; 8285034 October 2012 Rajaraman et al. ; 8606002 December 2013 Rajaraman et al. ; 8896444 November 2014 Knust et al. ; 9165420 October 2015 Knust et al. ; 9174114 November 2015 Knust et al. ; 9378605 June 2016 Koyama ; 9511275 December 2016 Knust et al. ; 9795870 October 2017 Ratliff ; 9811735 November 2017 Cosatto ; 9889371 February 2018 Knust et al. ; 10032335 July 2018 Shigeta ; 10096206 October 2018 Bulzacki et al. ; 10192085 January 2019 Shigeta ; 10242525 March 2019 Knust et al. ; 10242527 March 2019 Bulzacki et al. ; 10304191 May 2019 Mousavian et al. ; 10380838 August 2019 Bulzacki et al. ; 10398202 September 2019 Shigeta ; 10403090 September 2019 Shigeta ; 10410066 September 2019 Bulzacki et al. ; 10493357 December 2019 Shigeta ; 10529183 January 2020 Shigeta ; 10540846 January 2020 Shigeta ; 10580254 March 2020 Shigeta ; 10593154 March 2020 Shigeta ; 10600279 March 2020 Shigeta ; 10600282 March 2020 Shigeta ; 10665054 May 2020 Shigeta ; 10706675 July 2020 Shigeta ; 10720013 July 2020 Main, Jr. ; 10741019 August 2020 Shigeta ; 10748378 August 2020 Shigeta ; 10755524 August 2020 Shigeta ; 10755525 August 2020 Shigeta ; 10762745 September 2020 Shigeta ; 10825288 November 2020 Knust et al. ; 10832517 November 2020 Bulzacki et al. ; 10846980 November 2020 French et al. ; 10846985 November 2020 Shigeta ; 10846986 November 2020 Shigeta ; 10846987 November 2020 Shigeta ; 11244191 February 2022 Yao ; 11284041 March 2022 Bergamo ; 11386306 July 2022 Siddiquie ; 20050059479 March 2005 Soltys et al. ; 20060019739 January 2006 Soltys et al. ; 20110115158 May 2011 Gagner et al. ; 20110230248 September 2011 Baerlocher et al. ; 20160089594 March 2016 Yee ; 20180040190 February 2018 Keilwert et al. ; 20180053377 February 2018 Shigeta ; 20180060730 March 2018 Kurata ; 20180061178 March 2018 Shigeta ; 20180068525 March 2018 Shigeta ; 20180075698 March 2018 Shigeta ; 20180114406 April 2018 Shigeta ; 20180211110 July 2018 Shigeta ; 20180211472 July 2018 Shigeta ; 20180232987 August 2018 Shigeta ; 20180239984 August 2018 Shigeta ; 20180336757 November 2018 Shigeta ; 20190043309 February 2019 Shigeta ; 20190088082 March 2019 Shigeta ; 20190102987 April 2019 Shigeta ; 20190147689 May 2019 Shigeta ; 20190172312 June 2019 Shigeta ; 20190188957 June 2019 Bulzacki et al. ; 20190188958 June 2019 Shigeta ; 20190236891 August 2019 Shigeta ; 20190259238 August 2019 Shigeta ; 20190266832 August 2019 Shigeta ; 20190318576 October 2019 Shigeta ; 20190320768 October 2019 Shigeta ; 20190333322 October 2019 Shigeta ; 20190333323 October 2019 Shigeta ; 20190333326 October 2019 Shigeta ; 20190340873 November 2019 Shigeta ; 20190344157 November 2019 Shigeta ; 20190347893 November 2019 Shigeta ; 20190347894 November 2019 Shigeta ; 20190362594 November 2019 Shigeta ; 20190371112 December 2019 Shigeta ; 20190392680 December 2019 Shigeta ; 20200034629 January 2020 Vo et al. ; 20200035060 January 2020 Shigeta ; 20200065618 February 2020 Zhang ; 20200118390 April 2020 Shigeta ; 20200122018 April 2020 Shigeta ; 20200175806 June 2020 Shigeta ; 20200202134 June 2020 Bulzacki et al. ; 20200226878 July 2020 Shigeta ; 20200234464 July 2020 Shigeta ; 20200242888 July 2020 Shigeta ; 20200258351 August 2020 Shigeta ; 20200265672 August 2020 Shigeta ; 20200273289 August 2020 Shigeta ; 20200294346 September 2020 Shigeta ; 20200302168 September 2020 Vo ; 20200342281 October 2020 Shigeta ; 20200349806 November 2020 Shigeta ; 20200349807 November 2020 Shigeta ; 20200349808 November 2020 Shigeta ; 20200349809 November 2020 Shigeta ; 20200349810 November 2020 Shigeta ; 20200349811 November 2020 Shigeta ; 20200364979 November 2020 Shigeta ; 20200372746 November 2020 Shigeta ; 20200372752 November 2020 Shigeta ; 20210190937 June 2021 Niu ; 20210312187 October 2021 Que ; 20220101688 March 2022 Shigeta ; 20220139148 May 2022 Shigeta ; 1020160088224 July 2016 ; 2009062153 May 2009 ; WO-2019157288 August 2019
  • Primary Examiner: Chu, Randolph I
