WO2024046782A1 - A method for distinguishing user feedback on an image - Google Patents
Info
- Publication number
- WO2024046782A1 (PCT/EP2023/072776)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- image
- luminaire
- light effect
- user feedback
- design
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Ceased
Classifications
- H05B47/00—Circuit arrangements for operating light sources in general, i.e. where the type of light source is not relevant
- H05B47/10—Controlling the light source
- G06Q30/0201—Market modelling; Market analysis; Collecting market data
- G06Q30/0202—Market predictions or forecasting for commercial activities
- G06Q30/0621—Electronic shopping [e-shopping] by configuring or customising goods or services
- G06Q30/0631—Recommending goods or services
- G06Q30/0643—Electronic shopping [e-shopping] utilising user interfaces specially adapted for shopping, graphically representing goods, e.g. 3D product representation
- G06Q90/00—Systems or methods specially adapted for administrative, commercial, financial, managerial or supervisory purposes, not involving significant data processing
Definitions
- The invention relates to a method for distinguishing user feedback on an image.
- The invention further relates to a computer program and a controller for distinguishing user feedback on an image.
- In patent application WO 2014064634, a method is proposed that assists a user in selecting a lighting device based on a scene and a light effect selected by the user.
- Patent application WO 2014087274 relates to assisting a user in selecting a lighting device design by receiving an image of a scene and analyzing this image in order to select or generate a lighting device design.
- The user may first be presented a light effect as part of a scene (e.g., a broad-beam downlight from the center of the ceiling applied to a living room), allowing the user to choose the light effect prior to choosing the lighting device design.
- The inventors have realized that when a user is presented an image of a luminaire design of a luminaire and its effect in an environment, the user may assess the image as attractive based on a combination of both the luminaire design, e.g., luminaire shape, texture, material of the housing of the luminaire, etc., and the administered light effect of the luminaire in the environment, e.g., administered spatial spectrum distribution, time dynamics of the effect, light intensity, beam shape, etc.
- The preference feedback (appraisal or dislike) of the user towards the image may therefore be directed to, or associated with, either the luminaire design, the light effect of the luminaire design on the environment, or a combination of both.
- A user’s rejection of an image of a scene comprising an environment, a luminaire design of a luminaire, and its light effect may be due to the user liking the luminaire design (or specific aspects of the luminaire design) but disliking the light effect of the luminaire on the environment depicted in the image.
- Conversely, the user’s rejection of the image may be due to the user liking the light effect but disliking the material of the luminaire, etc.
- The object is achieved by a method for distinguishing user feedback on an image, the method comprising the steps of: providing an image of a scene comprising an environment, a luminaire design of a luminaire, and a light effect of the luminaire in the environment; analyzing the image; determining a first saliency value for the luminaire design in the image; determining a second saliency value for the light effect in the image; receiving the user feedback on the image; and associating the user feedback with the luminaire design and/or the light effect based on the first and second saliency values.
- An image of a scene comprising an environment, a luminaire design of a luminaire, and a light effect of the luminaire in the environment is analyzed to determine a first saliency value (saliency level) for the luminaire design and a second saliency value (saliency level) for the light effect of the luminaire.
- The second saliency value may be different from the first saliency value.
- A saliency value refers to a value indicative of the noticeability (importance/prominence) of the luminaire design or the light effect, respectively, in the image.
- For example, a first image of a luminaire design where the light effect of the luminaire is negligible (e.g., an image of the luminaire design during daytime, where only ambient lighting is present) has a low second saliency value for the light effect, whereas a second image of the same luminaire design in the same environment but during the evening, where the light effect of the luminaire is prominent (highly visible), has a high second saliency value for the light effect.
- The feedback of the user can then be associated with the luminaire design and/or the light effect based on the first and second saliency values.
- The method thereby makes it possible to distinguish whether the feedback of the user is directed to the luminaire design or to the light effect without explicitly asking the user (who may not be able to explain the rationale behind the like/dislike feedback). This is beneficial because it improves the learning of user preferences.
- The step of associating the user feedback may comprise associating, if the second saliency value is greater than the first saliency value, the user feedback with the light effect, and associating, if the first saliency value is greater than the second saliency value, the user feedback with the luminaire design. This provides a simple approach to determine whether the feedback is directed to the luminaire design or the light effect.
- Alternatively, the step of associating the user feedback may comprise associating the user feedback with the luminaire design as a function of the first saliency value and associating the user feedback with the light effect as a function of the second saliency value.
- The feedback of the user may be related to both the luminaire design and the light effect.
- The method may comprise assigning a likelihood to an association of the feedback with the luminaire design and with the light effect.
- The likelihood may comprise probabilities or weights of the feedback being assigned to the luminaire design / light effect.
- The likelihood may comprise relative weights or relative probabilities.
- The likelihood may be based on a function of the saliency values, such that the likelihood that the feedback is associated with the luminaire design / light effect is proportional to the saliency value of the luminaire design / light effect, respectively.
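- As a minimal illustration of this association step (not part of the application; the function and names below are hypothetical), the winner-takes-all rule and the proportional-likelihood rule could be sketched in Python as follows:

```python
def associate_feedback(feedback, s_design, s_effect, hard=False):
    """Associate user feedback with the luminaire design and/or light effect.

    s_design is the first saliency value (luminaire design) and s_effect the
    second saliency value (light effect); both non-negative, not both zero.
    """
    if hard:
        # Winner-takes-all: feedback goes to the more salient aspect.
        if s_effect > s_design:
            return {"light_effect": feedback}
        return {"luminaire_design": feedback}
    # Probabilistic rule: the likelihood of each association is
    # proportional to the corresponding saliency value.
    total = s_design + s_effect
    return {
        "luminaire_design": (feedback, s_design / total),
        "light_effect": (feedback, s_effect / total),
    }

# Example: a "like" on an image where the light effect dominates.
print(associate_feedback("like", s_design=0.2, s_effect=0.8))
# {'luminaire_design': ('like', 0.2), 'light_effect': ('like', 0.8)}
```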
- The method may further comprise generating, using a machine learning model, a first text description of preferences of the user for the luminaire design and a second text description of preferences of the user for the light effect based on the associated feedback, and outputting the generated first and second text descriptions.
- The generated first and second text descriptions may be outputted to a user interface for presentation to the user, or, for example, as input to a further machine learning model.
- The machine learning model may have been trained using labeled instances of images with associated user feedback as input.
- The machine learning model may, for example, be a layered combination of a Convolutional Neural Network (CNN) responsible for image feature extraction and a Long Short-Term Memory model (LSTM) which generates the text descriptions.
- An example of such a machine learning model may be a Generative Pre-trained Transformer 3 (GPT-3).
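- A minimal sketch of such a layered CNN + LSTM architecture in PyTorch is given below; the backbone, vocabulary size and dimensions are illustrative assumptions, not details taken from the application:

```python
import torch
import torch.nn as nn
from torchvision.models import resnet18

class PreferenceCaptioner(nn.Module):
    """CNN encoder for image feature extraction + LSTM decoder that
    generates a text description (e.g., of luminaire-design preferences)."""

    def __init__(self, vocab_size, embed_dim=256, hidden_dim=512):
        super().__init__()
        backbone = resnet18(weights=None)  # assumption: any CNN backbone works
        self.encoder = nn.Sequential(*list(backbone.children())[:-1])  # drop classifier head
        self.project = nn.Linear(512, embed_dim)
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        self.out = nn.Linear(hidden_dim, vocab_size)

    def forward(self, images, captions):
        feats = self.encoder(images).flatten(1)       # (B, 512)
        feats = self.project(feats).unsqueeze(1)      # (B, 1, E)
        tokens = self.embed(captions)                 # (B, T, E)
        # The image feature acts as the first "token" of the sequence.
        hidden, _ = self.lstm(torch.cat([feats, tokens], dim=1))
        return self.out(hidden)                       # (B, T+1, vocab)

model = PreferenceCaptioner(vocab_size=1000)
logits = model(torch.randn(2, 3, 224, 224), torch.randint(0, 1000, (2, 12)))
print(logits.shape)  # torch.Size([2, 13, 1000])
```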
- The user may thus be informed of his/her personal preferences regarding the different aspects of a luminaire, namely the luminaire design and the light effect of the luminaire in the environment.
- The user may provide feedback on the text descriptions of preferences (whether (s)he considers the text descriptions of preferences accurate, etc.).
- The text descriptions can be fed into a database comprising the feedback from many different users.
- The system may then automatically generate different user types which share similar design preferences. During inference, these user types can be leveraged to speed up the convergence of a new user to an agreeable luminaire lighting design.
- The second saliency value may be determined based on the spread of the light effect in the environment.
- The image may be analyzed to determine the level of spread (spatial distribution) of light in the environment.
- The level of visual saliency (visual saliency value) of the light effect may depend on the level of spread, i.e., the spatial distribution of the light effect in the image.
- The second visual saliency value may depend on how much the light effect of the luminaire influences the surroundings of the luminaire design in the image.
- The image may be analyzed to determine the number of luminaires in the image.
- The second saliency value may also depend on whether there is just a single luminaire visible in the image or whether more than one luminaire is present, each generating its own light effect in the image.
- The first saliency value may depend on the spatial distribution of the luminaire design in the image, in other words, on how much space the luminaire design occupies in the image.
- The second saliency value may also be determined based on characteristics of the environment.
- The image may be analyzed to determine the light effect distribution in the image.
- An abstract environment with a uniform light effect distribution (light effect spread uniformly around the luminaire design in the image) may have a lower second saliency value compared to a detailed environment, i.e., an environment with a plurality of elements wherein the light effect integrates with the elements of the environment.
- The first and second saliency values may further depend on the saturation of the luminaire design / light effect, respectively, in the image.
- The first and second saliency values may further depend on whether there is just a single luminaire visible in the image or whether more than one luminaire is present, each generating its own light effect. If multiple luminaires are present in the image, the saliency of the lighting effect is more prominent.
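- As a simple illustration of such a spread-based saliency estimate (an assumption for illustration only: segmentation masks for the luminaire and its light effect are taken as given), the two saliency values could be approximated by the fraction of image pixels each region occupies:

```python
import numpy as np

def spread_saliency(design_mask: np.ndarray, effect_mask: np.ndarray):
    """Approximate the first/second saliency values as the fraction of
    image pixels covered by the luminaire design and its light effect.

    Both inputs are boolean masks of identical shape (H, W), assumed to
    come from an upstream segmentation step.
    """
    total = design_mask.size
    s_design = design_mask.sum() / total   # first saliency value
    s_effect = effect_mask.sum() / total   # second saliency value
    return float(s_design), float(s_effect)

# Toy example: a small luminaire whose light effect spreads over a wall.
design = np.zeros((100, 100), dtype=bool); design[45:55, 45:55] = True
effect = np.zeros((100, 100), dtype=bool); effect[30:90, 20:90] = True
print(spread_saliency(design, effect))  # (0.01, 0.42)
```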
- The feedback of the user may comprise a text input.
- A text input from the user may be received via a user interface.
- A user interface may be implemented by way of one or more web pages displayed by a user device by way of a web browser software program.
- The user feedback may be in the form of a voice command as well.
- The user feedback may also be input data indicative of physiological changes of the user (indicative of an image appraisal or dislike). For example, a heart rate and/or breathing rate of the user may be received, and the user feedback may be determined based on measured changes in the heart rate and/or breathing rate of the user.
- Similarly, an EEG (electroencephalogram), EOG (electrooculogram), EDA (electrodermal activity), PPG (photoplethysmogram) or EMG (electromyography) signal of the user may be received, and the user feedback may be determined based on changes in these measurements.
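- Purely as an illustrative sketch (the application only states that feedback may be derived from measured physiological changes; the threshold and sign convention below are assumptions, not a validated model of affect):

```python
def feedback_from_heart_rate(baseline_bpm, viewing_bpm, threshold=5.0):
    """Toy mapping from a heart-rate change while viewing the image to
    like/dislike feedback; returns None if no clear response is measured."""
    delta = viewing_bpm - baseline_bpm
    if abs(delta) < threshold:
        return None  # no clear physiological response
    return "like" if delta > 0 else "dislike"

print(feedback_from_heart_rate(62.0, 71.0))  # 'like'
```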
- The feedback of the user may further comprise a gesture.
- A gesture input, such as a finger swipe or a hand motion, as for example sensed using an optical or capacitive sensor, may be received from the user.
- The method may further comprise generating, using a generative-AI machine learning model, a synthesized image of a scene comprising a further (synthesized) luminaire design of a further luminaire and a light effect based on the associated feedback.
- In an embodiment, the generative-AI machine learning model is a text-conditional generative adversarial network conditioned to generate the synthesized image based on the generated text descriptions of user preferences for the luminaire design and the associated light effect.
- Alternatively, a text-to-image diffusion model such as Imagen, DALL-E 2, etc., may be used.
- The method may further comprise: analyzing the synthesized image; determining a (further) first saliency value for the luminaire design in the synthesized image; determining a (further) second saliency value for the light effect in the synthesized image; receiving the user feedback on the synthesized image; and associating the user feedback with the luminaire design and/or the light effect based on the first and second saliency values.
- The method may further comprise generating a specification for the further luminaire design and a specification for the light effect, and outputting the specifications to a system or service for generating (or manufacturing) the further luminaire design.
- The specification for the light effect may comprise a shape of the light effect, a pattern of the light effect, one or more colors of the light effect, and/or a location of a feature of the light effect.
- The specification for the luminaire design may comprise a shape of the luminaire design, a size of the luminaire design, a number of lumens based on the light effect, a number of light emitters and a type of the light emitters based on the number of lumens, and a number of drivers and a type of the drivers based on the number and type of light emitters.
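- The structure of such specifications could be captured in a simple schema; the following dataclasses are an illustrative sketch of the fields listed above, not a format defined by the application:

```python
from dataclasses import dataclass, field

@dataclass
class LightEffectSpec:
    shape: str                           # e.g., "wide wall wash"
    pattern: str                         # e.g., "homogeneous"
    colors: list[str] = field(default_factory=list)
    feature_location: str = ""           # e.g., "center of ceiling"

@dataclass
class LuminaireDesignSpec:
    shape: str
    size_mm: tuple[int, int, int]
    lumens: int            # derived from the specified light effect
    emitter_count: int     # based on the required lumens
    emitter_type: str
    driver_count: int      # based on the number/type of emitters
    driver_type: str

spec = LuminaireDesignSpec("oval pendant", (400, 400, 120), 1600,
                           24, "mid-power LED", 1, "constant-current")
print(spec)
```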
- The object is further achieved by a computer program product for a computing device, the computer program product comprising computer program code to perform any of the above-mentioned methods when the computer program product is run on a processing unit of the computing device.
- The computer program product may be executed on a computer, such as a personal computer or a laptop, or on a smartphone or other computing device.
- The object is further achieved by a controller for distinguishing user feedback on an image, configured to: provide an image of a scene comprising an environment, a luminaire design, and a light effect of the luminaire in the environment; analyze the image; determine a first saliency value for the luminaire design in the image; determine a second saliency value for the light effect in the image, wherein the second saliency value is different from the first saliency value; receive the user feedback on the image; and associate the user feedback with the luminaire design and/or the light effect based on the first and second saliency values.
- Fig. 1 shows schematically an example of an image of a scene comprising an environment, a luminaire design of a luminaire and a light effect of the luminaire in the environment;
- Fig. 2 shows schematically a controller configured to provide an image of a scene comprising an environment, a luminaire design of a luminaire, and a light effect of the luminaire in the environment to a user;
- Fig. 3 shows schematically a flowchart illustrating an embodiment of a method for distinguishing user feedback on an image;
- Fig. 4 shows schematically an example of an image of a scene comprising an environment, a luminaire design of a luminaire and a light effect of the luminaire in the environment.
- Fig. 1 shows an example of an image 100 of a scene comprising an environment 142, a luminaire design 102 of a luminaire and a light effect 112 of the luminaire in the environment.
- The environment 142 refers to the surroundings of the luminaire design.
- The environment 142 may be any type of home environment, e.g., a kitchen, a living room, a bathroom, etc., a commercial environment, e.g., a factory, a restaurant, an office, etc., or a plain background environment, e.g., plain white or another plain color.
- The luminaire design 102 comprises at least one light source or lamp (not shown), such as an LED-based lamp, gas-discharge lamp or filament bulb, etc., optionally with an associated support, casing or other such housing.
- The luminaire design 102 may take any of a variety of forms, e.g., a ceiling-mounted lighting device, a wall-mounted lighting device, a wall washer, a free-standing lighting device, an LED strip, an LED bulb, a laser lighting fixture, an ultra-thin OLED luminaire, etc., and any size, shape, material or color.
- In Fig. 1, the luminaire design 102 is a ceiling luminaire.
- The image 100 may contain any number of luminaires.
- The light effect 112 of the luminaire in the environment refers to the light output of the at least one light source or lamp and how the light output influences the surroundings of the luminaire, i.e., the environment.
- The light effect 112 may comprise a color or color temperature of the light source, an illumination intensity (brightness), a beam width, a beam direction, and other parameters of the one or more light sources of the luminaire design 102.
- The image may be part of a video, and the light effect 112 may comprise a dynamic light scene, wherein the dynamic light scene comprises light effects which change with time.
- Fig. 2 shows, schematically and by way of example, a controller 210 configured to provide an image 200 (e.g., the image of Fig. 1) of a scene comprising an environment, a luminaire design of a luminaire, and a light effect of the luminaire in the environment to the user 220.
- The controller 210 may be implemented in a device, such as a desktop computer or a portable terminal such as a laptop, tablet or smartphone.
- The controller 210 may alternatively be implemented in the cloud, for instance as a server that is accessible via the internet.
- The image 200 is provided to the user 220, for example via a user interface on a device such as a laptop, tablet or smartphone 236. Alternatively, the image 200 may be shown as a video to the user 220.
- The image 200 may also be shown to the user 220 via an AR/VR headset.
- The controller 210 is configured to analyze the image 200 and determine a first saliency value for the luminaire design in the image and a second saliency value for the light effect in the image based on the analysis.
- The controller 210 may be configured to apply image analysis techniques to recognize the luminaire (and therewith its design) and the light effect of the luminaire in the image 200. Image analysis techniques for recognizing objects and features in an image are known in the art and will therefore not be discussed in detail.
- The second saliency value may, for example, be determined based on the spread of the light effect 112 in the environment 142. That is, the second saliency value may be proportional to the spatial distribution of the light effect 112 in the image 100. In other words, the second visual saliency value may depend on how much space, e.g., a number of pixels, the light effect 112 of the luminaire occupies in the image 100.
- The first saliency value may depend on the spatial distribution of the luminaire design 102 in the image 100. In other words, the first saliency value may depend on how much space, e.g., a number of pixels, the luminaire design 102 occupies in the image 100.
- In the example of Fig. 1, the first (visual) saliency value is higher than the second (visual) saliency value, as the luminaire design 102 occupies most of the image 100.
- The first saliency value may be further refined to include saliency values for design aspects of the luminaire design, for example a saliency value for the shape, material, style, etc., of the luminaire design 102.
- Fig. 4 shows an example of an image 400 of a scene comprising an environment 442, a luminaire design 402 of a luminaire, and a light effect 412 of the luminaire in the environment.
- Here, the first (visual) saliency value for the luminaire design is lower than the second (visual) saliency value for the light effect, as the light effect 412 occupies more space (a higher number of pixels) than the luminaire design 402 in the image 400.
- The second saliency value may also be determined based on characteristics of the environment.
- An abstract environment with a uniform light effect distribution may have a lower second saliency value compared to a detailed environment, i.e., an environment with a plurality of elements wherein the light effect integrates with the elements of the environment, such as the environment 442 in image 400.
- The first and second saliency values may further depend on the saturation of the luminaire design / light effect, respectively, in the image.
- The second saliency value may further depend on the color contrast of the light effect. For example, an image with high color contrast of the light effect may have a higher saliency value for the light effect than an image where the color (color temperature) of the light effect is homogeneous.
- Saliency algorithms for image saliency detection, e.g., the GrabCut algorithm, may be used for automatic extraction of the first and second saliency values in an image.
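- A sketch of such an automatic extraction is given below, using OpenCV's spectral-residual static saliency detector (from opencv-contrib-python) rather than GrabCut itself, and assuming the two region masks come from a prior segmentation step:

```python
import cv2
import numpy as np

def region_saliency(image_bgr, design_mask, effect_mask):
    """Mean saliency inside the luminaire-design and light-effect regions.

    image_bgr: uint8 BGR image; design_mask/effect_mask: boolean arrays of
    shape (H, W) from an upstream segmentation (e.g., GrabCut).
    """
    detector = cv2.saliency.StaticSaliencySpectralResidual_create()
    ok, sal_map = detector.computeSaliency(image_bgr)  # float map in [0, 1]
    assert ok, "saliency computation failed"
    s_design = float(sal_map[design_mask].mean())  # first saliency value
    s_effect = float(sal_map[effect_mask].mean())  # second saliency value
    return s_design, s_effect
```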
- The feedback 132 may comprise a user rating scale, e.g., a numeric rating scale such as a 1-10 rating scale, a binary rating scale (the user rates the image positively or negatively), or a verbal rating scale; actuating at least one actuator, e.g., a like/dislike button on the user’s mobile device 236, to indicate his/her preference; or moving his/her fingers (swiping) across a screen to indicate positive/negative feedback depending on the direction of the movement. Additionally and/or alternatively, the feedback 132 may comprise input data indicative of physiological changes of the user 220.
- For example, a heart rate or breathing rate of the user 220 may be received by the controller 210, and the user feedback 132 may be determined based on measured changes in the heart rate, sweating rate or breathing rate of the user 220.
- Similarly, an EEG of the user 220 may be received by the controller 210, and the user feedback 132 may be determined based on changes in the EEG measurements of the user 220.
- The controller 210 may be configured to associate the feedback 132 with the light effect 112 if the second saliency value of the light effect is greater than the first saliency value of the luminaire design. Similarly, if the first saliency value of the luminaire design is greater than the second saliency value of the light effect, the feedback 132 is associated with the luminaire design 102. Alternatively, the controller 210 may be configured to associate the user feedback 132 with the luminaire design 102 as a function of the first saliency value and associate the user feedback 132 with the light effect 112 as a function of the second saliency value.
- A likelihood may be assigned to an association of the feedback with the luminaire design and with the light effect, wherein the likelihood that the feedback 132 is associated with the light effect (or the luminaire design) is proportional to the saliency value of the light effect (or the luminaire design, respectively).
- The controller may further comprise a memory 222 which may be arranged for storing, for example, the feedback of the user.
- The controller 210 may optionally be further configured to generate, using a machine learning model, a first text description of preferences of the user for the luminaire design 102 and a second text description of preferences of the user for the light effect 112 based on the associated feedback 132.
- A first text description may be of the form “Oval-shaped pendant luminaire design”.
- A second text description may be of the form “Homogeneous distributed blue light effect”, etc.
- The machine learning model may have been trained using labeled instances of images with associated user feedback as input.
- Computer vision machine learning models, such as convolutional neural networks, may be used to recognize features in the image, e.g., a shape of a luminaire design, etc.
- Natural language processing models, e.g., recurrent neural networks like LSTMs, may be used to generate the text descriptions.
- The generated text descriptions may be outputted to the user 220, for example on the user’s mobile device 236, for instance via a display, an AR/VR headset or a voice interface.
- Fig. 3 shows, schematically and by way of example, a flowchart illustrating an embodiment of a method 300 for distinguishing user feedback on an image, the method comprising the steps of: providing 302, by the controller 210, an image 100 of a scene comprising an environment 142, a luminaire design 102 of a luminaire, and a light effect 112 of the luminaire in the environment; analyzing 304 the image 100 by the controller 210; determining 306, by the controller 210, a first saliency value for the luminaire design in the image; determining 308, by the controller 210, a second saliency value for the light effect in the image, wherein the second saliency value is different from the first saliency value; receiving 310, by the controller 210, user feedback 132 on the image 100; and associating 312, by the controller 210, the user feedback 132 with the luminaire design 102 and/or the light effect 112 based on the first and second saliency values.
- The method 300 may further comprise generating 314, using a generative-AI machine learning model, a synthesized image of a scene comprising a further luminaire design and a light effect based on the associated feedback.
- A text-conditioned Generative Adversarial Network, e.g., a TAC-GAN model, may be used to synthesize an image from a text description by conditioning the generated synthesized image on the text description.
- Alternatively, a text-to-image diffusion model such as Imagen, DALL-E 2, etc., may be used to generate the synthesized image using the text description as an input.
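- A sketch of the diffusion-model variant using the Hugging Face diffusers library; Imagen and DALL-E 2, as named above, are not publicly downloadable, so a Stable Diffusion checkpoint stands in here, and the prompt is an assumed concatenation of the generated text descriptions:

```python
import torch
from diffusers import StableDiffusionPipeline

# Assumption: Stable Diffusion stands in for the text-to-image diffusion
# models (Imagen, DALL-E 2) mentioned in the description.
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# Prompt built from the generated text descriptions of user preferences.
prompt = ("oval-shaped pendant luminaire in a living room, "
          "homogeneous distributed blue light effect")
image = pipe(prompt).images[0]
image.save("synthesized_luminaire.png")
```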
- The method 300 may further comprise repeating the steps 302 to 312 for the synthesized image.
- The method 300 may further comprise generating 316 a specification for the further luminaire design and a specification for the light effect, and outputting 318 the specifications to a system or service for generating (or manufacturing) the further luminaire design.
- The user may place an order for the further (synthesized) luminaire design or print the design via a 3D printer.
- The method 300 may be executed by computer program code of a computer program product when the computer program product is run on a processing unit of a computing device, such as the controller 210.
- Any reference signs placed between parentheses shall not be construed as limiting the claim.
- Use of the verb "comprise” and its conjugations does not exclude the presence of elements or steps other than those stated in a claim.
- The article “a” or “an” preceding an element does not exclude the presence of a plurality of such elements.
- The invention may be implemented by means of hardware comprising several distinct elements, and by means of a suitably programmed computer or processing unit. In the device claim enumerating several means, several of these means may be embodied by one and the same item of hardware. The mere fact that certain measures are recited in mutually different dependent claims does not indicate that a combination of these measures cannot be used to advantage.
- Aspects of the invention may be implemented in a computer program product, which may be a collection of computer program instructions stored on a computer-readable storage device which may be executed by a computer.
- The instructions of the present invention may be in any interpretable or executable code mechanism, including but not limited to scripts, interpretable programs, dynamic link libraries (DLLs) or Java classes.
- The instructions can be provided as complete executable programs, partial executable programs, as modifications to existing programs (e.g. updates) or as extensions for existing programs (e.g. plugins).
- Parts of the processing of the present invention may be distributed over multiple computers or processors, or even the ‘cloud’.
- Storage media suitable for storing computer program instructions include all forms of non-volatile memory, including but not limited to EPROM, EEPROM and flash memory devices, magnetic disks such as internal and external hard disk drives, removable disks and CD-ROM disks.
- The computer program product may be distributed on such a storage medium, or may be offered for download through HTTP, FTP, email or through a server connected to a network such as the Internet.
Priority Applications (2)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN202380062776.3A CN119836847A (en) | 2022-08-30 | 2023-08-18 | Method for distinguishing user feedback to image |
| EP23755425.8A EP4581907A1 (en) | 2022-08-30 | 2023-08-18 | A method for distinguishing user feedback on an image |
Applications Claiming Priority (4)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| US202263402178P | 2022-08-30 | 2022-08-30 | |
| US63/402,178 | 2022-08-30 | ||
| EP22194776 | 2022-09-09 | ||
| EP22194776.5 | 2022-09-09 |
Publications (1)
| Publication Number | Publication Date |
|---|---|
| WO2024046782A1 | 2024-03-07 |
Family
ID=87580116
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| PCT/EP2023/072776 (WO2024046782A1, ceased) | A method for distinguishing user feedback on an image | 2022-08-30 | 2023-08-18 |
Country Status (3)
| Country | Link |
|---|---|
| EP (1) | EP4581907A1 (en) |
| CN (1) | CN119836847A (en) |
| WO (1) | WO2024046782A1 (en) |
Citations (5)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| WO2014064634A1 (en) | 2012-10-24 | 2014-05-01 | Koninklijke Philips N.V. | Assisting a user in selecting a lighting device design |
| WO2014087274A1 (en) | 2012-10-24 | 2014-06-12 | Koninklijke Philips N.V. | Assisting a user in selecting a lighting device design |
| US20150278896A1 (en) * | 2012-10-24 | 2015-10-01 | Koninklijke Philips N.V. | Assisting a user in selecting a lighting device design |
| US20170293349A1 (en) * | 2014-09-01 | 2017-10-12 | Philips Lighting Holding B.V. | Lighting system control method, computer program product, wearable computing device and lighting system kit |
| US20200380652A1 (en) * | 2019-05-30 | 2020-12-03 | Signify Holding B.V. | Automated generation of synthetic lighting scene images using generative adversarial networks |
- 2023-08-18: WO application PCT/EP2023/072776 (WO2024046782A1) filed; later ceased
- 2023-08-18: CN application CN202380062776.3 (CN119836847A) filed; pending
- 2023-08-18: EP application EP23755425.8 (EP4581907A1) filed; pending
Also Published As
| Publication number | Publication date |
|---|---|
| CN119836847A (en) | 2025-04-15 |
| EP4581907A1 (en) | 2025-07-09 |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| | 121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 23755425; Country of ref document: EP; Kind code of ref document: A1 |
| | WWE | Wipo information: entry into national phase | Ref document number: 202380062776.3; Country of ref document: CN |
| | WWE | Wipo information: entry into national phase | Ref document number: 2023755425; Country of ref document: EP |
| | NENP | Non-entry into the national phase | Ref country code: DE |
| | ENP | Entry into the national phase | Ref document number: 2023755425; Country of ref document: EP; Effective date: 20250331 |
| | WWP | Wipo information: published in national office | Ref document number: 202380062776.3; Country of ref document: CN |
| | WWP | Wipo information: published in national office | Ref document number: 2023755425; Country of ref document: EP |