WO2025237555A1 - Method for controlling lighting in a vehicle interior, method for training a machine learning model, data processing system, and computer program - Google Patents
- Publication number
- WO2025237555A1 (PCT/EP2025/055864)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- view
- area
- vehicle
- interior
- color
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B60—VEHICLES IN GENERAL
- B60Q—ARRANGEMENT OF SIGNALLING OR LIGHTING DEVICES, THE MOUNTING OR SUPPORTING THEREOF OR CIRCUITS THEREFOR, FOR VEHICLES IN GENERAL
- B60Q3/00—Arrangement of lighting devices for vehicle interiors; Lighting devices specially adapted for vehicle interiors
- B60Q3/80—Circuits; Control arrangements
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/90—Determination of colour characteristics
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/10—Image acquisition
- G06V10/12—Details of acquisition arrangements; Constructional details thereof
- G06V10/14—Optical characteristics of the device performing the acquisition or on the illumination arrangements
- G06V10/141—Control of illumination
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/40—Extraction of image or video features
- G06V10/56—Extraction of image or video features relating to colour
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/50—Context or environment of the image
- G06V20/59—Context or environment of the image inside of a vehicle, e.g. relating to seat occupancy, driver state or inner lighting conditions
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B60—VEHICLES IN GENERAL
- B60Q—ARRANGEMENT OF SIGNALLING OR LIGHTING DEVICES, THE MOUNTING OR SUPPORTING THEREOF OR CIRCUITS THEREFOR, FOR VEHICLES IN GENERAL
- B60Q2900/00—Features of lamps not covered by other groups in B60Q
- B60Q2900/50—Arrangements to reconfigure features of lighting or signalling devices, or to choose from a list of pre-defined settings
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30248—Vehicle exterior or interior
- G06T2207/30268—Vehicle interior
Definitions
- the present invention relates to a method for controlling lighting in the interior of a vehicle.
- the color of interior lighting is to be adjusted based on image data from the interior, whereby interfering factors such as shadows or overexposure can be taken into account.
- Ambient lighting is a specific type of interior lighting designed to create a pleasant atmosphere within the vehicle.
- ambient lighting is indirect illumination of the vehicle interior, for example, in the side panels.
- interior lighting can also include direct interior lighting, illumination of instruments or displays, and other lighting elements within the vehicle.
- interior lighting color can vary depending on personal taste or mood. Many modern vehicles offer the option to choose between different colors or to adjust the color according to mood or driving situation. This allows for flexible personalization of the interior. It may also be possible to adjust the brightness of the interior lighting to create a desired atmosphere.
- the simplest method for adjusting the interior lighting color is manual control by the user via the infotainment system or a control unit in the vehicle, such as a touchscreen.
- the desired color can be selected from a predefined palette, for example, via a menu.
- This can be cumbersome and is often used only rarely, usually just to personalize the vehicle once.
- Changing the interior lighting color again requires accessing the corresponding settings menu each time.
- Some vehicles also offer the option of controlling the interior lighting via voice commands. However, this also requires active intervention from the driver each time they want to change the interior lighting. It is also known that vehicles offer the option of creating individual driver profiles, which include settings such as the color of the interior lighting. This allows the interior lighting to automatically adjust to the preferences of different drivers as soon as they enter the vehicle and select their profile, or, if necessary, be automatically recognized via a vehicle key.
- dynamic adaptation to different situations is not possible with this method.
- Advanced vehicles may also be equipped with sensors that detect ambient light, driving mode, vehicle speed, or other factors and automatically adjust the color of the ambient lighting. It may also be possible to adjust the interior lighting color based on image data from inside the vehicle, captured, for example, by an interior camera.
- the present invention aims to provide an improved method for controlling lighting in the interior of a vehicle. In particular, it aims to improve the recognition of colors based on image data from the vehicle's interior.
- a first aspect of the invention relates to a method, particularly a computer-implemented one, for controlling lighting in the interior of a vehicle.
- first image data are acquired, representing a first view of at least a part of the vehicle's interior captured in the visible light spectrum.
- second image data are acquired, representing a second view of at least a part of the vehicle's interior captured in the infrared spectrum, which at least partially coincides with the first view and is preferably substantially or completely congruent with it.
- An exclusion zone in the second view and a valid zone in the first view are determined, wherein, to determine the valid zone, a region of the first view corresponding to the exclusion zone of the second view is excluded.
- a color is then selected (extracted) from the valid zone of the first view, and at least a portion of the lighting in the vehicle's interior, such as ambient lighting, is controlled or adjusted based on the selected color.
- first and second image data are captured, each exhibiting different image modalities.
- the first image data is recorded in the visible light spectrum.
- This image data contains the colors visible inside the vehicle. Since, as mentioned earlier, disturbances in this image, such as harsh shadows or overexposure, can only be detected through complex image processing, additional second image data is captured.
- this second image data represents essentially the same view as the first.
- the second image data is captured in the infrared spectrum. While this second image data does not contain color, disturbances such as shadows or overexposure can be more easily detected in the second image data, i.e., the IR image. It goes without saying that the first and second image data can be captured in any order, including simultaneously.
- an exclusion zone can be easily defined, which is then used to define a valid zone in the first view.
- the valid zone specifies the area of the first view from which a color can be selected.
- the valid zone is then, in particular, free from cast shadows or overexposure, so that colors are reproduced realistically within it.
- the vehicle's interior lighting such as ambient lighting or other interior lighting components, is then controlled based on a color selected from the valid range.
- the interior lighting color can either match the selected color or be derived from it, for example, as a similar color, such as a similar shade of red or blue, or as a complementary color.
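The patent does not specify how a derived color is computed; as a hedged illustration, the RGB complement and a simple darker shade are two common conventions (the function names and the scaling factor are our assumptions):

```python
def complementary(rgb):
    """Return the complement of an 8-bit RGB color (hue rotated ~180 degrees)."""
    r, g, b = rgb
    return (255 - r, 255 - g, 255 - b)

def similar_shade(rgb, factor=0.8):
    """Return a darker shade of the same hue; `factor` is an illustrative choice."""
    return tuple(int(c * factor) for c in rgb)
```

Either function could map a selected clothing color to an ambient-lighting color, depending on the desired effect.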
- the method also allows for the dynamic detection of disturbances in a simple and efficient manner, and thus for the dynamic determination of the valid range, for example, when lighting conditions change during travel due to sunlight.
- the method efficiently enables dynamic color recognition.
- the method can be applied to an image sequence, especially a video or stream. With conventional image processing, for example with edge detection, this would require complex processing of the individual frames.
- vehicle refers in particular to a passenger car, including all types of motor vehicles, hybrid and battery-powered electric vehicles, as well as vehicles such as vans, buses, trucks, delivery vans and the like.
- image capture device refers specifically to a camera, particularly a digital camera.
- the camera can capture still images (photos) or moving images (videos), especially in the visible light spectrum ("RGB range").
- An image capture device can capture or record such an image and output corresponding image data.
- This image data represents, in particular, a specific view.
- a view can, as described herein, comprise various areas, such as a valid area or an exclusion area.
- An "area" need not necessarily be a single, contiguous area, but can also comprise several separate sub-areas.
- An “image capture device” can also be suitable for monitoring the interior of a vehicle, particularly the vehicle occupants, and especially their positions and movements within the space.
- An infrared (IR) camera, e.g., a near-infrared (NIR) camera, captures image data in the infrared spectrum.
- An IR image is well-suited for surveillance because it is robust against changing lighting conditions, such as strong sunlight. Recordings in darkness are also possible. It goes without saying that a suitable IR light source must be available.
- a time-of-flight (ToF) camera, also referred to here as a "3D sensor", can additionally provide depth information about the captured view.
- the terms “configured” or “set up” to perform a specific function are to be understood, within the meaning of the invention, as meaning that the corresponding device already exists in a particular configuration or setting.
- the device must be capable of performing the function, or at least configurable so that it can perform the function after appropriate settings. Configuration can be achieved, for example, by adjusting process parameters, switches, or similar devices to activate or deactivate functionalities or settings.
- the device can have several predefined configurations or operating modes, allowing configuration by selecting one of these.
- the exclusion zone is defined such that it includes shadows and/or overexposed areas. Such areas are unsuitable, or at least less suitable, for accurately determining colors. In particular, the colors in this zone are distorted; for example, colors in shadow areas are too dark, and colors in overexposed areas are too light or even white. Therefore, such areas are to be excluded from color selection by defining the exclusion zone. It is also conceivable that the exclusion zone includes other areas that contain other interfering factors or are otherwise less suitable for color recognition.
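A minimal sketch of how such an exclusion zone could be derived from an 8-bit IR image by simple thresholding; the threshold values are assumptions for illustration, not taken from the patent:

```python
import numpy as np

def exclusion_mask(ir, low=30, high=240):
    """Mark pixels of an 8-bit IR image that are too dark (shadow)
    or too bright (overexposed) as excluded from color selection."""
    return (ir < low) | (ir > high)
```

In practice the thresholds would be tuned to the camera and scene; the point is only that shadows and overexposure are separable by intensity in the IR image.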
- the first and second views are superimposed to determine the exclusion area (and thus the valid area) of the first view, which corresponds to the exclusion area of the second view.
- the valid area in the first view can then be easily determined by removing or at least excluding (i.e., excluding from the color selection) the exclusion area (or an area in the first view that corresponds to the exclusion area) from the first view.
- a mask is created from the exclusion region of the second view by inverting it, which is then used to mask the first view.
- masking is a simple image processing technique used to select or exclude one or more areas in an image. Specifically, if the mask is intended to mask the valid area, it can be obtained simply by inverting the exclusion area. This eliminates the need for complex image processing, particularly for the initial image data; masking is all that is required.
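The inversion and masking described above can be sketched in a few lines of NumPy (an illustrative sketch; the patent does not prescribe a specific implementation):

```python
import numpy as np

def valid_region(rgb, exclusion):
    """Invert the exclusion mask from the IR view and use it to mask
    the RGB view; excluded pixels are simply zeroed out."""
    valid = ~exclusion                    # invert: everything not excluded is valid
    return rgb * valid[..., None], valid  # broadcast the mask over the 3 channels
```

This avoids any complex processing of the first (RGB) image data: the exclusion area is computed once from the IR view and applied by a single masking operation.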
- a static area is defined in the second view. This static area is added to the exclusion area if overexposure is detected within it.
- a static area is an area that does not move within the view. These are typically parts belonging to the vehicle or visible within the vehicle interior that do not move. Free objects, including passengers such as the driver or front passenger, are dynamic objects because they can usually move. It may therefore be possible to exclude a static area or areas from the color selection process. These could be, in particular, windows such as side windows or rear windows, which are often heavily overexposed due to external sunlight and are therefore unsuitable for color selection (as they may simply appear white, for example). Static areas can be determined, for example, from the image data or using a 3D sensor.
- the color is selected from the valid range by determining the color of a single pixel within the valid range or by calculating an average of the colors of several pixels within the valid range. Particularly with monochrome surfaces, selecting any single pixel is sufficient to choose the color corresponding to a specific object, such as a piece of clothing worn by the driver. However, it is also conceivable to calculate an average of several pixels distributed across a specific area.
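The averaging variant can be illustrated as follows (a sketch; for a monochrome surface, reading a single pixel from the valid area would give the same result):

```python
import numpy as np

def select_color(rgb, valid):
    """Average the colors of all pixels inside the valid area of an RGB view."""
    pixels = rgb[valid]  # boolean mask indexing -> array of shape (n_valid, 3)
    return tuple(int(round(c)) for c in pixels.mean(axis=0))
```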
- brightness information for pixels within the valid range is determined from the second image data to select the color, and the colors of the pixels within the valid range (or target range) are normalized based on this brightness information.
- a normalized color is then selected.
- the IR image can be used not only to define an exclusion range but also to determine brightness information. This also applies if no exclusion range is defined. This can be achieved, for example, by including a bright but not overexposed area in the color selection.
- the colors can then be normalized, allowing the correct color to be selected regardless of the brightness.
- Consider, for example, a uniformly red sweater partially exposed to direct sunlight. Regardless of any over- or underexposure, which can be eliminated as explained above, the color can be adjusted or normalized based on the brightness information in the IR image.
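A hedged sketch of such brightness normalization, assuming co-registered 8-bit RGB and IR images (the exact normalization used is not disclosed):

```python
import numpy as np

def normalize_colors(rgb, ir, eps=1e-6):
    """Divide each RGB pixel by the relative brightness taken from the
    co-registered IR image, so a shaded and a sunlit patch of the same
    material yield comparable colors."""
    brightness = ir.astype(float) / 255.0               # relative brightness per pixel
    out = rgb.astype(float) / (brightness[..., None] + eps)
    return np.clip(out, 0, 255).round().astype(np.uint8)
```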
- the first and second image data are acquired by means of at least one image acquisition device located in the vehicle's interior.
- image acquisition devices such as cameras, or the same device capable of capturing image data in different modalities, particularly RGB and IR.
- the image acquisition device (or devices) may already be present in the vehicle.
- Such cameras can, for example, be located in the vehicle's rearview mirror, as this location provides a good overview of the vehicle's interior, including the driver.
- Driver monitoring systems for instance, often incorporate IR cameras. This method is suitable for monitoring a vehicle's interior because image data captured in the IR range is robust against varying lighting conditions.
- sensor data is further acquired using a 3D sensor.
- This sensor data pertains to at least a portion of the first view, enabling the determination of the orientations of surfaces visible in the vehicle interior from that view.
- Sensor data representing a three-dimensional model can be acquired, for example, with a time-of-flight camera, allowing for a detailed three-dimensional view of the vehicle interior. From this three-dimensional model, parallel surfaces or surfaces equidistant from the sensor or camera can be identified to derive color information. In particular, parallel surfaces can exhibit the same color intensity. Such identified surfaces can therefore be advantageously used for color selection.
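Identifying equidistant surfaces from a ToF depth map might look like this (a sketch; the tolerance value is an assumption):

```python
import numpy as np

def equidistant_mask(depth, target, tol=0.05):
    """Select pixels of a ToF depth map (in meters) that lie roughly at
    the same distance `target` from the sensor, as a stand-in for finding
    surfaces that receive comparable illumination."""
    return np.abs(depth - target) <= tol
```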
- a 3D sensor can also be used to define static areas.
- a target area is further defined within the valid area of the first view, and a color is selected from this target area.
- a target area can be defined within the valid area.
- surfaces in the vehicle interior that are part of the vehicle, such as side panels or armrests, may be irrelevant for color selection.
- Such parts can also be treated as static areas, as described above.
- the color of a piece of clothing can then be determined from the image data and used as the basis for the interior lighting color.
- the lighting color can then be the color of the clothing or at least a similar color.
- a complementary color could be selected. In this way, dynamic customization of the interior lighting color scheme is possible.
- the target area is determined using a trained machine learning model, such as a neural network.
- a model can be trained with training data to learn, for example, which areas in a view contain or represent a driver's clothing. This allows for the automatic determination of a target area, that is, an area relevant or of interest for color selection, to further enhance the individual user experience.
- a second aspect of the invention relates to a method for training a machine learning model to determine a target area for a method according to the first aspect, particularly the embodiment described last.
- training data is provided, the training data comprising image data representing a first view of at least a part of the vehicle's interior, captured in the visible light spectrum, wherein a target area of the first view is defined in the training data.
- the machine learning model is then trained to determine the region of the first view corresponding to the target area.
- the machine learning model may, in particular, comprise a neural network. Training can be carried out primarily as so-called supervised training. In supervised training, or learning, of a model such as a neural network, the network is trained with input data and the corresponding correct outputs.
- the network is presented with input data and attempts to predict the correct output for that input.
- the network's predictions are then compared with the actual outputs, and the network's internal parameters are adjusted to improve the accuracy of the predictions. This process is performed iteratively until the network is able to make accurate predictions for new, unseen data.
- Supervised learning can therefore be used, for example, to train the model to identify which areas of a view of a vehicle interior represent the driver's clothing.
- the target area is a sub-area of the valid area of the first view that includes a user's clothing inside the vehicle. It can also be intended to train the model to select the correct or a suitable part of the clothing; for example, the chest area of a garment on the driver's torso might be better suited for color selection than the collar.
- the training data can include image data for multiple first views with respective target areas.
- the model can be trained, for example, to recognize different types of clothing, such as sweaters, shirts, or T-shirts, as garments on the driver's upper body.
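The iterative supervised training described above can be sketched with a toy per-pixel classifier; here, logistic regression stands in for the neural network and all data is synthetic, so this illustrates only the predict-compare-adjust loop, not the actual model:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic training data: per-pixel feature vectors (input data)
# and binary "target area" labels (correct outputs).
X = rng.normal(size=(200, 3))
y = (X[:, 0] + X[:, 1] > 0).astype(float)
w = np.zeros(3)  # internal parameters to be adjusted

for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-X @ w))   # model predicts an output for each input
    w -= 0.1 * X.T @ (p - y) / len(y)  # compare with actual outputs, adjust parameters

p = 1.0 / (1.0 + np.exp(-X @ w))
accuracy = ((p > 0.5) == (y == 1.0)).mean()
```

A real target-area model would be a segmentation network trained on labeled interior images, but the loop structure (predict, compare, adjust, iterate) is the same.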
- a third aspect of the invention relates to a data processing system comprising at least one processor configured to perform the method according to the first and/or second aspect of the invention.
- the system includes at least one image acquisition device configured to capture image data in the visible (RGB) range of light and/or in the IR range.
- an image acquisition device such as a camera, may be provided that can capture image data in both the RGB and IR ranges. This has the advantage that the first and second views are then automatically congruent.
- a fourth aspect of the invention relates to a computer program with instructions which, when executed on a system according to the third aspect, cause the system to execute the method according to the first and/or second aspect.
- the computer program can be stored, in particular, on a non-volatile data carrier.
- Preferably, the non-volatile data carrier is an optical data carrier or a flash memory module.
- the computer program can exist as a file on a data processing unit, in particular on a server, and be downloadable via a data connection, for example, the Internet or a dedicated data connection, such as a proprietary or local network.
- the computer program can comprise a plurality of interacting individual program modules.
- the system according to the third aspect can accordingly have a program memory in which the computer program is stored.
- the system can also be configured to access an external computer program, for example on one or more servers or other data processing units, via a communication link, in particular to exchange data with it that is used during the execution of the procedure or computer program or represents outputs of the computer program.
- Fig. 1 shows a flowchart of a process according to one embodiment
- Fig. 2 shows a vehicle interior
- Fig. 3 shows a first exemplary view of a vehicle interior
- Fig. 4 shows a second exemplary view of a vehicle interior.
- Figure 1 illustrates a method 100 for controlling interior lighting in a vehicle using a flowchart.
- Method 100 can be implemented, in particular, in a vehicle data processing system (not shown).
- Method 100 is further explained below with reference to Figures 2 to 4.
- Figure 2 schematically shows a section of a vehicle interior 1.
- Figures 3 and 4 are exemplary camera images of a vehicle interior with a user, which can be used to select a color for interior lighting.
- step S1a image data for a first view 10 of the vehicle interior 1 is acquired in the visible light spectrum (RGB).
- step S1b image data for a second view 20 is acquired in the infrared (IR) spectrum.
- Steps S1a and S1b can be performed simultaneously.
- An example is shown in Fig. 3.
- Another example of a first view 10' and a second view 20' is shown in Fig. 4.
- In Figs. 3 and 4, the first and second views are shown superimposed. It is understood that the first view 10 is in color, while the second view 20 in the IR spectrum contains no color.
- the image data for the first view 10 and the second view 20 are captured by a camera 5, which is preferably located in the base of the interior rearview mirror 7 of the vehicle (see Fig. 2).
- the camera 5 is preferably designed as an RGB/IR camera, i.e., it is suitable for capturing image data in the visible spectrum (RGB) for the first view 10 and image data in the infrared spectrum (IR) for the second view 20. This ensures that the first view 10 and the second view 20 are identical.
- a 3D sensor 6, such as a ToF camera can be provided. The selection of a color can be further refined using a 3D model of the interior 1 created in this way.
- an exclusion area 21 is defined in the second view 20.
- the exclusion area 21 can, in particular, include a cast shadow (see the example in Fig. 3, where the edge of the cast shadow runs across the driver's chest), as often occurs in vehicles due to sunlight. This is more easily recognizable in the second view 20 than in the first view 10.
- the example from Fig. 4 does not contain a shadow area.
- the exclusion area 21' includes a water stain on the passenger's trousers. This, like a shadow, is unsuitable for color selection because it is too dark. Another area that may be unsuitable for color selection is a bright side window. This overexposed area can be captured as a static area 22.
- a valid area 11 is determined in the first view 10, i.e., an area free of interfering factors and therefore, in principle, suitable for a realistic color selection.
- the valid area 11 is determined by subtracting or excluding the exclusion area 21, determined from the second view 20, from the first view 10. This can be done, for example, by masking. Since the valid area 11 still contains areas that may not be relevant for color selection, such as the seatbelt, a target area 12 is determined in step S4. This is, in particular, an area of a garment—in the example from Fig. 3, the driver's shirt—that is best suited for color selection, i.e., the chest area and not the collar or folds in the sleeve.
- a machine learning model such as a neural network, can be used to determine the target area 12, which is trained accordingly before application.
- a color is selected (extracted) from the target area 12. This color can be determined from the RGB image of the first view 10 using one or more pixels. To improve color selection, brightness information from the IR image can be used to normalize the colors of the RGB image.
- lighting in the interior 1 of the vehicle is controlled according to the selected color. For example, the selected color itself can be used, or a derived color such as a similar or complementary hue.
- the lighting can include ambient lighting 2. This primarily serves to enhance the atmosphere in the vehicle, so adjusting the color to match the driver's clothing can personalize the driving experience. Other lighting elements, such as gauges 3 or displays 4, can also be controlled accordingly. While at least one exemplary embodiment has been described above, it should be noted that a large number of variations exist.
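Taken together, steps S2, S3, and S5 of method 100 might be sketched end to end as follows (the thresholds and the fallback color are assumptions, not taken from the description):

```python
import numpy as np

def ambient_color(rgb, ir, low=30, high=240):
    """End-to-end sketch: derive an exclusion area from the IR view,
    invert it into a valid-area mask, and average the RGB colors there."""
    exclusion = (ir < low) | (ir > high)  # S2: exclusion area from the IR view
    valid = ~exclusion                    # S3: valid area by inversion
    if not valid.any():
        return (255, 255, 255)            # fallback if everything is excluded
    pixels = rgb[valid].astype(float)     # S5: select a color from the valid area
    return tuple(int(round(c)) for c in pixels.mean(axis=0))
```

The returned color (or a hue derived from it) would then drive the ambient lighting in step S6.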
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Multimedia (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Mechanical Engineering (AREA)
- Image Processing (AREA)
Abstract
The invention relates to a method for controlling lighting (2) in the interior (1) of a vehicle. First image data representing a first view (10) of at least a part of the vehicle interior (1), captured in the visible light spectrum, and second image data representing a second view (20) of at least a part of the vehicle interior (1), captured in the infrared spectrum and at least partially coinciding with the first view (10), are acquired. An exclusion area (21) is determined in the second view (20). A valid area (11) is determined in the first view (10); for this purpose, a region of the first view (10) corresponding to the exclusion area (21) of the second view (20) is excluded. A color is selected from the valid area (11) of the first view (10), in particular from a target area (12) such as an area of the driver's clothing. Lighting (2) in the interior (1) of the vehicle, such as ambient lighting, is then controlled based on the selected color.
Applications Claiming Priority (2)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| DE102024113289.2A DE102024113289A1 (de) | 2024-05-13 | 2024-05-13 | Verfahren zum steuern einer beleuchtung in einem innenraum eines fahrzeugs |
| DE102024113289.2 | 2024-05-13 |
Publications (1)
| Publication Number | Publication Date |
|---|---|
| WO2025237555A1 (fr) | 2025-11-20 |
Family
ID=95022824
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| PCT/EP2025/055864 (WO2025237555A1, pending) | Method for controlling lighting in a vehicle interior, method for training a machine learning model, data processing system, and computer program | 2024-05-13 | 2025-03-04 |
Country Status (2)
| Country | Link |
|---|---|
| DE (1) | DE102024113289A1 (fr) |
| WO (1) | WO2025237555A1 (fr) |
Citations (6)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN106274657A (zh) * | 2016-08-05 | 2017-01-04 | 法乐第(北京)网络科技有限公司 | 一种内饰灯控制方法及装置 |
| DE102018005461A1 (de) * | 2018-07-10 | 2020-01-16 | Daimler Ag | Vorrichtung und Verfahren zur Steuerung einer Innenraumbeleuchtung |
| DE102018219668A1 (de) * | 2018-11-16 | 2020-05-20 | Zf Friedrichshafen Ag | Steuereinheit zum Steuern einer Fahrzeuginnenraumbeleuchtung |
| US20200184669A1 (en) * | 2018-12-07 | 2020-06-11 | Toyota Research Institute, Inc. | Adaptive infrared lighting for full vehicle cabin based on occupancy |
| DE102022110165A1 (de) * | 2022-04-27 | 2023-11-02 | Bayerische Motoren Werke Aktiengesellschaft | Computerimplementiertes Verfahren zur Bereitstellung eines Bildstroms, Kamerasystem und Fahrzeug |
| US20240028649A1 (en) * | 2022-07-22 | 2024-01-25 | Toyota Jidosha Kabushiki Kaisha | Information provision system and storage medium |
Family Cites Families (3)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| DE102006052777B4 (de) * | 2006-11-09 | 2023-11-16 | Bayerische Motoren Werke Aktiengesellschaft | Vorrichtung zur ambienten Beleuchtung eines Fahrzeuginnenraums |
| US9505345B1 (en) * | 2015-08-21 | 2016-11-29 | Honda Motor Co., Ltd. | System and method for vehicle ambient lighting |
| DE102023003032B3 (de) * | 2023-07-24 | 2024-09-26 | Mercedes-Benz Group AG | Fahrzeug |
- 2024-05-13: DE application DE102024113289.2A filed (published as DE102024113289A1, pending)
- 2025-03-04: PCT application PCT/EP2025/055864 filed (published as WO2025237555A1, pending)
Also Published As
| Publication number | Publication date |
|---|---|
| DE102024113289A1 (de) | 2025-11-13 |