WO2025135656A1 - Robot and image projection method thereof - Google Patents
Robot and image projection method thereof
- Publication number
- WO2025135656A1 (PCT/KR2024/020103)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- robot
- image
- calibration image
- projected
- marker
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N9/00—Details of colour television systems
- H04N9/12—Picture reproducers
- H04N9/31—Projection devices for colour picture display, e.g. using electronic spatial light modulators [ESLM]
- H04N9/3141—Constructional details thereof
- H04N9/317—Convergence or focusing systems
- B—PERFORMING OPERATIONS; TRANSPORTING
- B25—HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
- B25J—MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
- B25J13/00—Controls for manipulators
- B25J13/08—Controls for manipulators by means of sensing devices, e.g. viewing or touching devices
- B25J19/00—Accessories fitted to manipulators, e.g. for monitoring, for viewing; Safety devices combined with or specially adapted for use in connection with manipulators
- B25J19/02—Sensing devices
- B25J19/021—Optical sensing devices
- B25J19/023—Optical sensing devices including video camera means
- B25J9/00—Programme-controlled manipulators
- B25J9/16—Programme controls
- B25J9/1694—Programme controls characterised by use of sensors other than normal servo-feedback from position, speed or acceleration sensors, perception control, multi-sensor controlled systems, sensor fusion
- B25J9/1697—Vision controlled systems
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06K—GRAPHICAL DATA READING; PRESENTATION OF DATA; RECORD CARRIERS; HANDLING RECORD CARRIERS
- G06K19/00—Record carriers for use with machines and with at least a part designed to carry digital markings
- G06K19/06—Record carriers for use with machines and with at least a part designed to carry digital markings characterised by the kind of the digital marking, e.g. shape, nature, code
- G06K19/06009—Record carriers for use with machines and with at least a part designed to carry digital markings characterised by the kind of the digital marking, e.g. shape, nature, code with optically detectable marking
- G06K19/06037—Record carriers for use with machines and with at least a part designed to carry digital markings characterised by the kind of the digital marking, e.g. shape, nature, code with optically detectable marking multi-dimensional coding
Definitions
- the present disclosure relates to a robot for projecting an image and an image projection method thereof.
- robots can sense their surroundings in real time using sensors, cameras, etc., collect information, and drive autonomously.
- the robot is equipped with a projector and can perform the function of projecting images onto walls, ceilings, etc. in the space where the robot is located, thereby providing various image contents to users.
- a robot includes a projector, a camera, a driving unit, and one or more processors.
- the one or more processors project a first calibration image in which a marker is moved using the projector.
- the one or more processors capture the first calibration image projected by the robot and the first calibration image projected by another robot using the camera to acquire images.
- the one or more processors identify positions of markers in each of the first calibration image projected by the robot and the first calibration image projected by the other robot based on the acquired images.
- the one or more processors identify a delay time for adjusting a playback time of an image to be projected by the robot based on the identified positions.
- the one or more processors control the projector to project the image on the projection surface based on the delay time while the robot is at a position corresponding to the projection surface on which the image is to be projected.
- the one or more processors can identify a playback point in time of the first calibration image projected by the robot based on a position of a marker in the first calibration image projected by the robot, identify a playback point in time of the first calibration image projected by the other robot based on a position of a marker in the first calibration image projected by the other robot, and identify a difference between the identified playback points in time as the delay time.
- the one or more processors can identify, among a plurality of playback points in time corresponding to a plurality of movement degrees of the marker, the playback point in time corresponding to the position of the marker in the first calibration image projected by the robot, and identify it as the playback point in time of the first calibration image played back by the robot; likewise, they can identify, among the plurality of playback points in time, the playback point in time corresponding to the position of the marker in the first calibration image projected by the other robot, and identify it as the playback point in time of the first calibration image played back by the other robot.
- the markers included in the first calibration image projected by each of the robot and the other robot may include a QR code.
- the one or more processors may project a second calibration image using the projector while the robot is at a position corresponding to a projection surface on which the image is to be projected, capture the second calibration image projected by the robot and the second calibration image projected by the other robot using the camera to acquire an image, control the movement of the robot using the driving unit based on the acquired image so that the second calibration image projected by the robot and the second calibration image projected by the other robot are aligned with each other, and control the projector to project the image based on the delay time after the movement of the robot is controlled.
- the one or more processors can identify a target area corresponding to a marker of a second calibration image projected by the robot based on the acquired image, and control the movement of the robot using the driving unit so that the marker of the second calibration image projected by the other robot is positioned in the target area in the acquired image.
- a method for projecting an image by a robot including a projector and a camera includes the steps of: projecting a first calibration image in which a marker moves using the projector; capturing the first calibration image projected by the robot and the first calibration image projected by another robot using the camera to acquire images; identifying positions of markers in each of the first calibration image projected by the robot and the first calibration image projected by the other robot based on the acquired images; identifying a delay time for adjusting a playback time of an image to be projected by the robot based on the identified positions; and controlling the projector to project the image on the projection surface based on the delay time while the robot is at a position corresponding to the projection surface on which the image is to be projected.
- a non-transitory computer-readable medium storing computer instructions that, when executed by one or more processors of a robot including a projector and a camera according to an embodiment of the present disclosure, cause the robot to perform an operation, the operation including: projecting a first calibration image in which a marker is moved using the projector; capturing the first calibration image projected by the robot and the first calibration image projected by another robot using the camera to acquire images; identifying positions of markers in each of the first calibration image projected by the robot and the first calibration image projected by the other robot based on the acquired images; identifying a delay time for adjusting a playback time of an image to be projected by the robot based on the identified positions; and controlling the projector to project the image on the projection surface based on the delay time while the robot is at a position corresponding to the projection surface on which the image is to be projected.
- FIG. 1 is a drawing for explaining the operation of a robot according to an embodiment of the present disclosure.
- FIG. 2a is a block diagram illustrating the configuration of a robot according to an embodiment of the present disclosure.
- FIG. 2b is a block diagram illustrating the configuration of a robot according to an embodiment of the present disclosure.
- FIG. 3 is a flowchart illustrating an operation of a robot performing time synchronization according to an embodiment of the present disclosure.
- FIG. 5 is a drawing for explaining an example of a robot capturing first calibration images according to an embodiment of the present disclosure.
- FIG. 6 is a diagram illustrating a method for a robot to identify the positions of markers in first calibration images according to an embodiment of the present disclosure.
- FIG. 7 is a drawing for explaining an example of a method for a user to set a projection surface using a mobile device according to an embodiment of the present disclosure.
- FIG. 8 is a drawing for explaining an example of images projected by a robot and another robot according to an embodiment of the present disclosure.
- FIG. 10 is a drawing for explaining an example of a robot capturing second calibration images according to an embodiment of the present disclosure.
- FIGS. 11, 12A and 12B are drawings for explaining an example of a method for controlling the movement of a robot by capturing images of second calibration images according to an embodiment of the present disclosure.
- FIG. 13 is a drawing for explaining an example of images projected by a robot and another robot according to an embodiment of the present disclosure.
- the expression “at least one of a, b, or c” can refer to “a,” “b,” “c,” “a and b,” “a and c,” “b and c,” “all of a, b, and c,” or variations thereof.
- FIG. 1 is a drawing for explaining the operation of a robot according to an embodiment of the present disclosure.
- the robot (100-1, 100-2) may be a mobile robot.
- the robot (100-1, 100-2) may also be referred to as an autonomous driving device or a mobile device.
- the movement of the robot (100-1, 100-2) may include exploring the surroundings to detect the location of the robot (100-1, 100-2) and obstacles, and using the detected information to drive within the space on its own.
- the movement may be replaced with expressions such as driving, for example.
- the space may include various indoor spaces where the robot (100-1, 100-2) can move, such as a home, office, hotel, factory, store, mart, restaurant, etc.
- the robot (100-1, 100-2) may be implemented as various types of robots.
- the robot (100-1, 100-2) may be implemented as a robot cleaner that moves within a space and performs cleaning, a guide robot that guides a user along a path within a space or provides various information related to services provided within a space, a delivery robot or serving robot that transports loaded products to a specific location within a space, a mobile projection device for projecting an image while moving within a space, etc.
- the robots (100-1, 100-2) can project images together to provide images to the user.
- the first robot (100-1) and the second robot (100-2) may project an image together: the first robot (100-1) projects one part (11) of the image (10) and the second robot (100-2) projects the remaining part (12) of the image (10), so that a single entire image (10) is implemented through the images projected by the first robot (100-1) and the second robot (100-2). Accordingly, a larger image can be provided to the user than when the image is projected by a single robot.
- hereinafter, one of the robots is referred to as the robot (100), and the other robot is referred to as the other robot.
- FIG. 2a is a block diagram illustrating the configuration of a robot according to an embodiment of the present disclosure.
- the robot (100) may include a projector (110), a camera (120), a driving unit (130), and one or more processors (140).
- the projector (110) can project an image under the control of one or more processors (140).
- the image can include a still image and a video.
- the video can include various visual information that represents the movement of an object by using a plurality of continuous still images.
- Each of the plurality of still images included in the video can mean a frame (or a video frame).
- the projector (110) can project an image onto a projection surface using light emitted from a light source.
- the projector (110) can project an image using a CRT (Cathode-Ray Tube) method, an LCD (Liquid Crystal Display) method, a DLP (Digital Light Processing) method, or an LCoS (Liquid Crystal on Silicon) method.
- the projection surface can include a separately provided screen, various wall surfaces within a space where the robot (100) moves, or one side of an object.
- the projector (110) can project images in various aspect ratios.
- the aspect ratios can include various ratios such as 4:3, 5:4, or 16:9.
- the camera (120) can perform shooting under the control of one or more processors (140).
- the camera (120) can include an RGB camera.
- the camera (120) can shoot the front of the robot (100) and generate an image.
- the image can include a plurality of pixels.
- the driving unit (130) can control the movement of the robot (100) under the control of one or more processors (140).
- the driving unit (130) can move and stop the robot (100) and control the moving direction and/or moving speed of the robot (100).
- the driving unit (130) may include a plurality of wheels and at least one wheel motor.
- the wheel motor may control the direction of rotation and speed of the wheels, thereby controlling the direction of movement and speed of movement of the robot (100).
- the wheel motor may include a left wheel motor for controlling the direction of rotation and speed of movement of the left wheel, and a right wheel motor for controlling the direction of rotation and speed of movement of the right wheel.
- the driving unit (130) can rotate the body of the robot (100).
- the driving unit (130) can rotate the body of the robot (100) in an upward or downward direction.
- the driving unit (130) can include a motor and/or an actuator.
- One or more processors (140) can control the overall operations of the robot (100). For example, one or more processors (140) can control the overall operations of the robot (100) to project an image together with another robot and provide a single image to a user by executing one or more instructions of a program stored in the memory of the robot (100).
- the one or more processors (140) may include one or more of a CPU (Central Processing Unit), a GPU (Graphics Processing Unit), an APU (Accelerated Processing Unit), a MIC (Many Integrated Core), a DSP (Digital Signal Processor), an NPU (Neural Processing Unit), a hardware accelerator, or a machine learning accelerator.
- the one or more processors (140) may control one or any combination of other components of the robot (100), and may perform operations related to communication or data processing.
- the one or more processors (140) may execute one or more programs or instructions stored in the memory of the robot (100). For example, the one or more processors (140) may perform a method according to an embodiment of the present disclosure by executing one or more instructions stored in the memory of the robot (100).
- the plurality of operations may be performed by one processor or may be performed by a plurality of processors.
- the first operation, the second operation, and the third operation may all be performed by a first processor, or the first operation and the second operation may be performed by a first processor (e.g., a general-purpose processor) and the third operation may be performed by a second processor (e.g., an artificial intelligence-only processor).
- One or more processors (140) may be implemented as a single core processor including one core, or may be implemented as one or more multicore processors including multiple cores (e.g., homogeneous multi-core or heterogeneous multi-core).
- when the one or more processors (140) are implemented as a multi-core processor, each of the multiple cores included in the multi-core processor may include an internal processor memory, such as a cache memory or an on-chip memory, and a common cache shared by the multiple cores may be included in the multi-core processor.
- each of the multiple cores (or some of the multiple cores) included in the multi-core processor may independently read and execute a program instruction for implementing a method according to an embodiment of the present disclosure, or all (or some) of the multiple cores may be linked to read and execute a program instruction for implementing a method according to an embodiment of the present disclosure.
- the plurality of operations may be performed by one core among the plurality of cores included in the multi-core processor, or may be performed by the plurality of cores.
- the first operation, the second operation, and the third operation may all be performed by a first core included in the multi-core processor, or the first operation and the second operation may be performed by a first core included in the multi-core processor, and the third operation may be performed by a second core included in the multi-core processor.
- a processor may mean a system on a chip (SoC) in which one or more processors and other electronic components are integrated, a single core processor, a multi-core processor, or a core included in a single core processor or a multi-core processor, wherein the core may be implemented as a CPU, a GPU, an APU, a MIC, a DSP, an NPU, a hardware accelerator, or a machine learning accelerator, but embodiments of the present disclosure are not limited thereto.
- FIG. 2b is a block diagram illustrating the configuration of a robot according to an embodiment of the present disclosure.
- the memory (150) may include at least one of a flash memory type, a hard disk type, a multimedia card micro type, or a card type memory (for example, SD or XD memory), and may include non-volatile memory including at least one of a ROM (Read-Only Memory), an EEPROM (Electrically Erasable Programmable Read-Only Memory), a PROM (Programmable Read-Only Memory), a magnetic memory, a magnetic disk, or an optical disk, as well as volatile memory such as a RAM (Random Access Memory) or an SRAM (Static Random Access Memory).
- the memory (150) can store one or more instructions and/or programs that cause the robot (100) to perform an operation to project an image together with another robot to provide a single image to a user.
- the sensor unit (160) can detect a structure or object in a space.
- the object can include a wall, furniture, home appliance, etc. in the space.
- the information acquired from the sensor unit (160) can be used to create a map of the space.
- the sensor unit (160) may include at least one of a 3D camera sensor (e.g., a depth camera), a light detection and ranging (LiDAR) sensor, an obstacle detection sensor, and a driving detection sensor.
- a 3D camera sensor can capture images of the surroundings of a robot (100) and generate 3D spatial information related to the surroundings of the robot (100). For example, the 3D camera sensor can detect the distance to an object around the robot (100) and generate an image (e.g., a depth image) containing 3D distance information.
- the lidar sensor outputs a laser in a 360-degree direction, and when the laser reflected from an object is received, it analyzes the time taken for the laser to reflect from the object and return, the signal intensity of the received laser, etc., to obtain geometry information about the space.
- the geometry information may include the position, distance, direction, etc. of the object.
- the lidar sensor may provide the obtained geometry information to one or more processors (140).
- the obstacle detection sensor can detect objects around the robot (100).
- the obstacle detection sensor can include at least one of an ultrasonic sensor, an infrared sensor, an RF (radio frequency) sensor, a geomagnetic sensor, and a PSD (Position Sensitive Device) sensor.
- the obstacle detection sensor can detect objects existing in front, behind, on the side, or on the moving path of the robot (100).
- the obstacle detection sensor can provide information about the detected objects to one or more processors (140).
- the communication interface (170) can perform data communication with an electronic device under the control of one or more processors (140).
- the electronic device can include a server, a home appliance, a mobile device (e.g., a smartphone, a tablet PC, a wearable device, etc.).
- the communication interface (170) may include a communication circuit that can perform data communication between the robot (100) and the electronic device using at least one of the following data communication methods: wired LAN, wireless LAN, Wi-Fi, Wi-Fi Direct (WFD), Bluetooth, ZigBee, Infrared Data Association (IrDA), Bluetooth Low Energy (BLE), Near Field Communication (NFC), Wireless Broadband Internet (WiBro), World Interoperability for Microwave Access (WiMAX), Shared Wireless Access Protocol (SWAP), Wireless Gigabit Alliance (WiGig), and RF communication.
- the input interface (180) may include various types of input devices.
- the input interface (180) may include a physical button.
- the physical button may include a function key or a dial button.
- the physical button may also be implemented as one or more keys.
- the input interface (180) can receive user input using a touch method.
- the input interface (180) can be implemented as a touch screen that can perform the function of the display (191).
- the input interface (180) may receive a user's voice using a microphone.
- One or more processors (140) may perform a function corresponding to the user's voice using voice recognition.
- one or more processors (140) may convert the user's voice into text data using an STT (Speech To Text) function, obtain control command data based on the text data, and perform a function corresponding to the user's voice based on the control command data.
- the STT function may be performed in an external server.
- the output interface (190) may include a display (191) and a speaker (192).
- the display (191) can display various screens.
- One or more processors (140) can display various notifications, messages, information, etc. related to the operation of the robot (100) on the display (191).
- the display (191) may be implemented as a display including a self-luminous element, or a display including a non-luminous element and a backlight.
- the display (191) may be implemented as various types of displays such as an LCD (Liquid Crystal Display), an OLED (Organic Light Emitting Diodes) display, an LED (Light Emitting Diodes) display, a micro LED display, a Mini LED display, a QLED (Quantum dot light-emitting diodes) display, etc.
- the speaker (192) can output audio signals.
- One or more processors (140) can output warning sounds, notification messages, response messages corresponding to user input, etc. related to the operation of the robot (100) through the speaker (192).
- One or more processors (140) can generate a map of a space using information acquired through the sensor unit (160). The map can be generated during an initial exploration process of the space. For example, one or more processors (140) can explore a space using a lidar sensor to acquire terrain information about the space, and generate a map of the space using the terrain information.
- the map may include a grid map.
- a grid map is a map that divides a space into cells of a certain size and expresses it.
- a grid map may be a map that divides a space into a plurality of cells having a preset size and indicates whether or not an object exists in each cell.
- the plurality of cells may be divided into cells where no object exists (e.g., cells where a robot (100) can drive) (free space) and cells where an object exists (occupied space).
- a line connecting cells occupied by an object may represent a boundary of the space (e.g., an object such as a wall).
- the present invention is not limited thereto, and the map may be various types of maps.
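As an illustration only (not part of the disclosure), a grid map of this kind could be represented as a simple occupancy grid; the cell size, grid dimensions, and FREE/OCCUPIED encoding below are assumptions.

```python
# Minimal occupancy-grid sketch; cell size, grid dimensions, and the FREE/OCCUPIED
# encoding are assumptions for illustration.
import numpy as np

FREE, OCCUPIED = 0, 1        # cells where the robot can drive vs. cells holding an object
CELL_SIZE_M = 0.05           # assumed cell size: 5 cm per cell

grid = np.zeros((200, 200), dtype=np.uint8)                      # 10 m x 10 m space
grid[0, :] = grid[-1, :] = grid[:, 0] = grid[:, -1] = OCCUPIED   # boundary walls

def is_drivable(x_m: float, y_m: float) -> bool:
    """True if the cell containing world position (x_m, y_m) is free space."""
    row, col = int(y_m / CELL_SIZE_M), int(x_m / CELL_SIZE_M)
    return grid[row, col] == FREE
```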
- One or more processors (140) can identify the location of the robot (100) on the map using SLAM (Simultaneous Localization and Mapping). For example, one or more processors (140) can obtain terrain information about the space using a lidar sensor and compare the obtained terrain information with pre-stored terrain information to identify the location of the robot (100) on the map. However, the present invention is not limited thereto, and one or more processors (140) can identify the location of the robot (100) on the map using SLAM based on the camera (120).
- One or more processors (140) can control the movement of the robot (100) using information acquired through the sensor unit (160).
- one or more processors (140) can control the driving unit (130) to move the robot (100) through space using a map stored in the memory (150).
- one or more processors (140) can obtain information using the sensor unit (160) while the robot (100) moves through space, and can detect objects around the robot (100) using the obtained information.
- one or more processors (140) can control the driving unit (130) to move while avoiding the object.
- hereinafter, the one or more processors (140) are referred to as the processor (140).
- the robot (100) can project an image together with another robot to provide the image to the user.
- the robot (100) and the other robot each project a portion of the entire image, synchronization between the robots may be necessary to provide a natural, large-screen image to the user.
- Synchronization between robots may include time synchronization.
- Time synchronization may include adjusting the playback timing of images projected by the robot (100) and/or the other robot so that images from the same playback timing are projected onto the projection surface by the robot (100) and the other robot.
- FIG. 3 is a flowchart illustrating an operation of a robot performing time synchronization according to an embodiment of the present disclosure.
- a calibration image for time synchronization projected by the robot (100) and another robot is described as a first calibration image.
- the processor (140) may receive user input for time synchronization.
- user input can be entered via a mobile device.
- the mobile device can communicate with the robot (100) and other robots.
- a user can access a server by executing an application installed on the mobile device, create a user account, and communicate with the server based on the logged-in user account to register the robot (100) and other robots.
- the server can register the robot (100) and other robots to the user account by listing identification information (e.g., serial number or MAC address) of the robot (100) and other robots in the user account.
- the user can control the robot (100) and other robots using the application installed on the mobile device.
- a UI (user interface) screen related to the robot (100) and other robots registered to the user account can be displayed on the mobile device.
- the user can input user input for controlling the robot (100) and other robots on the UI screen displayed on the mobile device.
- the server can transmit a control command corresponding to the user input inputted on the mobile device to the robot (100) and other robots.
- the mobile device may be directly connected to the robot (100) and other robots, without limitation.
- the mobile device may communicate with the robot (100) and other robots using a short-range wireless network (e.g., Bluetooth, Wi-Fi Direct, etc.).
- the mobile device may transmit control commands corresponding to user inputs entered into the mobile device to the robot (100) and other robots.
- the processor (140) can perform an operation for time synchronization with another robot.
- the operation for time synchronization may include the operation of the robot (100) identifying a delay time between images based on the positions of markers in the first calibration images projected by the robot (100) and another robot.
- the delay time may be used to adjust the playback time of the images to be projected by the robot (100).
- the processor (140) can project a first calibration image using a projector (110) and capture the first calibration image projected by the robot (100) and the first calibration image projected by another robot using a camera (120) to obtain an image.
- the robot (100) and another robot may be positioned adjacent to each other.
- the robots being adjacent to each other may include the robots being positioned within a predetermined distance from each other so that one robot can capture images projected by the robots using a camera.
- the processor (140) may control the driving unit (130) to cause the robot (100) to move to a specific location within the map based on the location of the robot (100). Then, the processor (140) may reproduce a first calibration image and project the first calibration image onto a projection surface using a projector (110).
- another robot may move to a specific location within the map according to the user input for time synchronization. Accordingly, the robot (100) and the other robot may be positioned adjacent to each other. Then, the other robot may reproduce the first calibration image and project the first calibration image onto the projection surface.
- a user may place a robot (100) and another robot adjacent to each other.
- the robot (100) and the other robot may each project a first calibration image onto the projection surface when a user input for time synchronization is received.
- the first calibration image may be an image in which a marker is moved.
- the marker may move in one direction within the first calibration image.
- the sizes of the markers in the first calibration images projected by the robot (100) and other robots may be the same, and the moving speed may also be the same.
- the marker (420) can move in the x-axis direction over time.
- a marker may include a visual graphic element that can be recognized by the robot (100) and other robots.
- the marker could be a Quick Response (QR) code.
- the marker may be a graphic element in the shape of a square having a different color from the background area.
- the background area may include the remaining area in the first calibration image excluding the marker.
- the present invention is not limited thereto, and the shape of the marker may vary.
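As a hedged illustration of such a first calibration image, the sketch below renders frames in which a square marker moves along the x-axis at a known speed, so that the marker position encodes the playback time; the resolution, frame rate, marker size, and speed are assumptions, and a QR code could be drawn in place of the plain square.

```python
# Sketch: frames of a first calibration image with a marker moving in the x-axis
# direction over time. Resolution, frame rate, marker size, and speed are assumed.
import numpy as np

WIDTH, HEIGHT, FPS = 1280, 720, 30
MARKER_PX = 120                 # marker side length in pixels
SPEED_PX_PER_S = 100            # horizontal marker speed

def render_calibration_frame(t_s: float) -> np.ndarray:
    """Return the frame for playback time t_s: a white marker on a black background."""
    frame = np.zeros((HEIGHT, WIDTH, 3), dtype=np.uint8)
    x = int(t_s * SPEED_PX_PER_S) % (WIDTH - MARKER_PX)
    y = (HEIGHT - MARKER_PX) // 2
    frame[y:y + MARKER_PX, x:x + MARKER_PX] = 255
    return frame

frames = [render_calibration_frame(i / FPS) for i in range(5 * FPS)]  # 5 s of frames
```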
- the processor (140) can capture images by using the camera (120) to capture the first calibration images projected by the robot (100) and the other robot, respectively.
- the robot (100) can capture an image including the first calibration image (510) projected by the robot (100) and the first calibration image (520) projected by the other robot (100-2).
- the processor (140) can identify the positions of markers of each of the first calibration image projected by the robot (100) and the first calibration image projected by another robot based on the images acquired using the camera (120).
- the processor (140) can identify areas corresponding to the first calibration images in an image acquired using the camera (120) and identify markers included in the areas.
- the area corresponding to the first calibration image in the image acquired using the camera (120) may be an area including the first calibration image in the image acquired using the camera (120).
- pixels corresponding to the four vertices of the first calibration image projected by each of the robot (100) and the other robot may include QR codes.
- the processor (140) may recognize four QR codes in the image acquired using the camera (120) and identify an area defined by the four QR codes as an area corresponding to the first calibration image.
- the processor (140) can identify an area in the image corresponding to the first calibration image based on the color value (e.g., RGB value, etc.) of the image.
- the color of the first calibration image may be different from the color of the projection surface.
- the processor (140) can identify the color of the projection surface on which the first calibration image is to be projected using the camera (120), and project the first calibration image having a color different from the color of the projection surface onto the projection surface.
- Another robot can also project the first calibration image having a color different from the color of the projection surface onto the projection surface using the same method.
- the processor (140) can capture the first calibration images projected by the robot (100) and the other robot using the camera (120), and identify an area in the image having a color that is distinct from other areas based on the color value of the captured image as an area corresponding to the first calibration image.
- the processor (140) can identify two areas corresponding to the first calibration image in the image captured by the camera (120). In this case, the processor (140) can distinguish between the area corresponding to the first calibration image projected by the robot (100) and the area corresponding to the first calibration image projected by another robot among the two areas.
- the processor (140) can identify the direction of another robot by searching the surroundings using the camera (120) before projecting the first calibration image. Then, the processor (140) can identify, based on the direction of the other robot, an area corresponding to the first calibration image projected by the robot (100) and an area corresponding to the first calibration image projected by the other robot among two areas.
- for example, when the other robot is positioned to the right of the robot (100), the processor (140) can identify that the left area among the two areas is the area corresponding to the first calibration image projected by the robot (100), and that the right area among the two areas is the area corresponding to the first calibration image projected by the other robot.
- conversely, when the other robot is positioned to the left of the robot (100), the processor (140) can identify that the left area among the two areas is the area corresponding to the first calibration image projected by the other robot, and that the right area among the two areas is the area corresponding to the first calibration image projected by the robot (100).
- the QR code may include identification information of the robot.
- the processor (140) may obtain identification information from an image captured by the camera (120) through QR code recognition, and may identify an area corresponding to the first calibration image projected by the robot (100) and an area corresponding to the first calibration image projected by another robot among two areas based on the identification information.
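One possible implementation of this QR-based identification, sketched with OpenCV's multi-QR detector; the payload format ("robot_id=...") and the grouping of corner points per robot are assumptions.

```python
# Sketch: decode all QR codes in a captured frame and group their corner points by
# the robot identifier carried in the payload. The "robot_id=..." payload format is
# an assumption for illustration.
import cv2

def find_calibration_areas(frame):
    detector = cv2.QRCodeDetector()
    ok, texts, points, _ = detector.detectAndDecodeMulti(frame)
    areas = {}
    if not ok:
        return areas
    for text, quad in zip(texts, points):
        if text and text.startswith("robot_id="):
            robot_id = text.split("=", 1)[1]
            areas.setdefault(robot_id, []).append(quad)   # quad: 4 corner points of one QR
    return areas   # e.g., {"robot-100": [...], "robot-100-2": [...]}
```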
- the marker may be a rectangular graphic element.
- the colors of the markers projected by the robot (100) and the other robot may be different.
- the processor (140) may identify the color value of the marker based on the image captured by the camera (120), and may identify, among the two areas, the area corresponding to the first calibration image projected by the robot (100) and the area corresponding to the first calibration image projected by the other robot based on the color value of the marker.
- the processor (140) can identify markers in areas corresponding to the first calibration images and identify the locations of the markers.
- the marker may be a QR code.
- the processor (140) may identify the marker in an area corresponding to the first calibration image through QR code recognition.
- the processor (140) can calculate the position of the marker (630) as (x_a3 - x_a1)/(x_a2 - x_a1).
- the processor (140) can calculate the position of the marker (640) as (x_a6 - x_a4)/(x_a5 - x_a4).
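In code, the normalized marker position (x_a3 - x_a1)/(x_a2 - x_a1) could be computed as below; the argument names are assumptions that mirror the coordinates in FIG. 6.

```python
def normalized_marker_position(x_area_left: float, x_area_right: float, x_marker: float) -> float:
    """Marker position within a calibration-image area, normalized to the range [0, 1].

    x_area_left / x_area_right: x-coordinates of the left/right edges of the area
    (x_a1 / x_a2 for the robot, x_a4 / x_a5 for the other robot);
    x_marker: x-coordinate of the marker (x_a3 or x_a6).
    """
    return (x_marker - x_area_left) / (x_area_right - x_area_left)
```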
- the playback times of the images played back by the robots may differ from each other due to factors such as the specifications of the robots, the system load, or the communication environment. If the playback times of the images projected by the robots differ, it becomes impossible to implement a natural, large-screen image using the images projected by the robots.
- accordingly, the delay time is identified using the first calibration images, and the playback time of the image to be projected by the robot (100) is adjusted based on the identified delay time.
- the processor (140) can identify the playback point of the first calibration image played back by the robot (100) based on the position of the marker of the first calibration image projected by the robot (100). Additionally, the processor (140) can identify the playback point of the first calibration image played back by another robot based on the position of the marker of the first calibration image projected by the other robot.
- the degree to which the marker moves in the first calibration image can correspond to the playback time of the first calibration image.
- information about multiple playback points of the first calibration image corresponding to multiple degrees of movement of the marker can be stored in the memory (150).
- the processor (140) can identify the difference between identified playback points as a delay time.
- for example, assume that the playback point of the first calibration image projected by the robot (100) is n seconds, the playback point of the first calibration image projected by the other robot is m seconds, n < m, and n and m are natural numbers.
- in this case, the processor (140) can identify that the playback of the first calibration image projected by the robot (100) is m-n seconds behind that of the first calibration image projected by the other robot, and that the delay time is m-n seconds.
- conversely, assume that the playback point of the first calibration image projected by the robot (100) is m seconds and the playback point of the first calibration image projected by the other robot is n seconds, with n < m.
- in this case, the processor (140) can identify that the playback of the first calibration image projected by the robot (100) is m-n seconds ahead of that of the first calibration image projected by the other robot, and that the delay time is m-n seconds.
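A sketch of how the delay time could then be obtained, assuming a simple linear mapping from marker position to playback time via a known marker travel time; the description equivalently allows a stored lookup of playback points per movement degree.

```python
# Sketch: derive playback points from normalized marker positions and take their
# difference as the delay time. The linear mapping and MARKER_TRAVEL_TIME_S are
# assumptions; a lookup table of playback points per movement degree works equally.
MARKER_TRAVEL_TIME_S = 10.0    # assumed time for the marker to cross the image once

def playback_point_s(normalized_pos: float) -> float:
    return normalized_pos * MARKER_TRAVEL_TIME_S

def delay_time_s(own_pos: float, other_pos: float) -> float:
    """Positive value: the robot's playback lags the other robot's by that many seconds."""
    return playback_point_s(other_pos) - playback_point_s(own_pos)
```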
- the processor (140) can project an image based on a delay time.
- the video may include various video contents such as television programs, movies, or dramas.
- the video may include various information such as weather, time, etc., or various information and advertising videos related to services provided within the space where the robot (100) is located.
- the processor (140) may receive user input for displaying an image.
- the user input may be input via a mobile device.
- the user may input user input for displaying an image into the mobile device using an application installed on the mobile device.
- the user input for displaying the image may include user input for setting at least one of the position of the projection surface, the size of the image, and the aspect ratio of the image.
- the mobile device (200) can display a UI screen (700) for setting a projection surface using an application.
- the UI screen (700) may include a GUI (graphical user interface) for setting the position of the projection surface, the size of the image, and the aspect ratio of the image.
- a GUI (710) for setting the position of the projection surface may include a map of the space where the robot (100) is located.
- a user may input a user input for selecting one wall on the map to set the position of the projection surface on which the image will be projected.
- the GUI (720) for setting the size of the image may include multiple selection items.
- the user may input a user input for selecting one of the multiple selection items to set the size of the image (e.g., 65 inches in FIG. 7).
- GUI (730) for setting the aspect ratio of the image may include multiple selection items.
- the user may input user input to select one of the multiple selection items to set the aspect ratio of the image (e.g., 32:9 in FIG. 7).
- the server can transmit control commands to the robot (100) and other robots to display images based on user input entered into the mobile device. Additionally, when the mobile device communicates directly with the robot (100) and other robots, the mobile device can transmit control commands to the robot (100) and other robots to display images based on user input.
- the processor (140) can control the driving unit (130) to move the robot (100) to a preset location within the map based on the control command.
- Other robots can also move to a preset location within the map based on the control command.
- the preset position may be a position corresponding to a projection surface on which an image is to be projected.
- the position corresponding to the projection surface on which an image is to be projected may be determined based on at least one of a position of the projection surface set according to user input, a size of the image, and an aspect ratio of the image.
- the memory (150) may store information about a location within a map corresponding to the location of the projection surface, the size of the image, and the aspect ratio of the image.
- the location may be expressed by a coordinate value.
- when the robot (100) projects an image onto a wall based on an aspect ratio set according to user input, the size of the image projected on the wall may vary depending on the distance between the wall and the robot (100).
- for example, the shorter the distance between the robot (100) and the wall, the smaller the image projected on the wall by the robot (100), and the longer the distance between the robot (100) and the wall, the larger the image projected on the wall by the robot (100).
- a suitable position for the robot (100) to project an image onto the projection surface can be preset based on the position of the projection surface, the size of the image, and the aspect ratio of the image. Information about the position within this map can be stored in the memory (150).
- the processor (140) can control the driving unit (130) to move the robot (100) to a position corresponding to the projection surface based on the position information stored in the memory (150). Other robots can also move to a position corresponding to the projection surface within the map using the same method. When the robot (100) and the other robot move to the corresponding positions, the robot (100) and the other robot can be in adjacent positions to each other.
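For illustration, such preset positions could be stored as a table keyed by projection surface, image size, and aspect ratio; the keys, coordinates, and role names below are assumptions.

```python
# Sketch: preset positions within the map, keyed by (projection surface, image size,
# aspect ratio) and by robot role. All keys and coordinate values are assumptions.
PRESET_POSITIONS = {
    ("living_room_wall", 65, "32:9"): {"robot": (1.2, 3.5), "other_robot": (2.4, 3.5)},
    ("bedroom_wall", 55, "16:9"):     {"robot": (0.8, 2.0), "other_robot": (1.6, 2.0)},
}

def position_for(surface: str, size_inch: int, aspect: str, role: str):
    """Return the (x, y) map coordinate from which the given robot should project."""
    return PRESET_POSITIONS[(surface, size_inch, aspect)][role]
```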
- the processor (140) can control the projector (110) to project an image onto a projection surface based on a delay time while the robot (100) is at a position corresponding to the projection surface on which the image is to be projected.
- the processor (140) can adjust the playback time of an image to be projected by the robot (100) based on the delay time, and project the image with the adjusted playback time onto a projection surface using the projector (110).
- for example, when the playback of the image projected by the robot (100) is m-n seconds behind, the processor (140) can adjust the playback time of the image to be projected by the robot (100) to be m-n seconds faster; if the playback time of the image to be projected by the robot (100) is t seconds, the processor (140) can adjust the playback time to t+(m-n) seconds.
- conversely, when the playback of the image projected by the robot (100) is m-n seconds ahead, the processor (140) may adjust the playback time of the image to be projected by the robot (100) to be m-n seconds slower; if the playback time of the image to be projected by the robot (100) is t seconds, the processor (140) may adjust the playback time to t-(m-n) seconds.
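Both cases can be expressed with a single signed delay, as in the sketch below; the sign convention (positive when the robot lags) is an assumption consistent with the delay computation above.

```python
def adjusted_playback_time_s(t_s: float, delay_s: float) -> float:
    """Shift the robot's playback time by the identified delay.

    delay_s > 0 (robot lags):  play from t + (m - n) seconds, i.e. m - n seconds faster.
    delay_s < 0 (robot leads): play from t - (m - n) seconds, i.e. m - n seconds slower.
    """
    return t_s + delay_s
```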
- the images projected by the robot (100) and the other robots may be a part of the entire image.
- the aspect ratios of the images projected by the robot (100) and the other robots may be determined based on the aspect ratios of the images set according to the user input. For example, it is assumed that the aspect ratio of the images set according to the user input is a:b.
- the processor (140) may control the projector (110) to project the images at an aspect ratio of a/2:b.
- accordingly, the playback times of the images (810, 820) projected by the robot (100) and the other robot (100-2) coincide, and a natural single large-screen image can be provided to the user.
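A minimal sketch of splitting each frame of the full image (aspect ratio a:b) into two halves of aspect ratio a/2:b, one per robot; the H x W x 3 array layout is an assumption.

```python
# Sketch: split a full frame into a left half for one robot and a right half for the
# other, each with aspect ratio a/2:b. An H x W x 3 numpy frame layout is assumed.
import numpy as np

def split_frame(frame: np.ndarray):
    """Return (left_half, right_half) of the frame along its width."""
    w = frame.shape[1]
    return frame[:, : w // 2], frame[:, w // 2 :]
```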
- the robot (100) projects the first calibration image using a projector (110), but it is not limited thereto.
- the robot (100) may not project the first calibration image.
- the robot (100) may capture the first calibration image projected by another robot using the camera (120), identify the position of the marker of the first calibration image, and identify the playback time of the first calibration image based on the position of the marker.
- the robot (100) may reproduce the first calibration image according to a user input for time synchronization, and identify the playback time of the first calibration image reproduced by the robot (100) at the time of capturing the first calibration image projected by the other robot using the camera (120).
- the robot (100) may identify the delay time based on the identified playback time points.
- in the above, the position where the robot (100) projects the first calibration image and the position where the robot (100) projects the image are described as being different, but the disclosure is not limited thereto.
- the robot (100) and another robot may move to a position corresponding to the projection surface and project the first calibration image, respectively.
- the robot (100) may identify the playback point in time using the first calibration images projected by the robot (100) and another robot, and may project the image based on the playback point in time.
- the robot (100) and other robots can each move to a position corresponding to the projection surface on which the image is to be projected in order to project the image.
- the robots can search the surroundings to identify their own positions and move based on the identified positions.
- if the robot (100) and/or the other robot cannot move to the exact positions and the robots project images onto the projection surface from incorrect positions, the projected images may not be aligned with each other.
- the sizes of the projected images may be different, or the projected images may overlap or be spaced apart. In this case, it becomes impossible to implement a natural, single, large-screen image using the images projected by the robots.
- the robot (100) can perform fine tuning while the robot (100) is at a position corresponding to the projection surface on which the image is to be projected, and project the image onto the projection surface based on the delay time.
- Fine tuning may include controlling the movement of the robot (100) so that images projected by the robot (100) and another robot can be aligned with each other on the projection surface.
- the operation of controlling the movement of the robot (100) may include moving the robot (100) or moving the body of the robot (100).
- when the body of the robot (100) rotates, the projector (110) and camera (120) provided on the body may also rotate together.
- FIG. 9 is a flowchart illustrating an operation of a robot performing fine adjustment according to an embodiment of the present disclosure.
- the processor (140) can project a second calibration image using a projector (110). Another robot can also project a second calibration image.
- the processor (140) can capture the second calibration image projected by the robot (100) and the second calibration image projected by the other robot using a camera (120) to obtain an image.
- the second calibration image may include markers.
- the markers do not move in the second calibration image and may be positioned at fixed positions.
- the sizes of the markers in the second calibration images projected by the robot (100) and other robots may be the same.
- the position of the marker can be determined based on the position of the robot (100).
- the robot (100) may be positioned on the left side of another robot.
- in the second calibration image projected by the robot (100), the marker may be positioned in the central area of the right side of the second calibration image. That is, the right side of the marker may be positioned on the right side of the second calibration image, and the center of the right side of the marker may be positioned at the center of the right side of the second calibration image.
- in the second calibration image projected by the other robot, the marker may be located in the central area of the left part of the second calibration image.
- the left side of the marker may be located on the left side of the second calibration image, and the center of the left side of the marker may be located at the center of the left side of the second calibration image.
- the robot (100) may be positioned on the right side of another robot.
- in the second calibration image projected by the robot (100), the marker may be positioned in the central area of the left part of the second calibration image. That is, the left side of the marker may be positioned on the left side of the second calibration image, and the center of the left side of the marker may be positioned at the center of the left side of the second calibration image.
- in the second calibration image projected by the other robot, the marker may be located in the central area of the right part of the second calibration image.
- the right side of the marker may be located on the right side of the second calibration image, and the center of the right side of the marker may be located at the center of the right side of the second calibration image.
- a marker may include a visual graphic element that can be recognized by the robot (100) and other robots.
- the marker may be a graphic element in the shape of a square having a different color from the background area.
- the background area may include the remaining area in the second calibration image excluding the marker.
- the present invention is not limited thereto, and the shape of the marker may vary.
- the processor (140) can control the movement of the robot using the driving unit (130) based on the image acquired using the camera (120) so that the second calibration image projected by the robot (100) and the second calibration image projected by another robot are aligned with each other.
- the alignment of the second calibration images projected by the robot (100) and another robot may include the sizes of the second calibration images on the projection surface being the same, the second calibration images not overlapping or spaced apart from each other, and being arranged side by side.
- the processor (140) can control the movement of the robot (100) using image-based visual servoing.
- Image-based visual servoing may include a control method that controls the movement of the robot so that the error between a specific position in the image and a target position is reduced, so that the specific position is located at the target position. In this regard, this will be described in more detail in operations S930 and S940.
- the processor (140) can identify a target area corresponding to a marker of a second calibration image projected by the robot (100) based on an image acquired using the camera (120).
- the processor (140) can identify a marker of a second calibration image projected by the robot (100) in an image acquired using the camera (120).
- One side of the marker may be a side located on one side of the second calibration image among the plurality of sides of the marker.
- the processor (140) may identify the x, y coordinate values of the pixels corresponding to the plurality of vertices of the target area based on the x, y coordinate values of the pixels corresponding to the plurality of vertices of the marker of the second calibration image projected by the robot (100), thereby identifying the target area.
- the x, y coordinate values of the pixels corresponding to the four vertices of the target area (1140) are (x_b5, y_b1), (x_b6, y_b1), (x_b7, y_b3), (x_b8, y_b3), where x_b6 = x_b2 + (x_b2 - x_b1) and x_b8 = x_b4 + (x_b4 - x_b3).
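A sketch of this target-area computation; only x_b6 and x_b8 are given explicitly, so placing the target area's left edge at the marker's right edge (x_b5 = x_b2, x_b7 = x_b4) is an assumption for illustration.

```python
# Sketch: derive the target area (1140) from the four corners of the robot's own
# marker. x_b6 and x_b8 follow the description; x_b5 = x_b2 and x_b7 = x_b4 (target
# area abutting the marker's right edge) are assumptions.
def target_area(marker_corners):
    """marker_corners: ((x_b1, y_b1), (x_b2, y_b2), (x_b3, y_b3), (x_b4, y_b4)),
    i.e. top-left, top-right, bottom-left, bottom-right pixel coordinates."""
    (x_b1, y_b1), (x_b2, _), (x_b3, y_b3), (x_b4, _) = marker_corners
    x_b6 = x_b2 + (x_b2 - x_b1)
    x_b8 = x_b4 + (x_b4 - x_b3)
    x_b5, x_b7 = x_b2, x_b4
    return [(x_b5, y_b1), (x_b6, y_b1), (x_b7, y_b3), (x_b8, y_b3)]
```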
- the processor (140) can control the movement of the robot (100) using the driving unit (130) so that the marker of the second calibration image projected by another robot is located in the target area in the image acquired using the camera (120).
- when the robot (100) moves toward the projection surface (e.g., when the robot (100) moves forward), the size of the marker of the second calibration image projected by the other robot in the image captured by the camera (120) may increase, and when the robot (100) moves to the opposite side of the projection surface (e.g., when the robot (100) moves backward), the size of the marker of the second calibration image projected by the other robot in the image captured by the camera (120) may decrease.
- when the robot (100) moves to the left, the marker of the second calibration image projected by the other robot in the image captured by the camera (120) may move to the right, and when the robot (100) moves to the right, the marker of the second calibration image projected by the other robot in the image captured by the camera (120) may move to the left.
- when the body of the robot (100) rotates in an upward or downward direction, the projector (110) and camera (120) equipped on the body may also rotate together.
- for example, when the body of the robot (100) rotates in an upward direction, the marker of the second calibration image projected by the other robot in the image captured by the camera (120) may move downward, and when the body of the robot (100) rotates in a downward direction, the marker may move upward.
- the processor (140) can control the movement of the robot (100) using the driving unit (130) so that the marker of the second calibration image projected by another robot is positioned in the target area in the image acquired using the camera (120).
- the processor (140) can control the driving unit (130) to move the robot (100) forward or backward so that the size of the marker of the second calibration image projected by another robot in the image captured by the camera (120) becomes the same as the size of the target area.
- the processor (140) can control the driving unit (130) to move the robot (100) to the left or right, or to rotate the body of the robot (100) upward or downward, so that the marker of the second calibration image projected by another robot in the image captured by the camera (120) neither overlaps with nor is spaced apart from the marker of the second calibration image projected by the robot (100) and is positioned in the target area.
- the processor (140) can repeatedly control the movement of the robot (100) using the driving unit (130) so that the error between the position of the marker of the second calibration image projected by another robot in the image captured by the camera (120) and the position of the target area is reduced. Accordingly, the marker of the second calibration image projected by another robot in the image captured by the camera (120) can be positioned in the target area.
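The preceding paragraphs relate each motion of the robot (100) to a change the camera (120) observes: moving toward or away from the projection surface scales the other robot's marker, moving left or right shifts it horizontally, and rotating the body up or down shifts it vertically. A minimal, hypothetical control loop that exploits these relationships might look as follows; detect_other_marker, the drive interface, and the sign conventions are assumptions rather than the disclosed implementation.

```python
def align_to_target_area(detect_other_marker, target_area, drive,
                         tolerance=2.0, gain=0.01, max_iterations=1000):
    """Iteratively move the robot until the other robot's marker lands in the target area.

    detect_other_marker() -> (center_x, center_y, width) of the other robot's marker, in pixels
    target_area           -> (center_x, center_y, width) of the target area, in pixels
    drive                  -> assumed interface: forward(v) moves toward the projection surface,
                              sideways(v) moves right for v > 0, tilt(v) rotates the body upward
                              for v > 0; image y coordinates grow downward, as usual for pixels.
    """
    tx, ty, tw = target_area
    for _ in range(max_iterations):
        mx, my, mw = detect_other_marker()
        size_err = tw - mw   # marker too small      -> positive -> move toward the surface
        x_err = tx - mx      # marker left of target -> positive -> move left so it drifts right
        y_err = ty - my      # marker above target   -> positive -> rotate upward so it drifts down
        if max(abs(size_err), abs(x_err), abs(y_err)) < tolerance:
            break            # marker now sits in the target area
        drive.forward(gain * size_err)
        drive.sideways(-gain * x_err)
        drive.tilt(gain * y_err)
```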
- the robot (100) can control its movement so that a marker (1160) of an area (1150) corresponding to the second calibration image projected by another robot in an image (1100) acquired using the camera (120) is positioned in the target area (1140).
- the upper left vertex, the upper right vertex, the lower left vertex, and the lower right vertex of the marker (1160) can be positioned at the upper left vertex, the upper right vertex, the lower left vertex, and the lower right vertex of the target area (1140), respectively.
- the second calibration images (1210, 1220) projected by the robot (100) and another robot (100-2) have the same size on the projection surface and can be arranged side by side without overlapping or being spaced apart from each other.
- the processor (140) can control the projector (110) to project an image based on a delay time after the movement of the robot (100) is controlled.
- the method of projecting an image based on the delay time is the same as that described above for the first calibration image.
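The delay-time handling itself is specified earlier in this document for the first calibration image. Purely as an illustration of one way such an offset could be applied, the sketch below delays the start of playback by the identified delay time; project_with_delay and the projector/content interfaces are assumed names, and seeking the content forward by the delay would be an equally plausible alternative.

```python
import time

def project_with_delay(projector, content, delay_time_s):
    """Start projecting `content` with its playback point offset by `delay_time_s` seconds.

    Assumed strategy: hold off projection for the identified delay so that the robot's
    playback lines up in time with the image already projected by the other robot.
    """
    time.sleep(delay_time_s)     # wait out the identified delay
    projector.project(content)   # then begin playback from the start of the content
```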
- the images (1310, 1320) projected onto the projection surface by the robot (100) and another robot (100-2) have the same size and can be arranged side by side.
- the playback time points of the images (1310, 1320) can be aligned with each other. Accordingly, a natural single large-screen image can be provided to the user.
- embodiments of the present disclosure may be implemented in a computer or similar device-readable recording medium using software, hardware, or a combination thereof.
- the embodiments described herein may be implemented by the processor itself.
- embodiments such as the procedures and functions described herein may be implemented by separate software modules. Each of the software modules may perform one or more functions and operations described herein.
- computer instructions for performing processing operations of an electronic device may be stored in a non-transitory computer-readable medium.
- when the computer instructions stored in the non-transitory computer-readable medium are executed by a processor of a specific device, they cause the specific device to perform the processing operations of the robot (100) according to the various embodiments described above.
- a non-transitory computer-readable medium is not a medium that stores data for a short period of time, such as a register, cache, or memory, but a medium that permanently stores data and can be read by a device.
- Specific examples of non-transitory computer-readable media include CDs, DVDs, hard disks, Blu-ray disks, USBs, memory cards, and ROMs.
Landscapes
- Engineering & Computer Science (AREA)
- Robotics (AREA)
- Mechanical Engineering (AREA)
- Multimedia (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Human Computer Interaction (AREA)
- Signal Processing (AREA)
- Manipulator (AREA)
- Control Of Position, Course, Altitude, Or Attitude Of Moving Bodies (AREA)
Abstract
A robot is disclosed. The robot comprises a projector, a camera, a driving unit, and one or more processors configured to: project, using the projector, a first calibration image in which a marker is moved; obtain an image by capturing, using the camera, the first calibration image projected by the robot and a first calibration image projected by another robot; identify, based on the obtained image, the positions of the respective markers of the first calibration image projected by the robot and of the first calibration image projected by the other robot; identify a delay time for adjusting a playback time point of an image to be projected by the robot on the basis of the identified positions; and, while the robot is at a position corresponding to a projection surface onto which an image is to be projected, control the projector to project the image onto the projection surface based on the delay time.
Applications Claiming Priority (2)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| KR10-2023-0187233 | 2023-12-20 | ||
| KR1020230187233A KR20250096205A (ko) | 2023-12-20 | 2023-12-20 | Robot and image projection method thereof |
Publications (1)
| Publication Number | Publication Date |
|---|---|
| WO2025135656A1 true WO2025135656A1 (fr) | 2025-06-26 |
Family
ID=96138415
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| PCT/KR2024/020103 Pending WO2025135656A1 (fr) | 2023-12-20 | 2024-12-09 | Robot et son procédé de projection d'image |
Country Status (2)
| Country | Link |
|---|---|
| KR (1) | KR20250096205A (fr) |
| WO (1) | WO2025135656A1 (fr) |
Citations (5)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| JP2004523027A (ja) * | 2000-12-05 | 2004-07-29 | イエダ リサーチ アンド ディベロプメント カンパニー リミテッド | Apparatus and method for alignment of spatially or temporally non-overlapping image sequences |
| JP2016220137A (ja) * | 2015-05-25 | 2016-12-22 | みこらった株式会社 | Mobile projection system and mobile projection method |
| KR20180131904A (ko) * | 2017-06-01 | 2018-12-11 | 한국전자통신연구원 | Method and apparatus for camera alignment of a surgical robot |
| JP2019161397A (ja) * | 2018-03-12 | 2019-09-19 | キヤノン株式会社 | Control device, program, and control method |
| KR20220093184A (ko) * | 2019-11-05 | 2022-07-05 | 유니버셜 시티 스튜디오스 엘엘씨 | Head-mounted device for displaying projected images |
- 2023
  - 2023-12-20 KR KR1020230187233A patent/KR20250096205A/ko active Pending
- 2024
  - 2024-12-09 WO PCT/KR2024/020103 patent/WO2025135656A1/fr active Pending
Also Published As
| Publication number | Publication date |
|---|---|
| KR20250096205A (ko) | 2025-06-27 |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| | 121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 24907940; Country of ref document: EP; Kind code of ref document: A1 |