AU2017204848A1 - Projecting rectified images on a surface using uncalibrated devices
- Publication number: AU2017204848A1
- Authority: AU (Australia)
- Legal status: Abandoned
Abstract
PROJECTING RECTIFIED IMAGES ON A SURFACE USING UNCALIBRATED DEVICES

A method of projecting a rectified image on a surface using a projector and a camera. A camera-projector homography is determined by capturing with the camera a calibration pattern projected by the projector on to the surface. A plurality of normal vectors is determined for the surface, each of the normal vectors being determined using an estimate of projector parameters for the projector that decompose the determined camera-projector homography using camera parameters. One of a plurality of clusters of the normal vectors is selected according to a cost value of the cluster. An image is projected using the projector, wherein the projected image is rectified according to a normal vector from the selected cluster to the surface.

[Fig. 5: software architecture of the projection controller, showing the camera, pattern decoder, camera-projector correspondences, device calibrator, calibration data, image rectifier, content source and projector.]
Description
PROJECTING RECTIFIED IMAGES ON A SURFACE USING UNCALIBRATED DEVICES
TECHNICAL FIELD
The present invention relates generally to the field of camera-projector systems and, in particular, to automatically calibrating a projector and camera. The present invention also relates to a method, apparatus and system for projecting a rectified image on a surface. The present invention also relates to a computer program product including a computer readable medium having recorded thereon a computer program for projecting a rectified image on to a surface in a camera-projector system.
BACKGROUND
Projectors are widely used to produce images on many surface types. A problem of projecting an image on to a flat surface is producing an image that is upright, rectangular, and has the correct aspect ratio. This problem is known as rectification.
Commonly, the user is required to adjust parameters of the projector so that a rectified image is produced on the surface. For example, the user may be required to adjust “keystone” properties of the projected image, or specify locations of the corners of the image on the projection surface. However, the process of adjusting parameters can be error-prone and tedious for a user. It is therefore desirable that rectification is performed automatically.
Many automatic rectification methods are known in the art. For example, orientation sensors can be mounted on a projector, in order to detect inclination of the projector and distance from the projector to the surface. Such sensors, however, increase the cost of manufacturing the projector. Other methods detect corners of the projection surface using a camera, and warp the projected image so that the image is aligned to the corners. However, these automatic rectification methods require all four corners of the projection surface to be within the view of the camera, which may be restrictive in some situations.
Typically, methods that use sensors or cameras require accurate calibration of the sensors/cameras, and the projector. The calibration of a device (typically a camera or projector) establishes the relationship between the device and the real world. Accurate calibration is necessary for a rectified image to be produced on the projection surface. If calibration is inaccurate, the image may appear rotated or distorted.
Device calibration is commonly performed in a factory. However, a calibrated device may subsequently become un-calibrated when the device is used, for example, when a user changes the zoom or lens shift of a projector. It is therefore necessary to calibrate each device on-site. The calibration of a camera or projector often entails the user holding a known calibration object (e.g. a chart) at several positions, which is error-prone, tiresome and time consuming for the user.
A device calibration process typically results in several intrinsic parameters of the device.
Intrinsic parameters typically include focal length, principal point, and lens distortion parameters. Some known automatic projector calibration methods minimise a cost function based on projecting circular points from three or more projectors. However, such automatic projector calibration methods typically make assumptions about the location of the principal point, which do not hold when the projector has lens shift capabilities. Other methods rely on an initial projector calibration having been performed, which is not guaranteed in many situations.
Yet other methods require multiple planar projection surfaces to be arranged in a known configuration.
There is a need for automatic projector calibration methods for flat surface rectification that do not make restrictive assumptions about calibration parameters or surface configuration.
SUMMARY
It is an object of the present invention to substantially overcome, or at least ameliorate, one or more disadvantages of existing arrangements.
According to one aspect of the present disclosure, there is provided a method of projecting a rectified image on a surface using a projector and a camera, the method comprising:
determining a camera-projector homography by capturing with the camera a calibration pattern projected by the projector on to the surface;
determining a plurality of normal vectors for the surface, each of the normal vectors being determined using an estimate of projector parameters for the projector that decompose the determined camera-projector homography using camera parameters;
selecting one of a plurality of clusters of the normal vectors according to a cost value of the cluster;
projecting an image using the projector, wherein the projected image is rectified according to a normal vector from the selected cluster to the surface.
According to another aspect of the present disclosure, there is provided a system for projecting a rectified image on a surface using a projector and a camera, the system comprising:
a memory for storing data and a computer program;
a processor coupled to the memory for executing the program, the program comprising instructions for:
determining a camera-projector homography by capturing with the camera a calibration pattern projected by the projector on to the surface;
determining a plurality of normal vectors for the surface, each of the normal vectors being determined using an estimate of projector parameters for the projector that decompose the determined camera-projector homography using camera parameters;
selecting one of a plurality of clusters of the normal vectors according to a cost value of the cluster;
projecting an image using the projector, wherein the projected image is rectified according to a vector from the selected cluster to the surface.
According to still another aspect of the present disclosure, there is provided a non-transitory computer readable medium having a program stored on the medium for projecting a rectified image on a surface using a projector and a camera, the program comprising:
code for determining a camera-projector homography by capturing with the camera a calibration pattern projected by the projector on to the surface;
code for determining a plurality of normal vectors for the surface, each of the normal vectors being determined using an estimate of projector parameters for the projector that decompose the determined camera-projector homography using camera parameters;
code for selecting one of a plurality of clusters of the normal vectors according to a cost value of the cluster;
code for projecting an image using the projector, wherein the projected image is rectified according to a vector from the selected cluster to the surface.
Other aspects are also disclosed.
BRIEF DESCRIPTION OF THE DRAWINGS
One or more embodiments of the invention will now be described with reference to the following drawings, in which:
Fig. 1 shows an example of a projector-camera system;
Fig. 2A shows an example of a content image to be displayed using the projector-camera system of Fig. 1;
Fig. 2B shows a large trapezoidal shape projected on a projection surface of the system of Fig. 1;
Fig. 2C shows a subset of pixels;
Fig. 3A shows an example of a calibration pattern consisting of a pseudo-random dot pattern;
Fig. 3B shows an example of a captured image of a projected calibration pattern;
Fig. 4A shows an example of a portion of a calibration pattern image;
Fig. 4B shows an example of decoding a pseudo-random dot pattern that is formed by tiling two or more smaller tiles throughout the calibration pattern;
Fig. 5 shows a software architecture for a projection controller;
Fig. 6 is a diagram showing the relationships between a projector plane, a camera plane, and a planar projection surface;
Fig. 7 is a schematic flow diagram showing a method of calibrating the projector and camera of the projector-camera system of Fig. 1;
Fig. 8 is a schematic flow diagram illustrating a method of rectifying a projected image based on a calibrated projector and camera;
Fig. 9 shows an example of clustering calibration solutions of a projector and camera;
Figs. 10A and 10B form a schematic block diagram of a general purpose computer system upon which arrangements described can be practiced; and
Fig. 11A represents a normal vector map in 2D where the x-axis represents azimuth angle and the y-axis represents inclination angle;
Fig. 11B represents a heat map of the normal vector map of Fig. 11A where the vertical bar on the right represents density of the normal vector map;
Fig. 12A shows an example camera location graph;
Fig. 12B shows a projector location graph representing corresponding projector image locations for camera image locations shown in the camera location graph of Fig. 12A;
Fig. 12C shows the camera-projector correspondences of Fig. 12B stored as a warp-map table; and
Fig. 13 shows an example of pre-defined ranges for projector intrinsic parameters for a projector.
DETAILED DESCRIPTION INCLUDING BEST MODE
Context
Fig. 1 shows an example of a projector-camera system 100. The projector-camera system 100 includes a projector 111, and a camera 121. A projection controller 130 sends images to the
projector 111 via display cable 141. The projection controller 130 receives images from the camera 121 via cable 151. The projector 111 projects images sent by the projection controller 130 onto the projection surface 160. In the example of Fig. 1, the projection surface 160 is planar (i.e. flat).
The camera 121 captures an image containing an entire projection of the projector 111 on the projection surface 160. The projection controller 130 therefore receives images from the camera 121 that contain images that are projected with projector 111.
The relative positions of the projector 111 and camera 121 are not fixed. The projector 111 and camera 121 may be placed at arbitrary locations. Alternatively, the camera 121 may be fixed to the projector 111 or integrated within a projector 111 to form a single device that can project and capture images.
The projector-camera system 100 is configured to display a rectified content image on the projection surface 160. Prior to displaying a rectified content image, the projector 111 and camera 121 are automatically calibrated. One or more calibration patterns are projected by the projector 111 onto the planar projection surface 160. Images of the projected calibration patterns are captured by the camera 121. The projection controller 130 uses the captured images to determine intrinsic parameters of the projector 111 and camera 121 (i.e. calibrate the projector 111 and camera 121). The projection controller 130 then sends a content image to the projector 111 via display cable 141, such that the image on the projection surface 160 is rectified.
A method 700 of automatically calibrating the projector 111 and camera 121, to display a rectified image, as performed by the projection controller 130 is described below with reference to Fig. 7. Other arrangements include the calibration of two or more projectors.
Figs. 10A and 10B depict a general-purpose computer system 1000, upon which the various arrangements described can be practiced. As described below, the projection controller 130 is a software application program executed on the computer system 1000. In an alternative arrangement, the projection controller 130 is embedded within a projector, or is executed by a projection controller device.
As seen in Fig. 10A, the computer system 1000 includes: a computer module 1001; input devices such as a keyboard 1002, a mouse pointer device 1003, a scanner 1026, a camera 1027,
and a microphone 1080; and output devices including a printer 1015, a display device 1014 and loudspeakers 1017. An external Modulator-Demodulator (Modem) transceiver device 1016 may be used by the computer module 1001 for communicating to and from a communications network 1020 via a connection 1021. The communications network 1020 may be a wide-area network (WAN), such as the Internet, a cellular telecommunications network, or a private WAN. Where the connection 1021 is a telephone line, the modem 1016 may be a traditional “dial-up” modem. Alternatively, where the connection 1021 is a high capacity (e.g., cable) connection, the modem 1016 may be a broadband modem. A wireless modem may also be used for wireless connection to the communications network 1020.
The computer module 1001 typically includes at least one processor unit 1005, and a memory unit 1006. For example, the memory unit 1006 may have semiconductor random access memory (RAM) and semiconductor read only memory (ROM). The computer module 1001 also includes a number of input/output (I/O) interfaces including: an audio-video interface 1007 that couples to the video display 1014, loudspeakers 1017 and microphone 1080;
an I/O interface 1013 that couples to the keyboard 1002, mouse 1003, scanner 1026, camera 1027 and optionally a joystick or other human interface device (not illustrated); and an interface 1008 for the external modem 1016 and printer 1015. In some implementations, the modem 1016 may be incorporated within the computer module 1001, for example within the interface 1008. The computer module 1001 also has a local network interface 1011, which permits coupling of the computer system 1000 via a connection 1023 to a local-area communications network 1022, known as a Local Area Network (LAN). As illustrated in
Fig. 10A, the local communications network 1022 may also couple to the wide network 1020 via a connection 1024, which would typically include a so-called “firewall” device or device of similar functionality. The local network interface 1011 may comprise an Ethernet circuit card, a Bluetooth® wireless arrangement or an IEEE 802.11 wireless arrangement; however, numerous other types of interfaces may be practiced for the interface 1011.
The I/O interfaces 1008 and 1013 may afford either or both of serial and parallel connectivity, the former typically being implemented according to the Universal Serial Bus (USB) standards and having corresponding USB connectors (not illustrated). Storage devices 1009 are provided and typically include a hard disk drive (HDD) 1010. Other storage devices such as a floppy disk drive and a magnetic tape drive (not illustrated) may also be used. An optical disk drive 1012 is typically provided to act as a non-volatile source of data. Portable memory devices, such as optical disks (e.g., CD-ROM, DVD, Blu-ray Disc™), USB-RAM, portable,
external hard drives, and floppy disks, for example, may be used as appropriate sources of data to the system 1000.
The components 1005 to 1013 of the computer module 1001 typically communicate via an interconnected bus 1004 and in a manner that results in a conventional mode of operation of the computer system 1000 known to those in the relevant art. For example, the processor 1005 is coupled to the system bus 1004 using a connection 1018. Likewise, the memory 1006 and optical disk drive 1012 are coupled to the system bus 1004 by connections 1019. Examples of computers on which the described arrangements can be practised include IBM-PC’s and compatibles, Sun Sparcstations, Apple Mac™ or like computer systems.
The method 700 and other methods described below may be implemented using the computer system 1000 wherein the processes of Figs. 1 to 9, to be described, may be implemented as the controller software application program 1033 executable within the computer system 1000. In particular, the steps of the described methods are effected by instructions 1031 (see Fig. 10B) in the software 130 that are carried out within the computer system 1000. The software instructions 1031 may be formed as one or more code modules, each for performing one or more particular tasks. The software may also be divided into two separate parts, in which a first part and the corresponding code modules performs the described methods and a second part and the corresponding code modules manage a user interface between the first part and the user.
The software may be stored in a computer readable medium, including the storage devices described below, for example. The software 1033 is typically stored in the HDD 1010 or the memory 1006. The software is loaded into the computer system 1000 from the computer readable medium, and then executed by the computer system 1000. Thus, for example, the software 1033 may be stored on an optically readable disk storage medium (e.g., CDROM) 1025 that is read by the optical disk drive 1012. A computer readable medium having such software or computer program recorded on the computer readable medium is a computer program product. The use of the computer program product in the computer system 1000 preferably effects an advantageous apparatus for implementing the method 700.
In some instances, the application programs 1033 may be supplied to the user encoded on one or more CD-ROMs 1025 and read via the corresponding drive 1012, or alternatively may be read by the user from the networks 1020 or 1022. Still further, the software can also be loaded into the computer system 1000 from other computer readable media. Computer readable storage media refers to any non-transitory tangible storage medium that provides recorded
instructions and/or data to the computer system 1000 for execution and/or processing.
Examples of such storage media include floppy disks, magnetic tape, CD-ROM, DVD, Blu-ray™ Disc, a hard disk drive, a ROM or integrated circuit, USB memory, a magneto-optical disk, or a computer readable card such as a PCMCIA card and the like, whether or not such devices are internal or external of the computer module 1001. Examples of transitory or non-tangible computer readable transmission media that may also participate in the provision of software, application programs, instructions and/or data to the computer module 1001 include radio or infra-red transmission channels as well as a network connection to another computer or networked device, and the Internet or Intranets including e-mail transmissions and information recorded on Websites and the like.
The second part of the program 1033 implementing the projection controller 130 and the corresponding code modules mentioned above may be executed to implement one or more graphical user interfaces (GUIs) to be rendered or otherwise represented upon the display 1014. Through manipulation of typically the keyboard 1002 and the mouse 1003, a user of the computer system 1000 and the application may manipulate the interface in a functionally adaptable manner to provide controlling commands and/or input to the applications associated with the GUI(s). Other forms of functionally adaptable user interfaces may also be implemented, such as an audio interface utilizing speech prompts output via the loudspeakers 1017 and user voice commands input via the microphone 1080.
Fig. 10B is a detailed schematic block diagram of the processor 1005 and a “memory” 1034. The memory 1034 represents a logical aggregation of all the memory modules (including the HDD 1009 and semiconductor memory 1006) that can be accessed by the computer module 1001 in Fig. 10A.
When the computer module 1001 is initially powered up, a power-on self-test (POST) program 1050 executes. The POST program 1050 is typically stored in a ROM 1049 of the semiconductor memory 1006 of Fig. 10A. A hardware device such as the ROM 1049 storing software is sometimes referred to as firmware. The POST program 1050 examines hardware within the computer module 1001 to ensure proper functioning and typically checks the processor 1005, the memory 1034 (1009, 1006), and a basic input-output systems software (BIOS) module 1051, also typically stored in the ROM 1049, for correct operation. Once the
POST program 1050 has run successfully, the BIOS 1051 activates the hard disk drive 1010 of
Fig. 10A. Activation of the hard disk drive 1010 causes a bootstrap loader program 1052 that is
resident on the hard disk drive 1010 to execute via the processor 1005. This loads an operating system 1053 into the RAM memory 1006, upon which the operating system 1053 commences operation. The operating system 1053 is a system level application, executable by the processor 1005, to fulfil various high level functions, including processor management, memory management, device management, storage management, software application interface, and generic user interface.
The operating system 1053 manages the memory 1034 (1009, 1006) to ensure that each process or application running on the computer module 1001 has sufficient memory in which to execute without colliding with memory allocated to another process. Furthermore, the different types of memory available in the system 1000 of Fig. 10A must be used properly so that each process can run effectively. Accordingly, the aggregated memory 1034 is not intended to illustrate how particular segments of memory are allocated (unless otherwise stated), but rather to provide a general view of the memory accessible by the computer system 1000 and how such is used.
As shown in Fig. 10B, the processor 1005 includes a number of functional modules including a control unit 1039, an arithmetic logic unit (ALU) 1040, and a local or internal memory 1048, sometimes called a cache memory. The cache memory 1048 typically includes a number of storage registers 1044 - 1046 in a register section. One or more internal busses 1041 functionally interconnect these functional modules. The processor 1005 typically also has one or more interfaces 1042 for communicating with external devices via the system bus 1004, using a connection 1018. The memory 1034 is coupled to the bus 1004 using a connection 1019.
The program 1033 includes a sequence of instructions 1031 that may include conditional branch and loop instructions. The program 1033 may also include data 1032 which is used in execution of the program 1033. The instructions 1031 and the data 1032 are stored in memory locations 1028, 1029, 1030 and 1035, 1036, 1037, respectively. Depending upon the relative size of the instructions 1031 and the memory locations 1028-1030, a particular instruction may be stored in a single memory location as depicted by the instruction shown in the memory location 1030. Alternately, an instruction may be segmented into a number of parts each of which is stored in a separate memory location, as depicted by the instruction segments shown in the memory locations 1028 and 1029.
In general, the processor 1005 is given a set of instructions which are executed therein. The processor 1005 waits for a subsequent input, to which the processor 1005 reacts by executing
another set of instructions. Each input may be provided from one or more of a number of sources, including data generated by one or more of the input devices 1002, 1003, data received from an external source across one of the networks 1020, 1022, data retrieved from one of the storage devices 1006, 1009 or data retrieved from a storage medium 1025 inserted into the corresponding reader 1012, all depicted in Fig. 10A. The execution of a set of the instructions may in some cases result in output of data. Execution may also involve storing data or variables to the memory 1034.
The disclosed arrangements use input variables 1054, which are stored in the memory 1034 in corresponding memory locations 1055, 1056, 1057. The arrangements produce output variables 1061, which are stored in the memory 1034 in corresponding memory locations 1062, 1063, 1064. Intermediate variables 1058 may be stored in memory locations 1059, 1060, 1066 and 1067.
Referring to the processor 1005 of Fig. 10B, the registers 1044, 1045, 1046, the arithmetic logic unit (ALU) 1040, and the control unit 1039 work together to perform sequences of micro-operations needed to perform “fetch, decode, and execute” cycles for every instruction in the instruction set making up the program 1033. Each fetch, decode, and execute cycle comprises:
a fetch operation, which fetches or reads an instruction 1031 from a memory location 1028, 1029, 1030;
a decode operation in which the control unit 1039 determines which instruction has been fetched; and an execute operation in which the control unit 1039 and/or the ALU 1040 execute the instruction.
Thereafter, a further fetch, decode, and execute cycle for the next instruction may be executed. Similarly, a store cycle may be performed by which the control unit 1039 stores or writes a value to a memory location 1032.
Each step or sub-process in the processes of Figs. 1 to 9 is associated with one or more segments of the program 1033 and is performed by the register section 1044, 1045, 1047, the
ALU 1040, and the control unit 1039 in the processor 1005 working together to perform the
fetch, decode, and execute cycles for every instruction in the instruction set for the noted segments of the program 1033.
The described methods may alternatively be implemented in dedicated hardware such as one or more integrated circuits performing the functions or sub functions of the described methods.
Such dedicated hardware may include graphic processors, digital signal processors, or one or more microprocessors and associated memories.
Fig. 2A shows an example of a content image 200 to be displayed using the projector-camera system 100.
Figs. 2B and 2C show the projection of content image 200 using the projector 111, for example, of the system 100. The projector image displayed by the projector 111 is shown in Fig. 2C as projector image 231. The projector image 231 comprises a rectangular array of projector pixels, for example, one thousand, nine hundred and twenty (1920) horizontal pixels by twelve hundred (1200) vertical pixels. Each projector pixel comprises a number of values representing intensity of particular colour channels. For example, each projector pixel may comprise 1 byte (8 bits) for a red colour channel, 1 byte for a blue colour channel and 1 byte for a green colour channel. If the projector 111 were to project all pixels of the projector image 231, a large trapezoidal shaped projection region 211 would be projected on the projection surface 160 as shown in Fig. 2B. To rectify the projected image, the projection controller 130 first determines a maximal rectangle 270 on the projection surface 160. This maximal rectangle 270 lies within the projection region 211, is upright, is rectangular and has the same aspect ratio as the content image 200. The projection controller 130 then displays the content image 200 using a subset of pixels 241 of the projector image 231 as shown in Fig. 2C. The projection of a subset of pixels 241 results in the display of rectified image 221 on the projection surface 160 as seen in Fig. 2C.
The rectified image 221 is upright, rectangular, and of the same aspect ratio as the content image 200.
As discussed earlier with reference to Fig. 1, prior to producing a rectified image, the projection controller 130, under execution of the processor 1005, projects one or more calibration patterns using the projector 111. The camera 121 captures one or more images of the projected calibration patterns. The projection controller 130 decodes the calibration patterns within the captured images, in order to form correspondences between positions in projector images and positions in captured camera images.
Fig. 3A shows an example calibration pattern 301 as projected by a projector 111. A calibration pattern typically consists of one or more calibration marks, which can be processed to determine a position within the calibration pattern. Example calibration pattern 301 consists of a pseudo-random pattern of dots (each calibration mark is a dot), for example, dot 310. A dot is a small group of pixels of a contrasting intensity in comparison with surrounding pixels. For example, a dot may be square shaped, consisting of 4x4 projector pixels. The dimensions (size) of dots are typically chosen such that dots are detectable in captured camera images of the projected calibration pattern. Other calibration patterns may consist of dots at regular locations on a grid, or other types of calibration marks, such as circles, lines or irregular shapes. The calibration pattern 301 is preferably the same size as the projector image 231 to enable a position in a projector image to be determined. The position in the projector image is determined by decoding the dots in a calibration pattern, for example, captured by a camera 121.
Fig. 3B shows a captured image 330 of a projected calibration pattern 301, projected onto a planar projection surface 160. At a sampling position 351, a portion of the image 341 is extracted. The calibration pattern within the extracted portion 341 is decoded, to determine a position 321 within the original projected calibration pattern 301. Thus, a correspondence between a position 351 in the camera image 330 and a position 321 in the projector image 301 is established.
An example of decoding a pseudo-random dot pattern at a position within a captured calibration pattern image, using direct correlation, will now be described with reference to Fig. 4A. A portion of a captured calibration pattern image is firstly extracted. For example, portion 410 may correspond to portion 341 within the captured calibration pattern image 330.
The extracted portion 410 is then correlated with the projected calibration pattern 301 using any suitable method. For example, the Discrete Fourier Transform (DFT) of both the extracted portion 410 and the calibration pattern 301 may be determined. The spectra produced by the DFT are then multiplied, and the result is transformed back to the spatial domain using the inverse DFT (iDFT). The iDFT produces an image that contains many intensities, where the largest intensity corresponds to the location (offset, shift) of the extracted portion within the calibration pattern that has the highest correlation (i.e. a match).
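As an illustration only, the correlation step described above can be sketched in a few lines of Python; the function name and the use of NumPy are assumptions and are not part of the described arrangement.

```python
import numpy as np

def locate_portion(pattern, portion):
    """Locate `portion` within `pattern` by DFT correlation (illustrative sketch).

    The portion is zero-padded to the pattern size, the two spectra are
    multiplied (one conjugated), and the peak of the inverse transform gives
    the offset of the best-matching location within the pattern.
    """
    padded = np.zeros(pattern.shape, dtype=float)
    padded[:portion.shape[0], :portion.shape[1]] = portion
    spectrum = np.fft.fft2(pattern) * np.conj(np.fft.fft2(padded))
    correlation = np.real(np.fft.ifft2(spectrum))
    y_offset, x_offset = np.unravel_index(np.argmax(correlation), correlation.shape)
    return x_offset, y_offset
```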
An alternative method of forming and decoding a pseudo-random dot calibration pattern will be described in detail below with reference to Fig. 4B. The calibration pattern 301 may be formed by tiling two or more smaller tiles of random dots, throughout the calibration pattern. For
example, calibration pattern 301 is formed by tiling three smaller tiles 431-433. To determine the position of an extracted portion within the calibration pattern, the portion is correlated with each of the tiles, to determine an offset (shift) for each tile. Any known method of correlation may be used to determine the position of an extracted portion within the calibration pattern.
For example, the DFT-based method described above with respect to Fig. 4A may be used to determine the position of an extracted portion within the calibration pattern. The separate tile shifts are then combined, to determine the absolute position of the portion within the calibration pattern. A known method of combining separate tile offsets (shifts) to form an absolute position is the Chinese Remainder Theorem (CRT).
The correlation of the portion 410 with each of the three tiles 431-433 used to form the calibration pattern 301 will now be described with reference to Fig. 4B. The correlation with each tile determines the x- and y-offset of the portion 410 that results in a match. For example, the correlation of the portion 410 with the first tile 431 results in an x-offset 441 and a y-offset 442.
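The combination of per-tile offsets into an absolute position can be sketched as follows; this is a minimal illustration of the Chinese Remainder Theorem step, and the tile widths used in the example are hypothetical.

```python
def crt_combine(offsets, tile_sizes):
    """Combine per-tile offsets into an absolute position using the CRT.

    offsets[i] is the measured shift modulo tile_sizes[i]; the tile sizes are
    assumed pairwise coprime, so the result is unique modulo their product.
    """
    position, modulus = 0, 1
    for offset, size in zip(offsets, tile_sizes):
        # Step by the current modulus until the new congruence is satisfied,
        # which preserves all previously satisfied congruences.
        while position % size != offset:
            position += modulus
        modulus *= size
    return position

# Hypothetical example: x-offsets 3, 4 and 1 against tiles of width 5, 7 and 9
# yield a single absolute x-position within a 5*7*9 = 315 pixel wide pattern.
print(crt_combine([3, 4, 1], [5, 7, 9]))  # 298
```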
The process of decoding a portion of a captured calibration pattern image results in a position within the projected calibration pattern. Thus, a correspondence is formed between a position in the captured camera image and a position in the projected image. For example, a correspondence is formed between position 351 of the captured camera image 330, and position 321 of the projected calibration pattern 301. The projection controller 130, under execution of the processor 1005, forms several such correspondences (often known collectively as a warp map), by decoding the captured calibration pattern image 330 at several positions. The correspondences may be used to calibrate the projector 111 and camera 121, and subsequently project a rectified image.
The described methods enable automatic calibration of the projector 111 and camera 121 in the projector-camera system 100, so that a rectified image can subsequently be projected onto a planar surface. The prior art techniques typically require more than one projector, one or more pre-calibrated devices, restrictions on the type of projection surface being used, or extensive user involvement. It is desirable that these problems be resolved, so that projector image rectification is simple for the user, fast, and applicable to a broad range of situations.
The described methods estimate several sets of intrinsic parameters of the projector 111 and/or camera 121, and identify the solutions that will most likely result in a rectified image. It has been observed that several solutions will typically result in a good rectified image. By
clustering the solutions according to the normal to the projection surface, the described methods identify such groups of similar solutions. Finally, a single solution is obtained using the strongest cluster of similar solutions.
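A minimal sketch of this idea is shown below, assuming the normals are grouped on a 2D azimuth/inclination map (as in Fig. 11A) using a simple histogram; the bin size and the density criterion are assumptions that stand in for the cost value used by the described method.

```python
import numpy as np

def strongest_normal_cluster(normals, bin_deg=5.0):
    """Group unit surface normals by (azimuth, inclination) angle and return
    the members of the densest bin (illustrative stand-in for cluster selection)."""
    normals = np.asarray(normals, dtype=float)
    azimuth = np.degrees(np.arctan2(normals[:, 1], normals[:, 0]))
    inclination = np.degrees(np.arccos(np.clip(normals[:, 2], -1.0, 1.0)))
    keys = np.stack([np.floor(azimuth / bin_deg), np.floor(inclination / bin_deg)], axis=1)
    unique_keys, inverse, counts = np.unique(keys, axis=0, return_inverse=True, return_counts=True)
    return normals[inverse == np.argmax(counts)]
```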
Fig. 5 shows a software architecture for the projection controller 130, according to one example.
As described above, the projection controller 130 may be implemented as a software application program. In the example of Fig. 5, the software application program comprises software code modules 503, 505 and 507. The projection controller 130 receives captured images of projected calibration patterns from the camera 121. The projection controller 130 stores the captured calibration pattern images (camera images) 510 within a camera image database 502 configured within the memory 1006.
A pattern decoder module 503 decodes the calibration patterns in the captured calibration pattern images 510 to determine correspondences between positions in the camera image and positions in the projected image. The decoding method used by the pattern decoder module 503 depends on the specific calibration pattern used. In one arrangement, the calibration pattern consists of a pseudo-random dot pattern, as described previously with reference to Fig. 3 and
Fig. 4. The pattern decoder module 503 selects a number of sampling positions within the captured calibration pattern image 510, determines a portion of the calibration pattern surrounding each sampling position, and decodes the portion, as described previously with reference to Fig. 4. The process performed by the pattern decoder module 503 forms correspondences between points in the camera image 510 and points in the projector image 301. The pattern decoder module 503 stores the resulting camera-projector correspondences 504 within the memory 1006. Referring to Fig. 6, the pixel location 633 of projector 111 illuminates the projection surface 160 at the location 631. An image of the illuminated projection surface is captured by the camera 121 such that the location 631 on the projection surface 160 corresponds to the camera pixel location 632. The correspondence is graphically illustrated by the projector pixel ray 642 (i.e., corresponding to projector pixel location 633) and the camera pixel ray 641 (i.e., corresponding to camera pixel location 632) meeting at the projection surface 160 at a location 631. In this manner, a table of correspondences mapping projector pixel locations to camera pixel locations is formed by the pattern decoder module 503.
In another example, the calibration pattern may consist of a regular grid of dots. Once each dot is identified, the corresponding position in the projected image will be known. Other calibration patterns require different decoding methods. For example, a Gray code calibration pattern
requires a sequence of frames to be projected and captured. Each frame in the sequence encodes a specific bit within each position of the projected image. The bits are merged over the sequence of frames, resulting in absolute positions in the projected image. Many other calibration patterns and decoding methods are possible. The described methods may be used with all calibration patterns which enable the formation of correspondences between a projector image and a camera image.
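As an aside, the bit-merging for a Gray code pattern can be sketched as below; this assumes one thresholded camera frame per bit plane (most significant bit first) and is only an illustration, since the arrangement described here uses a pseudo-random dot pattern.

```python
import numpy as np

def decode_gray_sequence(bit_frames):
    """Merge thresholded frames (one per Gray-code bit, MSB first) into
    absolute projector coordinates, one value per camera pixel."""
    gray = np.zeros(bit_frames[0].shape, dtype=np.uint32)
    for frame in bit_frames:
        gray = (gray << 1) | (frame.astype(np.uint32) & 1)
    # Convert Gray code to binary: repeatedly XOR with a right-shifted copy.
    binary = gray.copy()
    shift = gray >> 1
    while np.any(shift):
        binary ^= shift
        shift >>= 1
    return binary
```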
A device calibrator module 505, under execution of the processor 1005, receives the correspondences 504, and determines the intrinsic parameters of the projector 111 and camera 121 (i.e. calibrates the projector 111 and camera 121). The method 700 of calibrating the projector 111 and camera 121 is described below with reference to Fig. 7. The device calibrator module 505 stores the device calibration data 506, which includes the intrinsic parameters of the projector 111 and camera 121, within the memory 1006.
Image rectifier module 507 receives the device calibration data 506 and creates the projector image for the projector 111. The image rectifier module 507 retrieves a next content image from content source 508 configured within the memory 1006. The content source 508 may be a static image, a video, or any other form of content. The image rectifier module 507 renders the content image such that the content image appears upright and rectangular on the projection surface 160. The content image is displayed while maintaining the aspect ratio of the content image. The operation of the image rectifier module 507 is described in more detail below with reference to Fig. 8.
Finally, the image rectifier module 507 sends the projector image to the projector 111 to be displayed on the projection surface 160.
The calibration of the projector 111 and camera 121 using a planar or flat projection surface
160 will now be described by way of example with reference to Fig. 6. The projector image 611 of the projector 111 represents the images projected by the projector 111. The camera image 621 of the camera 121 represents the images captured by the camera 121. Essentially, the camera 121 and projector 111 are providing two views of the flat projection surface 160 (π). When the projector 111 projects a calibration pattern on to the flat projection surface 160, and that calibration pattern is captured by a camera 121, a homography between the plane of projection of the projector image 611 and camera image 621 is produced. A homography defines a transformation between points on two 2-dimensional planes (e.g. projector image 611
and camera image 621). The homography is said to be induced by the flat projection surface 160 (π).
The calibration data of optical devices such as cameras and projectors is often represented using a pinhole model. The pinhole model consists of:
• Intrinsic parameters:
- Focal length parameters fx and fy in image pixels. Focal length parameters define the scaling between physical object size and image size. Focal length parameters fx and fy are different parameters from lens focal length expressed in millimetres.
- Principal point (cx, cy) in image pixels
- Lens distortion parameters:
o radial distortion parameters k1 and k2
o tangential distortion parameters p1 and p2
• Extrinsic parameters:
- rotation matrix R (a 3x3 matrix)
- translation vector t (a 3x1 column vector)
A 3x3 intrinsic matrix K is defined in accordance with Equation (1) as follows:
$$K = \begin{bmatrix} f_x & skew & c_x \\ 0 & f_y & c_y \\ 0 & 0 & 1 \end{bmatrix} \quad (1)$$
Where skew is called the skew term, and may usually be set to zero (0). The aspect ratio a is defined in accordance with Equation (2) as follows:
$$a = \frac{f_y}{f_x} \quad (2)$$
The aspect ratio defines the ratio of image pixel width to image pixel height. Assuming that image pixels are perfectly square, aspect ratio a ~ 1.0. In practice, aspect ratio may be slightly different from 1.0. For a given device, aspect ratio is usually considered to be a constant parameter (independent of focal length and principal point).
The 3x4 extrinsic matrix [R|t] transforms a homogeneous 3D world coordinate X into a 3D device relative coordinate X' in accordance with Equation (3) as follows:
X' = [R|t]X (3) where:
• X is a homogeneous 3D point (a 4x1 column vector) in world coordinates
• X' is a homogeneous 3D point (a 4x1 column vector) in device 3D coordinates
The pinhole model equation transforms a 3D point X to an image point x in accordance with Equation (4) as follows:
x = K[R|t]X (4) where:
• X is a homogeneous 3D point (a 4x1 column vector)
• x is a homogeneous 2D image point (a 3x1 column vector)
A 3x4 projection matrix P may be defined from intrinsic and extrinsic parameters in accordance with Equation (5) as follows:
P = K[R|t] (5) such that:
x = PX
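For concreteness, the pinhole model of Equations (1) to (5) can be written out as a short sketch; the numeric values below are placeholders rather than parameters of any particular device.

```python
import numpy as np

# Intrinsic matrix K (Equation (1)), with the skew term set to zero.
fx, fy, cx, cy = 1500.0, 1500.0, 960.0, 600.0  # placeholder values
K = np.array([[fx, 0.0, cx],
              [0.0, fy, cy],
              [0.0, 0.0, 1.0]])

# Extrinsic parameters: placeholder rotation R and translation t.
R = np.eye(3)
t = np.array([[0.0], [0.0], [2.0]])

# Projection matrix P = K[R|t] (Equation (5)).
P = K @ np.hstack([R, t])

# Project a homogeneous 3D world point X to an image point x = PX (Equation (4)).
X = np.array([0.1, 0.2, 1.0, 1.0])
x = P @ X
print(x[:2] / x[2])  # de-homogenised pixel coordinates
```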
The method 700 of calibrating the projector 111 and camera 121 will now be described with reference to Fig. 7. The method 700 may be implemented as one or more software code modules of the device calibrator module 505 resident in the hard disk drive 1010 and being controlled in its execution by the processor 1005.
The method 700 starts at determining step 710, where the camera-projector homography is determined under execution of the processor 1005. Camera-projector correspondences 504 were previously determined by the pattern decoder module 503. Collectively, the projector-camera correspondences 504 form a camera-projector warp map between the pixels in the camera image 621 and pixels in the projector image 611, and vice-versa. Referring to Figs. 12A, 12B and 12C, camera location graph 1201 represents camera image locations where a calibration pattern projected by the projector 111 is successfully decoded. Similarly, projector location graph 1202 represents corresponding projector image locations for the camera image locations shown in the camera location graph 1201. The camera-projector correspondences 504 which form the camera-projector warp map are stored in memory 1006 as a warp-map table 1205 shown in Fig. 12C. The warp-map table 1205 has four columns. The first two columns of the table 1205 are named Camera-X and Camera-Y and respectively contain the X and Y coordinates of the camera pixels. The third and fourth columns of the table 1205 are named Projector-X and Projector-Y and respectively contain the X and Y coordinates of the projector pixels. Referring to the warp-map table 1205, the row 1210 in the table 1205 maps a camera location 1203 to the projector location 1204.
At step 710, the device calibrator module 505 uses the camera-projector warp map to determine a camera-projector homography. The camera-projector homography (Hcp) is a linear transformation which maps points in the camera image 621 (xc, yc) to points in the projector image 611 (xp, yp) up to a scale factor (s). The camera-projector homography (Hcp) is represented as a 3x3 matrix in accordance with Equation (6), as follows:
$$s \begin{bmatrix} x_p \\ y_p \\ 1 \end{bmatrix} = H_{cp} \begin{bmatrix} x_c \\ y_c \\ 1 \end{bmatrix} \quad (6)$$
In determining the homography at step 710, all camera-projector correspondences 504 in the warp map are considered. The determination is done in a least-squares sense by minimising a re-projection error as given in Equations (7), as follows:
$$\text{re-projection error} = \left(x_p - \frac{h_{11}x_c + h_{12}y_c + h_{13}}{h_{31}x_c + h_{32}y_c + h_{33}}\right)^2 + \left(y_p - \frac{h_{21}x_c + h_{22}y_c + h_{23}}{h_{31}x_c + h_{32}y_c + h_{33}}\right)^2 \quad (7)$$

where

$$H_{cp} = \begin{bmatrix} h_{11} & h_{12} & h_{13} \\ h_{21} & h_{22} & h_{23} \\ h_{31} & h_{32} & h_{33} \end{bmatrix} \quad (8)$$
The camera-projector homography (Hcp) determined at step 710 may be stored in the memory 1006 under execution of the processor 1005.
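A minimal sketch of this step is given below; the correspondence values are placeholders, and OpenCV's cv2.findHomography is used here as one possible least-squares estimator rather than the specific minimisation of Equation (7).

```python
import numpy as np
import cv2

# Placeholder warp-map correspondences, e.g. the Camera-X/Y and Projector-X/Y
# columns of a table such as 1205.
camera_pts = np.array([[100.0, 120.0], [400.0, 130.0], [390.0, 450.0], [110.0, 460.0]])
projector_pts = np.array([[80.0, 90.0], [520.0, 100.0], [500.0, 600.0], [90.0, 610.0]])

# Estimate the 3x3 camera-projector homography Hcp of Equation (6) using all
# correspondences (method=0 selects the plain least-squares estimator).
Hcp, _ = cv2.findHomography(camera_pts, projector_pts, method=0)
print(Hcp)
```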
The method 700 then proceeds to determining step 720, where the intrinsic parameters of the camera 121 are determined under execution of the processor 1005. The camera 121 is modelled using a pinhole model, as described previously. The intrinsic parameters of the pinhole model for the camera 121 are defined by an estimated camera intrinsic matrix Kc^est as shown in Equation (9), as follows:

$$K_c^{est} = \begin{bmatrix} f_{cx} & 0 & c_{cx} \\ 0 & f_{cy} & c_{cy} \\ 0 & 0 & 1 \end{bmatrix} \quad (9)$$
The intrinsic parameters of the intrinsic matrix are determined using Equations (10), (11), (12) and (13), as follows:
$$f_{cx} = \frac{focal\_length_{mm} \times image\_width_{pixels}}{width\_of\_sensor_{mm}} \quad (10)$$

$$f_{cy} = f_{cx} \times aspect\_ratio \quad (11)$$

$$c_{cx} = \frac{image\_width_{pixels}}{2} \quad (12)$$

$$c_{cy} = \frac{image\_height_{pixels}}{2} \quad (13)$$

where
• focal_lengthmm is the focal length of the camera 121 (in mm).
• image_widthpixels and image_heightpixels are the width and height of the camera image 621, respectively.
• width_of_sensormm is the width of the camera sensor, in mm.
• aspect_ratio is the aspect ratio of the camera image 621.
The above intrinsic parameters may be determined by querying the camera 121, using, for example, a software-based camera interface resident within the hard disk drive 1010 and being controlled in execution by the processor 1005. The intrinsic parameters determined at step 720 may be stored in the memory 1006 by the processor 1005.
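The calculation of Equations (10) to (13) reduces to a few lines, sketched below with illustrative numbers (8 mm lens, 6.4 mm wide sensor, 1920x1080 image); the function name is hypothetical.

```python
def estimate_camera_intrinsics(focal_length_mm, sensor_width_mm,
                               image_width_px, image_height_px, aspect_ratio=1.0):
    """Build the estimated camera intrinsic matrix of Equation (9) from
    queried camera properties, following Equations (10) to (13)."""
    fcx = focal_length_mm * image_width_px / sensor_width_mm   # Equation (10)
    fcy = fcx * aspect_ratio                                   # Equation (11)
    ccx = image_width_px / 2.0                                 # Equation (12)
    ccy = image_height_px / 2.0                                # Equation (13)
    return [[fcx, 0.0, ccx],
            [0.0, fcy, ccy],
            [0.0, 0.0, 1.0]]

Kc_est = estimate_camera_intrinsics(8.0, 6.4, 1920, 1080)
```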
The method 700 then proceeds to determining step 730 where the intrinsic parameters of the projector 111 are determined, under execution of the processor 1005. The projector 111 is modelled using a pinhole model, as described previously. The intrinsic parameters of the projector 111 cannot be estimated directly, as was done at step 720 with the camera 121, because the focal length and lens shift properties of the projector 111 are not provided by a software-based interface. Rather, the intrinsic parameters of the projector 111 are sampled at several random values within pre-defined ranges of the focal length and principal point of the projector 111. Step 730 results in several projector intrinsic matrices Kp^(i) as defined in Equation (14) as follows (where (i) represents one of the several samples of the projector intrinsic parameters matrices):
$$K_p^{(i)} = \begin{bmatrix} f_{px} & 0 & c_{px} \\ 0 & f_{py} & c_{py} \\ 0 & 0 & 1 \end{bmatrix} \quad (14)$$
Estimation of Kp^(i) will now be described with reference to Fig. 13.
Fig. 13 shows an example of pre-defined ranges for projector intrinsic parameters for a given projector model. The pre-defined ranges are stored on the Disk Storage Medium 1025 and loaded into memory 1006. The pre-defined ranges may be determined from manufacturer specifications of the projector model or by using a standard chart-based calibration by calibrating the projector 111 with different zoom level and lens shift settings.
Referring to Equation (14) and Fig. 13, the focal parameter fpx is sampled from focal length range 1301. Principal point coordinate cpx is sampled from principal point cx range 1302. Principal point coordinate cpy is sampled from principal point cy range 1303. Focal parameter fpy is determined by multiplying sampled focal parameter fpx and the aspect ratio 1304.
The estimated camera intrinsic matrix and set of projector intrinsic matrices will subsequently be used to determine a solution that results in a rectified image. The intrinsic parameters determined at step 730 may be stored in the memory 1006 by the processor 1005.
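Step 730 can be sketched as uniform random sampling within the pre-defined ranges of Fig. 13; the range values and sample count below are placeholders for a hypothetical projector model.

```python
import numpy as np

def sample_projector_intrinsics(n_samples, f_range, cx_range, cy_range, aspect_ratio):
    """Draw candidate projector intrinsic matrices Kp(i) (Equation (14)) by
    sampling focal length and principal point within pre-defined ranges."""
    rng = np.random.default_rng()
    matrices = []
    for _ in range(n_samples):
        fpx = rng.uniform(*f_range)    # sampled from focal length range 1301
        cpx = rng.uniform(*cx_range)   # sampled from principal point cx range 1302
        cpy = rng.uniform(*cy_range)   # sampled from principal point cy range 1303
        fpy = fpx * aspect_ratio       # fixed aspect ratio 1304
        matrices.append(np.array([[fpx, 0.0, cpx],
                                  [0.0, fpy, cpy],
                                  [0.0, 0.0, 1.0]]))
    return matrices

Kp_samples = sample_projector_intrinsics(100, (1400, 2400), (900, 1020), (1100, 1250), 1.0)
```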
The method 700 then proceeds to decision step 740, where if there are projector intrinsic matrices remaining to be processed, then the method 700 proceeds to decomposing step
750. Otherwise, the method 700 proceeds to step 780.
At step 750, the next projector intrinsic matrix is selected, under execution of the processor 1005, and the camera-projector homography determined at step 710 is decomposed. At step 750, the camera-projector homography (Hcp) of Equation (6) is converted to a normalised homography, Hnorm, in accordance with Equation (15), as follows.
$$H_{norm} = \left(K_p^{(i)}\right)^{-1} H_{cp} K_c^{est} \quad (15)$$
The homography Hnorm is then decomposed using a suitable analytical method. Such methods typically provide four possible solutions. Each of the solutions correspond to a camera and projector position and a normal vector to the projection surface 160. Each solution comprises three pieces of information. The first two pieces are rotation and translation vectors that map 3D points with respect to the camera 121 position, into 3D points that are relative to the projector 111. The third piece of information is a normal vector [Nx Ny Nz] of the planar projection surface 160, relative to the position of the camera 121.
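One way to realise this decomposition is OpenCV's analytical routine, sketched below with placeholder matrices; passing the identity as the camera matrix leaves the already-normalised homography of Equation (15) unchanged.

```python
import numpy as np
import cv2

# Placeholder inputs: camera-projector homography from step 710 and intrinsic
# matrices from steps 720 and 730 (hypothetical values).
Hcp = np.array([[1.10, 0.02, 30.0], [0.01, 1.05, 20.0], [1e-5, 2e-5, 1.0]])
Kc_est = np.array([[1500.0, 0.0, 960.0], [0.0, 1500.0, 540.0], [0.0, 0.0, 1.0]])
Kp_i = np.array([[1800.0, 0.0, 960.0], [0.0, 1800.0, 1200.0], [0.0, 0.0, 1.0]])

# Normalised homography of Equation (15).
Hnorm = np.linalg.inv(Kp_i) @ Hcp @ Kc_est

# Analytical decomposition: each returned triple (rotation, translation, normal)
# is one candidate solution for the projector pose relative to the camera and
# the surface normal [Nx Ny Nz].
count, rotations, translations, normals = cv2.decomposeHomographyMat(Hnorm, np.eye(3))
```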
Of the four solutions provided by the analytical method, only two are physically possible. This means that for two of the four solutions, the camera 121 or the projector 111 is behind the projection surface 160 and for the other two solutions both the camera 121 and the projector 111 are in front of the projection surface 160. The two physically possible solutions, which correspond to the camera 121 and the projector 111 being in front of the projection surface, are determined by applying a “visibility constraint”. The visibility constraint ensures that the 3D features corresponding to the image points of the camera 121 are in front of the camera image 621. For example, with respect to Fig 6, 3D point location 631 (corresponding to camera image point 632) is in front of the camera image 621. The visibility constraint is applied as follows:
1. A point (or a pixel) 632 of the camera image 621 of the camera 121 is expressed in pixel coordinates Ppixel and normalized using the estimated camera intrinsic matrix Kc^est of the camera 121, in accordance with Equation (16), as follows:
$$P_{norm} = \left(K_c^{est}\right)^{-1} P_{pixel} \quad (16)$$

where Pnorm = [Xnorm Ynorm 1] represents the normalized coordinates of the camera pixel Ppixel.
2. For the normalised points, Pnorm, at step 750, it is determined if the points (when extended through ray casting) lie on a plane (defined by the normal vector returned as part of the decomposed solution) in front of the camera 121. For example, if rays (e.g. ray 641) are passed through normalised points on camera image 621 (e.g. 632) the rays are intersected with the projection surface 160 (e.g. at location 631). The determination of whether the points lie on a plane in front of the camera 121 is made by determining the dot product, (dotp), of the ray (vector) 641 representing the normalised camera point, and the normal vector (e.g. 651) of the plane, in accordance with Equation (17), below. If the dot product is positive, then the corresponding surface normal is one of the correct solutions.
$$dotp = [X_{norm} \; Y_{norm} \; 1] \cdot [N_x \; N_y \; N_z]^T \quad (17)$$
By applying the above visibility constraint to all image features, only one possible solution remains, as long as not all feature points are closer to one camera centre than the other. Since it is generally not possible to impose the above visibility constraint, an alternative method may be used for choosing the correct solution, as described later with reference to step 770.
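A minimal sketch of the visibility constraint of Equations (16) and (17) is given below; the function and variable names are illustrative, and pixel_points is assumed to be a list of (u, v) camera image coordinates.

```python
# Sketch of the visibility test of Equations (16)-(17): a decomposition
# solution is kept only if every normalised camera point has a positive dot
# product with the candidate surface normal.
import numpy as np

def passes_visibility_constraint(pixel_points, Kc_est, normal):
    Kc_inv = np.linalg.inv(Kc_est)
    for (u, v) in pixel_points:
        p_norm = Kc_inv @ np.array([u, v, 1.0])       # Equation (16)
        if float(p_norm @ normal.reshape(3)) <= 0.0:  # Equation (17)
            return False
    return True

# Keep only the physically possible solutions, e.g.:
# solutions = [(R, t, n) for (R, t, n) in solutions
#              if passes_visibility_constraint(camera_points, Kc_est, n)]
```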
Following step 750 in the method 700, two possible solutions corresponding to both the camera 121 and the projector 111 being in front of the projection surface 160, associated with the projector intrinsic matrix Kp currently being processed, are known.
The method 700 then proceeds to determining step 760, where the position of a virtual camera is determined, under execution of the processor 1005. As described above, each solution includes a normal vector of the flat projection surface 160. A virtual camera is an artificial device used to define a viewing position with respect to the projection surface 160. As described above, in the example of Fig. 1, the projection surface 160 is planar. The virtual camera is an imaginary camera placed at a location orthogonal to the projection surface 160 such that a rectified image formed on the virtual camera implies that a rectified image is formed on the planar projection surface 160 for a viewer in front of the surface. The virtual camera position is located along the normal vector 651, at an arbitrary distance from the planar projection surface 160. An image plane is associated with the virtual camera, having a normal equal to the normal 651 of the planar projection surface 160. The following steps are performed to position the virtual camera (a code sketch illustrating the steps follows the list):
1. Place the planar projection surface 160 at an arbitrary location (e.g. at distance equal to
d) along the z-axis of the camera 121. Placing the planar projection surface 160 at the arbitrary location is valid since the planar projection surface 160 is in front of the camera 121 and hence has to intersect with the z-axis of the camera 121. The planar projection surface 160 is defined as a point Pp on the projection surface 160 and a normal vector Pn, in accordance with Equations (18) and (19), as follows:
$P_p = [0\ 0\ d]^T$    (18)

$P_n = [N_x\ N_y\ N_z]$    (19)
2. The point on the plane Pp and the normal vector Pn, defined by Equations (18) and (19), uniquely determine the planar projection surface 160 “π”. The virtual camera position relative to the point Pp along the normal vector Pn is determined. The virtual camera position is characterised by a rotation matrix Rv and a translation vector Tv. The rotation
and translation matrix pair transforms points from a coordinate system attached to the physical camera 121 to the coordinate system attached to the virtual camera. It is assumed that the virtual camera has a virtual camera intrinsic matrix Kv with an arbitrarily chosen focal length (e.g., 1000) and a principal point which is equal to that of the physical camera 121. For example, the virtual camera intrinsic matrix Kv may be determined in accordance with Equation (20), as follows.
$$K_v = \begin{bmatrix} 1000 & 0 & cc_x \\ 0 & 1000 & cc_y \\ 0 & 0 & 1 \end{bmatrix}$$    (20)

where $(cc_x, cc_y)$ is the principal point of the physical camera 121.
3. To calculate the virtual camera position, the normal vector Pn from Equation (19) is used to determine two independent vectors u, v which are orthogonal to each other and also to Pn. To determine u and v, Gram-Schmidt orthogonalisation is used. The virtual camera rotation and translation matrices are defined in accordance with Equations (21) and (22), as follows.
$R_v = [u\ v\ P_n]^T$    (21)

$T_v = -R_v P_p + P_p$    (22)
4. Using the virtual camera positions as specified in Equations (21) and (22), a homography Hcv that maps points from the image plane of the camera 121 to the image plane of the virtual camera is determined in accordance with Equation (23), as follows:
$H_{cv} = K_v \left( R_v + \dfrac{T_v P_n^T}{d} \right) (K_c^{est})^{-1}$    (23)
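The virtual camera construction of steps 1 to 4 may be sketched as follows, following Equations (18) to (23) as set out above. The default distance d and the focal length of 1000 are the arbitrary choices mentioned above; all function and variable names are illustrative.

```python
# Sketch of the virtual camera construction of Equations (18)-(23).
import numpy as np

def virtual_camera_homography(normal, Kc_est, d=1.0, focal=1000.0):
    Pn = np.asarray(normal, dtype=float).reshape(3)
    Pn = Pn / np.linalg.norm(Pn)
    Pp = np.array([0.0, 0.0, d])                       # Equation (18)

    # Gram-Schmidt: two vectors orthogonal to Pn and to each other.
    seed = np.array([1.0, 0.0, 0.0])
    if abs(Pn @ seed) > 0.9:                           # avoid a near-parallel seed
        seed = np.array([0.0, 1.0, 0.0])
    u = seed - (seed @ Pn) * Pn
    u = u / np.linalg.norm(u)
    v = np.cross(Pn, u)

    Rv = np.vstack([u, v, Pn])                         # Equation (21)
    Tv = -Rv @ Pp + Pp                                 # Equation (22)

    ccx, ccy = Kc_est[0, 2], Kc_est[1, 2]
    Kv = np.array([[focal, 0, ccx], [0, focal, ccy], [0, 0, 1]])    # Equation (20)

    # Plane-induced homography from the camera image to the virtual camera image.
    Hcv = Kv @ (Rv + np.outer(Tv, Pn) / d) @ np.linalg.inv(Kc_est)  # Equation (23)
    return Hcv, Rv, Tv, Kv
```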
The method 700 then proceeds to an optional filtering step 770, where the two solutions are filtered, based on a circular points cost metric determined using Equation (28). The solution with the lowest cost is then selected as a candidate normal solution. Alternatively, both the remaining solutions may be considered normal vector candidates without filtering step 770. The process of determining the circular points cost metric is described below.
For each of the candidate solutions, a surface-projector homography Hsp and surface-camera homography Hsc are determined in accordance with Equations (24) and (25), as follows:
$H_{sc} = K_c^{est} \left( R_v + \dfrac{T_v P_n^T}{d} \right)$    (24)

$H_{sp} = H_{cp} H_{sc}$    (25)
In addition, the images of the absolute conic for both the projector 111 and the camera 121 are determined in accordance with Equations (26) and (27), as follows.
$IAC_c = (K_c^{est} K_c^{est\,T})^{-1}$    (26)

$IAC_p = (K_p K_p^T)^{-1}$    (27)
Using the image of the absolute conic for the projector 111 and the camera 121, the two circular points (1, ±i, 0), which lie on the line at infinity ($l_\infty$) of the projection surface 160, are projected onto the camera image 621 and projector image 611, using the surface to device homographies (surface-projector homography and surface-camera homography) determined in Equations (24) and (25). The two projections provide the following constraints, which are used in a circular points cost function.
$circular\_points_{cost} = H_{sc}^{(1)T}\, IAC_c\, H_{sc}^{(1)} + H_{sp}^{(1)T}\, IAC_p\, H_{sp}^{(1)}$    (28)

where $H_{sc}^{(1)}$ and $H_{sp}^{(1)}$ correspond to the first column of the matrices $H_{sc}$ and $H_{sp}$, respectively.
Equation (28) evaluates to 0 (zero) when the homographies Hsp and Hsc are correct (i.e. the solution is perfect). In practice, however, the estimated normal vector has an error, and hence the normal solution with the least error (which corresponds to the least cost) is selected at step 770.
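A sketch of the circular points cost of Equations (24) to (28), as set out above, is given below; the argument names are illustrative and assume numpy arrays of the appropriate sizes.

```python
# Sketch of the circular points cost, following Equations (24)-(28) as above.
import numpy as np

def circular_points_cost(Hcp, Kc_est, Kp, Rv, Tv, Pn, d):
    Hsc = Kc_est @ (Rv + np.outer(Tv, Pn) / d)          # Equation (24)
    Hsp = Hcp @ Hsc                                     # Equation (25)
    IACc = np.linalg.inv(Kc_est @ Kc_est.T)             # Equation (26)
    IACp = np.linalg.inv(Kp @ Kp.T)                     # Equation (27)
    h1c, h1p = Hsc[:, 0], Hsp[:, 0]                     # first columns
    return float(h1c @ IACc @ h1c + h1p @ IACp @ h1p)   # Equation (28)
```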
Following step 770, the method 700 then returns to decision step 740. If it is decided at decision step 740 that there are no more projector intrinsic matrices to process, then the method 700 proceeds to clustering step 780, where the filtered solutions are clustered.
Once the normal vectors are filtered using the optional step 770, one candidate solution remains, per estimated pinhole model (intrinsic matrix) of the projector 111. Otherwise, if the optional filtering step 770 is skipped, two candidate solutions remain per estimated pinhole
model (intrinsic matrix) of the projector 111. For each such estimated solution (or two solutions when step 770 is skipped), the azimuth and inclination angles corresponding to the normal vector are determined in accordance with Equations (29) and (30), as follows.
$Azimuth = \tan^{-1}\left(\dfrac{N_y}{N_x}\right)$    (29)

$Inclination = \tan^{-1}\left(\dfrac{N_z}{\sqrt{N_x^2 + N_y^2}}\right)$    (30)
In other arrangements, the azimuth and inclination angles corresponding to the normal vector may be determined using Equations (29a) and (30a) as follows:
$Azimuth = \tan^{-1}\left(\dfrac{N_x}{N_z}\right)$    (29a)

$Inclination = \tan^{-1}\left(\dfrac{N_y}{N_z}\right)$    (30a)
Certain physical configurations of the projector-camera system 100 produce candidate normal vector solutions that form a very closely packed cluster around the true solution. In other words, the solutions tend to cluster around the true normal vector solution. This follows from the observation that for a camera 121 in a given position, there is a family of projector intrinsic parameters and corresponding projector positions (physical positions) which results in approximately the same homography between the camera 121 and projector 111. For example, moving the projector 111 away from the projection surface 160 whilst simultaneously adjusting the zoom (projector intrinsic parameter) results in nearly the same homography between camera 121 and projector 111 at the original position of projector 111.
When the candidate solutions are represented in a two-dimensional azimuth and inclination space, the representation forms a 2D normal vector map. The normal vector for the projection surface 160 can be estimated by determining the closely packed cluster in the normal vector map.
The process of estimating the normal vector using clustering is as follows:
1. Form a 2D normal vector map where one dimension is azimuth as calculated using Equation (29) and the second dimension is inclination as calculated using Equation (30). In another arrangement, Equation (29a) and Equation (30a) may be used.
2. Apply a clustering algorithm to determine the location of the closely packed cluster within the normal vector map according to a cost value. The cost value is a measure of how closely packed the cluster is.
The cost value for evaluating a cluster may be based on a combination of one or more of the following.
(i) Size of the cluster which indicates the number of normal vector solutions in the cluster.
(ii) A measure of the spread of the cluster. For example, the standard deviation or variance in the azimuth or inclination values of the normal solutions within the cluster.
(iii) Density of the cluster.
a. Density may be measured as a ratio of the size of the cluster to the area occupied by the cluster.
b. Alternatively, density at any given location within the normal vector map may also be measured by taking a weighted sum of normal solutions using a fixed window.
c. Weights may be determined using a Gaussian function in which case, determining density becomes a Gaussian convolution filter operation. All normal solutions within a window may be assigned equal weights in which case the density determination is equivalent to running a box filter on the normal vector map.
Different clustering algorithms may be used to determine the location of the cluster from the normal vector map. In particular, the Gaussian convolution, DBSCAN and K-Means may be used to determine the location of the cluster from the normal vector map.
Alternatively, the normal vectors may be represented in a 3D coordinate system where the three dimensions correspond to the three coordinates of the normal vector (X, Y, Z). In this case, the
clusters may be determined in the three dimensional space adapting the same techniques used for two dimensional data.
The process of performing step 780 using Gaussian convolution will now be described with reference to Figs. 11A and 11B. Fig. 11A shows a normal vector map 1100 in 2D. As shown in Fig. 11A, it is clear where the clusters are located. However, it is often difficult to determine the densest location within the map 1100. The densest location becomes clear if the 2D normal vector map is turned into a histogram. In such a histogram, the densest location is likely to be at or near the highest peak of the histogram, as shown using a heat map representation 1101 in Fig. 11B.
The Gaussian convolution is generally applied to smooth or blur an image. Specifically, Gaussian convolution is suitable for densest cluster detection because Gaussian convolution calculates a Gaussian weighted sum at every normal vector location, which allows density within a neighbourhood of locations to be estimated.
Another advantage of Gaussian convolution is simplicity of implementation, requiring only two parameter choices: the Gaussian kernel size and its sigma value (i.e., the standard deviation of the Gaussian signal). The kernel size determines the neighbourhood which is considered for determining the density at a point. The sigma value is an input parameter to the Gaussian function for determining the weights to be used for estimating the density.
The process of normal vector estimation using Gaussian convolution has three steps as follows:
1. Create a 3D histogram of normal vectors. The histogram may be represented as a 2D array, where the rows and columns are inclination and azimuth angles of the candidate normal vector solutions, and the values of the array elements correspond to the frequency (or count) at that particular location.
2. Apply Gaussian convolution to the histogram (represented as a 2D array). In another arrangement, a box filter may be used.
3. Determine the location within the normal vector map where the result of the Gaussian convolution has a peak value or the maximum value (i.e. the maximum weighted sum). In another arrangement, the accuracy of the resultant estimated normal vector can be further improved by performing peak interpolation to determine the highest peak location, which corresponds to the normal vector solution. The peak value is a cost metric where cost is the
density of the normal vector solutions expressed as a weighted sum of the neighbourhood solutions.
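The three steps above may be sketched as follows using a Gaussian filter over an azimuth-inclination histogram; the bin size, sigma value and names are illustrative choices only.

```python
# Sketch of normal vector estimation with Gaussian convolution: build an
# azimuth-inclination histogram, smooth it, and take the peak location.
import numpy as np
from scipy.ndimage import gaussian_filter

def densest_normal(azimuths_deg, inclinations_deg, bin_deg=1.0, sigma=2.0):
    az_edges = np.arange(-180.0, 180.0 + bin_deg, bin_deg)
    inc_edges = np.arange(-90.0, 90.0 + bin_deg, bin_deg)
    hist, _, _ = np.histogram2d(inclinations_deg, azimuths_deg,
                                bins=[inc_edges, az_edges])
    smoothed = gaussian_filter(hist, sigma=sigma)   # Gaussian weighted density
    inc_idx, az_idx = np.unravel_index(np.argmax(smoothed), smoothed.shape)
    # Bin centres of the peak location (peak interpolation could refine this).
    return (az_edges[az_idx] + bin_deg / 2.0,
            inc_edges[inc_idx] + bin_deg / 2.0)
```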
The output of the Gaussian convolution process on the normal vector map is one or more peaks determined through Gaussian convolution. Each such peak location corresponds to a cluster centre of a cluster of solutions. The cluster centre is a point in the azimuth-inclination 2D space. In one arrangement, the cluster corresponding to the highest peak is selected based on the peak value. Thus, a set of closely spaced normal vector solutions results in a higher peak value when compared to a set of sparsely spaced solutions.
In another arrangement, the Gaussian convolution may be performed using different sigma and kernel sizes (in increasing order) and peaks may be determined for each combination of sigma and kernel size. Performing the Gaussian convolution using different sigma and kernel sizes is similar to performing convolution at different scales. Peaks that occur consistently at different scales may be used to select the location of the peaks.
Alternatively, the clustering step 780 may be performed using DBSCAN (i.e., density-based spatial clustering of applications with noise) as described below. DBSCAN is a data clustering algorithm based on the density of the data and DBSCAN is particularly effective for identifying clusters of arbitrary shape.
DBSCAN is used to separate object points (which form clusters) from noise points. DBSCAN treats points in high density regions as object points and points in low density regions as noise. Following is a simplified description of the DBSCAN algorithm.
Density of a spatial location is defined as the number of points within a specified radius (Eps, which stands for radius) of that spatial location. A point is a core point if the point has more than a predefined minimum number of points (MinPts) within Eps. A border point has fewer than MinPts within its Eps, but it is in the neighbourhood (or reachable) of a core point. A noise (or outlier) point is any other point.
The DBSCAN algorithm performs the following three steps:
1. Label points as core, border or noise based on radius Eps and MinPts
2. Connect core points that are neighbours to form a cluster
3. Associate border points to one of their reachable clusters, and remove noise.
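By way of illustration, the DBSCAN clustering of the candidate normal solutions may be sketched as follows using the scikit-learn implementation; the Eps and MinPts values shown are the illustrative values discussed below.

```python
# Sketch of clustering candidate normals with scikit-learn's DBSCAN.
import numpy as np
from sklearn.cluster import DBSCAN

def dbscan_clusters(azimuths_deg, inclinations_deg, eps=0.1, min_pts=5):
    points = np.column_stack([azimuths_deg, inclinations_deg])
    labels = DBSCAN(eps=eps, min_samples=min_pts).fit_predict(points)
    clusters = {}
    for label in set(labels):
        if label == -1:                 # -1 marks noise points
            continue
        clusters[label] = points[labels == label]
    return clusters                     # label -> member solutions
```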
It is clear from the above description of the DBSCAN algorithm that performance of the DBSCAN algorithm is highly dependent on the choice of Eps and MinPts. These two parameters, Eps and MinPts, may be selected empirically. For example, the elbow method may be used to determine the value of MinPts. One such value for MinPts may be five (5). This means that clusters of size 5 or more are considered. Since it is not possible to choose a specific value for Eps which corresponds to the radius, it is useful to attempt to determine clusters at the following radii: 0.01, 0.05, 0.1, 0.2, 0.5, 1.0, 2.0, 3.0. It has been empirically determined that the range of radii is enough to determine clusters in the normal vector map.
The clustering step 780 determines a cost for any cluster found using DBSCAN in accordance with Equation (31), as follows:
$cost_{cl} = sd_{azimuth} + sd_{inclination} + density$    (31)

where $sd_{azimuth}$ is the standard deviation of azimuth values corresponding to the normal solutions.
$sd_{inclination}$ is the standard deviation of inclination values corresponding to the normal solutions.
density is the ratio of the number of solutions in the cluster to the area occupied by the cluster.
For example, $cost_{c1}$ is the cost for cluster 1.
The clustering step 780 executes DBSCAN across all radii until DBSCAN determines two consecutive radii at which clusters may be found. For example, no clusters may be found at 0.01 and 0.05 and clusters may be found at 0.1 and 0.2. Then, for each cluster at the smaller radius (e.g. 0.1), a matching cluster at the larger radius is determined (e.g. 0.2). Matching is based on a distance between the two cluster centres. In one arrangement of step 780, the distance may be set to three (3) degrees (in azimuth and inclination). For each of the matched clusters at the smaller radius, a cost ratio is determined in accordance with Equation (32), as follows:
$cost\ ratio = \dfrac{cost_{c1}}{cost_{c2}} - cluster_{distance}$    (32)
where c1 and c2 are clusters determined at the smaller radius and larger radius, respectively, and their cluster centres are within three (3) degrees of each other. $cluster_{distance}$ is the Euclidean distance between the two cluster centres c1 and c2.
The clusters at the smaller radius are then sorted according to the cost ratio determined in Equation (32) and clusters with no matching clusters at larger radius are discarded. From the sorted clusters, a final sorted list of clusters of normal vector solutions is determined. Each such cluster is characterised by the cluster centroid which is a point in the azimuth-inclination
2D space.
The reasons for determining the cost ratio above are three-fold, as follows:
• Filter out clusters that are not formed at two different radii - indicating that the cluster is isolated.
• Clusters that increase in size uniformly are more likely to correspond to consistent normal vector solutions - the cost ratio ensures that clusters that increase in size while minimizing cluster cost are considered.
• The cost ratio is weighted by cluster distance so that cluster centres that move too much from one radius to the next are penalised accordingly.
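The matching and cost-ratio ranking described above may be sketched as follows; each cluster is represented by an illustrative dictionary holding its centre and a previously computed cost (for example, a cost determined per Equation (31)).

```python
# Sketch of the cost ratio of Equation (32): clusters found at the smaller
# DBSCAN radius are matched to clusters at the larger radius by centre
# distance; unmatched clusters are discarded.
import numpy as np

def cost_ratios(small_clusters, large_clusters, max_centre_dist=3.0):
    # Each cluster is an illustrative dict: {'centre': (az, inc), 'cost': float}.
    matched = []
    for c1 in small_clusters:
        for c2 in large_clusters:
            dist = float(np.linalg.norm(np.subtract(c1['centre'], c2['centre'])))
            if dist <= max_centre_dist:
                ratio = c1['cost'] / c2['cost'] - dist    # Equation (32)
                matched.append((ratio, c1))
                break                                     # keep the first match only
    return matched   # sort these pairs by ratio to obtain the ranked cluster list
```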
Alternatively, the clustering step 780 may be performed using K-means. With the candidate normal vector solutions expressed in azimuth and inclination, the normal solutions may be clustered using the K-Means algorithm. Note that since the number of clusters is not known in advance, an empirically determined distance may be used to ensure minimum separation between cluster centres.
In another arrangement, the elbow method of plotting cluster size and k-means squared error may be used to determine the number of clusters. In another arrangement, the separation of cluster centres may be determined dynamically or using a threshold specified by the user. The clusters may then be evaluated using a cost function in accordance with Equation (33), as follows:
$cost = sd_{azimuth} + sd_{inclination} + density$    (33)

where
• $sd_{azimuth}$ is the standard deviation of azimuth values corresponding to the normal solutions.
• $sd_{inclination}$ is the standard deviation of inclination values corresponding to the normal solutions.
• density is the ratio of the number of solutions in the cluster to the area occupied by the cluster.
The cost of the top two clusters is compared, and if the cost of the rank-1 cluster is less than one-fifth (or a similar empirically determined threshold) that of the rank-2 cluster, then the rank-1 cluster is used to determine the normal vector solution. Otherwise, both clusters are used and one of the solutions from the two clusters may be selected.
In another arrangement, instead of considering the top two clusters, top “N” clusters may be considered depending on how separated the clusters are in terms of cost.
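A sketch of the K-Means alternative follows; the number of clusters k is an illustrative assumption, and the per-cluster cost uses the spread and density terms of Equation (33) as reconstructed above.

```python
# Sketch of the K-Means alternative: cluster the (azimuth, inclination) pairs
# and score each cluster; the lowest-cost cluster is the rank-1 cluster.
import numpy as np
from sklearn.cluster import KMeans

def kmeans_cluster_costs(azimuths_deg, inclinations_deg, k=3):
    points = np.column_stack([azimuths_deg, inclinations_deg])
    labels = KMeans(n_clusters=k, n_init=10).fit_predict(points)
    costs = []
    for label in range(k):
        members = points[labels == label]
        sd_az = float(np.std(members[:, 0]))
        sd_inc = float(np.std(members[:, 1]))
        # Density as the ratio of cluster size to the (bounding-box) area it occupies.
        area = float(np.ptp(members[:, 0]) * np.ptp(members[:, 1])) or 1.0
        density = len(members) / area
        costs.append((sd_az + sd_inc + density, members))  # Equation (33), as reconstructed
    costs.sort(key=lambda item: item[0])                    # rank-1 cluster first
    return costs
```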
Thus, regardless of the method used to determine clusters in the normal vector map, the final result of clustering step 780 is a set of cluster locations corresponding to candidate clusters. The set of cluster locations are ordered by a cost metric starting with the cluster corresponding to a rank-1 solution followed by a rank-2 solution and so on. In the case of Gaussian convolution, the cost metric is the result of peak interpolation resulting in a peak value. In the case of DBSCAN, the cost metric is a value determined according to Equation (32). In the case of K-Means, the cost metric is determined according to Equation (33).
After step 780, the method 700 proceeds to step 790 where from the ranked list of one or more clusters, at least one normal vector solution is determined. In step 790, for a cluster, the solution which is closest to the cluster centre is selected as the final solution. The determination of the closest solution may be performed by calculating a Euclidean distance between the cluster centre and the solutions within the cluster and then selecting the solution with the smallest Euclidean distance. In another arrangement, any other solution within the rank-1 cluster may be selected as the final solution.
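A minimal sketch of the selection performed at step 790 is given below; the names are illustrative, and cluster members and the cluster centre are assumed to be expressed as (azimuth, inclination) pairs.

```python
# Sketch of step 790: pick the candidate solution closest (in Euclidean
# distance over azimuth/inclination) to the selected cluster centre.
import numpy as np

def closest_to_centre(cluster_members, centre):
    members = np.asarray(cluster_members, dtype=float)
    distances = np.linalg.norm(members - np.asarray(centre, dtype=float), axis=1)
    return members[int(np.argmin(distances))]
```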
In practice, it has been found that when the cost of a rank-1 cluster is much smaller than that of the rank-2 cluster, the rank-1 cluster solution achieves a very good rectification result. In this case, the rank-1 cluster may be selected as the solution.
On the other hand, if the two clusters (i.e., rank-1 and rank-2) have costs that are close to each other, either of the clusters may be used to determine the final solution. However, both the rank-1 cluster and the rank-2 cluster may have some limitation, such as a slight rotation or one of the corners not being correctly rectified. In general, when there are several clusters with similar cost values, the rectification results are also similar. In this case, the user may be presented with an option of selecting one of the many possible solutions of rectification results.
Selection of a cluster of solutions and representative solution within a cluster is described in further detail below by way of example with reference to Fig. 9.
The result of the method 700 is the selection of the most appropriate solution, comprising the intrinsic parameters of the camera 121, the intrinsic parameters of the projector 111, and the normal vector of the planar projection surface 160. The intrinsic parameters and the normal vector determined in accordance with the method 700 form the device calibration data 506.
A method 800 of rectifying an image will now be described with reference to Fig. 8. The method 800 may be implemented as one or more software code modules of the image rectifier module 507 resident in the hard disk drive 1010 and being controlled in its execution by the processor 1005.
The method 800 starts and proceeds to determining step 810, where a surface-projector homography is determined under execution of the processor 1005. A surface-projector homography maps points between the projector image 611 and the projection surface 160. The surface-projector homography is determined by multiplying the camera-projector homography, determined at step 710 of the method 700, by the camera-virtual camera homography, determined at step 760 of the method 700.
The method 800 then proceeds to mapping step 820, where the corners of the projector image
611 are mapped to the projection surface 160 using the surface-projector homography. The corners of the projector image plane are located at the coordinates (0, 0), (width, 0), (width, height) and (0, height), where width and height are the number of pixels in the projector image plane in the horizontal and vertical direction, respectively. For example, with reference to Fig. 2B and Fig. 2C, corners 251-254 of the projector image 231 are mapped to positions 261-264 which are the corners of the projection region 211 on the projection surface 160. The mapping is performed by multiplying the coordinates of the points by the surface-projector homography.
The method 800 then proceeds to step 830, where the maximal rectangle is determined on the surface 160. The maximal rectangle is the largest upright rectangle, with the same aspect ratio as the content image 200, that fits inside the quadrangle formed by the mapped projector corners. For example, with reference to Fig. 2, the maximal rectangle 270 is created on the projection surface 160. The maximal rectangle 270 defines the region of the projection surface 160 in which the projector 111 will project the content image 200.
The method 800 then proceeds to mapping step 840, where the maximal rectangle is mapped to the projector image. The mapping is performed at step 840 by multiplying the coordinates of the corners of the maximal rectangle (in surface coordinates) by the inverse of the surface-projector homography. For example, with reference to Fig. 2B, the mapping of the maximal rectangle 270 to the projector image 231 results in the region 241.
The method 800 then proceeds to rendering step 850, where the content image is rendered to the region of the projector image corresponding to the maximal rectangle. For example, with reference to Fig. 2B, the content image 200 is rendered so that the content image 200 fills the region 241. Any suitable rendering method may be used at step 850. For example, any suitable method of interpolating between the corners of a quadrangle to select content image pixels, for each projector pixel, may be used at step 850. The resulting rendered image, when projected by the projector 111, will appear as an upright, rectangular, rectified image on the projection surface 160.
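Steps 840 and 850 may be sketched using OpenCV's perspective-warp functions; region_corners is assumed to hold the four corners of region 241 in projector image coordinates, in the same order as the content image corners.

```python
# Sketch of steps 840-850: warp the content image into the projector-image
# quadrangle that corresponds to the maximal rectangle.
import numpy as np
import cv2

def render_content(content_image, region_corners, projector_width, projector_height):
    h, w = content_image.shape[:2]
    src = np.float32([[0, 0], [w, 0], [w, h], [0, h]])
    dst = np.float32(region_corners)            # corners of region 241, in order
    warp = cv2.getPerspectiveTransform(src, dst)
    return cv2.warpPerspective(content_image, warp,
                               (projector_width, projector_height))
```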
An example of clustering solutions based on azimuth and inclination, as performed at step 780 of the method 700, will now be described with reference to Fig. 9. Fig. 9 shows a plot 900 with the azimuth of a normal vector of a solution as the x-axis, and the inclination of a normal vector of the solution as the y-axis. Individual solutions are shown on the plot 900 as solid diamonds.
As described previously with reference to Fig. 7, clustering is used to identify solutions (calibrations) that will result in a preferred rectified image. Typically, similar solutions will be close together in terms of normal vector’s azimuth and inclination. Therefore, the described methods identify clusters that contain many solutions that are very similar (have low standard deviation), and optionally whose solutions have low cost (as determined at step 770 of the method 700).
Fig. 9 shows an example of three clusters of solutions 901-903. Cluster 901 contains many similar solutions. In step 780, the clusters 901, 902 and 903 are determined and ranked. If
Gaussian convolution was used in step 780, then the ranking is based on the peak interpolation value. If DBSCAN is used, the cost is based on Equations (31) and (32), and in the case of K-Means the cost is based on Equation (33). Based on step 780, it may be determined that cluster 901 is rank-1, followed by cluster 902 which is rank-2 and cluster 903 which is rank-3. If cluster 901 is selected as the final cluster at step 780, then the normal solution 911, which corresponds to the solution closest to the cluster centre, is selected as the solution at step 790. The normal solution
911 which corresponds to the solution closest to the cluster centre will result in a rectified image on the projection surface 160. As seen in Fig. 9, the clusters 902, 903 have a larger spread (standard deviation) when compared to cluster 901. Qualitatively, regardless of the method used to determine clusters and the cost in step 780, the cost is determined based on the spread of the solutions and the density within the cluster. Therefore, the solutions in clusters 902, 903 may not be considered close to the solution that will result in a preferred rectified image.
Further, if the cost of cluster 901 (i.e., the rank-1 cluster) is much smaller (or has a larger peak value in the case of Gaussian convolution) than the cost of cluster 902 (i.e., the rank-2 cluster), then the confidence that solutions belonging to cluster 901 are accurate is increased.
As described with reference to steps 780 and 790, the solutions belonging to the selected cluster or clusters are used to determine the final solution to use during rectification. It is preferable that the solution closest to the cluster centre is selected as the final solution. For example, solution 911 is closest to the centre of cluster 901, and therefore solution 911 may be selected as the final solution.
The device calibration method 700, described with reference to Fig. 7, estimates the camera intrinsic parameters once, at step 720. Several projector intrinsic matrices are produced (at step 730), and are used to determine the solution that will result in a best rectified image.
In an alternative arrangement, several camera intrinsic parameters are also produced, and are used to find a better solution. After determining the projector intrinsic parameters at step 780, several camera intrinsic matrices may be estimated, in the vicinity of the original camera intrinsic parameters. Steps 750 to 770 may then be performed for each of the estimated camera intrinsic matrices. Finally, clustering may be used to re-determine the solution that will result in a best rectified image. After performing steps 750 to 770 one or more times, alternating between estimating several camera intrinsic matrices and several projector intrinsic matrices,
the solution that will result in a best rectified image is expected to converge. The resulting projected image may therefore be better-rectified.
In some arrangements, multiple projectors are used to expand the overall projection region, or to increase the overall brightness of the projected content. The device calibration method 700, described with reference to Fig. 7, may be performed for each projector separately, typically using the same camera. To further improve the calibration, the intrinsic matrices of the projectors may be refined, so that the difference between their separate normal vectors is minimised. For example, the projector intrinsic parameters may be iteratively adjusted using any non-linear optimisation method (e.g. the Levenberg-Marquardt algorithm).
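By way of illustration, such a refinement may be sketched with SciPy's Levenberg-Marquardt solver; the parameterisation over projector focal lengths and the helper solve_normal_for_intrinsics (standing in for re-running the calibration with adjusted intrinsics) are hypothetical.

```python
# Sketch only: refining projector intrinsics of a multi-projector setup so
# that the per-projector surface normals agree.
import numpy as np
from scipy.optimize import least_squares

def refine_projector_focals(initial_focals, solve_normal_for_intrinsics):
    def residuals(focals):
        # One estimated surface normal per projector, for the current focals.
        normals = [solve_normal_for_intrinsics(f) for f in focals]
        mean_normal = np.mean(normals, axis=0)
        return np.concatenate([n - mean_normal for n in normals])

    result = least_squares(residuals, x0=np.asarray(initial_focals, dtype=float),
                           method='lm')   # Levenberg-Marquardt
    return result.x
```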
The arrangements described are applicable to the computer and data processing industries and particularly for the image processing industry.
The foregoing describes only some embodiments of the present invention, and modifications and/or changes can be made thereto without departing from the scope and spirit of the invention, the embodiments being illustrative and not restrictive.
In the context of this specification, the word “comprising” means “including principally but not necessarily solely” or “having” or “including”, and not “consisting only of”. Variations of the word comprising, such as “comprise” and “comprises”, have correspondingly varied meanings.
Claims (10)
1. A method of projecting a rectified image on a surface using a projector and a camera, the method comprising:
determining a camera-projector homography by capturing with the camera a calibration pattern projected by the projector on to the surface;
determining a plurality of normal vectors for the surface, each of the normal vectors being determined using an estimate of projector parameters for the projector that decompose the determined camera-projector homography using camera parameters;
selecting one of a plurality of clusters of the normal vectors according to a cost value of
the cluster; and projecting an image using the projector, wherein the projected image is rectified according to a normal vector from the selected cluster to the surface.
2. The method according to claim 1, wherein the clusters are formed according to azimuth and inclination angles corresponding to the normal vectors.
3. The method according to claim 1, wherein the cost value is based on one or more of (a) size of the cluster, (b) spread of the cluster, or (c) density of the cluster.
4. The method according to claim 3, wherein the density is determined as a ratio of the size of the cluster to the area occupied by the cluster.
5. The method according to claim 4, wherein the density is determined as a weighted sum of the number of normal vector solutions within a neighbourhood.
6. The method according to claim 5, wherein the weights for determining the weighted sum are based on a Gaussian function.
7. The method according to claim 1, further comprising selecting a normal vector from the selected cluster to determine the homography.
8. The method according to claim 7, wherein the normal vector from the selected cluster is selected based on distance from the cluster centre.
9. The method according to claim 7, wherein the normal vector from the selected cluster is selected based on minimizing a circular point cost.
10. The method according to claim 1, further comprising determining a warp map between pixels of an image captured by the camera and pixels in an image projected by the projector.
11. The method according to claim 1, wherein the camera parameters are determined by querying the camera.
12. The method according to claim 1, wherein the camera is modelled using a pinhole model.
13. The method according to claim 1, wherein the projector is modelled using a pinhole model.
14. The method according to claim 1, wherein the parameters of the projector are sampled at several random values within pre-defined ranges of a focal length and principal point of the projector.
15. The method according to claim 1, further comprising decomposing the homography.
16. The method according to claim 1, further comprising determining the position of a virtual camera for use in selecting the cluster.
17. The method according to claim 1, further comprising filtering the normal vectors of the selected cluster based on a circular points cost metric.
18. The method according to claim 1, wherein the normal vector is closest to a centre of the selected cluster.
19. The method according to claim 1, further comprising determining a surface-projector homography.
20. The method according to claim 1, wherein a plurality of projectors is used to project the image.
21. The method according to claim 20, further comprising selecting a cluster of vectors for each of the projectors.
22. A method according to claim 21, where a cluster is selected by further comparing the cost value associated with at least one other cluster.
23. A method according to claim 21 where a cluster is selected by using DBSCAN algorithm at two different radii and comparing the cost value of the clusters at the two different radii.
24. A method according to claim 1, further comprising performing a Gaussian convolution on the plurality of normal vectors.
25. A system for projecting a rectified image on a surface using a projector and a camera, the system comprising:
a memory for storing data and a computer program;
a processor coupled to the memory for executing the program, the program comprising instructions for:
determining a camera-projector homography by capturing with the camera a calibration pattern projected by the projector on to the surface;
determining a plurality of normal vectors for the surface, each of the normal vectors being determined using an estimate of projector parameters for the projector that decompose the determined camera-projector homography using camera parameters;
selecting one of a plurality of clusters of the normal vectors according to a cost value of the cluster;
projecting an image using the projector, wherein the projected image is rectified
according to a vector from the selected cluster to the surface.
27. A non-transitory computer readable medium having a program stored on the medium for projecting a rectified image on a surface using a projector and a camera, the program comprising:
code for determining a camera-projector homography by capturing with the camera a calibration pattern projected by the projector on to the surface;
code for determining a plurality of normal vectors for the surface, each of the normal vectors being determined using an estimate of projector parameters for the projector that decompose the determined camera-projector homography using camera parameters;
code for selecting one of a plurality of clusters of the normal vectors according to a cost value of the cluster; and
code for projecting an image using the projector, wherein the projected image is rectified according to a vector from the selected cluster to the surface.
Priority Applications (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| AU2017204848A AU2017204848A1 (en) | 2017-07-13 | 2017-07-13 | Projecting rectified images on a surface using uncalibrated devices |
Applications Claiming Priority (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| AU2017204848A AU2017204848A1 (en) | 2017-07-13 | 2017-07-13 | Projecting rectified images on a surface using uncalibrated devices |
Publications (1)
| Publication Number | Publication Date |
|---|---|
| AU2017204848A1 true AU2017204848A1 (en) | 2019-01-31 |
Family
ID=65137546
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| AU2017204848A Abandoned AU2017204848A1 (en) | 2017-07-13 | 2017-07-13 | Projecting rectified images on a surface using uncalibrated devices |
Country Status (1)
| Country | Link |
|---|---|
| AU (1) | AU2017204848A1 (en) |
Cited By (4)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN113160049A (en) * | 2021-03-05 | 2021-07-23 | 深圳市普汇智联科技有限公司 | Multi-projector seamless splicing and fusing method based on splicing and fusing system |
| CN113160049B (en) * | 2021-03-05 | 2022-12-30 | 深圳市普汇智联科技有限公司 | Multi-projector seamless splicing and fusing method based on splicing and fusing system |
| CN113645456A (en) * | 2021-09-22 | 2021-11-12 | 业成科技(成都)有限公司 | Projection image correction method, projection system, and readable storage medium |
| CN113645456B (en) * | 2021-09-22 | 2023-11-07 | 业成科技(成都)有限公司 | Projection image correction method, projection system and readable storage medium |