
WO2013114988A1 - Information display device, information display system, information display method and program - Google Patents

Information display device, information display system, information display method and program Download PDF

Info

Publication number
WO2013114988A1
WO2013114988A1 PCT/JP2013/051044 JP2013051044W
Authority
WO
WIPO (PCT)
Prior art keywords
image
character
information
collation
image data
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Ceased
Application number
PCT/JP2013/051044
Other languages
French (fr)
Japanese (ja)
Inventor
尚司 谷内田
娜 劉
大輔 西脇
達勇 秋山
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
NEC Corp
Original Assignee
NEC Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by NEC Corp filed Critical NEC Corp
Publication of WO2013114988A1
Legal status: Ceased (current)


Classifications

    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 11/00 2D [Two Dimensional] image generation
    • G06T 11/60 Editing figures and text; Combining figures or text
    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 40/00 Handling natural language data
    • G06F 40/10 Text processing
    • G06F 40/103 Formatting, i.e. changing of presentation of documents
    • G06F 40/109 Font handling; Temporal or kinetic typography

Definitions

  • The present invention relates to a technique for translating character information captured in an image.
  • The above software takes a captured image of a target bearing characters, such as a signboard, poster, or menu, and translates the characters written on that target.
  • The method of analyzing words and morphemes has the problem that character information may be translated into a meaning different from the original or into an unnatural expression.
  • Collation image data containing the same character information as the character information in the captured image data is retrieved from a database, and a translation image indicated by translation image data associated in advance with the retrieved image data is displayed. For this reason, the character information is neither translated into a meaning different from the original nor rendered as an unnatural expression.
  • Patent Document 1 aims to translate and display place names written on a map, and discloses an information display device that displays an original map and a translated map, in which the place names written on the original map have been translated, on two separate screens. With this information display device, the character information in the captured image and the translated character information in the translated image can easily be compared, so the correspondence between them can be made easy to understand.
  • An object of the present invention is to provide an information display device, an information display system, an information display method, and a program capable of displaying character information in an easy-to-read manner even if the lengths of the character information before and after translation differ significantly.
  • An information display device according to the present invention includes: a display unit; an imaging unit that performs imaging and outputs captured image data; a storage unit that stores collation image data indicating a collation image containing character information, translation image data indicating a translation image containing translated character information obtained by translating the character information, and character position information indicating the correspondence between a reference point in the collation character range where the character information exists on the collation image and a reference point in the translation character range where the translated character information exists on the translation image; a collation unit that collates the captured image data with the collation image data to determine whether the character information is contained in the captured image indicated by the captured image data; and a control unit that, when the character information is contained in the captured image, displays the collation image and the translation image on the display unit with the reference points of the collation character range and the translation character range associated with each other based on the character position information.
  • An information display system according to the present invention is an information display system having a collation device and an information display device. The collation device includes: a storage unit that stores collation image data indicating a collation image containing character information, translation image data indicating a translation image containing translated character information obtained by translating the character information, and character position information indicating the correspondence between a reference point in the collation character range where the character information exists on the collation image and a reference point in the translation character range where the translated character information exists on the translation image; a first communication unit that receives captured image data from the information display device; a collation unit that collates the captured image data with the collation image data to determine whether the character information is contained in the captured image indicated by the captured image data; and a first control unit that, when the character information is contained in the captured image, acquires the collation image data, the translation image data, and the character position information from the storage unit and transmits them to the information display device as search information. The information display device includes: a display unit; an imaging unit that performs imaging and outputs the captured image data; a second communication unit that transmits the captured image data to the collation device and receives the search information from the collation device; and a second control unit that, based on the search information received by the second communication unit, displays the collation image and the translation image on the display unit with the reference points of the collation character range and the translation character range associated with each other.
  • An information display method according to the present invention stores collation image data indicating a collation image containing character information, translation image data indicating a translation image containing translated character information obtained by translating the character information, and character position information indicating the correspondence between a reference point in the collation character range where the character information exists on the collation image and a reference point in the translation character range where the translated character information exists on the translation image, and performs imaging.
  • A program according to the present invention causes a computer connected to a storage unit, which stores collation image data indicating a collation image containing character information, translation image data indicating a translation image containing translated character information obtained by translating the character information, and character position information indicating the correspondence between a reference point in the collation character range where the character information exists on the collation image and a reference point in the translation character range where the translated character information exists on the translation image, to display the collation image and the translation image with the reference points of the collation character range and the translation character range associated with each other based on the character position information.
  • According to the present invention, even if the lengths of the character information before and after translation differ significantly, the character information can be displayed in an easy-to-read manner.
  • FIG. 1 is a diagram illustrating the configuration of the information display apparatus according to the present embodiment.
  • the information display device 1 shown in FIG. 1 includes an input button unit 11, a rear camera unit 12, a display unit 13, a control signal input unit 14, a storage unit 15, an image collation unit 16, and a control unit 17.
  • the information display device 1 is, for example, a smartphone, a tablet terminal, or an information processing device having the same size as those.
  • When the input button unit 11 is pressed by the operator of the information display device 1, the input button unit 11 outputs a pressing signal indicating that it has been pressed.
  • the rear camera unit 12 is an imaging unit that performs imaging and outputs captured image data.
  • The rear camera unit 12 captures an imaging target such as a menu or a signboard and outputs captured image data indicating a captured image of that target. It is assumed that the imaging target contains character information.
  • the display unit 13 displays various images.
  • The display unit 13 is set to one of two display modes: a one-screen display mode for displaying one image and a two-screen display mode for displaying two images.
  • In the one-screen display mode, the display unit 13 displays a translation result image containing the translated character information obtained by translating the character information in the photographed image; in the two-screen display mode, it displays the translation result image together with a collation result image containing the character information in the photographed image. The translation result image and the collation result image are described in detail later.
  • the display unit 13 is provided on a predetermined surface (hereinafter referred to as the front surface) of the information display device 1, and the rear camera unit 12 is provided on the back surface that is the surface opposite to the front surface of the information display device 1.
  • control signal input unit 14 is a touch panel that detects contact or proximity by an input unit such as a finger or a stylus and outputs a detection signal indicating the contact or proximity position.
  • the control signal input unit 14 is provided so as to overlap the display unit 13.
  • the storage unit 15 stores storage information D1 that is a database for translating character information included in captured image data into another language.
  • the storage information D1 has data strings C11 to C1n as records.
  • Each data string C1i includes collation image data Xi, translated image data X1i, image position feature information Yi, and an image position conversion formula Zi.
  • n is an integer of 2 or more
  • i is an integer of 1 to n. There may be only one data string.
  • the collation image data Xi is image data that is collated with the captured image data output from the rear camera unit 12, and includes character information.
  • the translated image data X1i is image data including translated character information obtained by translating character information in the collation image data Xi into another language.
  • the language of character information and translated character information is not particularly limited as long as they are different languages.
  • The image position feature information Yi and the image position conversion formula Zi together constitute character position information indicating the collation character range where the character information exists in the collation image indicated by the collation image data Xi, the translation character range where the translated character information exists in the translation image indicated by the translation image data X1i, and the correspondence between them.
  • the image position feature information Yi indicates the correspondence between the reference point in the collation character range and the reference point in the translated character range.
  • The reference point is, for example, the start point or end point of the character information (translated character information), or the midpoint between the start point and the end point.
  • The image position conversion formula Zi is information indicating the ratio between the size of the collation character range in the collation image data Xi and the size of the translation character range in the translation image data X1i.
  • For example, the image position conversion formula Zi indicates the ratio between the length from the start point to the end point of the character information in the collation image and the length from the start point to the end point of the translated character information in the translation image.
  • collation image data X1 to Xn constitute a collation image information group D11.
  • the translation image data X11 to X1n constitute a translation image information group D12.
  • the image position feature information Y1 to Yn constitutes an image position feature information group D13.
  • the image position conversion formulas Z1 to Zn constitute an image position conversion formula group D14.
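  • As an illustration only (the class and field names below are assumptions, not taken from the patent), one data string C1i of the storage information D1 can be sketched as a record holding the collation image data Xi, the translation image data X1i, the image position feature information Yi, and the image position conversion formula Zi:

```python
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class CollationRecord:
    """One data string C1i of the storage information D1 (names are illustrative)."""
    collation_image: bytes             # collation image data Xi (encoded image)
    translation_image: bytes           # translation image data X1i (encoded image)
    # Image position feature information Yi: the corresponding reference points
    # of the collation character range and the translation character range.
    collation_ref_point: Tuple[int, int]
    translation_ref_point: Tuple[int, int]
    # Image position conversion formula Zi: ratio between the length of the
    # collation character range and the length of the translation character range.
    size_ratio: float

# The storage information D1 is a collection of such records (n >= 1).
storage_info_d1: List[CollationRecord] = []
```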
  • The image collation unit 16 collates the captured image data output from the rear camera unit 12 with each of the collation image data X1 to Xn in the storage information D1 stored in the storage unit 15, thereby searching for collation image data having the same character information as the character information in the captured image data. The image collation unit 16 then outputs the retrieved collation image data as search image data.
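  • The patent does not specify the matching algorithm used by the image collation unit 16; as a rough sketch only, using the record fields sketched above, the search can be expressed as a best-match lookup over the stored records, where the similarity function is an assumed placeholder:

```python
def search_collation_image(captured_image, records, similarity, threshold=0.8):
    """Return the record whose collation image best matches the captured image,
    or None when no record reaches the threshold. `similarity` is any function
    returning a score in [0, 1] for a pair of images (placeholder assumption)."""
    best_record, best_score = None, threshold
    for record in records:
        score = similarity(captured_image, record.collation_image)
        if score >= best_score:
            best_record, best_score = record, score
    return best_record
```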
  • the control unit 17 controls the entire information display device 1 by performing transmission and reception of signals and image data with each unit of the information display device 1.
  • the control unit 17 includes a CPU 18 and a calculation unit 19 having the following functions.
  • the CPU 18 receives a press signal from the input button unit 11, receives a detection signal from the control signal input unit 14, and receives search image data from the image matching unit 16.
  • the CPU 18 confirms the display mode of the display unit 13.
  • the CPU 18 acquires the translation image data corresponding to the search image data from the storage unit 15 and displays the translation image indicated by the translation image data on the display unit 13 as a translation result image.
  • the CPU 18 displays each of the translation images indicated by the plurality of translation image data corresponding to the plurality of search image data as a translation result image.
  • The CPU 18 acquires the translation image data and the image position feature information corresponding to the search image data from the storage unit 15. Then, based on the image position feature information, the CPU 18 associates the reference position in the collation character range of the search image data with the reference position in the translation character range of the translation image data, and displays the collation image and the translation image indicated by these data on the display unit 13 as the collation result image and the translation result image.
  • The CPU 18 sets, on the display unit 13, a collation display area for displaying the collation result image and a translation display area for displaying the translation result image, displays the collation image in the collation display area, and displays the translation image in the translation display area.
  • The CPU 18 displays the collation image and the translation image with the display position of the reference point of the character information and the display position of the translation reference point, which is the reference point of the translated character information, associated with each other based on the image position feature information. More specifically, the CPU 18 matches the display position of the reference point in the collation display area with the display position of the translation reference point in the translation display area, and displays the collation result image and the translation result image accordingly. For example, when the display position of the reference point is set to the center of the collation display area, the CPU 18 sets the display position of the translation reference point to the center of the translation display area.
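  • A minimal sketch of the alignment described above, in which the reference point of the collation image and the translation reference point of the translation image are drawn at the same relative position (here, the center) of their display areas; the function and parameter names are illustrative assumptions:

```python
def align_display_offsets(collation_ref, translation_ref,
                          collation_area_center, translation_area_center):
    """Return the top-left drawing offsets for the collation image and the
    translation image so that both reference points land on the centers of
    their respective display areas. All arguments are (x, y) pixel pairs;
    *_ref are reference points measured on the images themselves."""
    collation_offset = (collation_area_center[0] - collation_ref[0],
                        collation_area_center[1] - collation_ref[1])
    translation_offset = (translation_area_center[0] - translation_ref[0],
                          translation_area_center[1] - translation_ref[1])
    return collation_offset, translation_offset
```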
  • the CPU 18 includes an image movement mode and a pen input mode as operation modes for performing an operation according to the detection signal, and operates in one operation mode.
  • the image movement mode is an operation mode in which the collation result image and the translation result image are moved and scaled
  • the pen input mode is an operation mode in which the operator writes an image.
  • When the display mode of the display unit 13 is the one-screen display mode, the CPU 18 operates in the image movement mode. In this case, the CPU 18 moves and enlarges or reduces the translation result image according to the detection signal.
  • When moving the translation result image, the operator performs a first movement operation that inputs, to the control signal input unit 14, a first movement instruction to move the translation result image.
  • the first movement operation is, for example, an operation of bringing the input unit into contact with the translation display area and then moving the input unit to move the contact position between the input unit and the translation display area.
  • the CPU 18 receives a detection signal corresponding to the first movement operation from the control signal input unit 14 as a first movement instruction. In this case, the CPU 18 moves the translation result image displayed on the display unit 13 in the movement distance and the movement direction according to the first movement instruction.
  • When enlarging or reducing the translation result image, the operator performs a first enlargement/reduction operation that inputs, to the control signal input unit 14, a first enlargement/reduction instruction to enlarge or reduce the translation result image.
  • the first enlargement / reduction operation is, for example, an operation of bringing the input unit into contact with a slider area set in advance in the control signal input unit 14.
  • the CPU 18 receives a detection signal corresponding to the first enlargement / reduction operation from the control signal input unit 14 as a first enlargement / reduction instruction. In this case, the CPU 18 enlarges / reduces the translation result image displayed on the display unit 13 at an enlargement / reduction ratio corresponding to the first enlargement / reduction instruction.
  • the CPU 18 operates in the image moving mode or the pen input mode.
  • the CPU 18 moves and scales the collation result image and the translation result image in accordance with the detection signal.
  • When the operator performs the second movement operation, the CPU 18 moves the movement target image according to the movement instruction and, based on the character position information corresponding to the movement target image, also moves the collation image or the translation image that differs from the movement target image so that the reference points of the collation character range and the translation character range correspond to each other.
  • The second movement operation is, for example, an operation in which the input unit is brought into contact with the collation display area or the translation display area displaying the target image and is then moved so as to shift its contact position.
  • the CPU 18 receives a detection signal corresponding to the second movement operation from the control signal input unit 14 as a second movement instruction.
  • When the movement target image is a translation image, the CPU 18 moves the translation result image according to the second movement instruction, and also moves the collation result image so that the display position of the reference point of the collation result image in the collation display area matches the display position of the translation reference point of the moved translation result image in the translation display area.
  • When the target image is a collation image, the CPU 18 performs the same process and moves both the collation result image and the translation result image.
  • When the operator performs a second enlargement/reduction operation that inputs, to the control signal input unit 14, a second enlargement/reduction instruction to enlarge or reduce the enlargement/reduction target image, which is the collation result image or the translation result image, the CPU 18 enlarges or reduces, in accordance with the second enlargement/reduction instruction, not only the enlargement/reduction target image but also the collation result image or the translation result image that differs from the enlargement/reduction target image.
  • the second enlargement / reduction operation is, for example, an operation of bringing the input unit into contact with the slider area in the control signal input unit 14.
  • the slider area may be provided for each of the collation result image and the translation result image, or may be provided for one of the collation result image and the translation result image.
  • When a second enlargement/reduction operation whose target image is the translation result image is performed, the CPU 18 receives the detection signal corresponding to that operation from the control signal input unit 14 as the second enlargement/reduction instruction.
  • The CPU 18 determines the enlargement/reduction ratio according to the second enlargement/reduction instruction, enlarges or reduces the translation result image displayed on the display unit 13 at that ratio, and uses the calculation unit 19 to identify the collation character range in the collation result image, which differs from the enlargement/reduction target image.
  • The CPU 18 then enlarges or reduces the collation result image based on the identified collation character range so that the collation character range corresponds to the translation character range. For example, the CPU 18 enlarges the collation result image so that the enlarged collation character range and the translation character range have the same size. At this time, the CPU 18 may enlarge or reduce the collation result image and the translation result image around the reference point and the translation reference point, respectively, or around a point designated by the user. The processing of the calculation unit 19 is described later.
  • When operating in the pen input mode, the CPU 18 performs drawing on the collation result image and the translation result image according to the detection signal.
  • The CPU 18 draws not only on the drawing target image but also on the collation result image or the translation result image that differs from the drawing target image.
  • the drawing operation is, for example, an operation in which the input unit is brought into contact with the collation display region or the translation display region for displaying the drawing target image, and then the input unit is moved to move the contact position of the input unit.
  • The CPU 18 receives a detection signal corresponding to the drawing operation from the control signal input unit 14 as a drawing instruction.
  • the CPU 18 draws a figure corresponding to the drawing instruction in the translated character range in the translation result image that is the drawing target image. Then, the CPU 18 uses the calculation unit 19 to identify a collation character range in the collation result image that is different from the drawing target image, and draws the graphic in the identified collation character range.
  • The CPU 18 can switch the operation mode and the display mode based on a detection signal indicating a position within an area, set on the control signal input unit 14, for switching the operation mode of the CPU 18 or the display mode of the display unit 13.
  • the calculation unit 19 is used when the CPU 18 determines the collation character range or the translation character range.
  • When determining the collation character range, the CPU 18 transmits translation coordinate information indicating the size of the translation character range to the calculation unit 19; when determining the translation character range, the CPU 18 transmits collation coordinate information indicating the size of the collation character range to the calculation unit 19.
  • the calculation unit 19 acquires the image position conversion formula from the storage unit 15 when receiving the translated coordinate information from the CPU 18, for example.
  • the computing unit 19 calculates collation coordinate information indicating the size of the collation character range associated with the character position information based on the acquired image position conversion formula, and transmits the collation coordinate information to the CPU 18.
  • The CPU 18 determines the collation character range based on the received collation coordinate information and the image position feature information acquired from the storage unit 15.
  • the calculation unit 19 can calculate the translation coordinate information and transmit it to the CPU 18 by the same process as described above even when the collation coordinate information is received from the CPU 18.
  • In this way, the CPU 18 uses the calculation unit 19 to obtain, from the translation coordinate information indicating the size of the translation character range, the size of the collation character range associated with that translation character range in the character position information.
  • The CPU 18 and the calculation unit 19 thus function as the control unit 17, which determines, from the translation character range, the collation character range associated with it in the character position information.
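  • Assuming the image position conversion formula is the simple length ratio described above, the conversion performed by the calculation unit 19 can be sketched as follows (the function names are illustrative):

```python
def translation_to_collation_size(translation_char_size, size_ratio):
    """Convert the size (width, height) of the translation character range into
    the size of the corresponding collation character range. size_ratio stands
    for the image position conversion formula Zi, taken here as
    (collation length) / (translation length)."""
    w, h = translation_char_size
    return (w * size_ratio, h * size_ratio)

def collation_to_translation_size(collation_char_size, size_ratio):
    """Inverse conversion, used when the CPU sends collation coordinate
    information and needs the size of the translation character range."""
    w, h = collation_char_size
    return (w / size_ratio, h / size_ratio)
```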
  • FIG. 2 is a diagram showing an example of a user interface screen displayed on the display unit 13 in the single screen display mode.
  • the user interface screen shown in FIG. 2 has a 1-screen / 2-screen switch display area 101, a pen input switch display area 102, a translation display area 103, and a slider area 105.
  • the 1-screen / 2-screen switch display area 101 is an area for switching the display mode of the display unit 13.
  • When the CPU 18 receives a detection signal indicating a position in the 1-screen/2-screen switch display area 101, the CPU 18 switches the display mode of the display unit 13.
  • the pen input switch display area 102 is an area for switching the operation mode of the CPU 18.
  • In the one-screen display mode, the CPU 18 operates in the image movement mode; therefore, even when a detection signal indicating a position in the pen input switch display area 102 is received, the operation mode is not switched.
  • the translation display area 103 is an area for displaying a translation result image.
  • the translation result image corresponding to the captured image obtained by capturing the menu 2 is shown.
  • the slider area 105 is an area for enlarging / reducing the translation result image.
  • When the CPU 18 receives a detection signal indicating a position in the slider area 105, the CPU 18 enlarges or reduces the translation result image according to the detection signal.
  • When the contact position indicated by the detection signal is on the + side of the center of the slider area 105, the CPU 18 enlarges the translation result image; when the contact position is on the - side of the center, the CPU 18 reduces the translation result image. At this time, the CPU 18 determines the enlargement/reduction ratio according to the distance from the center of the slider area 105 to the contact position.
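  • One way the enlargement/reduction ratio could be derived from the contact position in the slider area 105 is sketched below; the linear mapping and the maximum zoom value are assumptions for illustration only:

```python
def slider_scale_factor(contact_x, slider_left, slider_right, max_zoom=2.0):
    """Map a contact position in the slider area to an enlargement/reduction
    ratio: the center of the slider gives 1.0 (no change), the + end gives
    max_zoom, and the - end gives 1 / max_zoom."""
    center = (slider_left + slider_right) / 2.0
    half_width = (slider_right - slider_left) / 2.0
    t = max(-1.0, min(1.0, (contact_x - center) / half_width))  # in [-1, 1]
    return max_zoom ** t  # > 1 enlarges, < 1 reduces
```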
  • FIG. 3 is a diagram illustrating an example of a user interface screen displayed on the display unit 13 in the two-screen display mode.
  • The user interface screen shown in FIG. 3 has a 1-screen/2-screen switch display area 101, a pen input switch display area 102, a translation display area 103, a collation display area 104, a slider area 105, and a pen display area 106.
  • In FIG. 3, the same components as those in FIG. 2 are denoted by the same reference numerals, and their description is omitted.
  • the collation display area 104 is an area for displaying a collation result image.
  • FIG. 3 shows a matching result image corresponding to a captured image obtained by capturing the menu 2.
  • In the two-screen display mode, the CPU 18 operates in either the image movement mode or the pen input mode, so when a detection signal indicating a position in the pen input switch display area 102 is received, the CPU 18 switches its own operation mode. Each time the input means touches the pen input switch display area 102, the operation mode of the CPU 18 toggles between the image movement mode and the pen input mode, so the pen input switch display area 102 effectively functions as a toggle switch.
  • In the pen input mode, the CPU 18 displays the pen display area 106 over the translation display area 103.
  • The CPU 18 aligns the tip of the pen display area 106 with the position indicated by the detection signal.
  • When the collation character range is displayed at the position indicated by the detection signal, the CPU 18 confirms, based on the character position information, the translation character range corresponding to that collation character range. The CPU 18 then draws the same figure on the collation character range and the translation character range.
  • For example, when an underline is drawn on the translated character information in the translation display area 103, an underline is also drawn on the corresponding character information in the collation display area 104.
  • the CPU 18 transmits translated coordinate information indicating the size of the translated character range equal to the length of the underline to the computing unit 19.
  • the computing unit 19 calculates the size of the collation character range corresponding to the translated character range based on the character position information from the received translation coordinate information based on the image position conversion formula, and transmits it to the CPU 18.
  • the CPU 18 draws an underline having a length equal to the size of the collation character range.
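  • A sketch of the underline synchronization described above: the length of an underline drawn over the translation character range is rescaled by the image position conversion formula before it is drawn over the collation character range. The function name and the example values are illustrative assumptions:

```python
def collation_underline_length(translation_underline_len, size_ratio):
    """Convert the length of an underline drawn in the translation character
    range into the length of the underline to draw in the collation character
    range. size_ratio stands for the image position conversion formula, taken
    here as (collation length) / (translation length)."""
    return translation_underline_len * size_ratio

# Example: a 120-pixel underline over the translated dish name maps to a
# 90-pixel underline over the original dish name when the ratio is 0.75.
print(collation_underline_length(120, 0.75))  # 90.0
```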
  • The CPU 18 displays the pen display area 106 with the position of its tip aligned with the drawing position.
  • When the CPU 18 receives a detection signal indicating a position in the slider area 105, the CPU 18 enlarges or reduces the collation result image and the translation result image according to the detection signal.
  • When the contact position is on the + side of the center of the slider area 105, the CPU 18 enlarges the images; when the contact position is on the - side of the center, the CPU 18 reduces them.
  • The CPU 18 determines the enlargement/reduction ratio according to the distance from the center of the slider area 105 to the contact position.
  • The CPU 18 can also be set to an operation mode in which pen input is performed on the translation display area 103 while the image in the collation display area 104 can be enlarged, reduced, or moved.
  • FIG. 4 is a flowchart for explaining an example of the operation of the information display apparatus 1 of the present embodiment.
  • the rear camera unit 12 captures an object and generates captured image data (step S301).
  • the rear camera unit 12 outputs the captured image data to the CPU 18.
  • When the CPU 18 accepts the captured image data, the CPU 18 outputs the captured image data to the image collation unit 16 (step S302).
  • The image collation unit 16 collates the captured image data with each of the collation image data X1 to Xn in the storage information D1 stored in the storage unit 15 and searches for collation image data having the same character information as the character information contained in the captured image data. The image collation unit 16 then transmits the search result to the CPU 18 (step S303). At this time, if collation image data can be retrieved, the image collation unit 16 transmits the retrieved collation image data as the search result; if no collation image data can be retrieved, it transmits, as the search result, an indication that there is no search image data.
  • When the CPU 18 receives the search result from the image collation unit 16, the CPU 18 checks the search result and determines whether there is search image data in the collation image information group D11 (step S304).
  • If there is no search image data, the CPU 18 displays an indication that there is no collation image data having the same character information as the character information contained in the captured image data, together with selection information asking whether to perform imaging again (step S313).
  • When imaging again is selected, the process of step S301 is performed.
  • If there is search image data in step S304, the CPU 18 sets itself to the image movement mode and sets the display unit 13 to the one-screen display mode. Then, the CPU 18 extracts the translation image data corresponding to the search image data from the storage information D1 stored in the storage unit 15 and displays the translation image indicated by the translation image data on the display unit 13 as the translation result image (step S305).
  • After step S305, the CPU 18 checks for the presence or absence of a detection signal (step S306).
  • the CPU 18 confirms whether or not there is a pressing signal and determines whether or not the input button unit 11 has been pressed (step S307).
  • If there is a detection signal in step S306, the CPU 18 determines, based on the detection signal, whether the 1-screen/2-screen switch display area 101 has been touched (step S308).
  • When the 1-screen/2-screen switch display area 101 has been touched in step S308, the CPU 18 switches the display mode of the display unit 13 and displays an image corresponding to the display mode after switching on the display unit 13 (step S309).
  • In the one-screen display mode, the CPU 18 displays the translation result image indicated by the translation image data extracted in step S305 on the display unit 13.
  • In the two-screen display mode, the CPU 18 checks the image position feature information corresponding to the translation result image data in the storage information D1 stored in the storage unit 15. Based on the image position feature information, the CPU 18 associates the display position of the translation reference point in the translation result image data with the display position of the reference point in the collation result image data, which is the search image data, and displays the translation result image data and the collation result image data on the display unit 13.
  • If the 1-screen/2-screen switch display area 101 has not been touched in step S308, the CPU 18 determines whether the operation mode is the pen input mode (step S310).
  • When the operation mode is the pen input mode, the CPU 18 superimposes the pen display area 106 on the translation result image displayed on the display unit 13 (step S311).
  • the CPU 18 operates in the image movement mode when not operating in the pen input mode (S312).
  • The CPU 18 returns to the process of step S306 when the process of step S309, S311, or S312 is complete.
  • the storage information D1 may include a plurality of translated image information groups D12 having different translated character information languages.
  • In this case, the CPU 18 may select one translation image information group D12 from the plurality of translation image information groups D12 based on the operator's language set in the information display device 1 and display the translation result image according to the selected group.
  • The CPU 18 may also display, on the display unit 13, an area for switching to a translation result image in another language so that the operator can select a language.
  • As described above, in the present embodiment, the collation image indicated by the collation image data having the same character information as the captured image data and the translation image indicated by the translation image data, in which the characters of the collation image data have been translated, are displayed with the reference point of the collation character range, where the character information exists, and the translation reference point of the translation character range, where the translated character information exists, associated with each other.
  • Therefore, even if the lengths of the character information and the translated character information differ significantly, both can be displayed in an easy-to-read manner.
  • In the present embodiment, the collation image and the translation image are moved and scaled so that the reference point and the translation reference point correspond to each other. Therefore, even when an image is moved or scaled, the correspondence between the character information and the translated character information remains easy to understand.
  • In the present embodiment, the information display device 1 further includes the control signal input unit 14, and when an enlargement/reduction instruction for the collation image or the translation image displayed on the display unit is input to the control signal input unit 14, the control unit 17 enlarges or reduces the collation image and the translation image around their respective reference points in accordance with the instruction.
  • the collation image and the translation image are enlarged / reduced with the enlargement / reduction centers associated with each other, so that the character information and the translated character information can be displayed in correspondence with each other. Therefore, it becomes easier for the operator to compare the positions of the character information or the translated character information that the operator wants to enlarge or reduce.
  • In the present embodiment, the character position information further indicates the ratio between the sizes of the character ranges.
  • In accordance with the enlargement/reduction instruction, the control unit 17 enlarges or reduces the enlargement/reduction target image around the reference point on that image, and also scales the collation image or the translation image that differs from the target image around its own reference point so that the character ranges remain associated with each other on the display unit based on the character position information.
  • Since the collation image and the translation image are enlarged or reduced with their character ranges kept associated, the character information and the translated character information can be displayed in correspondence with each other. Therefore, it becomes easier for the operator to compare the character ranges of interest.
  • In the present embodiment, the information display device 1 further includes the control signal input unit 14, and the character position information further indicates the ratio between the size of the character range and the size of the translation character range.
  • When a drawing instruction for drawing in the target character range, which is the collation character range or the translation character range, is input to the control signal input unit 14, the control unit 17 draws a figure corresponding to the drawing instruction in the target character range and, based on the character position information, also draws the figure in the character range different from the target character range.
  • FIG. 5 is a diagram illustrating a configuration of the information display device of the present embodiment.
  • The information display device 4 shown in FIG. 5 includes a front camera unit 41 and a control unit 42, and the control unit 42 includes a CPU 43 and the calculation unit 19. That is, the information display device 4 differs from the information display device 1 according to the first embodiment shown in FIG. 1 in that the front camera unit 41 is added and the CPU 18 is replaced with the CPU 43.
  • The front camera unit 41 is a second imaging unit that performs imaging and outputs second captured image data. Specifically, the front camera unit 41 is provided on the front surface of the information display device 4, on which the display unit 13 is provided. The front camera unit 41 images a person on the front side and outputs person-captured image data, which indicates the captured image of that person, as the second captured image data.
  • the CPU 43 has the same function as the CPU 18 shown in FIG. 1, and further has the following functions.
  • the CPU 43 specifies the number of persons included as subjects in the person captured image indicated by the person captured image data output from the front camera unit 41. Specifically, the CPU 43 detects the face of a person included as a subject in the person captured image, and specifies the number of faces as the number of persons.
  • The CPU 43 then displays either or both of the translation result image and the collation result image on the display unit 13 according to the specified number of people.
  • When the number of people is one, the CPU 43 sets the display mode of the display unit 13 to the one-screen display mode and displays either the translation result image or the collation result image on the display unit 13.
  • When the number of people is two or more, the CPU 43 determines, based on the detection result, whether there are people facing each other and displays the collation result image and the translation result image in directions according to the determination result.
  • When there are people facing each other, the CPU 43 sets the display mode of the display unit 13 to the two-screen facing display mode.
  • The two-screen facing display mode is a display mode for displaying the collation result image and the translation result image so that they are oriented in opposite directions.
  • When there is no person facing another, the CPU 43 sets the display mode of the display unit 13 to the two-screen display mode and displays both the translation result image and the collation result image in the same direction.
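  • The display-mode selection described above can be sketched as follows; the input format and the way a face of a person facing the operator is recognized are assumptions for illustration, since the patent does not specify how the determination is made:

```python
def choose_display_mode(faces):
    """Pick a display mode from the faces detected in the front-camera image.
    faces: list of dicts like {"box": (x, y, w, h), "upside_down": bool}, where
    "upside_down" marks a face that appears rotated roughly 180 degrees, i.e. a
    person looking at the device from the side opposite the operator (assumed
    input format).
    Returns 'one_screen', 'two_screen', or 'two_screen_facing'."""
    if len(faces) <= 1:
        return 'one_screen'            # single operator
    if any(f["upside_down"] for f in faces):
        return 'two_screen_facing'     # show the two images facing opposite ways
    return 'two_screen'                # show both images in the same direction
```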
  • FIG. 6 is a diagram illustrating an example of a user interface screen displayed on the display unit 13 in the two-screen facing display mode.
  • the CPU 43 displays the matching result image and the translation result image on the display unit 13 so that the matching result image and the translation result image are opposite to each other.
  • the collation result image and the translation result image can be displayed in directions that are easy for the operator 201 and the viewer 202 facing the operator 201 to view.
  • In the example of FIG. 6, the translation result image of the menu is displayed toward the operator 201 and the collation result image of the menu is displayed toward the viewer 202, who is a waiter. Therefore, when the operator 201, who is the person ordering, switches the screen to the pen input mode and underlines the name of the dish to be ordered on the menu in the translation result image, an underline is also drawn on the corresponding dish name in the collation result image facing the waiter, so the order can be placed reliably.
  • FIG. 7 is a flowchart showing a part of the operation of the information display device 4 of the present embodiment.
  • If there is search image data in the collation image information group D11 in step S304, the CPU 43 drives the front camera unit 41 to image a person.
  • the front camera unit 41 transmits person-captured image data indicating a person-captured image obtained by capturing the person to the CPU 43 (step S401).
  • When the CPU 43 receives the person-captured image data, the CPU 43 detects the faces of the persons included as subjects in the person-captured image indicated by the person-captured image data (step S402).
  • the CPU 43 identifies the number of persons based on the detection result, and determines whether the number is one (step S403).
  • If the number of persons is not one, the CPU 43 determines whether there are persons facing each other (step S404).
  • If there are persons facing each other, the CPU 43 displays the translation result image and the collation result image on the display unit 13 so that the translation result image faces the direction viewed by the operator 201 and the collation result image faces the direction viewed by the viewer 202 (step S405).
  • If there is no person facing another, the CPU 43 displays the translation result image and the collation result image on the display unit 13 in the same direction (step S406).
  • The process of step S306 is performed when step S405 or step S406 is complete.
  • As described above, in the present embodiment, the front camera unit 41 images a person and transmits the person-captured image data, and the control unit 42 detects the faces of the persons included as subjects in the person-captured image indicated by the person-captured image data, specifies the number of persons based on the detection result, and displays either or both of the translation image and the collation image on the display unit 13 according to the number of persons.
  • Therefore, since the display mode can be switched according to the number of operators or the like, the character information and the translated character information can be displayed according to the number of operators and used in a manner suited to the operator's purpose.
  • In the present embodiment, the control unit 42 displays the translation image and the collation image in orientations determined according to whether there are a plurality of persons facing each other.
  • Therefore, the character information and the translated character information are displayed in directions that are easy to read for both the operator and a viewer facing the operator, which improves the convenience of communication between them.
  • FIG. 8 is a diagram showing the configuration of the information display system of this embodiment.
  • the information display system shown in FIG. 8 includes a collation device 3 and an information display device 5.
  • the same components as those in FIG. 1 are denoted by the same reference numerals, and description thereof may be omitted.
  • the collation device 3 includes a communication unit 30, a storage unit 31, a CPU 32, and an image collation unit 33.
  • the information display device 5 includes an input button unit 11, a rear camera unit 12, a display unit 13, a control signal input unit 14, a communication unit 51, a storage unit 52, and a control unit 53.
  • the control unit 53 includes a CPU 54 and a calculation unit 19.
  • The communication unit 51 of the information display device 5 communicates with the collation device 3. For example, the communication unit 51 transmits the captured image data output from the rear camera unit 12 to the collation device 3. The communication unit 51 also receives, from the collation device 3, search information corresponding to the transmitted captured image data.
  • the search information is, for example, collation image data corresponding to the captured image data, translation image data, and character position information.
  • the storage unit 52 temporarily stores the search information received by the communication unit 51.
  • the CPU 54 displays the collation result image data and the translation result image data based on the search information stored in the storage unit 52.
  • the collation device 3 collates the captured image data with the image data stored in advance in the database, and searches for image data having character information that matches the character information in the captured image data.
  • The communication unit 30 of the collation device 3 communicates with the information display device 5.
  • The communication unit 30 receives the captured image data from the information display device 5.
  • the storage unit 31 stores the storage information D1 similarly to the storage unit 15 of the first embodiment.
  • the image collation unit 33 collates the captured image data received by the communication unit 30 with each of the collation image data X1 to Xn in the storage information D1 stored in the storage unit 31, and character information included in the captured image data Search for collation image data having the same character information. Then, the image collation unit 33 acquires the retrieved collation image data from the storage unit 31 and outputs it to the CPU 32.
  • When the CPU 32 receives the collation image data, the CPU 32 acquires the translation image data and the character position information corresponding to the collation image data from the storage unit 31 and transmits them, together with the collation image data, to the information display device 5 via the communication unit 30 as the search information.
  • FIG. 9 is a sequence diagram showing a part of the operation of the information display device 5 of the present embodiment.
  • The operation of the information display device 5 of the present embodiment shown in FIG. 9 is the same as the operation shown in the flowchart of FIG. 4, except that the processing between step S301 and step S304 is replaced with the processing shown in the sequence diagram of FIG. 9.
  • In addition, the CPU 18 in the flowchart shown in FIG. 4 is replaced with the CPU 54, and the storage unit 15 is replaced with the storage unit 52.
  • the CPU 54 proceeds to the process of step S501 after the process of step S301 is completed.
  • the rear camera unit 12 transmits the captured image data to the CPU 54.
  • The CPU 54 receives the captured image data and transmits it to the communication unit 51 (step S501).
  • The communication unit 51 receives the captured image data and transmits it to the communication unit 30 of the collation device 3 (step S502).
  • the communication unit 30 receives the captured image data and transmits it to the image collation unit 33 via the CPU 32 (step S503).
  • the image collation unit 33 collates the captured image data generated by the rear camera unit 12 with each of the collation image data X1 to Xn in the storage information D1 stored in the storage unit 31, and the characters included in the captured image data The collated image data having the character information that matches the information is searched. The image collation unit 33 then outputs the retrieved collation image data to the CPU 32 (step S504).
  • When the CPU 32 receives the retrieved collation image data, the CPU 32 acquires the translation image data and the character position information corresponding to the collation image data from the storage unit 31 and transmits them, together with the collation image data, to the communication unit 30 as the search information.
  • the communication unit 30 receives the search information and transmits the search information to the communication unit 51 of the information display device 5.
  • the communication unit 51 receives the search information (step S506).
  • the communication unit 51 transmits the search information to the CPU 54.
  • the CPU 54 receives the search information and temporarily stores the search information in the storage unit 52 (step S507).
  • CPU 54 proceeds to the process of step S304 after the process of step S507 is completed.
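  • A minimal sketch of the exchange described in steps S501 to S507 between the information display device 5 and the collation device 3; the function names and the shape of the search information are illustrative assumptions, and the transport is left abstract:

```python
# --- collation device 3 side (rough server-side handler) ---
def handle_captured_image(captured_image, records, find_match):
    """Collate the received captured image against the stored collation images
    and return the search information, or None when nothing matches.
    `find_match` stands in for the image collation unit 33 and returns one
    matched record (fields as in the record sketch earlier) or None."""
    record = find_match(captured_image, records)
    if record is None:
        return None
    # Search information: collation image data, translation image data and
    # character position information (see the description above).
    return {
        "collation_image": record.collation_image,
        "translation_image": record.translation_image,
        "char_position_info": (record.collation_ref_point,
                               record.translation_ref_point,
                               record.size_ratio),
    }

# --- information display device 5 side (rough client) ---
def request_search_info(send, receive, captured_image):
    """Send the captured image to the collation device and return the search
    information received from it. `send` and `receive` stand in for the
    communication unit 51; their implementation is outside this sketch."""
    send(captured_image)
    return receive()
```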
  • As described above, in the present embodiment, the collation device 3, which is separate from the information display device 5, stores the storage information D1 and collates the captured image data with the collation image data. This makes it possible to collate the captured image data with a larger amount of image data.
  • The functions of the information display devices 1, 4, and 5 and the collation device 3 described above may also be realized by recording a program for realizing these functions on a computer-readable recording medium and causing a computer to read and execute the program.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Health & Medical Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Computational Linguistics (AREA)
  • General Health & Medical Sciences (AREA)
  • General Engineering & Computer Science (AREA)
  • User Interface Of Digital Computer (AREA)

Description

Information display device, information display system, information display method and program

The present invention relates to a technique for translating character information captured in an image.

In recent years, mobile phone terminals with high processing performance, such as smartphones, have come into use, and as a result, a wide variety of software implemented on mobile phone terminals has been developed.

Among such software is software that, from a captured image of a target bearing characters, such as a signboard, poster, or menu, translates the characters written on the target into another language and displays the original photographed image together with a photographed image in which the characters have been translated. Such software is useful, for example, when the operator of the mobile phone terminal travels abroad.

Known methods for translating character information include a method of analyzing the words and morphemes in the character information and a method of using translation image data containing translated character information in which the character information has been translated in advance.

The method of analyzing words and morphemes has the problem that character information may be translated into a meaning different from the original or into an unnatural expression.

On the other hand, in the method using translation image data, collation image data containing the same character information as the character information in the captured image data is retrieved from a database, and a translation image indicated by translation image data associated in advance with the retrieved image data is displayed. For this reason, the character information is neither translated into a meaning different from the original nor rendered as an unnatural expression.

However, since the captured image and the translation image are merely displayed separately, there is a problem that the correspondence between the character information in the captured image and the translated character information in the translation image is difficult to understand.

In response, Patent Document 1 discloses an information display device intended to translate and display place names written on a map, which displays an original map and a translated map, in which the place names written on the original map have been translated, on two separate screens. With this information display device, the character information in the captured image and the translated character information in the translated image can easily be compared, so the correspondence between the character information in the translated image and the character information in the captured image can be made easy to understand.

JP 2003-330588 A

Since place names are generally short and their length does not vary much between languages, when a place name is translated as character information, its length does not change much before and after translation. For this reason, in the information display device disclosed in Patent Document 1, the area in which a place name is displayed does not change much before and after translation.

However, the length of dish names listed on a menu, or of guide text written on signboards installed at famous places and historic sites, can vary greatly between languages. Therefore, when the information display device disclosed in Patent Document 1 is applied to the translation of dish names or guide text, the range in which the character information is displayed changes greatly before and after translation, and the correspondence of the character information before and after translation can become difficult to understand.

An object of the present invention is to provide an information display device, an information display system, an information display method, and a program capable of displaying character information in an easy-to-view manner even when the length of the character information changes greatly between before and after translation.

An information display device according to the present invention includes: a display unit; an imaging unit that performs imaging and outputs captured image data; a storage unit that stores collation image data indicating a collation image containing character information, translated image data indicating a translated image containing translated character information into which the character information has been translated, and character position information indicating a correspondence relationship between a reference point within a collation character range in which the character information exists on the collation image and a reference point within a translated character range in which the translated character information exists on the translated image; a collation unit that collates the captured image data with the collation image data to determine whether the character information is contained in the captured image indicated by the captured image data; and a control unit that, when the character information is contained in the captured image, displays the collation image and the translated image on the display unit with the respective reference points of the collation character range and the translated character range made to correspond to each other, based on the character position information.

An information display system according to the present invention is an information display system including a collation device and an information display device. The collation device includes: a storage unit that stores collation image data indicating a collation image containing character information, translated image data indicating a translated image containing translated character information into which the character information has been translated, and character position information indicating a correspondence relationship between a reference point within a collation character range in which the character information exists on the collation image and a reference point within a translated character range in which the translated character information exists on the translated image; a first communication unit that receives captured image data from the information display device; a collation unit that collates the captured image data with the collation image data to determine whether the character information is contained in the captured image indicated by the captured image data; and a first control unit that, when the character information is contained in the captured image, acquires the collation image data, the translated image data, and the character position information from the storage unit and transmits them to the information display device as search information. The information display device includes: a display unit; an imaging unit that performs imaging and outputs the captured image data; a second communication unit that transmits the captured image data to the collation device and receives the search information from the collation device; and a second control unit that displays the collation image and the translated image on the display unit with the respective reference points of the collation character range and the translated character range made to correspond to each other, based on the search information received by the second communication unit.

An information display method according to the present invention includes: storing collation image data indicating a collation image containing character information, translated image data indicating a translated image containing translated character information into which the character information has been translated, and character position information indicating a correspondence relationship between a reference point within a collation character range in which the character information exists on the collation image and a reference point within a translated character range in which the translated character information exists on the translated image; performing imaging and outputting captured image data; collating the captured image data with the collation image data to determine whether the character information is contained in the captured image indicated by the captured image data; and, when the character information is contained in the captured image, displaying the collation image and the translated image with the respective reference points of the collation character range and the translated character range made to correspond to each other, based on the character position information.

A program according to the present invention causes a computer connected to a storage unit that stores collation image data indicating a collation image containing character information, translated image data indicating a translated image containing translated character information into which the character information has been translated, and character position information indicating a correspondence relationship between a reference point within a collation character range in which the character information exists on the collation image and a reference point within a translated character range in which the translated character information exists on the translated image, to execute: a procedure of performing imaging and outputting captured image data; a procedure of collating the captured image data with the collation image data to determine whether the character information is contained in the captured image indicated by the captured image data; and a procedure of, when the character information is contained in the captured image, displaying the collation image and the translated image with the respective reference points of the collation character range and the translated character range made to correspond to each other, based on the character position information.

According to the present invention, character information can be displayed in an easy-to-view manner even when the length of the character information changes greatly between before and after translation.

FIG. 1 is a diagram showing the configuration of an information display device according to a first embodiment of the present invention.
FIG. 2 is a diagram showing an example of a user interface screen in a one-screen display mode.
FIG. 3 is a diagram showing an example of a user interface screen in a two-screen display mode.
FIG. 4 is a flowchart for explaining an example of the operation of the information display device according to the first embodiment of the present invention.
FIG. 5 is a diagram showing the configuration of an information display device according to a second embodiment of the present invention.
FIG. 6 is a diagram showing an example of a user interface screen in a two-screen facing display mode.
FIG. 7 is a flowchart for explaining an example of the operation of the information display device according to the second embodiment of the present invention.
FIG. 8 is a diagram showing the configuration of an information display system according to a third embodiment of the present invention.
FIG. 9 is a sequence diagram for explaining an example of the operation of the information display system according to the third embodiment of the present invention.

Embodiments of the present invention will be described below with reference to the drawings. In the following description, components having the same function are given the same reference numerals, and their description may be omitted.

(First Embodiment)
FIG. 1 is a diagram showing the configuration of the information display device of the present embodiment. The information display device 1 shown in FIG. 1 includes an input button unit 11, a rear camera unit 12, a display unit 13, a control signal input unit 14, a storage unit 15, an image collation unit 16, and a control unit 17. The information display device 1 is, for example, a smartphone, a tablet terminal, or an information processing device of a comparable size.

When the input button unit 11 is pressed by an operator using the information display device 1, it outputs a press signal indicating that it has been pressed.

The rear camera unit 12 is an imaging unit that performs imaging and outputs captured image data. In the present embodiment, the rear camera unit 12 captures an imaging target such as a menu or a signboard and outputs captured image data indicating a captured image of that target. It is assumed that the imaging target contains character information.

The display unit 13 displays various images. In the present embodiment, the display unit 13 is set to one of two display modes: a one-screen display mode in which one image is displayed, and a two-screen display mode in which two images are displayed. In the one-screen display mode, the display unit 13 displays a translation result image containing translated character information into which the character information shown in the captured image has been translated; in the two-screen display mode, it displays the translation result image and a collation result image containing the character information shown in the captured image. The translation result image and the collation result image are described in detail later.

The display unit 13 is provided on a predetermined surface (hereinafter, the front surface) of the information display device 1, and the rear camera unit 12 is provided on the back surface, which is the surface opposite to the front surface of the information display device 1.

Various kinds of information are input to the control signal input unit 14 by the operator. In the present embodiment, the control signal input unit 14 is assumed to be a touch panel that detects contact or proximity of an input means such as a finger or a stylus and outputs a detection signal indicating the contacted or approached position. The control signal input unit 14 is provided so as to overlap the display unit 13.

The storage unit 15 stores storage information D1, which is a database for translating character information contained in captured image data into another language.

The storage information D1 has data strings C11 to C1n as records. Each data string C1i includes collation image data Xi, translated image data X1i, image position feature information Yi, and an image position conversion formula Zi. In the present embodiment, n is an integer of 2 or more, and i is an integer of 1 to n. There may also be only one data string.

The collation image data Xi is image data that is collated with the captured image data output from the rear camera unit 12, and contains character information.

The translated image data X1i is image data containing translated character information into which the character information in the collation image data Xi has been translated into another language. The languages of the character information and the translated character information are not particularly limited as long as they differ from each other.

The image position feature information Yi and the image position conversion formula Zi constitute character position information indicating the correspondence between the collation character range in which the character information exists in the collation image indicated by the collation image data Xi and the translated character range in which the translated character information exists in the translated image indicated by the translated image data X1i.

Specifically, the image position feature information Yi indicates the correspondence between a reference point within the collation character range and a reference point within the translated character range. The reference point is, for example, the start point of the character information (translated character information), its end point, or the midpoint between the start point and the end point.

The image position conversion formula Zi is information indicating the ratio between the size of the collation character range in the collation image data Xi and the size of the translated character range in the translated image data X1i. For example, the image position conversion formula Zi indicates the ratio between the length from the start point to the end point of the character information in the collation image and the length from the start point to the end point of the translated character information in the translated image.

The collation image data X1 to Xn constitute a collation image information group D11, the translated image data X11 to X1n constitute a translated image information group D12, the image position feature information Y1 to Yn constitute an image position feature information group D13, and the image position conversion formulas Z1 to Zn constitute an image position conversion formula group D14.
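
To make the structure of one data string C1i concrete, the following is a minimal sketch in Python of how its four pieces might be held together; the class and field names are illustrative assumptions and are not part of the specification.

    from dataclasses import dataclass

    @dataclass
    class Point:
        x: float
        y: float

    @dataclass
    class CharRange:
        start: Point  # start (writing) point of the character string
        end: Point    # end point of the character string

        def length(self) -> float:
            return ((self.end.x - self.start.x) ** 2 +
                    (self.end.y - self.start.y) ** 2) ** 0.5

    @dataclass
    class DataStringC1i:
        collation_image: bytes             # collation image data Xi
        translated_image: bytes            # translated image data X1i
        collation_range: CharRange         # collation character range on the collation image
        translated_range: CharRange        # translated character range on the translated image
        reference_point: Point             # reference point in the collation character range (part of Yi)
        translated_reference_point: Point  # corresponding reference point in the translated character range (part of Yi)

        def size_ratio(self) -> float:
            # image position conversion formula Zi: ratio of the two range lengths
            return self.translated_range.length() / self.collation_range.length()

Holding the two reference points and the length ratio alongside the image data is what allows the control unit described below to align and scale the two images at display time without re-detecting the characters.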

The image collation unit 16 collates the captured image data output from the rear camera unit 12 with each piece of collation image data X1 to Xn in the storage information D1 stored in the storage unit 15, and searches for collation image data having the same character information as the character information in the captured image data. The image collation unit 16 then outputs the retrieved collation image data as search image data.
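
The specification does not fix a particular matching algorithm, so the sketch below only shows the search structure of the image collation unit 16; the similarity function and the threshold are assumptions supplied for illustration.

    def search_collation_images(captured_image, data_strings, similarity, threshold=0.8):
        """Return the stored data strings whose collation image contains the same
        character information as the captured image.

        `similarity` is a caller-supplied function scoring how well two images
        match (0.0 to 1.0); an empty result corresponds to "no search image data".
        """
        matches = []
        for record in data_strings:
            if similarity(captured_image, record.collation_image) >= threshold:
                matches.append(record)
        return matches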

The control unit 17 controls the entire information display device 1 by exchanging signals and image data with each unit of the information display device 1. Specifically, the control unit 17 includes a CPU 18 and a calculation unit 19, which have the following functions.

The CPU 18 receives a press signal from the input button unit 11, a detection signal from the control signal input unit 14, and search image data from the image collation unit 16.

Upon receiving search image data, the CPU 18 checks the display mode of the display unit 13.

When the display mode is the one-screen display mode, the CPU 18 acquires the translated image data corresponding to the search image data from the storage unit 15, and displays the translated image indicated by that translated image data on the display unit 13 as the translation result image. When there are a plurality of pieces of search image data, the CPU 18 displays each of the translated images indicated by the corresponding plurality of pieces of translated image data as a translation result image.

When the display mode is the two-screen display mode, the CPU 18 acquires from the storage unit 15 the translated image data and the image position feature information corresponding to the search image data. Then, based on that image position feature information, the CPU 18 displays the collation image and the translated image indicated by the search image data and the translated image data on the display unit 13 as the collation result image and the translation result image, with the reference position within the collation character range of the search image data and the reference position within the translated character range of the translated image data made to correspond to each other.

More specifically, the CPU 18 sets on the display unit 13 a collation display area in which the collation result image is displayed and a translation display area in which the translation result image is displayed, displays the collation image in the collation display area, and displays the translated image in the translation display area.

At this time, based on the image position feature information, the CPU 18 displays the collation image and the translated image with the display position of the reference point of the character information and the display position of the translation reference point, which is the reference point of the translated character information, associated with each other. More specifically, the CPU 18 displays the collation result image and the translation result image so that the display position of the reference point within the collation display area and the display position of the translation reference point within the translation display area coincide. For example, when the reference point is displayed at the center of the collation display area, the CPU 18 places the translation reference point at the center of the translation display area.
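
The alignment described above amounts to computing, for each image, the drawing offset that places its reference point at the same relative position inside its own display area. A minimal sketch under simple assumptions (axis-aligned display areas, image coordinates measured from the top-left corner):

    def offset_for_alignment(area_size, ref_point, rel=(0.5, 0.5)):
        """Return the (dx, dy) at which to draw an image inside its display area
        so that its reference point lands at the relative position `rel` of the
        area ((0.5, 0.5) means the center of the area)."""
        area_w, area_h = area_size
        ref_x, ref_y = ref_point
        return (area_w * rel[0] - ref_x, area_h * rel[1] - ref_y)

    # Using the same relative position for both areas keeps the reference point of
    # the collation image and the translation reference point of the translated
    # image at matching on-screen positions.
    collation_offset = offset_for_alignment((400, 300), (120, 40))
    translation_offset = offset_for_alignment((400, 300), (80, 55))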

The CPU 18 also has an image movement mode and a pen input mode as operation modes in which it operates according to the detection signal, and operates in one of these modes. The image movement mode is an operation mode for moving and scaling the collation result image and the translation result image, and the pen input mode is an operation mode in which the operator writes onto an image.

When the display mode of the display unit 13 is the one-screen display mode, the CPU 18 operates in the image movement mode. In this case, the CPU 18 moves and scales the translation result image according to the detection signal.

For example, to move the translation result image, the operator performs a first movement operation that inputs to the control signal input unit 14 a first movement instruction for moving the translation result image. The first movement operation is, for example, an operation of bringing the input means into contact with the translation display area and then moving the input means so that the contact position between the input means and the translation display area moves.

When the first movement operation is performed, the CPU 18 receives from the control signal input unit 14 a detection signal corresponding to the first movement operation as the first movement instruction. In this case, the CPU 18 moves the translation result image displayed on the display unit 13 by the movement distance and in the movement direction corresponding to the first movement instruction.

To scale the translation result image, the operator performs a first scaling operation that inputs to the control signal input unit 14 a first scaling instruction for scaling the translation result image. The first scaling operation is, for example, an operation of bringing the input means into contact with a slider area set in advance in the control signal input unit 14.

When the first scaling operation is performed, the CPU 18 receives from the control signal input unit 14 a detection signal corresponding to the first scaling operation as the first scaling instruction. In this case, the CPU 18 scales the translation result image displayed on the display unit 13 by the scaling factor corresponding to the first scaling instruction.

When the display mode is the two-screen display mode, the CPU 18 operates in either the image movement mode or the pen input mode.

When operating in the image movement mode, the CPU 18 moves and scales the collation result image and the translation result image according to the detection signal.

For example, when the operator performs a second movement operation that inputs to the control signal input unit 14 a second movement instruction for moving a movement target image, which is the collation result image or the translation result image, the CPU 18 moves the movement target image according to the second movement instruction, and also moves the collation image or the translated image that is not the movement target image so that the respective reference points of the collation character range and the translated character range correspond to each other, based on the character position information corresponding to the movement target image. The second movement operation is, for example, an operation of bringing the input means into contact with the collation display area or the translation display area in which the target image is displayed and then moving the input means so that its contact position moves.

For example, when a second movement operation whose target image is the translation result image is performed, the CPU 18 receives from the control signal input unit 14 a detection signal corresponding to the second movement operation as the second movement instruction. The CPU 18 moves the translation result image, which is the movement target image, according to the second movement instruction, and also moves the collation result image so that the display position of the translation reference point of the moved translation result image within the translation display area and the display position of the reference point of the collation result image within the collation display area coincide.
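
Put differently, wherever the dragged image's reference point ends up within its display area, the other image is repositioned so that its reference point sits at the same place in its own area. A minimal sketch of this coupled update, reusing the same illustrative tuple-coordinate assumptions as above:

    def follow_move(drag_offset, dragged_ref, other_ref):
        """Given the dragged image's new drawing offset, return the drawing offset
        of the other image so that both reference points appear at the same
        position within their (equally sized) display areas."""
        ref_x = drag_offset[0] + dragged_ref[0]
        ref_y = drag_offset[1] + dragged_ref[1]
        return (ref_x - other_ref[0], ref_y - other_ref[1])

    new_translation_offset = (30, -10)  # offset of the translation result image after the drag
    new_collation_offset = follow_move(new_translation_offset,
                                       dragged_ref=(80, 55),   # translation reference point
                                       other_ref=(120, 40))    # collation reference point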

When the target image is the collation image, the CPU 18 performs processing similar to the above, moving both the collation result image and the translation result image.

When the operator performs a second scaling operation that inputs to the control signal input unit 14 a second scaling instruction for scaling a scaling target image, which is the collation result image or the translation result image, the CPU 18 scales, according to the second scaling instruction, not only the scaling target image but also the collation result image or the translation result image that is not the scaling target image. The second scaling operation is, for example, an operation of bringing the input means into contact with a slider area in the control signal input unit 14. The slider area may be provided for each of the collation result image and the translation result image, or for only one of them.

Specifically, when a second scaling operation whose scaling target image is the translation result image is performed, the CPU 18 receives from the control signal input unit 14 a detection signal corresponding to the second scaling operation as the second scaling instruction. In this case, the CPU 18 determines a scaling factor according to the second scaling instruction and scales the translation result image displayed on the display unit 13 by that factor, and also uses the calculation unit 19 to identify the collation character range in the collation result image, which is not the scaling target image. Based on that collation character range, the CPU 18 then scales the collation result image so that the collation character range and the translated character range correspond to each other. For example, the CPU 18 scales the collation result image so that the scaled collation character range and the scaled translated character range have the same size. At this time, the CPU 18 may scale the collation result image and the translation result image around the reference point and the translation reference point, respectively, or around points designated by the user. The processing of the calculation unit 19 is described later.
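
Making the two character ranges end up the same size amounts to multiplying the scaling factor chosen for the target image by the length ratio of the two ranges, which is exactly what the image position conversion formula records. A minimal sketch, assuming the lengths are expressed in the pixels of each stored image:

    def coupled_scale(target_scale, target_range_len, other_range_len):
        """Scaling factor for the non-target image so that, after scaling, both
        character ranges occupy the same on-screen length."""
        return target_scale * (target_range_len / other_range_len)

    # The translated character range (240 px) is twice as long as the collation
    # character range (120 px), so the collation result image must be scaled up
    # twice as much to keep the two ranges the same size on screen.
    collation_scale = coupled_scale(target_scale=1.5,
                                    target_range_len=240.0,
                                    other_range_len=120.0)
    # -> 3.0; both character ranges then span 360 px on the display.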

When operating in the pen input mode, the CPU 18 draws on the collation result image and the translation result image according to the detection signal.

For example, when the operator performs a drawing operation that draws on a drawing target image, which is the collation result image or the translation result image, the CPU 18 draws not only on the drawing target image but also on the collation result image or the translation result image that is not the target image. The drawing operation is, for example, an operation of bringing the input means into contact with the collation display area or the translation display area in which the drawing target image is displayed and then moving the input means so that its contact position moves.

More specifically, when a drawing operation whose drawing target image is the translation result image is performed, the CPU 18 receives from the control signal input unit 14 a detection signal corresponding to the drawing operation as a drawing instruction.

The CPU 18 draws a figure corresponding to the drawing instruction in the translated character range in the translation result image, which is the drawing target image. The CPU 18 then uses the calculation unit 19 to identify the collation character range in the collation result image, which is not the drawing target image, and draws the same figure in the identified collation character range.

Based on detection signals indicating positions within areas set on the control signal input unit 14 for switching the operation mode of the CPU 18 itself and for switching the display mode of the display unit 13, the CPU 18 can switch the operation mode and the display mode.

The calculation unit 19 is used when the CPU 18 determines the collation character range or the translated character range. When determining the collation character range, the CPU 18 transmits to the calculation unit 19 translation coordinate information indicating the size of the translated character range; when determining the translated character range, it transmits collation coordinate information indicating the size of the collation character range.

For example, upon receiving the translation coordinate information from the CPU 18, the calculation unit 19 acquires the image position conversion formula from the storage unit 15. Based on the acquired image position conversion formula, the calculation unit 19 calculates collation coordinate information indicating the size of the collation character range associated by the character position information, and transmits it to the CPU 18. The CPU 18 determines the collation character range based on the received collation coordinate information and the image position feature information acquired from the storage unit 15.

Even when the calculation unit 19 receives collation coordinate information from the CPU 18, it can calculate the translation coordinate information by processing similar to the above and transmit it to the CPU 18.
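
The conversion performed by the calculation unit 19 can be pictured as mapping a span measured inside one character range to the corresponding span inside the other range, using the stored ratio. A minimal sketch; the convention that the ratio is expressed as target-range length over source-range length is an assumption made for illustration:

    def convert_span(span_len, ratio_target_over_source):
        """Map a span length measured in the source character range to the
        corresponding length in the target character range, using the image
        position conversion formula expressed as target length / source length."""
        return span_len * ratio_target_over_source

    # An underline 200 px long under the translated character information maps to
    # a 100 px underline under the original character information when the
    # collation character range is half as long as the translated character range.
    underline_in_collation = convert_span(200.0, ratio_target_over_source=0.5)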

As described above, since the CPU 18 uses the calculation unit 19 to obtain, from the translation coordinate information indicating the size of the translated character range, the collation coordinate information indicating the size of the collation character range associated with that translated character range by the character position information, the CPU 18 and the calculation unit 19 function as the control unit 17, which determines, from the translated character range, the collation character range associated with the translated character range by the character position information.

The user interface screens displayed on the display unit 13 are described below.

FIG. 2 is a diagram showing an example of the user interface screen displayed on the display unit 13 in the one-screen display mode.

The user interface screen shown in FIG. 2 has a one-screen/two-screen switch display area 101, a pen input switch display area 102, a translation display area 103, and a slider area 105.

The one-screen/two-screen switch display area 101 is an area for switching the display mode of the display unit 13. When the CPU 18 receives a detection signal indicating a position within the one-screen/two-screen switch display area 101, it switches the display mode of the display unit 13.

The pen input switch display area 102 is an area for switching the operation mode of the CPU 18. When the display unit 13 is in the one-screen display mode, the CPU 18 operates in the image movement mode, so it does not switch the operation mode even if it receives a detection signal indicating a position within the pen input switch display area 102.

The translation display area 103 is an area for displaying the translation result image. FIG. 2 shows a translation result image corresponding to a captured image of a menu 2.

The slider area 105 is an area for scaling the translation result image. When the CPU 18 receives a detection signal indicating a position within the slider area 105, it scales the translation result image according to that detection signal.

For example, when the contact position indicated by the detection signal is on the + side of the center of the slider area 105, the CPU 18 enlarges the translation result image, and when the contact position is on the − side of the center of the slider area 105, it reduces the translation result image. At this time, the CPU 18 determines the scaling factor according to the distance from the center of the slider area 105 to the contact position.
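
How the distance from the slider center becomes a scaling factor is left open by the description; the sketch below shows one plausible linear mapping, in which the sensitivity constant and the lower bound are assumptions:

    def slider_to_scale(contact_pos, slider_center, sensitivity=0.01):
        """Map a contact position on the slider area to a scaling factor:
        positions on the + side of the center enlarge the image (factor > 1),
        positions on the - side reduce it (factor < 1), and the factor grows
        with the distance from the center."""
        offset = contact_pos - slider_center
        return max(0.1, 1.0 + sensitivity * offset)

    enlarge_factor = slider_to_scale(contact_pos=260, slider_center=200)  # 1.6
    reduce_factor = slider_to_scale(contact_pos=150, slider_center=200)   # 0.5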

FIG. 3 is a diagram showing an example of the user interface screen displayed on the display unit 13 in the two-screen display mode.

The user interface screen shown in FIG. 3 has a one-screen/two-screen switch display area 101, a pen input switch display area 102, a translation display area 103, a collation display area 104, a slider area 105, and a pen display area 106. In FIG. 3, components that are the same as in FIG. 2 are given the same reference numerals, and their description is omitted.

The collation display area 104 is an area for displaying the collation result image. FIG. 3 shows a collation result image corresponding to a captured image of the menu 2.

In the two-screen display mode, the CPU 18 operates in either the image movement mode or the pen input mode, so when it receives a detection signal indicating a position within the pen input switch display area 102, it switches its own operation mode. Therefore, each time the pen input switch display area 102 is touched by the input means, the operation mode of the CPU 18 switches between the image movement mode and the pen input mode, so the pen input switch display area 102 functions as the equivalent of a toggle switch.

In the pen input mode, the CPU 18 displays a pen display area 106 on the translation display area 103.

If, after the pen display area 106 is displayed, the first detection signal indicates a position on the translation display area 103, the CPU 18 aligns the tip of the pen display area 106 with that position.

Then, when a collation character range is displayed at the position indicated by the detection signal, the CPU 18 checks the translated character range associated with that collation character range by the character position information, and draws the same figure on both the collation character range and the translated character range.

For example, when an underline is drawn on the translated character information in the translation display area 103, an underline is drawn on the character information in the collation display area 104.

At this time, the CPU 18 transmits to the calculation unit 19 translation coordinate information indicating a translated character range size equal to the length of the underline. From the received translation coordinate information, the calculation unit 19 calculates, based on the image position conversion formula, the size of the collation character range associated with the translated character range by the character position information, and transmits it to the CPU 18. The CPU 18 then draws in the collation character range an underline whose length is equal to that size.

If, after the pen display area 106 is displayed, the first detection signal indicates a position on the collation display area 104, the CPU 18 displays the pen display area 106 with its tip aligned with that position.

In the image movement mode, when the CPU 18 receives a detection signal indicating a position within the slider area 105, it scales the collation result image and the translation result image according to the detection signal.

For example, when the operator 201 brings the input means into contact with a position closer to + than the center of the slider area 105, the CPU 18 enlarges the translation result image, and when the operator brings it into contact with a position closer to − than the center, the CPU 18 reduces the translation result image. At this time, the CPU 18 determines the scaling factor according to the distance from the center of the slider area 105 to the contact position.

When the display unit 13 is in the two-screen display mode, it is also possible to set an operation mode of the CPU 18 in which pen input can be performed on the translation display area 103 while the image on the collation display area 104 can be scaled and moved.

FIG. 4 is a flowchart for explaining an example of the operation of the information display device 1 of the present embodiment.

First, the rear camera unit 12 captures the target and generates captured image data (step S301).

The rear camera unit 12 outputs the captured image data to the CPU 18. Upon receiving the captured image data, the CPU 18 outputs it to the image collation unit 16 (step S302).

Upon receiving the captured image data, the image collation unit 16 collates it with each piece of collation image data X1 to Xn in the storage information D1 stored in the storage unit 15, and searches for collation image data having the same character information as the character information contained in the captured image data. The image collation unit 16 then transmits the search result to the CPU 18 (step S303). At this time, if collation image data could be retrieved, the image collation unit 16 transmits the retrieved collation image data as the search result; if not, it transmits, as the search result, an indication that there is no search image data.

Upon receiving the search result from the image collation unit 16, the CPU 18 checks the search result and determines whether there is search image data in the collation image information group D11 (step S304).

If there is no search image data, the CPU 18 displays on the display unit 13 an indication that there is no collation image data having the same character information as the character information contained in the captured image data, together with selection information requesting a choice of whether to perform imaging again (step S313).

Thereafter, upon receiving a detection signal from the control signal input unit 14, the CPU 18 checks the detection signal and determines whether to perform imaging again, thereby determining whether to end the processing (step S314). If imaging is not to be performed again, the CPU 18 ends the processing. If imaging is to be performed again, the processing of step S301 is executed.

If there is search image data in step S304, the CPU 18 sets itself to the image movement mode and sets the display unit 13 to the one-screen display mode. The CPU 18 then extracts the translated image data corresponding to the search image data from the storage information D1 stored in the storage unit 15, and displays the translated image indicated by that translated image data on the display unit 13 as the translation result image (step S305).

After displaying the translation result image in step S305, the CPU 18 checks whether there is a detection signal (step S306).

If there is no detection signal, the CPU 18 checks whether there is a press signal and determines whether the input button unit 11 has been pressed (step S307).

If the input button unit 11 has been pressed, the CPU 18 ends the processing.

If the input button unit 11 has not been pressed, the CPU 18 returns to the processing of step S306.

If there is a detection signal in step S306, the CPU 18 determines, based on the detection signal, whether the one-screen/two-screen switch display area 101 has been touched (step S308).

If the one-screen/two-screen switch display area 101 has been touched in step S308, the CPU 18 switches the display mode of the display unit 13 and displays on the display unit 13 an image corresponding to the display mode after switching (step S309).

For example, when the display unit 13 enters the one-screen display mode, the CPU 18 displays on the display unit 13 the translation result image indicated by the translated image data extracted in step S305.

When the display unit 13 enters the two-screen display mode, the CPU 18 checks the image position feature information corresponding to the translation result image data in the storage information D1 stored in the storage unit 15. Based on that image position feature information, the CPU 18 displays the translation result image data and the collation result image data on the display unit 13 with the display position of the translation reference point in the translation result image data and the display position of the reference point in the collation result image data, which is the search image data, associated with each other.

If the one-screen/two-screen switch display area 101 has not been touched in step S308, the CPU 18 determines whether the operation mode is the pen input mode (step S310).

If the operation mode is the pen input mode, the CPU 18 displays the pen display area 106 superimposed on the translation result image displayed on the display unit 13 (step S311).

If the CPU 18 is not operating in the pen input mode, it operates in the image movement mode (step S312).

When the processing of step S309, S311, or S312 is finished, the CPU 18 returns to the processing of step S306.

The storage information D1 may contain a plurality of translated image information groups D12 whose translated character information is in different languages. In this case, the CPU 18 selects one translated image information group D12 from the plurality of translated image information groups D12 based on the operator's language set in the information display device 1, and displays the translation result image according to the selected translated image information group D12. At this time, the CPU 18 may display on the display unit 13 an area for switching to a translation result image in another language so that the operator can select a language.

As described above, according to the present embodiment, the reference point of the collation character range in which the character information exists in the collation image data having the same character information as the captured image data is associated with the translation reference point of the translated character range in which the translated character information exists in the translated image data into which the characters of the collation image data have been translated, and the collation image indicated by the collation image data and the translated image indicated by the translated image data are displayed accordingly.

Therefore, even if the lengths of the character information contained in the collation image and the translated character information contained in the translated image differ greatly, the character information and the translated character information can be displayed in an easy-to-view manner.

In the present embodiment, the collation image and the translated image are moved and scaled so that the reference point and the translation reference point correspond to each other, so even when the images are moved or scaled, the correspondence between the character information and the translated character information remains easy to understand.

In the present embodiment, the information display device 1 further has the control signal input unit 14, and when a scaling instruction for scaling a scaling target image, which is the collation image or the translated image displayed on the display unit, is input to the control signal input unit 14, the control unit 17 scales each of the collation image and the translated image around its respective reference point according to the scaling instruction.

Therefore, the collation image and the translated image are scaled while the centers of scaling remain associated with each other, so the character information and the translated character information can be displayed in correspondence and in an easy-to-view manner. This makes it easier for the operator to compare the positions of the character information or the translated character information that the operator wants to view enlarged or reduced.

In the present embodiment, in the information display device 1, the character position information further indicates the ratio of the sizes of the character ranges, and when a scaling instruction is input, the control unit 17 scales the scaling target image around the reference point on the scaling target image according to the scaling instruction, and also scales the collation image or the translated image that is not the scaling target image around the reference point on that image, with the character ranges on the display unit made to correspond to each other based on the character position information.

Therefore, the collation image and the translated image are scaled while the character ranges remain associated with each other, so the character information and the translated character information can be displayed in correspondence and in an easy-to-view manner. This makes it easier for the operator to compare the character ranges that the operator wants to view enlarged or reduced.

In the present embodiment, the information display device 1 further has the control signal input unit 14, the character position information further indicates the ratio between the size of the collation character range and the size of the translated character range, and when a drawing instruction for drawing within a target character range, which is the collation character range or the translated character range, is input to the control signal input unit 14, the control unit 17 draws a figure corresponding to the drawing instruction within the target character range, and also draws the figure within the character range that is not the target character range, based on the character position information.

Because the figure is drawn on both the collation image and the translation image with the character ranges kept associated, the correspondence between the character information and the translated character information is made explicit.
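One plausible way to mirror such a drawing (again only a sketch; the patent does not prescribe an implementation) is to normalize each stroke point within the source character range and re-project it into the corresponding range of the other image:

```python
# Hypothetical rectangles (left, top, width, height) of a character range,
# assumed to be derivable from the stored character position information.
Rect = tuple[float, float, float, float]

def mirror_point(p: tuple[float, float], src: Rect, dst: Rect) -> tuple[float, float]:
    """Map a point drawn inside the source character range to the
    corresponding position inside the destination character range."""
    sx, sy, sw, sh = src
    dx, dy, dw, dh = dst
    u = (p[0] - sx) / sw  # normalized position inside the source range
    v = (p[1] - sy) / sh
    return (dx + u * dw, dy + v * dh)

def mirror_stroke(stroke, src: Rect, dst: Rect):
    # A stroke (e.g. an underline) is mirrored point by point, so the same
    # figure appears in both character ranges.
    return [mirror_point(p, src, dst) for p in stroke]
```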

(Second Embodiment)
FIG. 5 is a diagram showing the configuration of the information display device of the present embodiment.

The information display device 4 shown in FIG. 5 includes the input button unit 11, the rear camera unit 12, the display unit 13, the control signal input unit 14, the storage unit 15, the image collation unit 16, a front camera unit 41, and a control unit 42. The control unit 42 includes a CPU 43 and the calculation unit 19. That is, the information display device 4 differs from the information display device 1 of the first embodiment shown in FIG. 1 in that the front camera unit 41 is added and the CPU 18 is replaced with the CPU 43.

The front camera unit 41 is a second imaging unit that performs imaging and outputs second captured image data. Specifically, the front camera unit 41 is provided on the front of the information display device 4, that is, the surface on which the display unit 13 is provided; it images a person on the front side and outputs person captured image data, representing the captured image of that person, as the second captured image data.

The CPU 43 has the same functions as the CPU 18 shown in FIG. 1, and additionally has the following functions.

The CPU 43 identifies the number of persons included as subjects in the person captured image represented by the person captured image data output from the front camera unit 41. Specifically, the CPU 43 detects the faces of the persons included as subjects in the person captured image and takes the number of detected faces as the number of persons.
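As an illustration only (the patent does not specify a face detection algorithm), counting faces in the front-camera frame could be done with an off-the-shelf detector such as OpenCV's Haar cascade:

```python
import cv2  # assumes the opencv-python package is available

def count_faces(person_image_bgr) -> int:
    """Return the number of faces detected in the front-camera image."""
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    gray = cv2.cvtColor(person_image_bgr, cv2.COLOR_BGR2GRAY)
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    # The number of persons is taken to be the number of detected faces.
    return len(faces)
```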

The CPU 43 then displays one or both of the translation result image and the collation result image on the display unit 13 according to the identified number of persons.

Specifically, when the number of persons is one, the CPU 43 sets the display mode of the display unit 13 to the single-screen display mode and displays either the translation result image or the collation result image on the display unit 13.

When there are a plurality of persons, the CPU 43 determines, on the basis of the detection result, whether persons facing each other are present, and displays the collation result image and the translation result image in orientations corresponding to the result of the determination.

For example, when facing persons are present, the CPU 43 sets the display mode of the display unit 13 to the two-screen facing display mode. The two-screen facing display mode is a display mode for displaying the collation result image and the translation result image so that they are oriented in opposite directions.

When no facing persons are present, the CPU 43 sets the display mode of the display unit 13 to the two-screen display mode and displays both the translation result image and the collation result image in the same orientation.

FIG. 6 is a diagram showing an example of the user interface screen displayed on the display unit 13 in the two-screen facing display mode.

As shown in FIG. 6, in the two-screen facing display mode the CPU 43 displays the collation result image and the translation result image on the display unit 13 so that they are oriented in opposite directions; the collation result image and the translation result image can therefore each be displayed in the direction that is easy to view for the operator 201 and for the viewer 202 facing the operator 201.

Thus, for example, when a menu is imaged, the menu of the translation result image is displayed toward the operator 201 and the menu of the collation result image is displayed toward the viewer 202, who is a waiter. If the operator 201, who is placing the order, switches the screen to the pen input mode and underlines the name of the dish to be ordered on the menu in the translation result image, the corresponding dish name on the menu in the collation result image facing the waiter is also underlined, so the order can be placed without mistakes.

FIG. 7 is a flowchart showing part of the operation of the information display device 4 of the present embodiment.

The operation of the information display device 4 of the present embodiment is obtained by inserting the processing of steps S401 to S406 shown in FIG. 7 between the processing of step S304 and step S305 in the flowchart of FIG. 4. The processing performed by the CPU 18 in each step of the flowchart of FIG. 4, however, is performed by the CPU 43.

In step S304, when retrieved image data exists in the collation image information group D11, the CPU 43 drives the front camera unit 41 and causes it to image a person. The front camera unit 41 transmits to the CPU 43 person captured image data representing the person captured image obtained by imaging that person (step S401).

Upon receiving the person captured image data, the CPU 43 detects the faces of the persons included as subjects in the person captured image represented by the person captured image data (step S402).

On the basis of the detection result, the CPU 43 identifies the number of persons and determines whether the number is one (step S403).

When the number of persons is one, the CPU 43 proceeds to the processing of step S305. When the number of persons is not one, the CPU 43 determines whether persons facing each other are present (step S404).

When facing persons are present, the CPU 43 displays the translation result image and the collation result image on the display unit 13 so that the translation result image faces the direction in which the operator 201 looks and the collation result image faces the direction in which the viewer 202 looks (step S405).

When no facing persons are present, the CPU 43 displays the translation result image and the collation result image on the display unit 13 in the same orientation (step S406).

When step S405 or step S406 is completed, step S306 is executed.
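The branch inserted between steps S304 and S305 can be summarized by the sketch below; `is_facing_pair` stands in for whatever orientation test the device uses and is purely a placeholder, as the patent only states that facing persons are detected from the front-camera image:

```python
def is_facing_pair(faces) -> bool:
    # Placeholder: the patent does not specify how facing persons are
    # detected; a real device might compare estimated face orientations.
    raise NotImplementedError

def choose_display_mode(faces) -> str:
    """Return a display mode for steps S401-S406 (mode names are illustrative)."""
    if len(faces) <= 1:
        return "single_screen"             # S403: one person -> one image (S305)
    if is_facing_pair(faces):              # S404: facing persons present?
        return "two_screen_facing"         # S405: images in opposite orientations
    return "two_screen_same_direction"     # S406: both images in the same orientation
```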

As described above, according to the present embodiment, in the information display device 4, the front camera unit 41 images a person and transmits person captured image data, and the control unit 42 detects the faces of the persons included as subjects in the person captured image represented by the person captured image data, identifies the number of persons on the basis of the detection result, and displays one or both of the translation image and the collation image on the display unit 13 according to that number.

Because the display mode is switched according to the number of operators and other people present, the character information or the translated character information can be displayed according to the number of people, allowing the device to be used in a way suited to the operator's purpose.

In the present embodiment, when there are a plurality of persons, the control unit 42 of the information display device 4 displays the translation image and the collation image while changing the orientation of the translation image or the collation image depending on whether a plurality of facing persons are present.

Because the character information and the translated character information are displayed so as to be easy to read depending on whether the operator and the viewer face each other, convenience, for example when the operator and the viewer communicate with each other, can be improved.

(Third Embodiment)
FIG. 8 is a diagram showing the configuration of the information display system of the present embodiment.

The information display system shown in FIG. 8 includes a collation device 3 and an information display device 5. In FIG. 8, components similar to those in FIG. 1 are denoted by the same reference numerals, and their description may be omitted.

The collation device 3 includes a communication unit 30, a storage unit 31, a CPU 32, and an image collation unit 33. The information display device 5 includes the input button unit 11, the rear camera unit 12, the display unit 13, the control signal input unit 14, a communication unit 51, a storage unit 52, and a control unit 53. The control unit 53 includes a CPU 54 and the calculation unit 19.

The communication unit 51 of the information display device 5 communicates with the collation device 3. For example, the communication unit 51 transmits the captured image data output from the rear camera unit 12 to the collation device 3, and receives from the collation device 3 the search information corresponding to the transmitted captured image data. The search information is, for example, the collation image data, the translation image data, and the character position information corresponding to the captured image data.

The storage unit 52 temporarily stores the search information received by the communication unit 51.

The CPU 54 displays the collation result image data and the translation result image data on the basis of the search information stored in the storage unit 52.

The collation device 3 collates the captured image data with the image data held in advance in its database and searches for image data having character information that matches the character information in the captured image data.

The communication unit 30 of the collation device 3 communicates with the information display device 5. For example, the communication unit 30 receives captured image data from the information display device 5.

The storage unit 31 stores the storage information D1 in the same manner as the storage unit 15 of the first embodiment.

The image collation unit 33 collates the captured image data received by the communication unit 30 with each of the collation image data X1 to Xn in the storage information D1 stored in the storage unit 31, and searches for collation image data having the same character information as the character information contained in the captured image data. The image collation unit 33 then acquires the retrieved collation image data from the storage unit 31 and outputs it to the CPU 32.

Upon receiving the collation image data, the CPU 32 acquires the translation image data and the character position information corresponding to that collation image data from the storage unit 31, and transmits the collation image data, the translation image data, and the character position information to the information display device 5 via the communication unit 30 as search information.
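A minimal sketch of the collation device's request handling follows; the helper names and the fields of the returned search information are assumptions, since the patent does not define a data format for the exchange:

```python
# Sketch of how the collation device might serve one request; the record
# structure and the text-matching helper are hypothetical.

def handle_capture(captured_image_data, storage):
    """Match the received captured image against the stored collation images
    and, on a hit, return the corresponding search information."""
    for record in storage.records:  # each record holds one of X1..Xn plus metadata
        if images_contain_same_text(captured_image_data, record.collation_image):
            return {
                "collation_image": record.collation_image,
                "translation_image": record.translation_image,
                "character_position_info": record.character_position_info,
            }
    return None  # no collation image with matching character information

def images_contain_same_text(captured, collation) -> bool:
    # Placeholder for the image-collation step performed by unit 33.
    raise NotImplementedError
```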

FIG. 9 is a sequence diagram showing part of the operation of the information display device 5 of the present embodiment.

The operation of the information display device 5 of the present embodiment is the same as the operation shown in the flowchart of FIG. 4, except that the processing between step S301 and step S304 is as shown in the sequence diagram of FIG. 9. In the flowchart of FIG. 4, however, the CPU 18 is replaced with the CPU 54 and the storage unit 15 is replaced with the storage unit 52.

In the flowchart shown in FIG. 4, after the processing of step S301 is completed, the CPU 54 proceeds to the processing of step S501.

The rear camera unit 12 transmits the captured image data to the CPU 54. The CPU 54 receives the captured image data and transmits it to the communication unit 51 (step S501).

The communication unit 51 receives the captured image data and transmits it to the communication unit 30 of the collation device 3 (step S502).

The communication unit 30 receives the captured image data and transmits it to the image collation unit 33 via the CPU 32 (step S503).

The image collation unit 33 collates the captured image data generated by the rear camera unit 12 with each of the collation image data X1 to Xn in the storage information D1 stored in the storage unit 31, and searches for collation image data having character information that matches the character information contained in the captured image data. The image collation unit 33 then outputs the retrieved collation image data to the CPU 32 (step S504).

Upon receiving the collation image data, the CPU 32 acquires the translation image data and the character position information corresponding to that collation image data from the storage unit 31, and transmits the collation image data, the translation image data, and the character position information to the communication unit 30 as search information (step S505).

The communication unit 30 receives the search information and transmits it to the communication unit 51 of the information display device 5. The communication unit 51 receives the search information (step S506).

The communication unit 51 transmits the search information to the CPU 54. The CPU 54 receives the search information and temporarily stores it in the storage unit 52 (step S507).

After the processing of step S507 is completed, the CPU 54 proceeds to the processing of step S304.
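On the information display device side, the sequence S501 to S507 amounts to a simple request/response exchange. The sketch below assumes a `transport` and `storage` abstraction over communication unit 51 and storage unit 52; these names are illustrative only:

```python
# Client-side sketch of steps S501-S507 of the sequence diagram.

def fetch_search_info(captured_image_data, transport, storage):
    transport.send(captured_image_data)   # S501-S502: send capture to collation device 3
    search_info = transport.receive()     # S506: collation image, translation image,
                                          #       and character position information
    if search_info is not None:
        storage.put("search_info", search_info)  # S507: hold temporarily in storage unit 52
    return search_info
```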

As described above, according to the present embodiment, the collation device 3, which is separate from the information display device 5, stores the storage information D1 and collates the captured image data with the collation image data, so the operator can have the captured image data collated against a larger number of image data.

The functions of the information display devices 1, 4, and 5 and the collation device 3 described above may be realized by recording a program for realizing those functions on a computer-readable recording medium and causing a computer to read and execute the program recorded on the recording medium.

In each of the embodiments described above, the illustrated configurations are merely examples, and the present invention is not limited to those configurations.

This application claims priority based on Japanese Patent Application No. 2012-22087 filed on February 3, 2012, the entire disclosure of which is incorporated herein.

Claims (10)

1. An information display device comprising:
a display unit;
an imaging unit that performs imaging and outputs captured image data;
a storage unit that stores collation image data representing a collation image containing character information, translation image data representing a translation image containing translated character information obtained by translating the character information, and character position information indicating a correspondence between a reference point within a collation character range where the character information exists on the collation image and a reference point within a translation character range where the translated character information exists on the translation image;
a collation unit that collates the captured image data with the collation image data and determines whether the character information is included in the captured image represented by the captured image data; and
a control unit that, when the character information is included in the captured image, displays the collation image and the translation image on the display unit with the reference points of the collation character range and the translation character range associated with each other on the basis of the character position information.
2. The information display device according to claim 1, further comprising a control signal input unit,
wherein, when a movement instruction to move a movement target image, which is the displayed collation image or translation image, is input to the control signal input unit, the control unit moves the movement target image in accordance with the movement instruction, and also moves the collation image or translation image other than the movement target image so that the reference points of the collation character range and the translation character range remain associated with each other on the basis of the character position information.
3. The information display device according to claim 1 or 2, further comprising a control signal input unit,
wherein, when an enlargement/reduction instruction to enlarge or reduce an enlargement/reduction target image, which is the collation image or the translation image displayed on the display unit, is input to the control signal input unit, the control unit enlarges or reduces each of the collation image and the translation image about its respective reference point in accordance with the enlargement/reduction instruction.
4. The information display device according to claim 3,
wherein the character position information further indicates a ratio between the sizes of the collation character range and the translation character range, and
the control unit enlarges or reduces the enlargement/reduction target image in accordance with the enlargement/reduction instruction, and also enlarges or reduces the collation image or translation image other than the enlargement/reduction target image so that the collation character range and the translation character range correspond to each other on the basis of the character position information.
5. The information display device according to any one of claims 1 to 4, further comprising a control signal input unit,
wherein the character position information further indicates a ratio between the sizes of the collation character range and the translation character range, and
when a drawing instruction to draw within a target character range, which is the collation character range or the translation character range, is input to the control signal input unit, the control unit draws a figure corresponding to the drawing instruction within the target character range, and also draws the figure in the collation character range or translation character range other than the target character range on the basis of the character position information.
6. The information display device according to any one of claims 1 to 5, further comprising a second imaging unit that performs imaging and outputs second captured image data,
wherein the control unit identifies the number of persons included as subjects in the second captured image represented by the second captured image data and, according to that number, displays one or both of the translation image and the collation image on the display unit.
7. The information display device according to claim 6,
wherein, when there are a plurality of persons, the control unit determines whether persons facing each other are present and displays each of the translation image and the collation image in an orientation corresponding to the result of the determination.
8. An information display system comprising a collation device and an information display device,
wherein the collation device includes:
a storage unit that stores collation image data representing a collation image containing character information, translation image data representing a translation image containing translated character information obtained by translating the character information, and character position information indicating a correspondence between a reference point within a collation character range where the character information exists on the collation image and a reference point within a translation character range where the translated character information exists on the translation image;
a first communication unit that receives captured image data from the information display device;
a collation unit that collates the captured image data with the collation image data and determines whether the character information is included in the captured image represented by the captured image data; and
a first control unit that, when the character information is included in the captured image, acquires the collation image data, the translation image data, and the character position information from the storage unit and transmits them to the information display device as search information, and
wherein the information display device includes:
a display unit;
an imaging unit that performs imaging and outputs the captured image data;
a second communication unit that transmits the captured image data to the collation device and receives the search information from the collation device; and
a second control unit that displays the collation image and the translation image on the display unit with the reference points of the collation character range and the translation character range associated with each other on the basis of the search information received by the second communication unit.
9. An information display method comprising:
storing collation image data representing a collation image containing character information, translation image data representing a translation image containing translated character information obtained by translating the character information, and character position information indicating a correspondence between a reference point within a collation character range where the character information exists on the collation image and a reference point within a translation character range where the translated character information exists on the translation image;
performing imaging and outputting captured image data;
collating the captured image data with the collation image data to determine whether the character information is included in the captured image represented by the captured image data; and
when the character information is included in the captured image, displaying the collation image and the translation image with the reference points of the collation character range and the translation character range associated with each other on the basis of the character position information.
10. A program causing a computer connected to a storage unit that stores collation image data representing a collation image containing character information, translation image data representing a translation image containing translated character information obtained by translating the character information, and character position information indicating a correspondence between a reference point within a collation character range where the character information exists on the collation image and a reference point within a translation character range where the translated character information exists on the translation image, to execute:
a procedure of performing imaging and outputting captured image data;
a procedure of collating the captured image data with the collation image data to determine whether the character information is included in the captured image represented by the captured image data; and
a procedure of, when the character information is included in the captured image, displaying the collation image and the translation image with the reference points of the collation character range and the translation character range associated with each other on the basis of the character position information.
PCT/JP2013/051044 2012-02-03 2013-01-21 Information display device, information display system, information display method and program Ceased WO2013114988A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2012-022087 2012-02-03
JP2012022087 2012-02-03

Publications (1)

Publication Number Publication Date
WO2013114988A1 true WO2013114988A1 (en) 2013-08-08

Family

ID=48905030

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2013/051044 Ceased WO2013114988A1 (en) 2012-02-03 2013-01-21 Information display device, information display system, information display method and program

Country Status (1)

Country Link
WO (1) WO2013114988A1 (en)

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH056396A (en) * 1991-12-06 1993-01-14 Toshiba Corp Machine translation device
JP2000023012A (en) * 1998-07-06 2000-01-21 Olympus Optical Co Ltd Camera having translating function
JP2005134968A (en) * 2003-10-28 2005-05-26 Sony Corp Portable information terminal device, information processing method, recording medium, and program
JP2006048324A (en) * 2004-08-04 2006-02-16 Hitachi Omron Terminal Solutions Corp Document translation system
JP2006146454A (en) * 2004-11-18 2006-06-08 Sony Corp Information conversion apparatus and information conversion method
JP2011134144A (en) * 2009-12-25 2011-07-07 Square Enix Co Ltd Real-time camera dictionary

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2016058057A (en) * 2014-09-09 2016-04-21 株式会社T.J.Promotion Translation system, translation method, computer program, and storage medium readable by computer
JP2016072146A (en) * 2014-09-30 2016-05-09 三菱電機株式会社 Cooker
JP2016128733A (en) * 2015-01-09 2016-07-14 三菱電機株式会社 Cooker
JP2016181198A (en) * 2015-03-25 2016-10-13 株式会社リクルートホールディングス Computer program, information search system, and control method of the same
JP2021089515A (en) * 2019-12-03 2021-06-10 ソースネクスト株式会社 Translation result display control system, translation result display control method and program
JP7356332B2 (en) 2019-12-03 2023-10-04 ポケトーク株式会社 Translation result display control system, translation result display control method and program

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application
Ref document number: 13743413; Country of ref document: EP; Kind code of ref document: A1

NENP Non-entry into the national phase
Ref country code: DE

122 Ep: pct application non-entry in european phase
Ref document number: 13743413; Country of ref document: EP; Kind code of ref document: A1

NENP Non-entry into the national phase
Ref country code: JP