NL2012827B1 - Method of providing an insert image for in-line use in a text message. - Google Patents
Info
- Publication number
- NL2012827B1
- Authority
- NL
- Netherlands
- Prior art keywords
- image
- face
- contour
- user
- input
- Prior art date
- 2014-05-16
Classifications
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L51/00—User-to-user messaging in packet-switching networks, transmitted according to store-and-forward or real-time protocols, e.g. e-mail
- H04L51/04—Real-time or near real-time messaging, e.g. instant messaging [IM]
Landscapes
- Engineering & Computer Science (AREA)
- Computer Networks & Wireless Communication (AREA)
- Signal Processing (AREA)
- Processing Or Creating Images (AREA)
Abstract
A method is provided for providing an insert image for in-line use in a text message. The method comprises receiving a source image comprising face data, retrieving, from the source image, a face region having a face contour, the face region comprising at least part of the face data, and retrieving a pre-determined shape from a shape database, the pre-determined shape having a pre-determined contour. The face contour is adjusted to fit the pre-determined contour and the face data comprised by the face region is adjusted to follow the adjustment of the face contour for forming the insert image; the insert image is stored in a memory. In this way, an adapted or deformed picture can be provided in which the face data may still be recognised. This may result in a caricature picture of a person. The picture may be provided in a semi-automatic way, allowing user selection of a pre-defined shape.
Description
METHOD OF PROVIDING AN INSERT IMAGE FOR IN-LINE USE IN A TEXT MESSAGE
TECHNICAL AREA
The various aspects relate to processing of image data and to user interfaces, and in particular to processing image data for incorporation in user interfaces, for selection and use in messages.
BACKGROUND
Instant text messaging is well known and widely used. Examples on desktop and laptop computers are ICQ and MSN Messenger. For mobile devices, applications like Whatsapp, Telegram, Hemlis or plain SMS (short message service) are available. Furthermore, cross-platform applications like Google Hangouts are available. Such applications allow users to exchange data in text messages. In the text messages, image data may be incorporated. The message applications mentioned above provide images that may be inserted in-line with text.
SUMMARY
It is preferred to provide an application for providing custom-made images for insertion in text messages and an application in which such images may be used. It would especially be appreciated if such images comprise a face of a person. A first aspect provides a method of providing an insert image for in-line use in a text message. The method comprises receiving a source image comprising face data, retrieving, from the source image, a face region having a face contour, the face region comprising at least part of the face data, and retrieving a pre-determined shape from a shape database, the pre-determined shape having a pre-determined contour. The face contour is adjusted to fit the pre-determined contour and the face data comprised by the face region is adjusted to follow the adjustment of the face contour for forming the insert image; the insert image is stored in a memory.
In this way, an adapted or deformed picture can be provided in which the face data may still be recognised. This may result in a so-called caricature picture of a person. The picture may be provided in a semi-automatic way, allowing user selection of a pre-defined shape.
In an embodiment of the first aspect, retrieving the face region comprises: displaying at least part of the source image and displaying, over the source image, a selection contour defining a selection area. User alignment input is received for aligning the part of the source image with the selection contour; and user selection input is received for selecting data comprised by the selection contour as the face region.
This embodiment allows for easy selection of face data to be used for the insert image.
Another embodiment of the first aspect comprises determining a transposition function defined by the adjustment from the face contour to the predetermined contour and applying the transposition function to the face data comprised by the face region.
This embodiment allows a smoothly formed image to be provided from the original image data, allowing recognition of the person whose face image data is used. A second aspect provides a method of providing a user interface for sending text messages comprising custom-made in-line insert images. The method comprises providing a first input interface enabling a user to select characters for forming a text message and retrieving at least one insert image from a memory. A second input interface is provided comprising at least one thumbnail image representing the custom-made insert image, enabling the user to select the insert image for inserting in the text message.
This aspect allows insertion of custom-made images.
An embodiment of the second aspect comprises providing the first input interface by displaying the first input interface on a touch-sensitive screen and providing an image selection icon for selecting the second input interface. A user input command is received for selecting the image selection icon and the second input interface is provided upon receiving the user input command selecting the image selection icon.
Another embodiment of the second aspect comprises the method according to the first aspect for receiving at least one insert image in the memory.
This embodiment allows for insertion of a specific custom-made image, providing also all advantages of the first aspect. A third aspect provides a device for providing an insert image for in-line use in a text message. The device comprises an input module arranged to receive a source image comprising face data and a processing unit. The processing unit is arranged to retrieve, from the source image, a face region having a face contour, the face region comprising at least part of the face data; retrieve a pre-determined shape from a shape database, the pre-determined shape having a pre-determined contour; and adjust the face contour to fit the pre-determined contour and adjust the face data comprised by the face region to follow the adjustment of the face contour for forming the insert image. The device further comprises a memory module arranged to store the insert image. A fourth aspect provides a device for providing an insert image for in-line use in a text message. The device comprises an input module arranged to receive a source image comprising face data and a processing unit. The processing unit is arranged to retrieve, from the source image, a face region having a face contour, the face region comprising at least part of the face data; retrieve a pre-determined shape from a shape database, the pre-determined shape having a pre-determined contour; and adjust the face contour to fit the pre-determined contour and adjust the face data comprised by the face region to follow the adjustment of the face contour for forming the insert image. The device further comprises a memory module arranged to store the insert image.
An embodiment of the fourth aspect further comprises an input module arranged to receive a source image comprising face data. The processing unit is further arranged to retrieve, from the source image, a face region having a face contour, the face region comprising at least part of the face data; retrieve a pre-determined shape from a shape database, the pre-determined shape having a pre-determined contour; and adjust the face contour to fit the pre-determined contour and adjust the face data comprised by the face region to follow the adjustment of the face contour for forming the insert image. A fifth aspect provides a computer programme product comprising computer executable code for programming a processing module, enabling the processing module to execute the method according to the first aspect. A sixth aspect provides a computer programme product comprising computer executable code for programming a processing module, enabling the processing module to execute the method according to the second aspect.
BRIEF DESCRIPTION OF THE DRAWINGS
The various aspects and embodiments thereof will now be discussed in conjunction with the Figures. In the Figures,
Figure 1: shows a schematic view of a mobile telephone;
Figure 2: shows a first flowchart;
Figure 3 A: shows a view of an image with a selection contour;
Figure 3 B: shows selected face image data;
Figure 3 C: shows a selection of pre-defined shapes;
Figure 3 D: shows face image data transposed to pre-defined shapes;
Figure 3 E: shows a compound insert image;
Figure 4: shows a second flowchart;
Figure 5 A: shows a text user interface; and
Figure 5 B: shows an image user interface.
DETAILED DESCRIPTION
Figure 1 shows a schematic view of a mobile telephone 100 as an example of a device for sending text messages with embedded images. The mobile telephone 100 comprises a processing unit 110 for controlling the mobile telephone 100 and various components thereof. The processing unit 110 comprises dedicated sub-units for controlling specific processes and for executing various steps thereof. These steps will be discussed below in further detail. The sub-units may be provided as dedicated programmed parts of the processing unit 110, either permanently or temporarily available, or as hard-wired dedicated circuits, either provided by design or by blowing fuses, or otherwise, or a combination thereof. The mobile telephone 100 also comprises a camera 126 as an input module for providing image data, a memory 122 and a touch screen 124. The touch screen 124 is arranged to display data and to receive user input commands. The mobile telephone 100 further comprises a transceiver module 128 that is connected to an antenna 130 for receiving and sending data to and from a network.
Figure 2 shows a flowchart 200 depicting a process for providing a character image for insertion in-line in a text message. The various steps of the flowchart 200 will be discussed in conjunction with Figure 3 A, Figure 3 B, Figure 3 C, Figure 3 D and Figure 3 E. The table below provides short summaries of the various parts of the process depicted by the flowchart 200.

| Step | Description |
|---|---|
| 202 | start process |
| 204 | receive image |
| 206 | display image |
| 208 | display selection contour |
| 210 | receive alignment input |
| 212 | receive selection command |
| 214 | define face region |
| 216 | retrieve pre-defined shapes |
| 218 | display pre-defined shapes |
| 220 | receive selection of pre-defined shape |
| 222 | adjust face data shape |
| 224 | retrieve accessory data |
| 226 | display accessory image data |
| 228 | receive selection of accessory |
| 230 | add accessory image data to the adjusted face data |
| 232 | store final image in memory |
| 234 | end of process |
The process starts in a starting terminator 202 and continues to step 204 in which image data is received. The image data may be received from the camera module 126, retrieved from the memory 122, received from another device via the transceiver module 128, otherwise, or a combination thereof. The image data comprises at least one face, preferably photographed from the front - a so-called mug shot.
In step 206, the image data is displayed as a picture. Figure 3 A shows a picture 302, showing a person 310 with a face 312. The picture 302 is displayed on the screen 124. In step 208, a selection contour 320 is shown on the screen 124. In step 210, alignment input is received from a user. The alignment input may be received by means of the touch-sensitive display 124. Based on the alignment input, the alignment contour 320 may be positioned over the face 312. The shape of the alignment contour 320 may also be modified. Once the alignment contour 320 is correctly positioned over the face 312 in the picture 302, the touch screen 124 may receive a confirmation command in step 212 for confirming selection of face data as defined within the alignment contour 320. Figure 3 B shows the face image data that is defined, in step 214, as the image portion captured within the alignment contour 320.
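By way of illustration only, the sketch below shows how selection of face data within an elliptical contour may be realised using the Pillow imaging library. The file name and the fixed contour coordinates are assumptions made for this example; in the described method the contour position and shape follow the user's alignment input.

```python
from PIL import Image, ImageDraw

# Receive the source image (step 204); the file name is assumed.
source = Image.open("picture_302.jpg").convert("RGBA")

# Selection contour 320: an ellipse placed over the face. Fixed coordinates
# are assumed here; in practice they follow the user's alignment input.
left, top, right, bottom = 120, 80, 320, 340
mask = Image.new("L", source.size, 0)
ImageDraw.Draw(mask).ellipse((left, top, right, bottom), fill=255)

# Keep only the pixels inside the contour and crop to its bounding box,
# yielding the face region defined in step 214.
face_layer = Image.new("RGBA", source.size, (0, 0, 0, 0))
face_layer.paste(source, (0, 0), mask)
face_region = face_layer.crop((left, top, right, bottom))
face_region.save("face_region.png")
```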
The face image data may be selected from a picture already taken by the camera 126. Alternatively, the camera 126 may continuously capture image data that is shown on the touchscreen 124. Additionally, the touchscreen 124 shows the alignment contour 320. The alignment contour 320 may be at a fixed position on the touchscreen 124 and the user is to position his or her head relative to the camera 126 in such a way that a desired portion of the face 312 is within the alignment contour 320. This means that either the camera 126 may be moved, the user may move, or both, in order to align the alignment contour 320 with the face 312 as captured by the camera 126 and displayed on the touchscreen 124. Once the desired portion of the face 312 is within the alignment contour, a selection command is provided by means of the touchscreen 124, a hardware input device like a button, or another input device.
Once the face image data has been determined, the face region image data may be stored in the memory 122. Subsequently, pre-defined shapes to which the face image data may be transposed are retrieved from the memory 122 in step 216. At least part of the retrieved pre-defined shapes are displayed on the touchscreen 124 for selection by the user in step 218. An example of this is shown by Figure 3 C. Figure 3 C shows a rectangle 442 as a first pre-defined shape, a triangle 444 as a second pre-defined shape, an ellipse 446 as a third pre-defined shape, a circle 448 as a fourth pre-defined shape, an upside-down triangle 450 as a fifth pre-defined shape and a square 452 as a sixth pre-defined shape.
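Purely as an illustration, the shape database may store each pre-determined contour as a boundary-radius function around a centre point. The representation and the shape set below are assumptions made for this example and do not form part of the described method.

```python
import math

# Each pre-defined shape is stored as a boundary-radius function r(theta)
# around the centre point, expressed relative to the radius of the face circle.
SHAPE_DATABASE = {
    "circle": lambda theta: 1.0,
    # Square with half-side 1: the boundary lies where |cos| or |sin| reaches the side.
    "square": lambda theta: 1.0 / max(abs(math.cos(theta)), abs(math.sin(theta))),
    # Axis-aligned ellipse with semi-axes 1.0 and 0.7.
    "ellipse": lambda theta: 1.0 / math.hypot(math.cos(theta), math.sin(theta) / 0.7),
}

def retrieve_predefined_shape(name: str):
    """Return the pre-determined contour of the named shape as a radius function."""
    return SHAPE_DATABASE[name]
```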
In step 220, a selection of a pre-defined shape is received. The selection may be established by the user tapping on the preferred pre-defined shape displayed on the touchscreen 124. After the preferred pre-defined shape has been received in step 220, the face image data is adjusted to the selected pre-defined shape in step 222 for forming an insert image. The adjustment of the face image data may be done in several ways. For example, the contour of the face image data - as defined by the selection contour 320 - is modified to fit the perimeter of the selected pre-defined shape. The rest of the face image data is distributed over the pre-defined shape. This may be done by smearing out the face image data over the pre-defined shape in a similar way as the perimeter of the face image data has been adjusted to the perimeter of the selected pre-defined shape.
The adjustment may be done in a circular way. For this, a centre point is defined in both the face image data and the pre-defined shape. If a specific point on the perimeter of the face image data has to be moved away from the centre by 50%, other face data on the line between the centre point and that specific point may also be moved away from the centre by 50%. This is a form of linear interpolation. Other ways of interpolation may also be used, like quadratic interpolation. Additional image data points - pixels - may be required for forming a proper and smooth image. Figure 3 D provides a schematic indication of how adjustment may take place for various pre-defined shapes. The adjustment processing may be provided by an image adjustment sub-unit 112 (Figure 1).
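A minimal sketch of this circular adjustment with linear interpolation along each ray is given below, assuming the selected face region is approximately circular and the pre-determined contour is given as a boundary-radius function such as those in the shape database sketch above. The use of NumPy, Pillow and nearest-neighbour sampling is an assumption for brevity and does not form part of the described method.

```python
import numpy as np
from PIL import Image

def adjust_face_data(face_region: Image.Image, target_radius) -> Image.Image:
    """Redistribute a circular face crop over a pre-defined shape by scaling
    linearly along each ray from the centre (nearest-neighbour sampling)."""
    src = np.asarray(face_region.convert("RGBA"))
    h, w = src.shape[:2]
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    circle_r = min(cx, cy)                      # radius of the circular face crop

    ys, xs = np.mgrid[0:h, 0:w]
    dx, dy = xs - cx, ys - cy
    d = np.hypot(dx, dy)
    theta = np.arctan2(dy, dx)

    # Boundary of the target shape along each ray, in pixels.
    r_target = circle_r * np.vectorize(target_radius)(theta)
    inside = d <= r_target

    # A pixel at a fraction t of the way to the shape boundary samples the
    # source at the same fraction t of the way to the circle's edge.
    scale = circle_r / r_target
    sx = np.clip(np.round(cx + dx * scale), 0, w - 1).astype(int)
    sy = np.clip(np.round(cy + dy * scale), 0, h - 1).astype(int)

    out = np.zeros_like(src)
    out[inside] = src[sy[inside], sx[inside]]
    return Image.fromarray(out)

# Example use: transpose the face region onto the square 452.
# square_face = adjust_face_data(Image.open("face_region.png"),
#                                SHAPE_DATABASE["square"])
```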
After the face image data has been adjusted, accessory image data may be retrieved in step 224 and subsequently shown on the touchscreen 124 in step 226, enabling the user to enhance the insert image with one or more accessories. A selection of one or more accessories to be added to the insert image for forming a compound insert image is received in step 228. The accessories are added to the insert image in step 230. Figure 3 E shows a compound insert image 390 comprising an insert image 380 and accessory image data for spectacles 384, a ball 386 and a body 382. The compound insert image may be shown on the touchscreen 124 and is stored in the memory 122 for later retrieval in step 232. Subsequently, the procedure ends in terminator 234.
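Purely as an illustration of step 230, the compound insert image may be built up by pasting the accessory images over the adjusted face data using their transparency as a mask. The file names, offsets and canvas size below are assumptions made for this example only.

```python
from PIL import Image

insert_image = Image.open("insert_image.png").convert("RGBA")   # adjusted face data
body = Image.open("body.png").convert("RGBA")
spectacles = Image.open("spectacles.png").convert("RGBA")
ball = Image.open("ball.png").convert("RGBA")

# Paste each layer at its own offset, using its alpha channel as the paste
# mask so that transparent areas do not overwrite the layers below.
compound = Image.new("RGBA", (400, 600), (0, 0, 0, 0))
compound.paste(body, (100, 250), body)
compound.paste(insert_image, (120, 40), insert_image)
compound.paste(spectacles, (150, 110), spectacles)
compound.paste(ball, (20, 380), ball)

# Store the compound insert image for later retrieval (step 232).
compound.save("compound_insert_image.png")
```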
Figure 4 shows a flowchart 400 depicting a message composition procedure for providing a message with optional text and an insert image - either a plain insert image or one provided with accessories. The table below provides a short description of the steps of the procedure. The flowchart 400 will be discussed in further detail in conjunction with Figure 1, Figure 5 A and Figure 5 B.

| Step | Description |
|---|---|
| 402 | start |
| 404 | provide text input user interface |
| 406 | provide image selection icon |
| 408 | receive text input for message |
| 410 | receive image input selection command |
| 412 | retrieve images from memory |
| 414 | form thumbnails |
| 416 | form image selection user interface |
| 418 | provide image selection user interface |
| 420 | receive image selection |
| 422 | insert selected image in message |
| 424 | receive send command |
| 426 | send message |
| 428 | end |
The procedure starts in terminator 402 by receiving an indication that a user intends to send a text message. This indication may be provided by the user tapping an icon on the touchscreen 124. Subsequently, a text input user interface 502 is provided on the touchscreen 124 in step 404, as shown by Figure 5 A. Furthermore, an input bar 504 is provided to show a preview of the message being composed. Additionally, an image selection icon 506 is provided on the touchscreen 124 in step 406.
Having provided the text input user interface 502, the user is provided with means for entering a text message. Input for the text message may be received in step 408. At the moment the user wants to insert an image in-line with the text, the user selects the image selection icon 506. The command initiated by selecting the image selection icon 506 is received in step 410. Upon receiving this command, data for one or more images available for insertion is retrieved from the memory 122 in step 412. From the images, thumbnails may be formed in step 414. Thumbnails are miniature versions of the actual images that have been retrieved. This may be advantageous in case the actual images are too large to display a desired number of them in one view. If the images stored and retrieved are of a convenient size, forming thumbnails may not be required.
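By way of illustration, thumbnails for the image selection user interface may be formed as sketched below. The directory layout, file type and thumbnail size are assumptions made for this example only.

```python
from pathlib import Path
from PIL import Image

THUMB_SIZE = (96, 96)   # assumed thumbnail dimensions

def build_thumbnails(image_dir: str, thumb_dir: str) -> list:
    """Create miniature versions of the stored insert images for display in
    the image selection user interface."""
    Path(thumb_dir).mkdir(parents=True, exist_ok=True)
    thumbnails = []
    for path in sorted(Path(image_dir).glob("*.png")):
        with Image.open(path) as img:
            img.thumbnail(THUMB_SIZE)            # shrink in place, keeping aspect ratio
            out_path = Path(thumb_dir) / path.name
            img.save(out_path)
            thumbnails.append(out_path)
    return thumbnails
```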
Having generated thumbnails or having retrieved the images in a proper size, an image selection user interface is formed in step 416 by means of a user interface processing sub-unit 114 (Figure 1). The image selection user interface 512 is shown in Figure 5 B. Alternatively, the image selection user interface 512 has already been formed at an earlier stage and is provided directly in response to selection of the image selection icon 506. With the image selection user interface 512, a user may select an image for insertion in-line with text already provided in the input bar 504. The selection may be done as discussed above and the selection is received in step 420. The selected image is inserted in the message in step 422. In Figure 5 B, an inserted image 514 is shown in the input bar 504, in-line with text. Alternatively, a message may comprise only the insert image 514. In the image selection user interface, a text input selection icon 516 may also be provided to enable the user to switch back to the text input user interface 502.
When the user is finished composing the message, the user instructs the mobile telephone 100 to send the message. The instruction is received in step 424, upon which the message is sent via the transceiver module 128 in step 426. Subsequently, the process ends in terminator 428.
Claims (16)
Priority Applications (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| NL2012827A NL2012827B1 (en) | 2014-05-16 | 2014-05-16 | Method of providing an insert image for in-line use in a text message. |
Applications Claiming Priority (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| NL2012827A NL2012827B1 (en) | 2014-05-16 | 2014-05-16 | Method of providing an insert image for in-line use in a text message. |
Publications (1)
| Publication Number | Publication Date |
|---|---|
| NL2012827B1 (en) | 2016-03-02 |
Family
ID=50981820
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| NL2012827A NL2012827B1 (en) | 2014-05-16 | 2014-05-16 | Method of providing an insert image for in-line use in a text message. |
Country Status (1)
| Country | Link |
|---|---|
| NL (1) | NL2012827B1 (en) |
Citations (6)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| WO2001059709A1 (en) * | 2000-02-11 | 2001-08-16 | Make May Toon, Corp. | Internet-based method and apparatus for generating caricatures |
| US20070223827A1 (en) * | 2004-04-15 | 2007-09-27 | Takashi Nishimori | Face Image Creation Device and Method |
| US20090110246A1 (en) * | 2007-10-30 | 2009-04-30 | Stefan Olsson | System and method for facial expression control of a user interface |
| US20100177116A1 (en) * | 2009-01-09 | 2010-07-15 | Sony Ericsson Mobile Communications Ab | Method and arrangement for handling non-textual information |
| US20120059787A1 (en) * | 2010-09-07 | 2012-03-08 | Research In Motion Limited | Dynamically Manipulating An Emoticon or Avatar |
| US20120069028A1 (en) * | 2010-09-20 | 2012-03-22 | Yahoo! Inc. | Real-time animations of emoticons using facial recognition during a video chat |
Similar Documents
| Publication | Title |
|---|---|
| US9448686B2 (en) | Mobile terminal and method for controlling chat content based on different touch actions for a specific key | |
| CN107924113B (en) | User interface for camera effects | |
| TWI602071B (en) | Method of messaging, non-transitory computer readable storage medium and electronic device | |
| CN111901475A (en) | User interface for capturing and managing visual media | |
| US20180246632A1 (en) | Method and device for generating mobile terminal theme, and electronic device | |
| US9274749B2 (en) | Mobile terminal and controlling method thereof | |
| US20140049611A1 (en) | Mobile terminal and controlling method thereof | |
| US20140250406A1 (en) | Method and apparatus for manipulating data on electronic device display | |
| US20130141605A1 (en) | Mobile terminal and control method for the same | |
| KR101528312B1 (en) | Method for editing video and apparatus therefor | |
| JP7253535B2 (en) | Method, device, device terminal and storage medium for processing images in application | |
| US20160173789A1 (en) | Image generation method and apparatus, and mobile terminal | |
| CN109286836B (en) | Multimedia data processing method and device, intelligent terminal and storage medium | |
| KR20160132808A (en) | Method and apparatus for identifying audio information | |
| US10897435B2 (en) | Instant messaging method and system, and electronic apparatus | |
| JP2017529031A (en) | COMMUNICATION METHOD, DEVICE, PROGRAM, AND RECORDING MEDIUM BASED ON IMAGE | |
| WO2016107055A1 (en) | Processing method and device for image splicing | |
| RU2677613C1 (en) | Image processing method and device | |
| WO2015100594A1 (en) | Display method and terminal | |
| CN104144297A (en) | System and method for automatically adding watermarks to shot pictures | |
| US12363227B2 (en) | Video call method and apparatus | |
| EP4472223A1 (en) | Photographing method and apparatus, and electronic device | |
| JP6198983B1 (en) | System, method, and program for distributing video | |
| US20150146071A1 (en) | Mobile terminal and method for controlling the same | |
| US10575030B2 (en) | System, method, and program for distributing video |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| | MM | Lapsed because of non-payment of the annual fee | Effective date: 20180601 |