WO2025234261A1 - Information processing device, information processing method, and information processing program
- Publication number
- WO2025234261A1 (PCT/JP2025/014499)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- information processing
- poster
- information
- model
- image
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06Q—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
- G06Q30/00—Commerce
- G06Q30/06—Buying, selling or leasing transactions
- G06Q30/0601—Electronic shopping [e-shopping]
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06Q—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
- G06Q50/00—Information and communication technology [ICT] specially adapted for implementation of business processes of specific business sectors, e.g. utilities or tourism
- G06Q50/10—Services
Definitions
- the present invention relates to an information processing device, an information processing method, and an information processing program.
- Technology for displaying an image to which a poster's fashion measures, such as makeup, have been applied is known.
- a technology is known in which a user can specify an image taken by the poster, and an image of the user to which the poster's fashion measures have been applied is displayed.
- While conventional technology can promote improved usability for users, there is room for further improvement in promoting usability for contributors.
- conventional technology has room for further improvement to make it easier for contributors to satisfy their desire for financial or psychological recognition, such as wanting to spread their fashion initiatives to others.
- conventional technology has room for further improvement to enable contributors to determine, before posting, whether a fashion initiative is likely to lead to conversions.
- This application was made in light of the above, and aims to enable posters to determine before posting whether a fashion initiative is likely to lead to conversions.
- the information processing device is characterized by having an acquisition unit that acquires images taken by a poster to which a fashion measure has been applied, a first display unit that extracts and displays at least one thumbnail image of a model that is a candidate for applying the fashion measure based on predetermined conditions, and a second display unit that, when the poster specifies the thumbnail image, displays an image in which the fashion measure has been applied to the model.
- One aspect of this embodiment has the effect of enabling posters to determine, before posting, whether a fashion initiative is likely to lead to conversions.
- FIG. 1 is a diagram illustrating an example of the configuration of an information processing system according to an embodiment.
- FIG. 2A is a diagram showing an example of the overall flow of the user interface on the contributor side.
- FIG. 2B is a diagram showing an example of the overall flow of the user interface on the user side.
- FIG. 3 is a diagram illustrating an example of information processing according to the embodiment.
- FIG. 4 is a diagram illustrating an example of a preview screen according to the embodiment.
- FIG. 5 is a diagram illustrating an example of the configuration of a terminal device according to the embodiment.
- FIG. 6 is a diagram illustrating an example of the configuration of an information processing apparatus according to the embodiment.
- FIG. 7 is a diagram illustrating an example of a model information storage unit according to the embodiment.
- FIG. 8 is a diagram illustrating an example of an evaluation information storage unit according to the embodiment.
- FIG. 9 is a flowchart (1) illustrating an example of information processing according to the embodiment.
- FIG. 10 is a flowchart (2) illustrating an example of information processing according to the embodiment.
- FIG. 11 is a hardware configuration diagram illustrating an example of a computer that realizes the functions of the information processing device.
- the information processing system 1 includes a terminal device 10 and an information processing device 100.
- the terminal device 10 and the information processing device 100 are connected to each other via a predetermined communication network (network N) so as to be able to communicate with each other via wired or wireless communication.
- Fig. 1 is a diagram showing an example of the configuration of the information processing system 1 according to an embodiment.
- the terminal device 10 is an information processing device used by a poster who wishes to spread their own fashion initiatives (makeup, etc.) to others via SNS or the like.
- a poster who uses the terminal device 10 may be, for example, an influencer who disseminates their own fashion initiatives over the Internet, such as an influencer who distributes videos related to the fashion initiatives they undertake.
- the terminal device 10 may be any device that can implement the processing in the embodiment.
- the terminal device 10 may also be a device such as a smartphone, tablet device, notebook PC, desktop PC, mobile phone, or PDA.
- Figure 3 shows a case where the terminal device 10 is a smartphone.
- the terminal device 10 is, for example, a smart device such as a smartphone or tablet, and is a portable terminal device that can communicate with any server device via a wireless communication network such as 3G to 5G (Generation) or LTE (Long Term Evolution).
- the terminal device 10 also has a screen such as an LCD display with touch panel functionality, and may accept various operations on displayed data such as content, such as tapping, sliding, and scrolling, performed by the contributor using a finger or stylus. In Figure 3, the terminal device 10 is used by contributor U1.
- the information processing device 100 is an information processing device that aims to make it easier for posters to satisfy their desire for financial or psychological recognition, such as wanting to spread their fashion initiatives to others, and to enable posters to determine before posting whether a fashion initiative is likely to lead to conversions, thereby promoting improved usability for posters.
- the information processing device 100 may be any device that can implement the processing in the embodiments.
- the information processing device 100 is implemented by a server device, cloud system, or the like that provides a predetermined service (which may be a web service or an app service) that enables fashion initiatives to be posted (registered) and tried on.
- service W1 will be used as an example of such a service. That is, service W1 is a predetermined service that enables fashion initiatives to be posted and tried on.
- This application was made in light of the above, and aims to make it easier for posters to satisfy their desire for financial or psychological recognition, such as wanting to spread their own fashion initiatives to others, and to enable posters to determine before posting whether their fashion initiatives are likely to lead to conversions.
- the application of fashion measures is not limited to actual actions, but may also include virtual actions. In other words, it may be an action that the poster actually performs in the real world, or an action that they perform virtually via an avatar, etc.
- Fashion measures may include, for example, hair styling such as hairstyle and hair color, and are not limited to makeup; they may also include trying on clothes (in the real world or virtually) and coordinating outfits (clothing is not limited to dresses and may include, for example, shoes, socks, accessories, watches, jewelry, etc.).
- the processing on the poster's side who posts the fashion measure is described, rather than the processing on the user's side who applies the fashion measure.
- the poster who posts the fashion measure for example, photographs (scans) the face after the fashion measure and registers the items used.
- the user who applies the fashion measure for example, specifies the poster's post information, tries on the fashion measure (full makeup, etc.), and if they like it, purchases the items used (the items used in that full makeup).
- Figure 2A is a diagram showing an example of the overall flow of the user interface on the poster's side.
- When poster U1 clicks "Register" at the bottom of the first screen from the left, they are redirected to the second screen from the left.
- Figure 2B is a diagram showing an example of the overall flow of the user interface on the user side.
- the screen transitions to the second screen from the left.
- the screen transitions to the third screen from the left.
- user P1 can try on the registered makeup on their own face and see the items used.
- the screen transitions to the fourth screen from the left.
- FIG. 3 is a diagram showing an example of information processing in the information processing system 1 according to an embodiment.
- FIG. 3 illustrates an example of information processing on the side of a poster who posts a fashion campaign.
- Poster U1 uploads his/her makeup information to a predetermined service (service W1) provided by the information processing device 100 (step S11).
- the makeup information may be, for example, information captured of the poster's makeup (such as a still image), or information explaining how to apply makeup, such as the order and volume of the makeup (such as a video).
- the makeup information may also be, for example, mask information, which is the makeup applied to the poster's face, or cosmetic information linked to the posted information.
- the mask information may be information about the poster's face after makeup has been applied, or information about the makeup itself obtained from the difference between the poster's face after makeup and the poster's bare face.
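- As an illustration of the mask-information variant above, the following is a minimal sketch that treats the makeup as the per-pixel difference between aligned before/after photographs of the same face; the function names, threshold, and difference-based approach are illustrative assumptions rather than the method prescribed by this disclosure.

```python
import numpy as np

def extract_makeup_mask(bare_face: np.ndarray, made_up_face: np.ndarray) -> np.ndarray:
    """Illustrative mask extraction: the per-pixel difference between the
    made-up face and the bare face of the same (already aligned) poster.

    Both inputs are H x W x 3 uint8 arrays; the result keeps only the
    color change contributed by the makeup."""
    diff = made_up_face.astype(np.int16) - bare_face.astype(np.int16)
    # Suppress small differences caused by lighting or noise (threshold is an assumption).
    return np.where(np.abs(diff) > 10, diff, 0)

def apply_makeup_mask(target_face: np.ndarray, mask: np.ndarray) -> np.ndarray:
    """Apply the extracted makeup difference to another aligned face image."""
    out = target_face.astype(np.int16) + mask
    return np.clip(out, 0, 255).astype(np.uint8)
```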
- Poster U1 uploads a facial image of his/her face with makeup applied to service W1.
- poster U1 may, for example, register the items used for makeup in association with his/her makeup information.
- the information processing device 100 may cause the poster U1 to register the items used in the makeup so that the items are linked to the makeup information applied by the poster U1.
- the information processing device 100 may identify the makeup information by extracting mask information from the posted information.
- the information processing device 100 may, for example, perform processing to apply makeup information linked to the posted information to the user's facial image, or may perform processing to enable the purchase of items used in the makeup.
- the captured image G1 may be a still image or a moving image.
- the information processing device 100 may enable the application of makeup information used in capturing captured image G1, or may provide a link to a website where the items used can be purchased.
- the preview screen (pre-post preview screen) is described next.
- poster U1 uploads captured image G1 of their own makeup to service W1.
- once posted, the image is made public and available for viewing.
- the preview screen is a screen where, for example, after a poster has finished photographing their own makeup and before posting it, they can check how the makeup will look by applying it to multiple face types (fresh, cute, feminine, cool, and androgynous men, etc.).
- FIG. 4 is a diagram showing an example of a preview screen according to an embodiment.
- Screen C1 is a preview screen before the photographed image G1 is posted. In other words, it is a preview screen displayed on the poster's side before posting.
- Screen C1 displays, at the top (main), a photographed image (photographed image G1) of poster U1 taken after applying makeup.
- Screen C1 also includes a button B1 that allows the poster U1 to retake the photograph if they want to change the photographed image, and a button B2 that allows the photographed image to be posted.
- When button B1 is operated (clicked, tapped, etc.), the screen returns from the preview screen to the photographing screen. If poster U1 takes another photograph there, the preview screen is displayed again with the newly taken photograph shown at the top.
- When button B2 is operated, photographed image G1 is posted. The posted image G1 is then made public and can be viewed by followers of poster U1, etc.
- the posted captured image G1 may become viewable, or information posted along with the captured image G1 (such as the items used) may become viewable, or information registered in association with the poster U1 (such as a separately registered thumbnail image) may become viewable.
- Screen C1 also includes thumbnail images showing at least one face type.
- In FIG. 4, from left to right, screen C1 includes thumbnail image SG1 with a "Fresh" face type, thumbnail image SG2 with a "Cute" face type, thumbnail image SG3 with a "Feminine" face type, thumbnail image SG4 with a "Cool" face type, and thumbnail image SG5 with a "male with androgynous features" face type.
- these thumbnail images SG1 to SG5 are displayed superimposed on the captured image G1. This presents four female face images corresponding to the four types and one male face image with androgynous features, allowing the poster to try the captured makeup on each image and see how it looks.
- thumbnail images SG1 to SG5 are selectable. For example, when thumbnail image SG1 is operated, thumbnail image SG1 is selected. At this time, the makeup information of poster U1 (the makeup information used in captured image G1) is applied to the facial image of thumbnail image SG1, and the resulting facial image is displayed at the top. As a result, poster U1 can check on the preview screen how the makeup actually looks (whether it suits the face, etc.) on the "Fresh" face type. Similarly, for example, when thumbnail image SG2 is operated, thumbnail image SG2 is selected and the makeup information of poster U1 is applied to the facial image of thumbnail image SG2.
- In this way, poster U1 can check on the preview screen how the makeup actually looks on the "Cute" face type.
- In this way, posters can check before posting, for example, which face types the makeup suits and which face types their own makeup actually suits.
- Posters can also check before posting, for example, whether their post will appeal to users with the face type they are targeting and whether it will appeal to their followers.
- the facial images of the models in thumbnail images SG1 to SG5 are selection candidates to which poster U1 can apply makeup information, and the facial image of the model to which poster U1 has selected a thumbnail image and applied makeup information is displayed at the top of screen C1 (step S12).
- the thumbnail images and face types displayed on screen C1 are not limited to this example and need not be limited to the five thumbnail images SG1 to SG5 shown in FIG. 4. For example, further thumbnail images may be hidden to the right of thumbnail image SG5 and become visible by scrolling.
- the display format of these thumbnail images does not have to be limited to the example shown in FIG. 4, and they may be displayed in any display format.
- examples of face types include "Fresh," "Cute," "Feminine," "Cool," and "Androgynous Male."
- the face type of the thumbnail images displayed on screen C1 may be different each time depending on the face type and makeup information of poster U1.
- the face type of the thumbnail images displayed on screen C1 may be different each time the user slightly changes their makeup and retakes the photo.
- thumbnail images of five face types are displayed: “Fresh,” “Cute,” “Feminine,” “Cool,” and “Androgynous Male.” We will now explain how these five face types are extracted.
- the information processing device 100 acquires the captured image to be displayed on screen C1 (pre-post preview screen) (step S101). In the examples of FIGS. 3 and 4, captured image G1 is acquired. The information processing device 100 then extracts at least one thumbnail image of a model who is a candidate for applying makeup information based on predetermined conditions (step S102). In the examples of FIGS. 3 and 4, thumbnail images SG1 to SG5 are extracted. The information processing device 100 then displays the extracted thumbnail image in a predetermined area of screen C1 (so as to be superimposed on the captured image) (step S103). In the examples of FIGS. 3 and 4, the information processing device 100 displays the extracted thumbnail image in area R1.
- When a thumbnail image is selected, the information processing device 100 generates a facial image of the model with makeup information applied from the facial image of the model in the thumbnail image (step S104). The information processing device 100 then displays the generated facial image of the model at the top of screen C1 (step S105).
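- The order of steps S101 to S105 can be pictured with the following minimal sketch; every function, type, and field name here is a hypothetical placeholder used only to illustrate the flow described above.

```python
from dataclasses import dataclass
from typing import Callable, List, Optional

@dataclass
class Thumbnail:
    model_id: str
    face_type: str      # e.g. "Fresh", "Cute", "Feminine", "Cool", "Androgynous Male"
    face_image: bytes   # placeholder for image data

def preview_flow(
    acquire_captured_image: Callable[[], bytes],             # step S101
    extract_candidates: Callable[[bytes], List[Thumbnail]],  # step S102
    show_thumbnails: Callable[[List[Thumbnail]], None],      # step S103 (area R1)
    wait_for_selection: Callable[[], Optional[Thumbnail]],
    transfer_makeup: Callable[[bytes, Thumbnail], bytes],    # step S104
    show_main_image: Callable[[bytes], None],                # step S105
) -> None:
    captured = acquire_captured_image()         # S101: captured image G1
    candidates = extract_candidates(captured)   # S102: thumbnails such as SG1 to SG5
    show_thumbnails(candidates)                 # S103: superimposed on screen C1
    selected = wait_for_selection()
    if selected is not None:
        made_up = transfer_makeup(captured, selected)  # S104: apply the makeup information
        show_main_image(made_up)                       # S105: display at the top of screen C1
```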
- In step S104, an example of a method for generating a facial image of a model with makeup information applied is PSGAN (Pose and Expression Robust Spatial-Aware Generative Adversarial Network), which is disclosed in the above-mentioned non-patent document 1 and transfers only makeup information from one facial image to another face.
- the information processing device 100 uses deep learning and can transfer only makeup information to another face regardless of facial expression or pose.
- the information processing device 100 applies makeup similar to the makeup used by poster U1 in the posted information of poster U1 to the facial image of the model.
- the information processing device 100 generates makeup information for applying makeup similar to the makeup used by poster U1, and applies the generated makeup information to the facial image of the model.
- In step S105, the information processing device 100 displays the facial image of the model, generated in this way with makeup information applied, at the top of the screen C1.
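- The generation step (S104) can be pictured as calling a makeup-transfer network such as the PSGAN-style approach mentioned above. The wrapper class, method names, and interface below are assumptions for illustration only; they are not PSGAN's actual API.

```python
import numpy as np

class MakeupTransferModel:
    """Stand-in for a PSGAN-style makeup-transfer network.

    A real implementation would load trained generator weights; only the
    calling interface is illustrated here."""

    def transfer(self, reference_face: np.ndarray, target_face: np.ndarray) -> np.ndarray:
        # A trained network would transfer only the makeup from the reference
        # (poster U1's captured image G1) onto the target (the model's thumbnail
        # face), independent of the target's pose and expression.
        raise NotImplementedError("plug in a trained makeup-transfer generator here")

def generate_preview_face(model: MakeupTransferModel,
                          captured_image_g1: np.ndarray,
                          thumbnail_face: np.ndarray) -> np.ndarray:
    """Step S104: produce the model's face with poster U1's makeup applied,
    ready to be shown at the top of screen C1 in step S105."""
    return model.transfer(reference_face=captured_image_g1, target_face=thumbnail_face)
```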
- Next, the process of extracting at least one thumbnail image based on predetermined conditions in step S102 will be described.
- the information processing device 100 performs the extraction process in step S102 based on various conditions.
- three patterns will be described: using the degree to which makeup suits the subject, using the poster's follower information, and using the model's login information; however, the process is not limited to the following examples.
- the information processing device 100 may perform the extraction process based on, for example, the degree to which makeup suits the face type of the model in the thumbnail image. For example, the information processing device 100 may extract thumbnail images in descending order of the model's rating (score) indicating the degree to which the makeup applied by the poster U1 suits them. For example, the information processing device 100 may extract thumbnail images by estimating the degree to which makeup suits the model using machine learning of the poster U1's rating of the model's face image to which makeup information has been applied.
- For example, the information processing device 100 may extract thumbnail images by estimating the degree to which makeup suits the model using a learning model trained by machine learning, with cases where the poster U1 rated a model's face image with makeup information applied as "suits" treated as positive examples and cases rated as "does not suit" treated as negative examples.
- the information processing device 100 may have the poster U1 select (or specify) on screen C1 the face type of a model that they judge would suit their makeup, and then learn the selected face type in combination with makeup information to evaluate how well the makeup suits them.
- the information processing device 100 may perform machine learning on a combination of the selected face type and makeup information as a positive example, or on a combination of a face type and makeup information that was not selected as a negative example.
- the information processing device 100 may then estimate a suitable face type by inputting makeup information into a learning model machine-learned in this way.
- the information processing device 100 may perform machine learning on information registered on screen C1 after trying on makeup as a positive example, without having the poster U1 select a face type of a model, for example.
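- As one way to picture the suitability estimation described above, the sketch below trains a simple binary classifier on positive ("suits") and negative ("does not suit") examples taken from poster ratings. The feature encoding, the toy data, and the choice of logistic regression are assumptions; any learning model could be substituted.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Each training example pairs a model's face-type features with the poster's
# makeup features; the label comes from the poster's rating on screen C1:
# 1 = "suits" (positive example), 0 = "does not suit" (negative example).
# All numbers are toy data.
face_features   = np.array([[0.9, 0.1], [0.2, 0.8], [0.7, 0.3], [0.1, 0.9]])
makeup_features = np.array([[0.5, 0.5], [0.5, 0.5], [0.4, 0.6], [0.4, 0.6]])
labels          = np.array([1, 0, 1, 0])

X = np.hstack([face_features, makeup_features])
clf = LogisticRegression().fit(X, labels)

def suitability_score(face_vec: np.ndarray, makeup_vec: np.ndarray) -> float:
    """Estimated probability that the given makeup suits the given face type."""
    x = np.hstack([face_vec, makeup_vec]).reshape(1, -1)
    return float(clf.predict_proba(x)[0, 1])

# Candidate thumbnails can then be extracted in descending order of suitability_score.
```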
- the information processing device 100 may extract thumbnail images in the order of models whose face type is similar to that of the poster U1. For example, if the face type of the poster U1 is "Cute,” the information processing device 100 may prioritize and extract thumbnail images of models whose face type is "Cute.”
- the information processing device 100 may perform the extraction process based on, for example, the follower information of the poster U1. For example, the information processing device 100 may prioritize extracting thumbnail images of models with the same or similar facial type as the poster U1's followers. Furthermore, for example, the information processing device 100 may prioritize extracting thumbnail images of models with facial types that are of interest to the poster U1's followers. Furthermore, for example, the information processing device 100 may perform the extraction process limited to followers estimated to lead to conversions such as purchases or viewings. In this case, for example, the information processing device 100 may estimate the likelihood of conversions such as purchases or viewings using a model that has learned the relationship between a user's try-on history, purchase history, viewing history, etc. and whether the user has achieved a predetermined conversion, such as making a purchase or viewing a predetermined content, and then estimate followers who will lead to conversions based on the estimated likelihood of conversions.
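- The conversion-likelihood estimation mentioned above could, for example, be realized with a model trained on per-follower try-on, purchase, and viewing histories. The sketch below is a minimal illustration with an invented feature layout and toy data, not a prescribed implementation.

```python
from typing import Dict, List

import numpy as np
from sklearn.linear_model import LogisticRegression

# Per-follower features: [number of try-ons, number of purchases, number of views].
# Labels: 1 if the follower later achieved the predetermined conversion
# (e.g., a purchase or a viewing), else 0. All values are toy data.
history = np.array([
    [12, 3, 40],
    [ 1, 0,  2],
    [ 7, 1, 15],
    [ 0, 0,  1],
])
converted = np.array([1, 0, 1, 0])

conversion_model = LogisticRegression().fit(history, converted)

def likely_converters(followers: Dict[str, np.ndarray], threshold: float = 0.5) -> List[str]:
    """Return follower IDs whose estimated conversion likelihood exceeds the threshold."""
    return [fid for fid, feats in followers.items()
            if conversion_model.predict_proba(feats.reshape(1, -1))[0, 1] >= threshold]
```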
- the information processing device 100 may perform the extraction process based on, for example, the facial image of a follower rather than the model.
- the information processing device 100 may perform the extraction process based on, for example, a facial image of a model generated based on the facial image of a follower (such as a facial image of a model generated based on generation AI).
- the information processing device 100 may use, for example, the facial image of the follower who has made the most purchases, viewed the most, tried on the most items, or had the most followers, or may use the average facial image of multiple followers. This makes it possible to effectively encourage people to try on items by using, for example, the facial image of a follower with a large number of followers.
- the information processing device 100 may perform the extraction process based on, for example, login information (such as the number of logins and whether the user is a new registrant) of candidate models displayed on the screen C1. For example, the information processing device 100 may prioritize extracting thumbnail images of models with a higher number of logins.
- the model may be, for example, a model that is identical or similar to the facial type of a user with a higher number of logins, a model that is identical or similar to the facial type that the user with a higher number of logins is interested in, or the user himself/herself.
- Note that the "number of logins" is not limited to the number of times a given user has logged in; it may also refer to the number of users who have logged in.
- the information processing device 100 may prioritize extracting thumbnail images of models who are estimated to be more likely to apply the makeup information if the poster U1 posts it. Furthermore, for example, the information processing device 100 may prioritize extracting thumbnail images of models who are new registrants to the service W1. Furthermore, for example, the information processing device 100 may preferentially extract thumbnail images of models who have logged in within a predetermined period (e.g., recently).
- the above describes the predetermined conditions and the process for extracting thumbnail images of models who are candidates for applying makeup information.
- Next, the process for determining the display mode of the thumbnail images after such extraction (display order (sort order), highlighting, display of additional information such as face type and follower information, etc.) will be described.
- the information processing device 100 determines the display mode of thumbnail images for step S103 based on various conditions.
- For the display mode determination process, three patterns will be explained, using the degree to which makeup suits the model, the poster's follower information, and the model's login information, as in the extraction process described above.
- the present invention is not limited to these examples. Furthermore, explanations similar to those for the extraction process described above will be omitted as appropriate.
- the information processing device 100 may perform a display mode determination process based on, for example, the degree to which makeup suits the face type of the model in the thumbnail image. For example, the information processing device 100 may determine a display mode such that thumbnail images of models with higher ratings indicating the degree to which the makeup applied by the poster U1 suits them are displayed preferentially (e.g., displayed at the top, displayed at the beginning, or highlighted). Furthermore, for example, the information processing device 100 may determine a display mode such that thumbnail images of models with a face type similar to that of the poster U1 are displayed preferentially. Furthermore, the information processing device 100 may perform a display mode determination process by subdividing face types (e.g., subdividing the face into features such as eyes and lips).
- For example, the information processing device 100 may determine a display mode such that thumbnail images of models with higher ratings for each subdivided facial feature are displayed preferentially. In this case, the information processing device 100 may highlight a model to indicate that the makeup suits that feature, for example, using a pop-up or other display. Type determination may also be performed using only a specific facial feature rather than the entire face, and the evaluation used for preferential display may be based on that specific feature.
- For example, thumbnail images of models with cute eyes may be displayed on the preview screen together with thumbnail images of models with cool eyes, and thumbnail images of models with cute lips may be displayed together with thumbnail images of models with cool lips. Alternatively, for example, only thumbnail images of models with cool eyes may be extracted and displayed.
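- A minimal sketch of the feature-level display mode determination described above follows; the data structure, rating scale, and highlight text are illustrative assumptions.

```python
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class CandidateThumbnail:
    model_id: str
    # Suitability ratings per subdivided facial feature, e.g. {"eyes": 0.9, "lips": 0.4}.
    feature_ratings: Dict[str, float] = field(default_factory=dict)
    highlight: str = ""

def decide_display_order(thumbnails: List[CandidateThumbnail], feature: str) -> List[CandidateThumbnail]:
    """Sort candidates so that models rated highest for one facial feature
    (e.g. "eyes") come first, and mark the top candidate for a pop-up highlight."""
    ordered = sorted(thumbnails, key=lambda t: t.feature_ratings.get(feature, 0.0), reverse=True)
    if ordered:
        ordered[0].highlight = f"this makeup suits these {feature}"
    return ordered

# Usage: rank candidates by how well the makeup suits their eyes.
candidates = [
    CandidateThumbnail("SG1", {"eyes": 0.6, "lips": 0.8}),
    CandidateThumbnail("SG4", {"eyes": 0.9, "lips": 0.3}),
]
print([t.model_id for t in decide_display_order(candidates, "eyes")])  # ['SG4', 'SG1']
```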
- the information processing device 100 may perform a display mode determination process based on, for example, follower information of poster U1. For example, the information processing device 100 may perform a display mode determination process based on the facial types of poster U1's followers. For example, the information processing device 100 may determine a display mode such that thumbnail images of models with the same or similar facial types as poster U1's followers are preferentially displayed. Furthermore, for example, the information processing device 100 may perform a display mode determination process based on the proportion of facial types of poster U1's followers.
- For example, if a high proportion of poster U1's followers have the "Cute" facial type, the information processing device 100 may determine a display mode such that thumbnail images of "Cute" models are preferentially displayed. Furthermore, for example, when there are multiple face types with a high proportion, the information processing device 100 may subdivide the face types and then perform the process of determining the display mode.
- the information processing device 100 may, for example, determine to highlight a follower's face type (indicating that it is the face type of a follower) by displaying it in a pop-up or the like, or determine to highlight the number of followers for each face type. For example, the information processing device 100 may determine to highlight a thumbnail image by changing the color or thickness of its frame. At this time, the information processing device 100 may also, for example, determine to prioritize the display so that face types with high conversion (likelihood of trying on, etc.) are shown at the top. This allows the poster U1 to check which followers are highly engaged. The information processing device 100 may also, for example, determine to notify followers who match the face type selected by the poster U1 that new makeup information has been posted, when the poster U1 posts makeup information.
- the information processing device 100 may decide to highlight the follower's face image instead of the model's, for example.
- Alternatively, the information processing device 100 may decide to highlight a model's face image generated based on the follower's face image (such as a face image generated by generative AI).
- the information processing device 100 may use the face image of the follower who has made the most purchases, viewed the most, tried on the most clothes, or had the most followers, or may use the average face image of multiple followers. This makes it possible to effectively encourage followers to try on clothes, for example, by using the face image of a follower with a large number of followers.
- the information processing device 100 may perform a display mode determination process based on, for example, login information of a candidate model displayed on the screen C1. For example, the information processing device 100 may perform a display mode determination process based on the number of logins of the model.
- the model may be, for example, a model with the same or similar face type as a user with a high number of logins, a model with the same or similar face type as a face type that is of interest to a user with a high number of logins, or the user himself/herself with a high number of logins.
- Here too, the "number of logins" is not limited to the number of times a given user has logged in; it may also refer to the number of users who have logged in.
- the information processing device 100 may determine a display mode such that thumbnail images of models with a face type that accounts for a high proportion of logins are displayed preferentially.
- the information processing device 100 may perform a display mode determination process by subdividing the face types. In this case, the information processing device 100 may determine to highlight the model, for example, by displaying a message such as "High likelihood of trying on makeup!" in a pop-up or the like.
- the information processing device 100 may determine the display mode so that thumbnail images of models who are new registrants to the service W1 are preferentially displayed.
- the above describes the process for determining the display mode of thumbnail images.
- Once the information processing device 100 determines the display mode of the thumbnail images, it displays the thumbnail images in a predetermined area of the screen C1 in the determined display mode.
- the model for the thumbnail image is described as a follower of the poster U1, but this example is not particularly limited.
- the model for the thumbnail image may be, for example, only users who have a predetermined relationship with the poster U1. For example, it may be only users who have permitted sharing (e.g., users who have been set as friends). This allows only the faces of users who have permitted sharing to be displayed on the preview screen, making it possible to provide a service that can be enjoyed only by friends.
- the model for the thumbnail image may be, for example, only followers of the poster U1 who are estimated to lead to purchases or views.
- the model of the thumbnail image is a follower of poster U1, i.e., a real model, but this example is not particularly limited.
- the model of the thumbnail image does not have to be a real model, and may be, for example, a model generated by a generation AI or the like.
- the thumbnail image does not have to be a facial image of a real model, and may be, for example, a facial image of a model generated by a generation AI or the like.
- it may be a facial image of a model generated based on follower information of poster U1.
- it may be a facial image of a model generated by taking a weighted average based on the number of followers of poster U1.
- the thumbnail image may be, for example, a facial image of the model generated based on weighting changed based on the facial information registered by the user.
- the facial image of the model may be, for example, a facial image estimated based on user-augmented information based on the user's following relationships, purchasing history, etc. In this way, the facial image of the model may be an actually photographed facial image, a facial image based on facial information selected by the user, or a facial image estimated based on user-augmented information.
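- One way to picture the weighted-average variant above is to average follower face representations (e.g., embeddings) weighted by each follower's follower count and then render a model face from the result. The embedding representation and function name below are assumptions for illustration.

```python
import numpy as np

def weighted_average_face(follower_embeddings: np.ndarray,
                          follower_counts: np.ndarray) -> np.ndarray:
    """Combine follower face embeddings into a single generated-model embedding,
    weighting each follower by their own follower count, as in the variant above.

    follower_embeddings: shape (n_followers, embedding_dim)
    follower_counts:     shape (n_followers,)"""
    weights = follower_counts / follower_counts.sum()
    return (weights[:, None] * follower_embeddings).sum(axis=0)

# Example: three followers with 4-dimensional face embeddings (toy data).
embeddings = np.array([[0.1, 0.2, 0.3, 0.4],
                       [0.4, 0.3, 0.2, 0.1],
                       [0.9, 0.8, 0.7, 0.6]])
follower_counts = np.array([1000.0, 50.0, 10.0])
print(weighted_average_face(embeddings, follower_counts))
```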
- an operation button that enables the application of a fashion measure to be turned on and off may be displayed on the screen C1.
- an explanatory guide for first-time users may be displayed on the screen C1.
- a button for automatically generating a description related to the face type of the thumbnail image may be displayed on the preview screen.
- a button for automatically generating a description such as "This makeup looks good on your XX type, so please try it on” may be displayed.
- tags related to face types may be linked and automatically registered on the preview screen to make it easier to search for makeup information from the tags.
- the frame of the thumbnail image on the preview screen may be shaped to correspond to the face type of each model.
- For example, a round frame may be used for some face types, and a more standard frame shape for standard face types.
- the shape may also correspond to the characteristics of each model's face type. This makes it possible to determine each model's face type from the frame, reducing the burden of checking the preview when registering.
- the user may be allowed to select according to the purpose of makeup (e.g., date, wedding, girls' night, etc.).
- the user may be allowed to select from options using different clothing and backgrounds depending on the purpose.
- the face type of the thumbnail images may be the same or different depending on the purpose.
- the face type may be the same but the clothing and background may be different depending on the purpose, or the clothing and background may be different for each face type depending on the purpose.
- the makeup density (intensity) may be adjusted on the preview screen and then registered, and when the makeup is tried on, the registered density may be used as the base value.
- face types for which little suitable makeup information has been registered may be extracted or displayed preferentially. Alternatively, the fact that there are few registrations may be displayed in an easily understandable manner without prioritizing extraction or display. This makes it easier for the registered makeup information of posters with few followers (such as posters whose posts are less likely to be selected) to be tried on, and promotes this function by allowing makeup information that suits a variety of face types to be registered.
- multiple preview screens may also be displayed.
- the multiple preview screens may be displayed simultaneously, side by side, or split into left and right halves and displayed separately.
- the degree of suitability may also be displayed. Furthermore, for example, the system may be asked to evaluate which looks better and the results of the comparison may be learned. This makes it possible to achieve highly accurate learning.
- an incentive may be given to the poster when the post leads to trying on clothes, browsing of the product after trying on clothes, purchase of the product after trying on clothes, or reactivation of a dormant follower.
- electronic money or points that can be used for payment at an online shopping mall selling the product may be given. This makes it possible to encourage the poster to post with conversion in mind.
- Information Processing Variation 8: Display Control
- display control may also be performed by fixing the model without extracting a thumbnail image.
- Instead of the poster specifying a thumbnail image to which the fashion measure is to be applied, the fashion measure may be automatically applied to a model with the most suitable face type or a model with the face type of a follower, and the result may be displayed to the poster.
- Fig. 5 is a diagram showing an example of the configuration of the terminal device 10 according to the embodiment.
- the terminal device 10 includes a communication unit 11, an input unit 12, an output unit 13, and a control unit 14.
- the communication unit 11 is realized by, for example, a network interface card (NIC), etc.
- the communication unit 11 is connected to a predetermined network N by wire or wirelessly, and transmits and receives information to and from the information processing device 100, etc., via the predetermined network N.
- the input unit 12 accepts various operations from posters.
- the input unit 12 accepts various operations from poster U1.
- the input unit 12 may accept various operations from the poster via a display screen using a touch panel function.
- the input unit 12 may also accept various operations from buttons provided on the terminal device 10 or a keyboard or mouse connected to the terminal device 10.
- the output unit 13 is a display screen of a tablet terminal or the like realized by, for example, a liquid crystal display or an organic EL (Electro-Luminescence) display, and is a display device for displaying various information. For example, the output unit 13 displays information transmitted from the information processing device 100.
- the control unit 14 is, for example, a controller, and is realized by a CPU (Central Processing Unit), an MPU (Micro Processing Unit), or the like executing various programs stored in a storage device within the terminal device 10 using RAM (Random Access Memory) as a work area.
- these various programs include application programs installed on the terminal device 10.
- these various programs include an application program that displays a preview screen of posted information including thumbnail images of models who are candidates for applying a fashion measure, based on information transmitted from the information processing device 100.
- the control unit 14 is also realized by an integrated circuit, for example, an ASIC (Application Specific Integrated Circuit) or an FPGA (Field Programmable Gate Array).
- control unit 14 has a receiving unit 141 and a transmitting unit 142, and realizes or executes the information processing functions described below.
- the receiving unit 141 receives various information from other information processing devices such as the information processing device 100. For example, the receiving unit 141 receives information for displaying a preview screen of posted information including a thumbnail image of a model who is a candidate for applying a fashion measure.
- the transmission unit 142 transmits various types of information to other information processing devices such as the information processing device 100. For example, the transmission unit 142 transmits selection information selected by the poster from among thumbnail images displayed together with the posted information. Furthermore, for example, the transmission unit 142 transmits evaluation information when the poster evaluates how well makeup suits them.
- Fig. 6 is a diagram showing an example of the configuration of the information processing device 100 according to the embodiment.
- the information processing device 100 includes a communication unit 110, a storage unit 120, and a control unit 130.
- the information processing device 100 may also include an input unit (e.g., a keyboard, a mouse, etc.) that accepts various operations from an administrator of the information processing device 100, and a display unit (e.g., a liquid crystal display, etc.) that displays various information.
- the communication unit 110 is realized by, for example, a NIC etc.
- the communication unit 110 is connected to a network N by wire or wirelessly, and transmits and receives information to and from the terminal device 10 etc. via the network N.
- the storage unit 120 is realized by, for example, a semiconductor memory element such as a RAM or a flash memory, or a storage device such as a hard disk or an optical disk. As shown in FIG. 6 , the storage unit 120 has a model information storage unit 121 and an evaluation information storage unit 122.
- the model information storage unit 121 stores model information that is a candidate for display on the preview screen (screen C1).
- Figure 7 shows an example of the model information storage unit 121 according to the embodiment.
- the model information storage unit 121 has items such as "Model ID," "Photographed Image," "Setting Type Information," and "Model Information."
- Model ID indicates identification information for identifying the model (user).
- Photographed image indicates a photographed image of the model that has been registered by the model.
- conceptual information such as "Photographed image #1" and "Photographed image #2” is stored in "Photographed image,” but in reality, image data, etc. is stored.
- the URL Uniform Resource Locator
- Setting type information indicates the type information set by the model.
- Model information indicates model information related to the model, such as the model's following relationships and purchase history.
- conceptual information such as "Model Information #1” and “Model Information #2” is stored in "Model Information,” but in reality, information such as "Following: User P111, User P112, ...; followers: User P211, User P212, ...; Purchase History: Product F1, Product F2, ...; ## is stored.
- the evaluation information storage unit 122 stores evaluation information made by posters (for example, evaluation information indicating how well makeup suits them).
- Figure 8 shows an example of the evaluation information storage unit 122 according to the embodiment. As shown in Figure 8, the evaluation information storage unit 122 has items such as "Evaluation Information ID,” "Contributor ID,” “Model ID,” and "Evaluation Information.”
- Evaluation Information ID indicates identification information for identifying the evaluation information.
- Contributor ID indicates identification information for identifying the contributor who made the evaluation.
- Model ID indicates identification information for identifying the evaluated model.
- Evaluation Information indicates the evaluation information of the contributor. In the example shown in Figure 8, conceptual information such as “Evaluation Information #1” and “Evaluation Information #2” is stored in “Evaluation Information,” but in reality, information such as a combination of makeup information applied to the model and the contributor's evaluation (suits, does not suit, etc.) is stored.
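- For illustration, the records held by the model information storage unit 121 (FIG. 7) and the evaluation information storage unit 122 (FIG. 8) might be represented as follows; the field names mirror the items described above, while the concrete types and the absence of a database layer are simplifying assumptions.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class ModelRecord:
    """One row of the model information storage unit 121 (FIG. 7)."""
    model_id: str
    photographed_image: str              # e.g. a path or URL for "Photographed image #1"
    setting_type: str                    # face type set by the model, e.g. "Cute"
    following: List[str] = field(default_factory=list)
    followers: List[str] = field(default_factory=list)
    purchase_history: List[str] = field(default_factory=list)

@dataclass
class EvaluationRecord:
    """One row of the evaluation information storage unit 122 (FIG. 8)."""
    evaluation_id: str
    contributor_id: str                  # poster who made the evaluation
    model_id: str                        # evaluated model
    makeup_info: str                     # makeup information applied to the model
    suits: bool                          # True = "suits", False = "does not suit"
```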
- the control unit 130 is a controller, and is realized, for example, by a CPU, an MPU, or the like, executing various programs stored in a storage device inside the information processing device 100 using RAM as a work area.
- the control unit 130 is also realized, for example, by an integrated circuit such as an ASIC or an FPGA.
- control unit 130 has an acquisition unit 131, an identification unit 132, an extraction unit 133, a first display unit 134, a generation unit 135, a second display unit 136, and a determination unit 137, and realizes or executes the information processing functions described below.
- the internal configuration of the control unit 130 is not limited to the configuration shown in FIG. 6, and may be any other configuration that performs the information processing described below.
- the acquisition unit 131 acquires various types of information from an external information processing device, such as the terminal device 10.
- the acquisition unit 131 acquires various types of information from the storage unit 120.
- the acquisition unit 131 also stores the acquired various types of information in the storage unit 120.
- the acquisition unit 131 acquires images taken by the poster to which the fashion measure has been applied.
- the acquisition unit 131 also acquires thumbnail images of models who are candidates for applying the fashion measure.
- the acquisition unit 131 also acquires selection information selected by the poster from among the thumbnail images.
- the acquisition unit 131 also acquires evaluation information if the poster has made an evaluation.
- the identification unit 132 identifies the makeup information of the poster by extracting mask information from the photographed image of the poster. For example, the identification unit 132 identifies the makeup information by extracting mask information from the photographed image acquired by the acquisition unit 131.
- the extraction unit 133 extracts at least one thumbnail image of a model who is a candidate for applying a fashion measure based on a predetermined condition. For example, the extraction unit 133 extracts at least one thumbnail image of a model who is a candidate for applying a fashion measure based on a predetermined condition from among the thumbnail images of models who are candidates for applying a fashion measure and acquired by the acquisition unit 131. For example, the extraction unit 133 extracts at least one thumbnail image of a model who is a candidate for applying a fashion measure in order of the model's highest rating indicating how well makeup suits them (the higher the rating indicating how well makeup suits them, the more priority is given to extracting at least one thumbnail image).
- the extraction unit 133 extracts at least one thumbnail image of a model who is a candidate for applying a fashion measure in order of the model's facial type similar to that of followers (the more similar the facial type of a model is to followers, the more priority is given to extracting at least one thumbnail image). Furthermore, for example, the extraction unit 133 extracts at least one thumbnail image of a model who is a candidate for applying a fashion measure in order of the model's highest number of logins (the more priority is given to extracting at least one thumbnail image of a model with a highest number of logins).
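- The orderings used by the extraction unit 133 (suitability rating, face-type similarity to followers, login count) can all be pictured as one top-k extraction with a pluggable priority key, as in the hypothetical sketch below.

```python
from typing import Callable, Dict, List

def extract_top_candidates(models: List[Dict],
                           priority: Callable[[Dict], float],
                           limit: int = 5) -> List[Dict]:
    """Extract at most `limit` candidate models, ordered by a pluggable priority
    such as suitability rating, face-type similarity to followers, or login count."""
    return sorted(models, key=priority, reverse=True)[:limit]

models = [
    {"model_id": "SG1", "rating": 0.8, "similarity": 0.2, "logins": 30},
    {"model_id": "SG2", "rating": 0.5, "similarity": 0.9, "logins": 4},
    {"model_id": "SG3", "rating": 0.9, "similarity": 0.4, "logins": 12},
]

by_rating = extract_top_candidates(models, lambda m: m["rating"])   # suitability order
by_logins = extract_top_candidates(models, lambda m: m["logins"])   # login-count order
print([m["model_id"] for m in by_rating])  # ['SG3', 'SG1', 'SG2']
print([m["model_id"] for m in by_logins])  # ['SG1', 'SG3', 'SG2']
```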
- the first display unit 134 displays thumbnail images of models who are candidates for applying the fashion measure. For example, the first display unit 134 displays thumbnail images extracted by the extraction unit 133 (at least one thumbnail image extracted based on a predetermined condition). The first display unit 134 may also perform the extraction process of the extraction unit 133.
- the first display unit 134 preferentially displays thumbnail images on the preview screen in a predetermined display format (for example, in a predetermined display format determined by the determination unit 137 described below).
- the first display unit 134 may also perform the determination process of the determination unit 137 described below.
- the first display unit 134 preferentially displays thumbnail images of models highly rated for how well makeup suits them.
- the first display unit 134 preferentially displays thumbnail images of models highly rated for how well makeup suits them from among the thumbnail images extracted by the extraction unit 133.
- the first display unit 134 preferentially displays thumbnail images of models highly rated for how well makeup suits them by sorting the thumbnail images extracted by the extraction unit 133 in descending order of the ratings indicating how well makeup suits them.
- the first display unit 134 also preferentially displays thumbnail images of models highly rated for how well makeup suits them, for example, for each of the subdivided face types.
- the first display unit 134 preferentially displays thumbnail images of models whose facial type is similar to that of the poster.
- the first display unit 134 preferentially displays thumbnail images of models whose facial type is similar to that of the poster from among the thumbnail images extracted by the extraction unit 133.
- the first display unit 134 preferentially displays thumbnail images of models whose facial type is similar to that of the poster by sorting the thumbnail images extracted by the extraction unit 133 in order of similarity of facial type to that of the poster.
- the first display unit 134 preferentially displays thumbnail images of models with a high proportion of followers who have a specific face type. For example, if a specific face type is prevalent among the face types of followers, the first display unit 134 preferentially displays thumbnail images of models with the specific face type. Furthermore, the first display unit 134, for example, based on the model's login information, preferentially displays thumbnail images of models with a specific face type and a high proportion of logins. For example, if a specific face type is prevalent among the face types of models with a high proportion of logins, the first display unit 134 preferentially displays thumbnail images of models with the specific face type. Furthermore, for example, if there are multiple specific face types prevalent among the face types of models with a high proportion of logins, the first display unit 134 preferentially displays thumbnail images of models with the specific face type for each of the subdivisions of the face type.
- the generation unit 135 generates an image to which a fashion measure has been applied. For example, the generation unit 135 generates an image to which a fashion measure has been applied to a thumbnail image selected by a poster, based on the selection information acquired by the acquisition unit 131. For example, the generation unit 135 generates fashion measure information for generating an image to which a fashion measure similar to the fashion measure of the poster has been applied, using a method such as PSGAN. For example, the generation unit 135 generates an image to which the generated fashion measure information has been applied to a model.
- the second display unit 136 displays an image to which the fashion measure has been applied.
- the second display unit 136 displays an image (an image to which the fashion measure has been applied) generated by the generation unit 135.
- the second display unit 136 may also perform the generation process of the generation unit 135.
- the determination unit 137 determines a predetermined display mode for displaying thumbnail images on the preview screen (for example, determines a predetermined display mode for preferential display by the first display unit 134). For example, the determination unit 137 determines the display mode of thumbnail images when displaying them in a predetermined area (area R1) on the screen C1. That is, for example, the determination unit 137 determines thumbnail images to be displayed on the preview screen. For example, the determination unit 137 determines that thumbnail images of models with high ratings indicating how well makeup suits them are to be preferentially displayed. For example, the determination unit 137 determines thumbnail images to be preferentially displayed by sorting the thumbnail images in descending order of ratings indicating how well makeup suits them. For example, the determination unit 137 determines a display mode of thumbnail images in which models with higher ratings indicating how well makeup suits them are preferentially displayed by sorting the thumbnail images in descending order of ratings indicating how well makeup suits them.
- the determination unit 137 determines, for example, to preferentially display thumbnail images of models whose facial type is similar to the poster. For example, the determination unit 137 determines thumbnail images to be preferentially displayed by sorting the images in order of similarity of facial type to the poster. For example, the determination unit 137 determines a display mode of thumbnail images such that the more similar the facial type to the poster is, the more preferentially displayed the thumbnail images are, by sorting the images in order of similarity of facial type to the poster. Furthermore, the determination unit 137 determines, for example, to preferentially display thumbnail images of models who have a higher proportion of followers with a specific facial type.
- the determination unit 137 determines thumbnail images to be preferentially displayed by sorting the images in order of the highest proportion of followers with a specific facial type. For example, the determination unit 137 determines a display mode of thumbnail images such that the higher the proportion of followers with a specific facial type is, the more preferentially displayed the thumbnail images are, by sorting the images in order of the highest proportion of followers with a specific facial type. Furthermore, the determination unit 137 determines to preferentially display thumbnail images of models who have a higher proportion of logins with a specific facial type. For example, the determination unit 137 determines thumbnail images to be preferentially displayed by sorting them in descending order of the proportion of logins of models with a specific face type. For example, the determination unit 137 determines a display mode for thumbnail images in which models with a specific face type have a higher proportion of logins, by sorting them in descending order of the proportion of logins of models with a specific face type.
- Fig. 9 and Fig. 10 are flowcharts showing the procedure of information processing by the information processing system 1 according to the embodiment.
- Fig. 9 is a flowchart showing the procedure of information processing including processing for extracting thumbnail images of models based on predetermined conditions.
- Fig. 10 is a flowchart showing the procedure of information processing including processing for preferentially displaying thumbnail images of models (rearranging them to the top) based on a predetermined display mode.
- the information processing device 100 acquires an image taken by a poster to which a fashion measure has been applied (step S201).
- the information processing device 100 extracts at least one thumbnail image of a model that is a candidate for applying the fashion measure based on predetermined conditions (step S202).
- the information processing device 100 displays the extracted thumbnail image on a preview screen (step S203).
- the information processing device 100 determines whether the poster has selected a thumbnail image (step S204). If the poster has selected a thumbnail image (step S204; YES), the information processing device 100 generates an image to which the fashion measure has been applied (step S205). The information processing device 100 displays the generated image on the preview screen (step S206). On the other hand, if the poster has not selected a thumbnail image (step S204; NO), the information processing device 100 ends information processing. In this case, the poster may post without confirming the application of the fashion measure.
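For illustration only, the Fig. 9 flow can be summarized as the following Python sketch; the helper functions and their signatures are hypothetical placeholders standing in for processing described elsewhere in this document.

```python
# A minimal sketch of the Fig. 9 procedure (steps S201 to S206).

def extract_candidate_thumbnails(captured_image, conditions):
    # S202: extract candidate model thumbnails based on predetermined conditions.
    return ["thumbnail_SG1", "thumbnail_SG2"]  # placeholder result

def apply_fashion_measure(captured_image, thumbnail):
    # S205: generate an image in which the fashion measure is applied to the model.
    return f"{thumbnail}_with_measure_from_{captured_image}"  # placeholder result

def preview_flow(captured_image, conditions, selected_thumbnail=None):
    # S201: the captured image of the poster has already been acquired.
    thumbnails = extract_candidate_thumbnails(captured_image, conditions)  # S202
    print("preview:", thumbnails)                                          # S203

    if selected_thumbnail is None:        # S204: NO
        return None  # the poster may still post without confirming the applied measure

    generated = apply_fashion_measure(captured_image, selected_thumbnail)  # S205
    print("preview:", generated)                                           # S206
    return generated

preview_flow("captured_G1", {"condition": "suitability"}, "thumbnail_SG1")
```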
- the information processing device 100 acquires an image taken by the poster to which the fashion measure has been applied (step S301).
- the information processing device 100 extracts at least one thumbnail image of a model who is a candidate for applying the fashion measure (step S302).
- the information processing device 100 preferentially displays the extracted thumbnail images on the preview screen in a predetermined display mode (step S303).
- the information processing device 100 determines whether the poster has selected a thumbnail image (step S304). If the poster has selected a thumbnail image (step S304; YES), the information processing device 100 generates an image to which the fashion measure has been applied (step S305). The information processing device 100 displays the generated image on the preview screen (step S306). On the other hand, if the poster has not selected a thumbnail image (step S304; NO), the information processing device 100 ends information processing. In this case, the poster may post without confirming the application of the fashion measure.
- the information processing device 100 includes an acquisition unit 131, a first display unit 134, and a second display unit 136.
- the acquisition unit 131 acquires images taken by the poster to which a fashion measure has been applied.
- the first display unit 134 extracts and displays at least one thumbnail image of a model who is a candidate to apply the fashion measure based on predetermined conditions.
- the second display unit 136 displays an image in which the fashion measure has been applied to the model.
- the information processing device 100 thus enables the poster, for example, to check before posting whether a fashion measure is likely to lead to conversions.
- the first display unit 134 extracts and displays thumbnail images based on an evaluation indicating the degree to which the fashion measure suits a specific type of the model's appearance (for example, a specific face type or facial part).
- the information processing device 100 thus enables the poster, for example, to take into account the degree to which the fashion measure suits the model and to check before posting which type of model is likely to lead to conversions.
- the first display unit 134 extracts and displays thumbnail images of models whose appearances are similar to those of the poster.
- the information processing device 100 thus enables the poster, for example, to take into account the degree of similarity in appearance with the poster and to check before posting whether the fashion measure is likely to lead to conversions.
- the first display unit 134 extracts and displays thumbnail images of the model based on the poster's follower information.
- the information processing device 100 thus enables the poster, for example, to take the poster's follower information into account and to check before posting whether the fashion measure is likely to lead to conversions.
- the first display unit 134 extracts and displays thumbnail images of models whose specific features are similar to those of the poster's followers.
- the information processing device 100 thus enables the poster, for example, to take into account the degree of similarity in appearance between the model and the poster's followers and to check before posting whether the fashion measure is likely to lead to conversions.
- the first display unit 134 extracts and displays a thumbnail image of the model based on the model's login information.
- the information processing device 100 thus enables the poster, for example, to take the model's login information into account and to check before posting whether the fashion measure is likely to lead to conversions.
- the first display unit 134 extracts and displays thumbnail images of models estimated to be highly likely to try on the fashion measure.
- the information processing device 100 thus enables the poster, for example, to take into account the likelihood of the fashion measure being tried on and to check before posting whether it is likely to lead to conversions.
- Fig. 11 is a hardware configuration diagram showing an example of a computer that realizes the functions of the terminal device 10 and the information processing device 100.
- the computer 1000 has a CPU 1100, a RAM 1200, a ROM 1300, a HDD 1400, a communication interface (I/F) 1500, an input/output interface (I/F) 1600, and a media interface (I/F) 1700.
- the CPU 1100 operates based on programs stored in the ROM 1300 or HDD 1400, and controls each component.
- the ROM 1300 stores a boot program executed by the CPU 1100 when the computer 1000 starts up, as well as programs that depend on the computer 1000's hardware.
- the HDD 1400 stores programs executed by the CPU 1100 and data used by such programs.
- the communication interface 1500 acquires data from other devices via a specified communication network and sends it to the CPU 1100, and transmits data generated by the CPU 1100 to other devices via a specified communication network.
- the CPU 1100 controls output devices such as displays and printers, and input devices such as keyboards and mice, via the input/output interface 1600.
- the CPU 1100 acquires data from input devices via the input/output interface 1600.
- the CPU 1100 also outputs generated data to output devices via the input/output interface 1600.
- the media interface 1700 reads programs or data stored on the recording medium 1800 and provides them to the CPU 1100 via the RAM 1200.
- the CPU 1100 loads the programs from the recording medium 1800 onto the RAM 1200 via the media interface 1700 and executes the loaded programs.
- the recording medium 1800 is, for example, an optical recording medium such as a DVD (Digital Versatile Disc) or a PD (Phase Change Rewritable Disk), a magneto-optical recording medium such as an MO (Magneto-Optical Disk), a tape medium, a magnetic recording medium, or a semiconductor memory.
- the CPU 1100 of the computer 1000 executes programs loaded onto the RAM 1200 to realize the functions of the control units 14 and 130.
- the CPU 1100 of the computer 1000 reads and executes these programs from the recording medium 1800, but as another example, the CPU 1100 may obtain these programs from another device via a specified communications network.
- the components of each device shown in the figures are functional concepts and do not necessarily have to be physically configured as shown.
- the specific form of distribution and integration of each device is not limited to that shown, and all or part of them can be functionally or physically distributed and integrated in any unit depending on various loads, usage conditions, etc.
- an acquisition unit can be read as an acquisition means or an acquisition circuit.
Description
The present invention relates to an information processing device, an information processing method, and an information processing program.
Technology for displaying an image to which a poster's fashion measures, such as makeup, have been applied is known. For example, a technology is known in which a user can specify an image taken by the poster, and an image of the user to which the poster's fashion measures have been applied is displayed.
However, while conventional technology can promote improved usability for users, there is room for further improvement to promote improved usability for contributors. For example, conventional technology has room for further improvement to make it easier for contributors to satisfy their desire for financial or psychological recognition, such as wanting to spread their fashion initiatives to others. Also, conventional technology has room for further improvement to enable contributors to determine, before posting, whether a fashion initiative is likely to lead to conversions.
This application was made in light of the above, and aims to enable posters to determine before posting whether a fashion initiative is likely to lead to conversions.
The information processing device according to the present application is characterized by having an acquisition unit that acquires images taken by a poster to which a fashion measure has been applied, a first display unit that extracts and displays at least one thumbnail image of a model that is a candidate for applying the fashion measure based on predetermined conditions, and a second display unit that, when the poster specifies the thumbnail image, displays an image in which the fashion measure has been applied to the model.
One aspect of this embodiment has the effect of enabling posters to determine, before posting, whether a fashion initiative is likely to lead to conversions.
Below, modes for implementing the information processing device, information processing method, and information processing program according to the present application (hereinafter referred to as "embodiments") will be described in detail with reference to the drawings. Note that the information processing device, information processing method, and information processing program according to the present application are not limited to these embodiments. Furthermore, identical components in the following embodiments will be designated by the same reference numerals, and duplicate explanations will be omitted.
(Embodiment)
1. Configuration of the information processing system
An information processing system 1 shown in Fig. 1 will be described. As shown in Fig. 1, the information processing system 1 includes a terminal device 10 and an information processing device 100. The terminal device 10 and the information processing device 100 are connected to each other via a predetermined communication network (network N) so as to be able to communicate with each other via wired or wireless communication. Fig. 1 is a diagram showing an example of the configuration of the information processing system 1 according to the embodiment.
The terminal device 10 is an information processing device used by a poster who wishes to spread their own fashion measures (makeup, etc.) to others via SNS or the like. A poster who uses the terminal device 10 may be, for example, an influencer who disseminates their own fashion measures over the Internet, for example, an influencer who distributes videos of the fashion measures they undertake. The terminal device 10 may be any device that can implement the processing in the embodiment. The terminal device 10 may be a device such as a smartphone, tablet device, notebook PC, desktop PC, mobile phone, or PDA. Fig. 3 shows a case where the terminal device 10 is a smartphone.
The terminal device 10 is, for example, a smart device such as a smartphone or tablet, and is a portable terminal device that can communicate with any server device via a wireless communication network such as 3G to 5G (Generation) or LTE (Long Term Evolution). The terminal device 10 also has a screen, such as an LCD, with touch panel functionality, and may accept various operations on displayed data such as content, for example tapping, sliding, and scrolling, performed by the poster using a finger or stylus. In Fig. 3, the terminal device 10 is used by poster U1.
The information processing device 100 is an information processing device that aims to make it easier for posters to satisfy their desire for financial or psychological recognition, such as wanting to spread their fashion initiatives to others, and to enable posters to determine before posting whether a fashion initiative is likely to lead to conversions, thereby promoting improved usability for posters. The information processing device 100 may be any device that can implement the processing in the embodiments. For example, the information processing device 100 is implemented by a server device, cloud system, or the like that provides a predetermined service (which may be a web service or an app service) that enables fashion initiatives to be posted (registered) and tried on. In the following embodiments, service W1 will be used as an example of such a service. That is, service W1 is a predetermined service that enables fashion initiatives to be posted and tried on.
2. An example of information processing
There are cases where a user wants to spread their own fashion initiatives to others. Also, in order to spread their own fashion initiatives to others, they may want to determine before posting what kind of fashion initiative will lead to conversions (for example, whether it will lead to a purchase or whether it will encourage people to try on the item). However, with conventional technology, it has not been possible to determine, for example, what kind of fashion initiative will lead to conversions among their followers, etc., before posting.
This application was made in light of the above, and aims to make it easier for posters to satisfy their desire for financial or psychological recognition, such as wanting to spread their own fashion initiatives to others, and to enable posters to determine before posting whether their fashion initiatives are likely to lead to conversions.
In the following embodiments, the application of fashion measures is not limited to actual actions, but may also include virtual actions. In other words, it may be an action that the poster actually performs in the real world, or an action that they perform virtually via an avatar, etc.
In the following embodiments, a case where a poster posts facial makeup is described as an example of a fashion measure, but fashion measures are not limited to facial makeup. A fashion measure may be, for example, hair styling such as a hairstyle or hair color, and is not limited to makeup: it may also be trying on clothing (either an actual try-on in the real world or a virtual try-on) or coordinating an outfit (where clothing is not limited to one-piece dresses and may include, for example, shoes, socks, accessories, watches, and jewelry). The following embodiments describe the processing on the side of the poster who posts the fashion measure, rather than the processing on the side of the user who applies it. The poster who posts the fashion measure, for example, photographs (scans) their face after applying the measure and registers the items used. The user who applies the fashion measure, for example, specifies the poster's posted information, tries the fashion measure (such as a full makeup look), and, if they like it, purchases the items used in it.
Figure 2A is a diagram showing an example of the overall flow of the user interface on the poster's side. When poster U1 clicks "Register" at the bottom of the first screen from the left, they will be redirected to the second screen from the left. After reading the notes and agreeing to the terms of use (if it is their first time), they will be redirected to the third screen from the left. After taking photos of their face from multiple angles on the makeup photo shoot screen, they will be redirected to the fourth screen from the left. After checking the preview screen to see how the captured makeup will look when tried on, they will be redirected to the fifth screen from the left. Then, on this screen, they can enter a thumbnail image and text to post, link the items used, and post, and the posted information will be made public.
Figure 2B is a diagram showing an example of the overall flow of the user interface on the user side. When user P1 selects the makeup they are interested in on the first screen from the left, the screen transitions to the second screen from the left. When user P1 operates "Try Makeup" on this screen, the screen transitions to the third screen from the left. On this screen, user P1 can try on the registered makeup on their own face and see the items used. Furthermore, when user P1 operates "HOW TO" on the second screen from the left, the screen transitions to the fourth screen from the left. On this screen, it is possible to view an explanatory video on the makeup. For example, a web view is launched within the app and the video is played.
Fig. 3 is a diagram showing an example of information processing in the information processing system 1 according to the embodiment. Fig. 3 illustrates an example of information processing on the side of a poster who posts a fashion measure. Poster U1 uploads his/her makeup information to a predetermined service (service W1) provided by the information processing device 100 (step S11). The makeup information may be, for example, information captured of the poster's makeup (such as a still image), or information explaining how to apply the makeup, such as the order and volume of the makeup (such as a video). The makeup information may also be, for example, mask information, which is the makeup applied to the poster's face, or cosmetic information linked to the posted information. The mask information may be information about the poster's face itself after makeup has been applied, or information about the makeup itself obtained from the difference between the poster's face after makeup and the poster's bare face. Poster U1, for example, uploads a facial image of his/her face with makeup applied to service W1. At this time, poster U1 may, for example, register the items used for the makeup in association with his/her makeup information. For example, the information processing device 100 may cause the poster U1 to register the items used in the makeup so that the items are linked to the makeup information posted by the poster U1. In this case, for example, the information processing device 100 may identify the makeup information by extracting mask information from the posted information.
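For illustration, the registration performed in step S11 might be represented by a record like the following sketch; the schema, field names, and values are assumptions made for this example, since the disclosure does not specify a concrete data structure.

```python
from dataclasses import dataclass, field
from typing import List

# Hypothetical record linking a poster's makeup information to the items used.
@dataclass
class UsedItem:
    item_id: str
    name: str

@dataclass
class MakeupPost:
    poster_id: str
    captured_image_path: str            # image of the poster's face with makeup applied
    mask_info: bytes = b""              # makeup-only information extracted from the post
    how_to_video_url: str = ""          # optional explanation video
    used_items: List[UsedItem] = field(default_factory=list)

post = MakeupPost(
    poster_id="U1",
    captured_image_path="captured_G1.png",
    used_items=[UsedItem("item-001", "lipstick"), UsedItem("item-002", "eyeshadow")],
)
```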
When a user specifies the posted information of poster U1, the information processing device 100 may, for example, perform processing to apply makeup information linked to the posted information to the user's facial image, or may perform processing to enable the purchase of items used in the makeup. Below, a description will be given using a captured image G1 (which may be a still image or a moving image) as an example of posted information. For example, when user P1 (such as a follower of poster U1) who has viewed (watched) captured image G1 specifies captured image G1, the information processing device 100 may enable the application of the makeup information used in capturing captured image G1, or may provide a link to a website where the items used can be purchased. Below, a description will be given of the preview screen (pre-post preview screen) that is displayed when poster U1 uploads captured image G1 of their own makeup to service W1. Note that if poster U1 allows (accepts) the posting (registration) on the preview screen (screen C1), the image will be made public and available for viewing. The preview screen is a screen where, for example, after a poster has finished photographing their own makeup and before posting it, they can check how the makeup will look by applying it to multiple face types (fresh, cute, feminine, cool, androgynous male, etc.).
Fig. 4 is a diagram showing an example of the preview screen according to the embodiment. Screen C1 is the preview screen before captured image G1 is posted, that is, the pre-post preview screen displayed on the poster's side. Screen C1 displays, at the top (main position), the image (captured image G1) of poster U1 taken after applying makeup. Screen C1 also includes a button B1 that allows the poster U1 to retake the photograph if they want to change the captured image, and a button B2 that allows the captured image to be posted. When button B1 is operated (clicked, tapped, etc.), the screen returns from the preview screen to the photographing screen. If the photograph is retaken here, the screen returns to the preview screen and the newly captured image is displayed at the top. When button B2 is operated, captured image G1 is posted. In this case, captured image G1 is made public and can be viewed by followers of poster U1 and others. Here, the posted captured image G1 may become viewable, information posted along with the captured image G1 (such as the items used) may become viewable, or information registered in association with the poster U1 (such as a separately registered thumbnail image) may become viewable.
Screen C1 also includes thumbnail images showing at least one face type. In Fig. 4, from left to right, it includes thumbnail image SG1 with a "Fresh" face type, thumbnail image SG2 with a "Cute" face type, thumbnail image SG3 with a "Feminine" face type, thumbnail image SG4 with a "Cool" face type, and thumbnail image SG5 with a "male with androgynous features" face type. In Fig. 4, these thumbnail images SG1 to SG5 are displayed superimposed on the captured image G1. In this way, four female face images corresponding to the four types and one male face image with androgynous features are displayed, and the poster can place the captured makeup on each of them to check how it looks when tried on.
These thumbnail images SG1 to SG5 are also selectable. For example, when thumbnail image SG1 is operated, thumbnail image SG1 is selected. At this time, the makeup information of poster U1 (the makeup information used in captured image G1) is applied to the facial image of thumbnail image SG1, and the facial image of thumbnail image SG1 with poster U1's makeup information applied is displayed at the top. As a result, poster U1 can check on the preview screen what their makeup will actually look like on the "Fresh" face type (for example, whether it suits that type). Similarly, when thumbnail image SG2 is operated, thumbnail image SG2 is selected. At this time, poster U1's makeup information is applied to the facial image of thumbnail image SG2, and the resulting facial image is displayed at the top. As a result, poster U1 can check on the preview screen what their makeup will actually look like on the "Cute" face type. By displaying the preview screen in this way, the poster can check before posting, for example, what kind of makeup suits what kind of face type and which face type their own makeup actually suits. The poster can also check before posting, for example, whether their post will appeal to users with the face type they are targeting and whether it will appeal to their followers. The facial images of the models in thumbnail images SG1 to SG5 are selection candidates to which poster U1 can apply makeup information, and when poster U1 selects a thumbnail image, the facial image of that model with the makeup information applied is displayed at the top of screen C1 (step S12).
Note that the number of thumbnail images and the face types that can be displayed on screen C1 are not limited to this example. They are not limited to the five thumbnail images SG1 to SG5 shown in Fig. 4. For example, further thumbnail images may be hidden to the right of thumbnail image SG5 and become visible by scrolling. Furthermore, the display format of these thumbnail images is not limited to the example shown in Fig. 4, and they may be displayed in any display format. Furthermore, the face types are not limited to "Fresh," "Cute," "Feminine," "Cool," and "Androgynous Male." For example, the face types of the thumbnail images displayed on screen C1 may differ each time depending on the face type and makeup information of poster U1. For example, the face types of the thumbnail images displayed on screen C1 may differ each time the poster slightly changes their makeup and retakes the photograph.
The process for extracting the face types of the thumbnail images displayed on screen C1 will now be explained. In Fig. 4, thumbnail images of five face types are displayed: "Fresh," "Cute," "Feminine," "Cool," and "Androgynous Male." We will now explain how these five face types are extracted.
The information processing device 100 acquires the captured image to be displayed on screen C1 (the pre-post preview screen) (step S101). In the examples of Figs. 3 and 4, captured image G1 is acquired. The information processing device 100 then extracts at least one thumbnail image of a model who is a candidate for applying the makeup information, based on predetermined conditions (step S102). In the examples of Figs. 3 and 4, thumbnail images SG1 to SG5 are extracted. The information processing device 100 then displays the extracted thumbnail images in a predetermined area of screen C1 (so as to be superimposed on the captured image) (step S103). In the examples of Figs. 3 and 4, they are displayed in area R1. When a thumbnail image is selected, the information processing device 100 generates, from the facial image of the model in that thumbnail image, a facial image of the model with the makeup information applied (step S104). The information processing device 100 then displays the generated facial image of the model at the top of screen C1 (step S105).
In step S104, an example of a method for generating a facial image of a model with makeup information applied is a method called PSGAN (Pose and Expression Robust Spatial-Aware Generative Adversarial Network), which is disclosed in the above-mentioned non-patent document 1 and which transfers only makeup information from a facial image to another face. This method uses deep learning and can transfer only makeup information to another face regardless of facial expression or pose. Using a method such as PSGAN, the information processing device 100 applies makeup similar to the makeup used by poster U1 in the posted information of poster U1 to the facial image of the model. Specifically, the information processing device 100 generates makeup information for applying makeup similar to the makeup used by poster U1, and applies the generated makeup information to the facial image of the model. Then, in step S105, the information processing device 100 generates a facial image of the model with the makeup information applied in this way, and displays the generated facial image of the model at the top of the screen C1.
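As a rough illustration of step S104, the following sketch shows how a makeup-transfer step could be wrapped. Here, makeup_transfer_net is a placeholder standing in for a trained PSGAN-style network; the simple blending inside it exists only to keep the sketch runnable and does not perform real makeup transfer, and all names are assumptions.

```python
import numpy as np

def makeup_transfer_net(source_face: np.ndarray, reference_face: np.ndarray) -> np.ndarray:
    # Placeholder: a real network would transfer only the makeup of `reference_face`
    # (the poster's captured image) onto `source_face` (the model's face), independent
    # of pose and expression. A naive blend is used here so the sketch executes.
    return (0.7 * source_face + 0.3 * reference_face).astype(source_face.dtype)

def generate_applied_image(model_face: np.ndarray, poster_face: np.ndarray) -> np.ndarray:
    """S104: generate the model's face image with the poster's makeup applied."""
    return makeup_transfer_net(model_face, poster_face)

model_face = np.zeros((256, 256, 3), dtype=np.float32)    # e.g. thumbnail SG1 decoded to RGB
poster_face = np.ones((256, 256, 3), dtype=np.float32)    # e.g. captured image G1 decoded to RGB
applied = generate_applied_image(model_face, poster_face)  # displayed at the top of screen C1 (S105)
```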
Here, the process in step S102 of extracting at least one thumbnail image based on predetermined conditions will be described. The information processing device 100 performs the extraction process in step S102 based on various conditions. As examples of the extraction process, three patterns will be described: using the degree to which the makeup suits the model, using the poster's follower information, and using the model's login information; however, the process is not limited to the following examples.
(Using the degree to which the makeup suits the model)
The information processing device 100 may perform the extraction process based on, for example, the degree to which the makeup suits the face type of the model in the thumbnail image. For example, the information processing device 100 may extract thumbnail images in descending order of the models' ratings (scores) indicating the degree to which the makeup applied by the poster U1 suits them. For example, the information processing device 100 may extract thumbnail images by estimating the degree to which the makeup suits a model using machine learning of the poster U1's ratings of model face images to which makeup information has been applied. For example, the information processing device 100 may extract thumbnail images by estimating the degree of suitability using a learning model trained by machine learning in which a case where the poster U1 rated a model's face image with makeup information applied as "suits" is used as a positive example and a case rated as "does not suit" is used as a negative example.
For example, the information processing device 100 may have the poster U1 select (or specify) on screen C1 the face type of a model that they judge would suit their makeup, and then learn the selected face type in combination with makeup information to evaluate how well the makeup suits them. For example, the information processing device 100 may perform machine learning on a combination of the selected face type and makeup information as a positive example, or on a combination of a face type and makeup information that was not selected as a negative example. The information processing device 100 may then estimate a suitable face type by inputting makeup information into a learning model machine-learned in this way. Furthermore, the information processing device 100 may perform machine learning on information registered on screen C1 after trying on makeup as a positive example, without having the poster U1 select a face type of a model, for example.
The information processing device 100 may extract thumbnail images in the order of models whose face type is similar to that of the poster U1. For example, if the face type of the poster U1 is "Cute," the information processing device 100 may prioritize and extract thumbnail images of models whose face type is "Cute."
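A minimal sketch of the machine-learning-based suitability estimation described above follows, using scikit-learn's LogisticRegression as an illustrative classifier and random placeholder feature vectors; the feature design, library choice, and all names are assumptions, not part of the disclosure.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Each training sample is assumed to be a feature vector combining a face type and
# makeup information. Pairs the poster rated "suits" are positive examples, and pairs
# rated "does not suit" are negative examples.
rng = np.random.default_rng(0)
X_train = rng.normal(size=(200, 16))          # placeholder face-type + makeup feature vectors
y_train = rng.integers(0, 2, size=200)        # 1 = "suits" (positive), 0 = "does not suit" (negative)

clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# At preview time, score each candidate model's face type against the posted makeup and
# use the score as the rating for extraction and for the display order on screen C1.
candidate_features = rng.normal(size=(5, 16))  # e.g. the five face types shown on screen C1
suitability_scores = clf.predict_proba(candidate_features)[:, 1]
ranking = np.argsort(-suitability_scores)      # higher score -> displayed more preferentially
```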
(When using the poster's follower information)
The information processing device 100 may perform the extraction process based on, for example, the follower information of the poster U1. For example, the information processing device 100 may prioritize extracting thumbnail images of models with the same or similar face type as the poster U1's followers. Furthermore, for example, the information processing device 100 may prioritize extracting thumbnail images of models with face types that are of interest to the poster U1's followers. Furthermore, for example, the information processing device 100 may perform the extraction process limited to followers estimated to lead to conversions such as purchases or viewings. In this case, for example, the information processing device 100 may estimate the likelihood of conversions such as purchases or viewings using a model that has learned the relationship between a user's try-on history, purchase history, viewing history, etc. and whether the user has achieved a predetermined conversion, such as making a purchase or viewing predetermined content, and then estimate followers who will lead to conversions based on the estimated likelihood of conversions.
Furthermore, the information processing device 100 may perform the extraction process based on, for example, the facial image of a follower rather than the model. The information processing device 100 may perform the extraction process based on, for example, a facial image of a model generated based on the facial image of a follower (such as a facial image of a model generated by generative AI). In this case, the information processing device 100 may use, for example, the facial image of the follower who has made the most purchases, viewed the most, tried on the most items, or has the most followers, or may use the average facial image of multiple followers. This makes it possible to effectively encourage try-ons by using, for example, the facial image of a follower with a large number of followers.
(When using the model's login information)
The information processing device 100 may perform the extraction process based on, for example, login information (such as the number of logins and whether the model is a new registrant) of the candidate models displayed on the screen C1. For example, the information processing device 100 may prioritize extracting thumbnail images of models with a higher number of logins. The model may be, for example, a model whose face type is the same as or similar to that of users with a high number of logins, a model whose face type is the same as or similar to a face type that users with a high number of logins are interested in, or such a user themselves. Furthermore, a high number of logins is not limited to a high number of login events and may also mean a large number of logged-in users. Furthermore, for example, the information processing device 100 may preferentially extract thumbnail images of models estimated to be highly likely to have the makeup information applied to them by the poster U1. Furthermore, for example, the information processing device 100 may preferentially extract thumbnail images of models estimated to be highly likely to apply the poster U1's makeup information themselves when the poster U1 makes a post. Furthermore, for example, the information processing device 100 may preferentially extract thumbnail images of models who are new registrants to the service W1. Furthermore, for example, the information processing device 100 may preferentially extract thumbnail images of models who have logged in within a predetermined period (for example, recently).
The above describes the process for extracting thumbnail images of models who are candidates for applying the makeup information, that is, the predetermined conditions used when extracting thumbnail images. Below, the process for determining the display mode of the thumbnail images after such extraction (display order (sort order), highlighting, display of supplementary information such as face type and follower information, and so on) is explained. Specifically, this is the process for determining the display mode of the thumbnail images when displaying them in the predetermined area of screen C1 in step S103. The information processing device 100 determines the display mode of the thumbnail images for step S103 based on various conditions. As examples of the display mode determination process, three patterns will be described, as in the extraction process described above: using the degree to which the makeup suits the model, using the poster's follower information, and using the model's login information; however, the process is not limited to the following examples. Explanations that would duplicate those for the extraction process described above will be omitted as appropriate.
(Using the degree to which the makeup suits the model)
The information processing device 100 may perform the display mode determination process based on, for example, the degree to which the makeup suits the face type of the model in the thumbnail image. For example, the information processing device 100 may determine a display mode such that thumbnail images of models with higher ratings indicating the degree to which the makeup applied by the poster U1 suits them are displayed preferentially (for example, displayed at the top, displayed first, or highlighted). Furthermore, for example, the information processing device 100 may determine a display mode such that thumbnail images of models with a face type similar to that of the poster U1 are displayed preferentially. Furthermore, the information processing device 100 may perform the display mode determination process by subdividing face types (for example, subdividing the face into parts such as the eyes and lips). In this way, processing may be performed using types of individual facial parts rather than whole-face types such as cute or cool. For example, the information processing device 100 may determine a display mode such that thumbnail images of models with higher ratings for each subdivided face type are displayed preferentially. In this case, the information processing device 100 may determine to highlight such a thumbnail image, for example by displaying a pop-up or the like indicating that the makeup suits the model. For example, type determination may be performed using only a specific facial part rather than the entire face, and evaluation may be performed on that specific part for preferential display. For example, if the specific facial part is the eyes, thumbnail images of models with cute eyes may be displayed on the preview screen along with thumbnail images of models with cool eyes, and if the specific facial part is the lips, thumbnail images of models with cute lips and thumbnail images of models with cool lips may be displayed on the preview screen. In this case, for example, in order to display thumbnail images of models with cute eyes on the preview screen, only thumbnail images of models with cute eyes may be extracted and displayed, and in order to display thumbnail images of models with cool eyes, only thumbnail images of models with cool eyes may be extracted and displayed.
(When using the poster's follower information)
The information processing device 100 may perform the display mode determination process based on, for example, the follower information of poster U1. For example, the information processing device 100 may perform the display mode determination process based on the face types of poster U1's followers. For example, the information processing device 100 may determine a display mode such that thumbnail images of models with the same or similar face types as poster U1's followers are preferentially displayed. Furthermore, for example, the information processing device 100 may perform the display mode determination process based on the proportions of the face types of poster U1's followers. For example, if a high proportion of poster U1's followers have the "Cute" face type, the information processing device 100 may determine a display mode such that thumbnail images of "Cute" models are preferentially displayed. Furthermore, for example, if a high proportion of poster U1's followers have the "Cute" face type, the information processing device 100 may determine a display mode such that thumbnail images of "Cute" models are displayed in multiple patterns. Furthermore, for example, when there are multiple face types with a high proportion, the information processing device 100 may subdivide the face types and perform the display mode determination process.
At this time, the information processing device 100 may, for example, determine to highlight the follower face type by displaying it in a pop-up or the like. For example, the information processing device 100 may determine to highlight a thumbnail by displaying the follower's face type (indicating that it is a face type of the poster's followers) in a pop-up or the like, or may determine to highlight it by displaying the number of followers for each face type. For example, the information processing device 100 may determine to highlight a thumbnail image by changing the color or thickness of its frame. At this time, the information processing device 100 may also, for example, determine to prioritize the display so that certain thumbnails are displayed at the top, for example so that face types with high conversion (such as a high likelihood of the makeup being tried on) are displayed at the top. This allows the poster U1 to check which followers are highly engaged. The information processing device 100 may also, for example, determine to notify followers who match the face type selected by the poster U1 that new makeup information has been posted when the poster U1 posts makeup information.
Furthermore, the information processing device 100 may determine to provide emphasis by displaying, for example, a follower's face image instead of the model's. The information processing device 100 may determine to provide emphasis by displaying a model face image generated based on a follower's face image (such as a model face image generated by generative AI). In this case, the information processing device 100 may use, for example, the face image of the follower who has made the most purchases, viewed the most, tried on the most items, or has the most followers, or may use the average face image of multiple followers. This makes it possible to effectively encourage try-ons, for example, by using the face image of a follower with a large number of followers.
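A minimal sketch of the follower-based display-mode determination described above follows: counting the face types of the poster's followers and ordering (or annotating) the thumbnails accordingly. The data layout is an assumption made for illustration.

```python
from collections import Counter
from typing import Dict, List

def follower_face_type_ratios(followers: List[Dict]) -> Dict[str, float]:
    # Proportion of the poster's followers per face type.
    counts = Counter(f["face_type"] for f in followers)
    total = sum(counts.values()) or 1
    return {face_type: n / total for face_type, n in counts.items()}

def prioritized_face_types(followers: List[Dict]) -> List[str]:
    """Face types ordered so that the most common follower type is displayed first."""
    ratios = follower_face_type_ratios(followers)
    return sorted(ratios, key=ratios.get, reverse=True)

followers = [{"face_type": "Cute"}, {"face_type": "Cute"}, {"face_type": "Cool"}]
print(prioritized_face_types(followers))  # ['Cute', 'Cool'] -> "Cute" thumbnails shown first
```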
(When using the model's login information)
The information processing device 100 may perform the display mode determination process based on, for example, the login information of the candidate models displayed on the screen C1. For example, the information processing device 100 may perform the display mode determination process based on the number of logins of the models. As in the extraction process described above, the model may be, for example, a model whose face type is the same as or similar to that of users with a high number of logins, a model whose face type is the same as or similar to a face type that users with a high number of logins are interested in, or such a user themselves. Furthermore, a high number of logins is not limited to a high number of login events and may also mean a large number of logged-in users. For example, the information processing device 100 may determine a display mode such that thumbnail images of models whose face type accounts for a higher proportion of logins are displayed preferentially. Furthermore, for example, if there are multiple face types with a high proportion of logins (for example, multiple face types exceeding a predetermined threshold), the information processing device 100 may perform the display mode determination process by subdividing the face types. In this case, the information processing device 100 may determine to highlight the thumbnail image, for example, by displaying a message such as "High likelihood of the makeup being tried on!" in a pop-up or the like. Furthermore, for example, the information processing device 100 may determine the display mode so that thumbnail images of models who are new registrants to the service W1 are preferentially displayed.
The above describes how the display mode of the thumbnail images is determined. Once the information processing device 100 has determined the display mode, it displays the thumbnail images in a predetermined area of the screen C1 in that mode.
(Information processing variation 1: Fashion measures)
In the above embodiment, makeup was described as an example of a fashion measure, but the fashion measure is not limited to this example. It may be, for instance, hair styling such as a hairstyle or hair color, or the try-on and coordination of clothing. For hair styling, for example, this makes it possible to select a model who suits the hair styling posted by the poster U1, or to subdivide the posted hair styling and select a model for each subdivision.
(Information processing variation 2: Friend setting)
In the above embodiment, the models in the thumbnail images were described as followers of the poster U1, but the models are not limited to this example. The models may be, for instance, only users who have a predetermined relationship with the poster U1, such as only users who have permitted sharing (for example, users set as friends). Since only the faces of users who have permitted sharing are then shown on the preview screen, a service that can be enjoyed just among friends can be provided. The models in the thumbnail images may also be, for example, only those followers of the poster U1 who are estimated to lead to purchases or views.
(Information processing variation 3: Models likely to try the item on)
The above embodiment described displaying thumbnail images of models with a high try-on likelihood at the top. A model with a high try-on likelihood may be, for example, a user who has posted within a predetermined period to a predetermined service (social commerce) where fashion coordination can be posted and viewed.
(Information processing variation 4: Model face images (actual/selected/estimated))
In the above embodiment, the model in a thumbnail image was a follower of the poster U1, that is, a real person, but the model is not limited to this example. The model need not actually exist and may be, for example, a model generated by generative AI. In other words, the thumbnail image need not be the face image of a real model and may be, for example, a model face image generated by generative AI, such as one generated from the follower information of the poster U1, or one generated by weighted averaging based on the follower counts of the poster U1's followers.
There may also be no face image available for a candidate model; for example, some of the poster U1's followers may not have registered a face image. In that case, if a user such as a follower of the poster U1 has registered face information such as their own face type, the weighting may be changed based on the registered face information. That is, the thumbnail image may be a model face image generated with weights adjusted according to the face information registered by users. The model face image may also be a face image estimated from user-augmented information based on, for example, the user's follow relationships or purchase history. In this way, the model face image may be an actually photographed face image, a face image based on face information selected by the user, or a face image estimated from user-augmented information.
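A minimal sketch of the weighted-averaging idea mentioned above follows. It blends aligned follower face images weighted by, for example, each follower's follower count; a real system would more likely average face embeddings or drive a generative model, so this pixel-level version is only a stand-in for the concept.

```python
import numpy as np

def blended_face(face_images, weights):
    """Blend follower face images into one representative face by a
    weighted average.

    face_images : list of HxWx3 float arrays in [0, 1], already aligned
    weights     : per-image weights, e.g. each follower's follower count
    """
    faces = np.stack([np.asarray(f, dtype=np.float64) for f in face_images])
    w = np.asarray(weights, dtype=np.float64)
    w = w / w.sum()                        # normalise the weights
    return np.tensordot(w, faces, axes=1)  # weighted average image

# Example with toy 2x2 "images": result is closer to b (weight 3)
a = np.zeros((2, 2, 3)); b = np.ones((2, 2, 3))
print(blended_face([a, b], weights=[1, 3]))
```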
(Information processing variation 5: Other functions of the preview screen)
In the above embodiment, for example, an operation button that toggles application of the fashion measure on and off may be displayed on the screen C1, so that the makeup information applied to the poster's face image can be switched on and off with a single operation (one click, one tap, and so on). An explanatory guide for first-time users may also be displayed on the screen C1. In addition, the face image may be enlarged or reduced, and its makeup density adjusted, in response to the poster's operations on the screen C1; for example, the density of each part of the face may be adjustable through a user interface such as a slider.
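One way to realize per-part density adjustment is simple alpha blending of a makeup layer over the base face, with one opacity per face part as a slider might control. The sketch below assumes per-part masks and part names that are not specified in the description.

```python
import numpy as np

def apply_makeup_with_density(base, makeup, part_masks, densities):
    """Alpha-blend a makeup layer onto a base face image with a separate
    density (opacity) per face part.

    base, makeup : HxWx3 float arrays in [0, 1]
    part_masks   : dict part name -> HxW mask in [0, 1]
    densities    : dict part name -> opacity in [0, 1]
    """
    alpha = np.zeros(base.shape[:2])
    for part, mask in part_masks.items():
        alpha = np.maximum(alpha, densities.get(part, 0.0) * mask)
    return (1.0 - alpha[..., None]) * base + alpha[..., None] * makeup

# Toy example: only the "lips" pixel at (0, 0) is blended at 50% density.
base = np.zeros((2, 2, 3)); makeup = np.ones((2, 2, 3))
masks = {"lips": np.array([[1.0, 0.0], [0.0, 0.0]])}
print(apply_makeup_with_density(base, makeup, masks, {"lips": 0.5})[0, 0])
```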
(Information processing variation 6: Other functions of the preview screen)
In the above embodiment, for example, a button for automatically generating a description of the face type of a thumbnail image may be shown on the preview screen. For instance, when an image is selected, a button may be shown that generates a description such as "This makeup suits the XX type, so please try it on." Tags related to face types may also be linked and registered automatically so that makeup information is easier to find by tag search.
In the above embodiment, the frame of a thumbnail image on the preview screen may be shaped according to each model's face type: round when round faces are common, square when base-shaped (angular) faces are common, or otherwise shaped according to the characteristics of each model's face type. Because each model's face type can then be judged from the frame, the burden of checking the preview at registration time is reduced.
In the above embodiment, instead of selecting by face type, the poster may be allowed to select by the purpose of the makeup (for example, a date, a wedding, or a girls' night out), choosing from options that use different clothing and backgrounds depending on the purpose. The face types of the thumbnail images may be the same or different across purposes: the face type may stay the same while the clothing and background vary with the purpose, or the clothing and background may vary with the purpose for each face type.
In the above embodiment, for example, the density adjusted on the preview screen may be registered together with the makeup information, and that density may be used as the default value when the makeup is tried on.
In the above embodiment, face types for which little suitable makeup information has been registered may be extracted or displayed preferentially; alternatively, the small number of registrations may simply be indicated in an easily understandable way without priority extraction or display. This makes it easier for registered makeup information to be tried on even when the poster has few followers (and is therefore unlikely to be chosen), and promotes this function by encouraging registration of makeup information that suits a variety of face types. Suggestions about which face types need suitable makeup information may also be made on the screen where the made-up face is photographed, or on the preparation screen before it, rather than on the preview screen; the preview screen may suggest which face type to target if the makeup information is to be changed; and when makeup information is registered, recommendations for the next registration may be presented. A small sketch of this prioritization follows.
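The following sketch returns the face types with the fewest registered makeup entries so that they can be extracted, displayed, or suggested preferentially. The field names and the `top_k` cut-off are illustrative assumptions.

```python
from collections import Counter

def underserved_face_types(registered_posts, all_face_types, top_k=3):
    """Return the face types with the fewest registered makeup entries."""
    counts = Counter(p["face_type"] for p in registered_posts)
    return sorted(all_face_types, key=lambda ft: counts.get(ft, 0))[:top_k]

print(underserved_face_types(
    [{"face_type": "fresh"}, {"face_type": "fresh"}, {"face_type": "cool"}],
    ["fresh", "cool", "soft", "elegant"]))
# ['soft', 'elegant', 'cool']
```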
The above embodiment described displaying a single preview screen, but multiple preview screens may be displayed. For example, when multiple thumbnail images are selected, the corresponding preview screens may be displayed simultaneously, side by side, or split into left and right halves.
In the above embodiment, when a degree of suitability can be displayed, it may be displayed as well. The poster may also be asked to evaluate which of two results looks better, and the comparison results may be learned from, enabling highly accurate learning.
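One simple way to learn from such "which looks better?" judgements is a pairwise rating update, sketched below as an Elo-style rule. The description does not prescribe a learning method, so this is only one possible choice, and the constants are assumptions.

```python
def elo_update(score_a, score_b, a_wins, k=16.0):
    """One Elo-style update from a single pairwise comparison between two
    makeup-on-model results A and B."""
    expected_a = 1.0 / (1.0 + 10 ** ((score_b - score_a) / 400.0))
    result_a = 1.0 if a_wins else 0.0
    delta = k * (result_a - expected_a)
    return score_a + delta, score_b - delta

# Poster judges that result A suits the model better than result B.
print(elo_update(1500.0, 1500.0, a_wins=True))   # A gains, B loses
```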
(Information processing variation 7: Providing incentives)
In the above embodiment, an incentive may be given to the poster when a post leads to a try-on, to product views after a try-on, to a product purchase after a try-on, or to the return of a dormant follower. For example, when a post leads to a product purchase, electronic money or points usable for payment at the online shopping mall selling the product may be granted. This encourages posters to post with conversion in mind.
(Information processing variation 8: Display control)
The above embodiment described performing display control after the thumbnail extraction process, but display control may also be performed with a fixed model, without extracting thumbnail images.
(Information processing variation 9: Automatic display)
The above embodiment described the poster specifying the thumbnail image to which the fashion measure is applied. Instead of the poster selecting a thumbnail image, however, the fashion measure may be applied automatically to, for example, the model with the best-suited face type or a model with a follower's face type, and the result displayed to the poster.
3. Configuration of the terminal device
Next, the configuration of the terminal device 10 according to the embodiment will be described with reference to Fig. 5. Fig. 5 is a diagram showing an example of the configuration of the terminal device 10 according to the embodiment. As shown in Fig. 5, the terminal device 10 includes a communication unit 11, an input unit 12, an output unit 13, and a control unit 14.
(Communication unit 11)
The communication unit 11 is realized by, for example, a NIC (Network Interface Card). The communication unit 11 is connected to a predetermined network N by wire or wirelessly and transmits and receives information to and from the information processing device 100 and other devices via the network N.
(Input unit 12)
The input unit 12 accepts various operations from the poster; in Fig. 3, it accepts various operations from the poster U1. For example, the input unit 12 may accept operations from the poster through the display surface by means of a touch-panel function, or through buttons provided on the terminal device 10 or a keyboard or mouse connected to the terminal device 10.
(Output unit 13)
The output unit 13 is a display screen of, for example, a tablet terminal realized by a liquid crystal display, an organic EL (Electro-Luminescence) display, or the like, and is a display device for displaying various information. For example, the output unit 13 displays information transmitted from the information processing device 100.
(Control unit 14)
The control unit 14 is, for example, a controller, and is realized by a CPU (Central Processing Unit), an MPU (Micro Processing Unit), or the like executing various programs stored in a storage device inside the terminal device 10, using a RAM (Random Access Memory) as a work area. These programs include, for example, applications installed on the terminal device 10, such as an application that displays a preview screen of posted information including thumbnail images of models who are candidates for applying a fashion measure, based on information transmitted from the information processing device 100. The control unit 14 may also be realized by an integrated circuit such as an ASIC (Application Specific Integrated Circuit) or an FPGA (Field Programmable Gate Array).
As shown in Fig. 5, the control unit 14 has a receiving unit 141 and a transmitting unit 142, and realizes or executes the information processing functions described below.
(Receiving unit 141)
The receiving unit 141 receives various information from other information processing devices such as the information processing device 100. For example, the receiving unit 141 receives information for displaying a preview screen of posted information including thumbnail images of models who are candidates for applying a fashion measure.
(Transmitting unit 142)
The transmitting unit 142 transmits various information to other information processing devices such as the information processing device 100. For example, the transmitting unit 142 transmits selection information indicating which of the thumbnail images displayed with the posted information the poster selected. When the poster evaluates how well the makeup suits a model, the transmitting unit 142 also transmits the evaluation information.
4. Configuration of the information processing device
Next, the configuration of the information processing device 100 according to the embodiment will be described with reference to Fig. 6. Fig. 6 is a diagram showing an example of the configuration of the information processing device 100 according to the embodiment. As shown in Fig. 6, the information processing device 100 includes a communication unit 110, a storage unit 120, and a control unit 130. The information processing device 100 may also include an input unit (for example, a keyboard or mouse) that accepts various operations from an administrator of the information processing device 100 and a display unit (for example, a liquid crystal display) for displaying various information.
(Communication unit 110)
The communication unit 110 is realized by, for example, a NIC. The communication unit 110 is connected to the network N by wire or wirelessly and transmits and receives information to and from the terminal device 10 and other devices via the network N.
(Storage unit 120)
The storage unit 120 is realized by, for example, a semiconductor memory element such as a RAM or flash memory, or a storage device such as a hard disk or optical disk. As shown in Fig. 6, the storage unit 120 has a model information storage unit 121 and an evaluation information storage unit 122.
The model information storage unit 121 stores information on the candidate models displayed on the preview screen (screen C1). Fig. 7 shows an example of the model information storage unit 121 according to the embodiment. As shown in Fig. 7, the model information storage unit 121 has items such as "Model ID," "Captured image," "Setting type information," and "Model information."
"Model ID" indicates identification information for identifying a model (user). "Captured image" indicates the photographed image of themselves that the model has registered. In the example shown in Fig. 7, conceptual entries such as "Captured image #1" and "Captured image #2" are stored under "Captured image," but in practice image data is stored; for example, a URL (Uniform Resource Locator) where the image data is located or a file path indicating its storage location may be stored. "Setting type information" indicates the type information that the model has set for themselves. In the example shown in Fig. 7, conceptual entries such as "Setting type information #1" and "Setting type information #2" are stored, but in practice information such as "Face type: Fresh; Hair type: Straight; ..." is stored. "Model information" indicates information related to the model, such as the model's follow relationships and purchase history. In the example shown in Fig. 7, conceptual entries such as "Model information #1" and "Model information #2" are stored, but in practice information such as "Following: user P111, user P112, ...; Followers: user P211, user P212, ...; Purchase history: product F1, product F2, ...; ..." is stored.
The evaluation information storage unit 122 stores evaluation information provided by posters (for example, evaluation information indicating how well makeup suits a model). Fig. 8 shows an example of the evaluation information storage unit 122 according to the embodiment. As shown in Fig. 8, the evaluation information storage unit 122 has items such as "Evaluation information ID," "Poster ID," "Model ID," and "Evaluation information."
"Evaluation information ID" indicates identification information for identifying the evaluation information. "Poster ID" indicates identification information for identifying the poster who made the evaluation. "Model ID" indicates identification information for identifying the evaluated model. "Evaluation information" indicates the poster's evaluation. In the example shown in Fig. 8, conceptual entries such as "Evaluation information #1" and "Evaluation information #2" are stored under "Evaluation information," but in practice information such as the combination of the makeup information applied to the model and the poster's evaluation (suits, does not suit, and so on) is stored.
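For illustration, the two storage units above could be modeled as simple records like the Python dataclasses below. The field names follow the conceptual items in Figs. 7 and 8, but the concrete schema is an assumption; actual storage may differ.

```python
from dataclasses import dataclass, field

@dataclass
class ModelRecord:
    """One row of the model information storage unit 121."""
    model_id: str
    captured_image: str            # URL or file path of the registered image
    face_type: str                 # e.g. "Fresh"
    hair_type: str                 # e.g. "Straight"
    following: list[str] = field(default_factory=list)
    followers: list[str] = field(default_factory=list)
    purchase_history: list[str] = field(default_factory=list)

@dataclass
class EvaluationRecord:
    """One row of the evaluation information storage unit 122."""
    evaluation_id: str
    poster_id: str
    model_id: str
    makeup_id: str                 # which makeup information was applied
    suits: bool                    # poster's judgement: suits / does not suit

m = ModelRecord("M1", "https://example.com/img1.png", "Fresh", "Straight",
                followers=["P211", "P212"], purchase_history=["F1"])
e = EvaluationRecord("E1", "U1", "M1", "MK1", suits=True)
print(m.face_type, e.suits)
```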
(Control unit 130)
The control unit 130 is a controller and is realized by, for example, a CPU, an MPU, or the like executing various programs stored in a storage device inside the information processing device 100, using a RAM as a work area. The control unit 130 may also be realized by an integrated circuit such as an ASIC or an FPGA.
As shown in Fig. 6, the control unit 130 has an acquisition unit 131, an identification unit 132, an extraction unit 133, a first display unit 134, a generation unit 135, a second display unit 136, and a determination unit 137, and realizes or executes the information processing functions described below. The internal configuration of the control unit 130 is not limited to that shown in Fig. 6 and may be any other configuration that performs the information processing described below.
(Acquisition unit 131)
The acquisition unit 131 acquires various information from external information processing devices such as the terminal device 10.
The acquisition unit 131 also acquires various information from the storage unit 120 and stores acquired information in the storage unit 120.
The acquisition unit 131 acquires the photographed image of the poster to which the fashion measure has been applied. It also acquires thumbnail images of models who are candidates for applying the fashion measure, the selection information indicating which thumbnail image the poster selected, and, when the poster has made an evaluation, the evaluation information.
(Identification unit 132)
The identification unit 132 identifies the poster's makeup information by extracting mask information from the photographed image of the poster, for example from the photographed image acquired by the acquisition unit 131.
(Extraction unit 133)
The extraction unit 133 extracts, based on a predetermined condition, at least one thumbnail image of a model who is a candidate for applying the fashion measure, for example from among the thumbnail images acquired by the acquisition unit 131. For example, the extraction unit 133 extracts candidate thumbnail images in descending order of the models' evaluations indicating how well the makeup suits them (giving priority to thumbnail images of models with higher such evaluations). It may also extract candidate thumbnail images in order of how similar the models' face types are to those of the followers (giving priority to models whose face types are more similar to the followers'), or in descending order of the models' login counts (giving priority to models with more logins).
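A minimal sketch of these three orderings is shown below: the extraction is just a sort on a chosen criterion followed by a cut-off. The dictionary keys (`suit_score`, `follower_face_similarity`, `login_count`) are illustrative assumptions standing in for the evaluation, similarity, and login information described above.

```python
def extract_candidates(models, criterion, limit=5):
    """Extract at least one candidate thumbnail in priority order."""
    keys = {
        "evaluation": lambda m: m.get("suit_score", 0.0),
        "follower_similarity": lambda m: m.get("follower_face_similarity", 0.0),
        "logins": lambda m: m.get("login_count", 0),
    }
    ranked = sorted(models, key=keys[criterion], reverse=True)
    return ranked[:max(1, limit)]

# Example usage
models = [{"model_id": 1, "suit_score": 0.9, "login_count": 3},
          {"model_id": 2, "suit_score": 0.4, "login_count": 20}]
print([m["model_id"] for m in extract_candidates(models, "logins")])  # [2, 1]
```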
(First display unit 134)
The first display unit 134 displays thumbnail images of models who are candidates for applying the fashion measure, for example the thumbnail images extracted by the extraction unit 133 (at least one thumbnail image extracted based on a predetermined condition). The first display unit 134 may also perform the extraction process of the extraction unit 133.
The first display unit 134 displays the thumbnail images preferentially on the preview screen in a predetermined display mode (for example, the display mode determined by the determination unit 137 described below), and may itself perform the determination process of the determination unit 137. For example, the first display unit 134 preferentially displays thumbnail images of models with high evaluations indicating how well the makeup suits them, by selecting such images from among the thumbnail images extracted by the extraction unit 133 or by sorting the extracted thumbnail images in descending order of those evaluations. The first display unit 134 may also do this for each subdivided face type, preferentially displaying within each subdivision the thumbnail images of the models with the highest evaluations.
The first display unit 134 may also preferentially display thumbnail images of models whose face type is similar to the poster's, for example by selecting such images from among the thumbnail images extracted by the extraction unit 133, or by sorting the extracted thumbnail images in order of similarity of face type to the poster.
Based on the poster's follower information, the first display unit 134 may preferentially display thumbnail images of models of a face type that accounts for a large share of the followers; that is, when a particular face type is common among the followers, thumbnail images of models of that face type are displayed preferentially. Similarly, based on the models' login information, the first display unit 134 may preferentially display thumbnail images of models of a face type that accounts for a large share of logins, and when several such face types exist, it may subdivide the face types and preferentially display thumbnail images of models of each of those face types.
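The share-based selection and the "several qualifying face types" case can be sketched as a threshold test over the followers' face types. The 30% threshold and the plain-string face types are illustrative assumptions.

```python
from collections import Counter

def dominant_face_types(follower_face_types, threshold=0.3):
    """Return every face type whose share among the poster's followers
    exceeds `threshold`; if more than one qualifies, all are returned so
    the display can be subdivided per face type."""
    total = len(follower_face_types) or 1
    shares = Counter(follower_face_types)
    return [ft for ft, n in shares.most_common() if n / total >= threshold]

print(dominant_face_types(["fresh", "fresh", "cool", "cool", "soft"]))
# ['fresh', 'cool']  (each at 40%; 'soft' at 20% is dropped)
```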
(Generation unit 135)
The generation unit 135 generates an image to which the fashion measure has been applied. For example, based on the selection information acquired by the acquisition unit 131, the generation unit 135 generates an image in which the fashion measure is applied to the thumbnail image selected by the poster. For example, the generation unit 135 uses a method such as PSGAN to generate fashion measure information for producing an image with the same fashion measure as the poster's, and then generates an image in which that fashion measure information is applied to the model.
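The sketch below shows only the surrounding plumbing: a makeup-transfer backend that maps a model face and a made-up reference face to a preview image. It deliberately does not reproduce the API of PSGAN or any other library; the identity backend is a placeholder so the pipeline can run, and a real system would plug in a trained transfer network.

```python
from typing import Callable
import numpy as np

# A backend maps (model_face, reference_face_with_makeup) -> preview image.
TransferFn = Callable[[np.ndarray, np.ndarray], np.ndarray]

def generate_preview(model_face: np.ndarray,
                     poster_face_with_makeup: np.ndarray,
                     transfer: TransferFn) -> np.ndarray:
    """Generate the 'fashion measure applied to the model' preview image."""
    return transfer(model_face, poster_face_with_makeup)

# Placeholder backend: returns the model face unchanged.
identity_backend: TransferFn = lambda model, ref: model.copy()

preview = generate_preview(np.zeros((4, 4, 3)), np.ones((4, 4, 3)),
                           identity_backend)
print(preview.shape)
```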
(Second display unit 136)
The second display unit 136 displays the image to which the fashion measure has been applied, for example the image generated by the generation unit 135. The second display unit 136 may also perform the generation process of the generation unit 135.
(Determination unit 137)
The determination unit 137 determines the predetermined display mode in which thumbnail images are displayed on the preview screen (for example, the display mode used for preferential display by the first display unit 134). For example, the determination unit 137 determines the display mode of the thumbnail images when they are displayed in a predetermined area (area R1) of the screen C1; in other words, it determines which thumbnail images to display on the preview screen. For example, the determination unit 137 determines that thumbnail images of models with high evaluations indicating how well the makeup suits them are displayed preferentially, by sorting the thumbnail images in descending order of those evaluations so that higher-rated models are displayed with higher priority.
The determination unit 137 may likewise determine that thumbnail images of models whose face type is similar to the poster's are displayed preferentially, by sorting the thumbnail images in order of similarity to the poster's face type. It may determine that thumbnail images are displayed preferentially according to the share of followers with a particular face type, by sorting them in descending order of that share. It may also determine that thumbnail images are displayed preferentially according to the share of logins by models with a particular face type, by sorting them in descending order of that share.
5. Information processing flow
Next, the procedure of information processing by the information processing system 1 according to the embodiment will be described with reference to Figs. 9 and 10, which are flowcharts showing that procedure. Fig. 9 is a flowchart of information processing including the process of extracting model thumbnail images based on a predetermined condition, and Fig. 10 is a flowchart of information processing including the process of preferentially displaying (reordering toward the top) model thumbnail images based on a predetermined display mode.
As shown in Fig. 9, the information processing device 100 acquires the photographed image of the poster to which the fashion measure has been applied (step S201). The information processing device 100 extracts, based on a predetermined condition, at least one thumbnail image of a model who is a candidate for applying the fashion measure (step S202), and displays the extracted thumbnail image on the preview screen (step S203).
The information processing device 100 determines whether the poster has selected a thumbnail image (step S204). If the poster has selected a thumbnail image (step S204; YES), the information processing device 100 generates an image to which the fashion measure has been applied (step S205) and displays the generated image on the preview screen (step S206). If the poster has not selected a thumbnail image (step S204; NO), the information processing device 100 ends the processing; in this case, the poster may post without confirming how the fashion measure is applied.
As shown in Fig. 10, the information processing device 100 acquires the photographed image of the poster to which the fashion measure has been applied (step S301). The information processing device 100 extracts at least one thumbnail image of a model who is a candidate for applying the fashion measure (step S302), and preferentially displays the extracted thumbnail image on the preview screen in a predetermined display mode (step S303).
The information processing device 100 determines whether the poster has selected a thumbnail image (step S304). If the poster has selected a thumbnail image (step S304; YES), the information processing device 100 generates an image to which the fashion measure has been applied (step S305) and displays the generated image on the preview screen (step S306). If the poster has not selected a thumbnail image (step S304; NO), the information processing device 100 ends the processing; in this case, the poster may post without confirming how the fashion measure is applied.
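The overall flow of Figs. 9 and 10 can be sketched end to end as below. The step comments map to the step numbers above; the injected callables are trivial stand-ins, not the actual extraction, display, or generation logic.

```python
def preview_flow(poster_image, candidate_models, extract, display, select, generate):
    """Minimal sketch of the preview flow: extract candidate thumbnails,
    show them, and only when the poster picks one generate and show the
    image with the fashion measure applied."""
    thumbnails = extract(candidate_models)          # S202 / S302
    display(thumbnails)                             # S203 / S303
    choice = select(thumbnails)                     # S204 / S304
    if choice is None:
        return None                                 # poster may post without previewing
    applied = generate(poster_image, choice)        # S205 / S305
    display([applied])                              # S206 / S306
    return applied

# Toy run with trivial stand-ins for each step.
result = preview_flow(
    poster_image="poster.png",
    candidate_models=["model_a", "model_b"],
    extract=lambda ms: ms[:1],
    display=print,
    select=lambda ts: ts[0],
    generate=lambda img, model: f"{model}+makeup({img})",
)
print(result)
```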
6. Effects
As described above, the information processing device 100 according to the embodiment includes an acquisition unit 131, a first display unit 134, and a second display unit 136. The acquisition unit 131 acquires the photographed image of the poster to which a fashion measure has been applied. The first display unit 134 extracts, based on a predetermined condition, and displays at least one thumbnail image of a model who is a candidate for applying the fashion measure. When the poster specifies a thumbnail image, the second display unit 136 displays an image in which the fashion measure has been applied to the model.
This allows the information processing device 100 according to the embodiment to let the poster confirm, before posting, whether the fashion measure is likely to lead to conversions.
The first display unit 134 also extracts and displays the thumbnail images based on an evaluation indicating how well the fashion measure suits the type of a predetermined part of the model's appearance. This allows the poster to confirm, before posting, which model types are likely to lead to conversions, taking into account how well the fashion measure suits them.
The first display unit 134 also extracts and displays thumbnail images of models whose type of the predetermined part of the appearance is similar to the poster's. This allows the poster to confirm, before posting, whether the fashion measure is likely to lead to conversions, taking into account the similarity of type to the poster.
The first display unit 134 also extracts and displays the model thumbnail images based on the poster's follower information. This allows the poster to confirm, before posting, whether the fashion measure is likely to lead to conversions, taking the poster's follower information into account.
The first display unit 134 also extracts and displays thumbnail images of models whose type of the predetermined part is similar to that of the poster's followers. This allows the poster to confirm, before posting, whether the fashion measure is likely to lead to conversions, taking into account the similarity of type to the poster's followers.
The first display unit 134 also extracts and displays the model thumbnail images based on the models' login information. This allows the poster to confirm, before posting, whether the fashion measure is likely to lead to conversions, taking the models' login information into account.
The first display unit 134 also extracts and displays thumbnail images of models estimated to be highly likely to try on the fashion measure. This allows the poster to confirm, before posting, whether the fashion measure is likely to lead to conversions, taking the try-on likelihood into account.
7. Hardware configuration
The terminal device 10 and the information processing device 100 according to the above embodiments are realized by, for example, a computer 1000 configured as shown in Fig. 11. Fig. 11 is a hardware configuration diagram showing an example of a computer that realizes the functions of the terminal device 10 and the information processing device 100. The computer 1000 has a CPU 1100, a RAM 1200, a ROM 1300, an HDD 1400, a communication interface (I/F) 1500, an input/output interface (I/F) 1600, and a media interface (I/F) 1700.
The CPU 1100 operates based on programs stored in the ROM 1300 or the HDD 1400 and controls each component. The ROM 1300 stores a boot program executed by the CPU 1100 when the computer 1000 starts up, programs that depend on the hardware of the computer 1000, and the like.
The HDD 1400 stores programs executed by the CPU 1100 and data used by those programs. The communication interface 1500 receives data from other devices via a predetermined communication network and sends it to the CPU 1100, and transmits data generated by the CPU 1100 to other devices via the network.
The CPU 1100 controls output devices such as a display and a printer, and input devices such as a keyboard and a mouse, via the input/output interface 1600. The CPU 1100 acquires data from the input devices and outputs generated data to the output devices via the input/output interface 1600.
The media interface 1700 reads programs or data stored on a recording medium 1800 and provides them to the CPU 1100 via the RAM 1200. The CPU 1100 loads such programs from the recording medium 1800 onto the RAM 1200 via the media interface 1700 and executes them. The recording medium 1800 is, for example, an optical recording medium such as a DVD (Digital Versatile Disc) or a PD (Phase change rewritable Disk), a magneto-optical recording medium such as an MO (Magneto-Optical disk), a tape medium, a magnetic recording medium, or a semiconductor memory.
For example, when the computer 1000 functions as the terminal device 10 and the information processing device 100 according to the embodiment, the CPU 1100 of the computer 1000 realizes the functions of the control units 14 and 130 by executing programs loaded onto the RAM 1200. The CPU 1100 reads these programs from the recording medium 1800 and executes them, but as another example it may acquire them from another device via a predetermined communication network.
8. Other
Of the processes described in the above embodiments, all or part of the processes described as being performed automatically can also be performed manually, and all or part of the processes described as being performed manually can be performed automatically by known methods. In addition, the processing procedures, specific names, and information including various data and parameters shown in the above description and drawings can be changed as desired unless otherwise specified. For example, the various information shown in each drawing is not limited to the information illustrated.
The components of each illustrated device are functional concepts and need not be physically configured as illustrated. That is, the specific form of distribution and integration of each device is not limited to that illustrated, and all or part of them can be functionally or physically distributed or integrated in arbitrary units according to various loads, usage conditions, and the like.
The embodiments described above can be combined as appropriate to the extent that their processing contents do not contradict one another.
Although some embodiments of the present application have been described in detail with reference to the drawings, these are merely examples, and the present invention can be implemented in other forms incorporating various modifications and improvements based on the knowledge of those skilled in the art, including the aspects described in the disclosure of the invention.
The term "unit" (section, module, unit) used above can be read as "means" or "circuit"; for example, the acquisition unit can be read as acquisition means or an acquisition circuit.
REFERENCE SIGNS LIST
1 Information processing system
10 Terminal device
11 Communication unit
12 Input unit
13 Output unit
14 Control unit
100 Information processing device
110 Communication unit
120 Storage unit
121 Model information storage unit
122 Evaluation information storage unit
130 Control unit
131 Acquisition unit
132 Identification unit
133 Extraction unit
134 First display unit
135 Generation unit
136 Second display unit
137 Determination unit
141 Receiving unit
142 Transmitting unit
N Network
Claims (9)
1. An information processing device comprising:
an acquisition unit that acquires a photographed image of a poster to which a fashion measure has been applied;
a first display unit that extracts and displays, based on a predetermined condition, at least one thumbnail image of a model that is a candidate for applying the fashion measure; and
a second display unit that displays an image in which the fashion measure is applied to the model when the poster designates the thumbnail image.
2. The information processing device according to claim 1, wherein the first display unit extracts and displays the thumbnail image based on an evaluation indicating how well the fashion measure suits a type of a predetermined part of the model's appearance.
3. The information processing device according to claim 1, wherein the first display unit extracts and displays the thumbnail image of a model whose type of a predetermined part of the appearance is similar to that of the poster.
4. The information processing device according to claim 1, wherein the first display unit extracts and displays the thumbnail image of the model based on follower information of the poster.
5. The information processing device according to claim 3, wherein the first display unit extracts and displays the thumbnail image of a model whose type of the predetermined part is similar to that of a follower of the poster.
6. The information processing device according to claim 1, wherein the first display unit extracts and displays the thumbnail image of the model based on login information of the model.
7. The information processing device according to claim 5, wherein the first display unit extracts and displays the thumbnail image of a model estimated to be highly likely to try on the fashion measure.
8. A computer-implemented information processing method comprising:
an acquisition step of acquiring a photographed image of a poster to which a fashion measure has been applied;
a first display step of extracting and displaying, based on a predetermined condition, at least one thumbnail image of a model that is a candidate for applying the fashion measure; and
a second display step of displaying an image in which the fashion measure is applied to the model when the poster designates the thumbnail image.
9. An information processing program that causes a computer to execute:
an acquisition procedure of acquiring a photographed image of a poster to which a fashion measure has been applied;
a first display procedure of extracting and displaying, based on a predetermined condition, at least one thumbnail image of a model that is a candidate for applying the fashion measure; and
a second display procedure of displaying an image in which the fashion measure is applied to the model when the poster designates the thumbnail image.
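The claims describe the device only in functional terms. As a rough illustration of the claimed flow (not the claimed implementation; all names such as Model, extract_candidates, and apply_measure are hypothetical), the following Python sketch shows one way the first and second display steps could be arranged: rank candidate models against a predetermined condition, then produce the image shown when the poster designates a thumbnail.

```python
# Illustrative sketch only: every class, function, and field name here is hypothetical
# and is not taken from the patent specification.
from dataclasses import dataclass, field

@dataclass
class Model:
    model_id: str
    appearance_type: str          # e.g., a personal-color or face-shape type
    thumbnail_url: str
    suitability: dict = field(default_factory=dict)  # fashion-measure id -> evaluation score

def extract_candidates(models, measure_id, poster_type, limit=3):
    """First display step (sketch): rank candidates by a 'predetermined condition',
    here a mix of how well the measure suits the model and appearance-type match."""
    def score(m):
        type_bonus = 1.0 if m.appearance_type == poster_type else 0.0
        return m.suitability.get(measure_id, 0.0) + type_bonus
    return sorted(models, key=score, reverse=True)[:limit]

def apply_measure(model, measure_id):
    """Second display step (sketch): return the image shown when the poster
    designates a thumbnail. A real system would synthesize this image."""
    return f"rendered_image:{model.model_id}:{measure_id}"

if __name__ == "__main__":
    models = [
        Model("m1", "spring", "https://example.com/m1.jpg", {"lip_01": 0.9}),
        Model("m2", "autumn", "https://example.com/m2.jpg", {"lip_01": 0.4}),
    ]
    candidates = extract_candidates(models, "lip_01", poster_type="spring")
    print([m.thumbnail_url for m in candidates])   # thumbnails presented to the poster
    print(apply_measure(candidates[0], "lip_01"))  # image after the poster designates one
```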
Applications Claiming Priority (2)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| JP2024076060A JP2025171074A (en) | 2024-05-08 | 2024-05-08 | Information processing device, information processing method, and information processing program |
| JP2024-076060 | 2024-05-08 | | |
Publications (1)
| Publication Number | Publication Date |
|---|---|
| WO2025234261A1 (en) | 2025-11-13 |
Family
ID=97674937
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| PCT/JP2025/014499 Pending WO2025234261A1 (en) | 2024-05-08 | 2025-04-11 | Information processing device, information processing method, and information processing program |
Country Status (2)
| Country | Link |
|---|---|
| JP (1) | JP2025171074A (en) |
| WO (1) | WO2025234261A1 (en) |
- 2024-05-08: JP JP2024076060A patent/JP2025171074A/en (active, Pending)
- 2025-04-11: WO PCT/JP2025/014499 patent/WO2025234261A1/en (active, Pending)
Patent Citations (2)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| JP2002132916A (en) * | 2000-10-26 | 2002-05-10 | Kao Corp | How to provide makeup advice |
| JP7308317B1 (en) * | 2022-03-04 | 2023-07-13 | 株式会社Zozo | Information processing device, information processing method and information processing program |
Also Published As
| Publication number | Publication date |
|---|---|
| JP2025171074A (en) | 2025-11-20 |
Similar Documents
| Publication | Title |
|---|---|
| Manovich | Instagram and contemporary image |
| US20220124411A1 (en) | Matching and ranking content items | |
| KR102490438B1 (en) | Display apparatus and control method thereof | |
| KR102102571B1 (en) | System and method for providing online shopping platform | |
| WO2019171128A1 (en) | In-media and with controls advertisement, ephemeral, actionable and multi page photo filters on photo, automated integration of external contents, automated feed scrolling, template based advertisement post and actions and reaction controls on recognized objects in photo or video | |
| CN107257338B (en) | media data processing method, device and storage medium | |
| US10922744B1 (en) | Object identification in social media post | |
| KR102139664B1 (en) | System and method for sharing profile image card | |
| CN115668263A (en) | Identification of physical products for augmented reality experience in messaging systems | |
| Halpern et al. | Iphoneography as an emergent art world | |
| US10504264B1 (en) | Method and system for combining images | |
| CN115803779A (en) | Analyzing augmented reality content usage data | |
| CN115735231A (en) | Augmented reality content based on product data | |
| JP6120467B1 (en) | Server device, terminal device, information processing method, and program | |
| US12475621B2 (en) | Product image generation based on diffusion model | |
| CN116324845A (en) | Analyzing augmented reality content item usage data | |
| US11222361B2 (en) | Location-based book identification | |
| Yang | Smartphone photography and its socio-economic life in China: An ethnographic analysis | |
| JP2023145312A (en) | Program, information processing device, method and system | |
| JP2022128493A (en) | Image processing device, image processing method, program and recording medium | |
| JP7662708B2 (en) | Information processing device, information processing method, and information processing program | |
| KR102102572B1 (en) | System and method for providing online shopping mall | |
| WO2025234261A1 (en) | Information processing device, information processing method, and information processing program | |
| WO2025234262A1 (en) | Information processing device, information processing method, and information processing program | |
| US12482208B2 (en) | Mirroring 3D assets for virtual experiences |