
WO2018095273A1 - Image synthesis method and device, and matching implementation method and device - Google Patents

Image synthesis method and device, and matching implementation method and device

Info

Publication number
WO2018095273A1
Authority
WO
WIPO (PCT)
Prior art keywords
data
virtual
skeleton
bone
model
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Ceased
Application number
PCT/CN2017/111500
Other languages
English (en)
Chinese (zh)
Inventor
李兵 (Li Bing)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tencent Technology Shenzhen Co Ltd
Original Assignee
Tencent Technology Shenzhen Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority claimed from CN201611045200.4A external-priority patent/CN106504309B/zh
Priority claimed from CN201611051058.4A external-priority patent/CN106780766B/zh
Application filed by Tencent Technology Shenzhen Co Ltd filed Critical Tencent Technology Shenzhen Co Ltd
Publication of WO2018095273A1 publication Critical patent/WO2018095273A1/fr
Priority to US16/298,884 priority Critical patent/US10762721B2/en
Anticipated expiration legal-status Critical
Ceased legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00Manipulating 3D models or images for computer graphics
    • G06T19/20Editing of 3D images, e.g. changing shapes or colours, aligning objects or positioning parts
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T13/00Animation
    • G06T13/20 3D [Three Dimensional] animation
    • G06T13/40 3D [Three Dimensional] animation of characters, e.g. humans, animals or virtual beings
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T15/00 3D [Three Dimensional] image rendering
    • G06T15/50Lighting effects
    • G06T15/503Blending, e.g. for anti-aliasing
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T15/00 3D [Three Dimensional] image rendering
    • G06T15/50Lighting effects
    • G06T15/80Shading
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2219/00Indexing scheme for manipulating 3D models or images for computer graphics
    • G06T2219/20Indexing scheme for editing of 3D models
    • G06T2219/2004Aligning objects, relative positioning of parts
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2219/00Indexing scheme for manipulating 3D models or images for computer graphics
    • G06T2219/20Indexing scheme for editing of 3D models
    • G06T2219/2016Rotation, translation, scaling

Definitions

  • the present invention relates to the field of computers, and more particularly to image synthesis and matching implementation.
  • In the conventional approach, the simulation object generally includes a skeleton model and a skin. If an avatar (additive) is also included, then to save art workload and hard disk resources, the bone offset data used for displaying the avatar is usually stored first; during the rendering calculation of the simulated object, the bones of the simulated object are adjusted in combination with the pre-stored bone offset data, so as to attach the additive to the simulated object.
  • However, the skin of the simulated object also changes when the skeleton model and the avatar change size together. The current method of adjusting the avatar therefore changes the skeleton model of the simulated object, which in turn changes the simulated object's skin: the display of the simulated object is deformed, and the display effect is poor.
  • Embodiments of the present invention provide an image synthesis method and apparatus, and a matching implementation method and apparatus, which coordinate the display of a simulated object (also referred to as a virtual object) with an additive (also referred to as a virtual pendant), giving users a good experience and visual enjoyment.
  • An embodiment of the invention provides an image synthesis method; an embodiment of the invention further provides an image synthesizing device for carrying it out, which may include:
  • An obtaining module configured to acquire first data of a skeleton model of the simulation object and bone data of an additive to be synthesized to the simulation object;
  • a determining module configured to determine, according to the first data and bone data of the additive, a target bone corresponding to the additive on the skeleton model
  • a copying module configured to acquire first target data corresponding to the target bone from the first data
  • An adjustment module configured to adjust, according to the preset offset data of the target bone, the first target data corresponding to the target bone to obtain first adjustment data
  • a rendering module configured to perform rendering according to the first adjustment data and the first data, to obtain a simulation object synthesized with the additive.
  • Because what is adjusted according to the offset data of the target bone is only the first target data of the target bone, the first data of the skeleton model of the simulated object does not change. Rendering is then performed with the first adjustment data and the first data, so the resulting composite displays the simulated object and the additive in coordination, giving users a good experience and visual enjoyment.
  • The present invention also provides a matching implementation method and device.
  • The matching implementation method includes: obtaining hook modification information of a virtual pendant for a virtual object, wherein the virtual pendant is independent of the virtual object; determining a form of the virtual object; determining a form of the virtual pendant according to the form of the virtual object and the hook modification information; and attaching the virtual pendant in the determined form to the virtual object.
  • the invention also provides a matching implementation device, the device comprising:
  • An acquiring unit configured to obtain hook modification information of the virtual pendant for the virtual object;
  • a determining unit configured to determine a form of the virtual object, wherein the virtual pendant is independent of the virtual object;
  • an adjusting unit configured to determine a form of the virtual pendant according to the form of the virtual object and the hook modification information; and
  • a display unit configured to attach the virtual pendant in the determined form to the virtual object.
  • a terminal includes the above-described matching implementation device.
  • In the embodiments of the invention, the virtual pendant and the virtual object are independent of each other and can be designed and developed separately; the hook modification information adapts the independent virtual pendant to the virtual object, so the same pendant can be adapted to different objects (a minimal sketch of this flow follows below).
  • Compared with implementations that treat the virtual pendant and the virtual object as a single whole, the flexibility and adaptability of the present invention are greatly improved.
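  • As a minimal, non-authoritative sketch of this flow (all class and field names below are assumptions for illustration, not structures defined by the patent):

```python
from dataclasses import dataclass

@dataclass
class HookModification:          # hook modification info for one (pendant, object) pair
    offset: tuple                # translation applied at the pendant's attach point
    scale: float                 # scale factor matching the object's build

@dataclass
class VirtualObject:
    name: str
    shoulder_height: float       # stands in for the object's "form"

@dataclass
class Pendant:                   # independent of any particular virtual object
    name: str
    position: tuple = (0.0, 0.0, 0.0)
    scale: float = 1.0

def attach(pendant: Pendant, obj: VirtualObject, mod: HookModification) -> Pendant:
    """Determine the pendant's form from the object's form plus the hook
    modification info, then attach it to the object."""
    dx, dy, dz = mod.offset
    pendant.position = (dx, dy + obj.shoulder_height, dz)
    pendant.scale = mod.scale
    return pendant               # displayed as attached to obj

# the same pendant resource adapts to different objects via different HookModifications
cloak = attach(Pendant("cloak"), VirtualObject("general_A", 1.6),
               HookModification(offset=(0.0, 0.05, -0.1), scale=1.1))
```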
  • FIG. 1 is a schematic diagram showing an uncoordinated display after adding an avatar to a simulated object according to an embodiment of the present invention
  • FIG. 2 is a schematic diagram of normal display after adding an avatar to a simulated object according to an embodiment of the present invention
  • FIG. 3 is another schematic diagram of the uncoordinated display after adding an avatar to a simulated object according to an embodiment of the present invention;
  • FIG. 4 is another schematic diagram of normal display after adding an avatar to a simulated object according to an embodiment of the present invention.
  • FIG. 5 is a schematic diagram of a skeleton model of a character simulation object according to an embodiment of the present invention.
  • FIG. 6 is a schematic diagram of joint connection of a character simulation object according to an embodiment of the present invention.
  • FIG. 7 is a schematic diagram of an embodiment of a method for image synthesis according to an embodiment of the present invention.
  • FIG. 8 is a schematic diagram of a skeleton model of a character simulation object according to an embodiment of the present invention.
  • FIG. 9 is a schematic diagram of a cloak skeleton model of a character simulation object according to an embodiment of the present invention.
  • FIG. 10 is a schematic diagram of an additive bound bone in an embodiment of the present invention.
  • FIG. 11 is a schematic diagram of another embodiment of a method for image synthesis according to an embodiment of the present invention.
  • FIG. 12 is a schematic diagram of copying first target data of a target bone in an embodiment of the present invention.
  • FIG. 13 is a schematic diagram of adjusting first target data of a target bone in an embodiment of the present invention.
  • FIG. 14 is a schematic diagram of coordination after adjusting the cloak of military commander A in an embodiment of the present invention;
  • FIG. 15 is a schematic diagram of coordination after adjusting the cloak of military commander B in an embodiment of the present invention;
  • FIG. 16 is a schematic diagram of coordination after adjusting the cloak of military commander C according to an embodiment of the present invention;
  • FIG. 17 is a schematic diagram of an embodiment of an image synthesizing apparatus according to an embodiment of the present invention.
  • FIG. 18 is a schematic diagram of another embodiment of an image synthesizing apparatus according to an embodiment of the present invention.
  • FIG. 19 is a schematic diagram of another embodiment of an image synthesizing apparatus according to an embodiment of the present invention.
  • FIG. 20 is a schematic diagram of another embodiment of an image synthesizing apparatus according to an embodiment of the present invention.
  • FIG. 21 is a schematic diagram of another embodiment of an image synthesizing apparatus according to an embodiment of the present invention.
  • FIG. 22 is a schematic diagram of another embodiment of an image synthesizing apparatus according to an embodiment of the present invention.
  • FIG. 23 is a schematic diagram of another embodiment of an image synthesizing apparatus according to an embodiment of the present invention.
  • FIG. 1a is a schematic diagram of a computer architecture of a terminal according to an embodiment of the present invention;
  • FIG. 2a, FIG. 3a, FIG. 5a, and FIG. 5b are exemplary flowcharts of a matching implementation method according to an embodiment of the present invention;
  • FIG. 4a and FIG. 4b are diagrams showing an example of bones affecting a mesh according to an embodiment of the present invention;
  • FIG. 6a and FIG. 6b are diagrams showing an example of bones affecting an attachment point according to an embodiment of the present invention;
  • FIG. 7a is a flowchart of prior-art bone skinning;
  • FIG. 8a is an exemplary structural diagram of a matching implementation apparatus according to an embodiment of the present invention.
  • In practical applications, an interactive application system may include multiple simulation objects (hereinafter also referred to as virtual objects), and each simulation object may be configured with different additives, i.e., avatars (hereinafter also referred to as virtual pendants). In 3D modeling, it is common to use the same avatar resource to adapt to different simulation objects.
  • virtual objects herein include, but are not limited to, character simulation objects, animal simulation objects, machine simulation objects, and plant simulation objects.
  • the same helmet can be used to adapt different character simulation objects, or the same cloak can be used to adapt different character simulation objects.
  • The interactive application system mainly targeted by the technical solution of the present invention is a game application scenario, where the simulation object is a player's virtual character in the game scene, and the additive is a helmet, a streamer, a cloak, a sword accessory, or the like.
  • The simulation object generally has a skeleton model and a skin. If the simulation object also has an avatar, then to save art workload and hard disk resources, the offset data of the bones to be adjusted is usually stored first; this offset data acts on the display of the avatar. The bones of the simulated object are then adjusted in combination with the pre-stored offset data when the rendering calculation is performed.
  • This processing causes the skeleton model and the avatar to change size at the same time, and the simulation object changes with them: because the simulated object's skin and the avatar are bound to some of the same bones, when an avatar-bound bone changes, the skin of the simulated object changes as well, so the simulated object changes too.
  • FIG. 1 is a schematic diagram of the effect of adding a cloak to a character simulation object by using the current method, wherein the shoulder of the character simulation object after adding the cloak is moved up.
  • the display effect is not coordinated.
  • the correct display effect of adding a cloak to a character simulation object is shown in Fig. 2.
  • FIG. 3 is another schematic diagram of the effect of adding a cloak to the character simulation object in the current manner; the result is obviously uncoordinated.
  • the corresponding correct display is as shown in FIG. 4.
  • FIG. 1 and FIG. 3 are just two examples; at present there are many kinds of penetration or offset artifacts in the display of simulation objects, and in practical applications there are many cases where the additive and the simulated object display inconsistently, which are not enumerated one by one here.
  • each character simulation object corresponds to a bone diagram, as shown in Figure 5.
  • Figure 5 is a schematic diagram of the skeleton model of the character simulation object.
  • Skeletal animation is bone-driven animation, a common animation approach in modern mainstream large 3D interactive applications.
  • A character has a skeleton, which can also be called a skeleton model; a skeleton is composed of a set of bones.
  • What is drawn as a "bone" is in effect the joint between bones.
  • At runtime, a bone is actually a coordinate system with the joint as its origin.
  • FIG. 6 which is a schematic diagram of the joint connection of the character simulation object according to the embodiment of the present invention.
  • The skeletal-animation simulation object can be divided into many parts (also called meshes); these scattered meshes are organized together by a parent-child hierarchy, and a parent mesh drives the motion of its child meshes.
  • The coordinates of the vertices in each mesh are defined in the mesh's own coordinate system, and each mesh participates in the movement as a whole. Setting the position and orientation of the simulated object actually sets the position and orientation of the root bone; the position and orientation of each bone is then calculated from the parent-child transformation relationships in the bone hierarchy, and the vertices in each mesh, bound to their bones, are finally transformed into world coordinates so that the vertices can be rendered.
  • A skeletal animation typically includes bone hierarchy data, mesh data, mesh skin data (skin info), and bone animation (keyframe) data.
  • Bone hierarchy data mainly records which joint is whose child, i.e., the parent joint of each joint.
  • Mesh data and mesh skin data can generally be collectively referred to as skinned data.
  • the mesh skin data determines how the vertices are bound to the bone, and the mesh skin data of the vertices includes which bones the vertices are affected by and the weights at which the bones affect the vertices.
  • a bone offset matrix (Bone Offset Matrix) is needed for each bone, and a bone offset matrix is used to transform the vertices from the Mesh space to the bone space.
  • The bones control the skin's movement, and the movement of the bones themselves constitutes the animation.
  • Keyframe data is a set of keyframes: each keyframe contains a time stamp and bone animation information. The bone animation information can represent the bone's new transformation directly as a matrix, use a quaternion to represent the bone's rotation, or use a custom encoding of the bone's movement (a sketch of one possible layout follows below).
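  • For illustration only, a minimal sketch of such keyframe data; the field names and quaternion layout are assumptions, since the patent leaves the encoding open:

```python
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class Keyframe:
    time: float                                   # timestamp of the keyframe
    rotation: Tuple[float, float, float, float]   # bone rotation as a quaternion (w, x, y, z)
    translation: Tuple[float, float, float]       # bone position in its parent's space

@dataclass
class BoneTrack:
    bone_name: str
    keyframes: List[Keyframe]   # sorted by time; the animation system interpolates between them
```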
  • the skeleton is the main body of the model, and the Mesh is just like a layer of skin and a piece of clothing.
  • the bone is the coordinate space, and the bone level is the nested coordinate space.
  • A joint is simply a description of the bone's position, that is, the position of the origin of the bone's own coordinate space within its parent's space; rotation around the joint means rotation of the bone's coordinate space (including all of its subspaces).
  • Skinning refers to attaching (binding) the vertices of a mesh to bones; each vertex can be controlled by multiple bones, so that a vertex at a joint is pulled by the parent and child bones simultaneously as it changes position, eliminating cracks.
  • After skinning, the matrix of each bound joint is known.
  • Skinning transforms each vertex by its bones: a vertex is first taken from the mesh (model) coordinate system into the corresponding joint space, and from there back out toward the world coordinate system.
  • One of the most important points is that no matter how a joint transforms, the coordinates of a vertex expressed in that joint's space do not change; it is these fixed joint-space coordinates that are transformed.
  • Transforming from joint space back into model space then yields the vertex's new model-space coordinates.
  • In the formula below, i denotes the i-th joint; the formula produces a set of skinning matrices Ki:

    Ki = Mrp-s · Mb-rpi · Ms-bi    (1-1)

  • This array is called a matrix palette; the name reflects that each vertex picks out (indexes) the matrices it needs from the palette.
  • The skinning matrix Ki here plays the role of the per-joint matrix Mi, where:
  • Ms-bi is the transformation matrix from the skin coordinate system to the bone coordinate system in the bind pose;
  • Mb-rpi is the transformation matrix from the coordinate system of the current joint to that of the root joint;
  • Mrp-s is the transformation matrix from the coordinate system of the root joint to the skin coordinate system.
  • Equation 1-2 gives the blended vertex:

    v'j = Σi ωij · Ki · vj    (1-2)

    where vj represents the vertex bound in the model (bind pose), ωij represents the weight of joint i on vertex j, and v'j represents the vertex's current position in the model.
  • Vfinal = Mworld · Mw-s · Mb-w · Ms-b · V    (1-3)
  • Vfinal = Mprojection · Mview · Mworld · Mw-s · Mb-w · Ms-b · V    (1-4)
  • V represents the coordinate value of a vertex of the mesh in the skin model coordinate system.
  • Ms-b transforms the vertex coordinates from the skin model coordinate system into the bone's bind pose; this matrix is derived from the resource (for example, carried by a NiSkinningMeshModifier in the resource file).
  • Mprojection: the projection matrix;
  • Mview: the view (observation) matrix;
  • Mworld: the model-to-world transformation matrix;
  • Mw-s: the world-to-skin transformation matrix;
  • Mb-w: the bone-to-world transformation matrix;
  • Ms-b: the skin-to-bone transformation matrix.
  • Equation 1-4 represents the transformation of the vertex all the way to screen coordinates (a runnable sketch of the blending in equation (1-2) follows below).
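  • A runnable numpy sketch of linear blend skinning per equation (1-2); the palette and weights below are toy values for illustration only:

```python
import numpy as np

def skin_vertices(vertices, palette, weights):
    """vertices: (V, 3) bind-pose positions in skin/model space.
    palette:  (B, 4, 4) skinning matrices Ki, one per bone (the matrix palette).
    weights:  (V, B) per-vertex bone weights, each row summing to 1.
    Returns the blended (V, 3) positions per v'j = sum_i w_ij * Ki * vj."""
    v_h = np.concatenate([vertices, np.ones((len(vertices), 1))], axis=1)  # homogeneous
    per_bone = np.einsum('bij,vj->bvi', palette, v_h)     # every vertex by every bone
    blended = np.einsum('vb,bvi->vi', weights, per_bone)  # weighted blend across bones
    return blended[:, :3]

# Two bones: identity, and a +0.5 translation along x.
palette = np.stack([np.eye(4), np.eye(4)])
palette[1, 0, 3] = 0.5
verts = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]])
weights = np.array([[1.0, 0.0],    # fully bound to bone 0
                    [0.5, 0.5]])   # split between the two bones
print(skin_vertices(verts, palette, weights))  # second vertex moves +0.25 in x
```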
  • the execution subject in the embodiment of the present invention is a terminal, and the terminal may include a computer, a server, a mobile phone, a tablet computer, a personal digital assistant (English full name: Personal Digital Assistant, English abbreviation: PDA), and a sales terminal (English full name: Point of Sales, English abbreviation: POS), car terminal and other terminal equipment.
  • FIG. 7 it is a schematic diagram of an embodiment of a method for image synthesis according to an embodiment of the present invention, including:
  • the first data of the skeleton model of the simulation object, the skin data of the simulation object, the bone data of the additive to be synthesized to the simulation object, the skin data of the additive, and the like are generated in advance.
  • the engine can call the above-mentioned pre-generated data, that is, the engine can acquire the pre-generated data, obtain the first data of the skeleton model of the simulated object, and the skeleton data of the additive to be synthesized to the simulated object.
  • the engine may also obtain the skin data of the simulated object and the skinned data of the additive, etc.
  • the detailed information acquired by the engine includes but is not limited to the above summary, and is not described herein.
  • a schematic diagram of the skeleton model of the simulated object can be as shown in FIG. 5 above.
  • the additives in the embodiments of the present invention may be external objects such as swords, cloaks, helmets, armor and gloves.
  • the engine may determine the target bone corresponding to the additive on the skeleton model of the simulated object according to the first data and the bone data of the additive.
  • the method may include: establishing a correspondence between the bone data of the additive and the first data, binding the additive to the skeleton model, and determining the target bone.
  • The binding of the avatar in skeletal animation is illustrated in FIG. 8, a schematic diagram of the skeleton model of the character simulation object, and FIG. 9, a schematic diagram of the cloak skeleton.
  • In an embodiment of the present invention, a piece of bone data may be copied from the avatar resource and bound to the corresponding bone in the resource.
  • When the additive is worn on the virtual character, it can be re-bound to the bones of the simulated object by matching the corresponding bone names; the avatar's bone binding can thus be controlled during resource creation, and the program does not need to run two sets of bone data.
  • The skeleton in the avatar resource is mainly used for indexing; the avatar's skin is actually bound at runtime.
  • The avatar's animation is completely consistent with the simulated object's.
  • The bones bound in the avatar are shown in FIG. 10; they have the same names as the bones in the simulated object. It should be understood that after the bone data of the additive is bound to the skeleton model of the simulated object, the target bone associated with the additive can be determined (a sketch of name-based binding follows below).
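  • As an illustration only, the following minimal Python sketch shows name-based re-binding of an additive to the simulated object's skeleton; the dict-based skeleton layout and function name are assumptions for the example, not structures defined by the patent:

```python
def bind_additive(skeleton_bones: dict, additive_bone_names: list) -> dict:
    """skeleton_bones: bone name -> bone transform of the simulated object.
    additive_bone_names: the names the additive resource was authored against.
    Returns the target bones: the additive's bones re-bound, by name, to the
    simulated object's skeleton, so only one set of bone data runs."""
    missing = [n for n in additive_bone_names if n not in skeleton_bones]
    if missing:
        raise KeyError(f"additive references unknown bones: {missing}")
    return {name: skeleton_bones[name] for name in additive_bone_names}

# e.g. a cloak authored against two shoulder bones:
# target = bind_additive(general_a_bones, ["Shoulder_L", "Shoulder_R"])
```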
  • the engine may acquire the first target data corresponding to the target bone from the first data of the acquired skeleton model of the simulated object. In an implementation manner, the engine may copy the first target data corresponding to the target bone from the first data of the acquired skeleton model of the simulated object.
  • Specifically, the acquired first data of the skeleton model of the simulated object and the first target data of the target bone corresponding to the additive may be added to the corresponding nif file; the first target data of the target bone may also be referred to as offset bone data.
  • the engine may also bind the skin data of the simulation object with the first data to form skinned binding data of the simulation object.
  • In the current method, the offset data is applied directly to the first data of the skeleton model, so after the engine performs the adjustment, the skeleton model of the simulated object changes as well.
  • In the embodiment of the present invention, the first data of the skeleton model of the simulated object does not change: the offset data of the target bone corresponding to the additive is pre-configured, the first target data of the target bone is adjusted according to that offset data to obtain the first adjustment data, and the first adjustment data acts only on the display of the avatar.
  • the offset data of the target bone corresponding to the additive is pre-configured, and specifically, the data that can be obtained during the animation process.
  • First, the simulation object has a skeleton model and a skin; then it is determined whether the synthesized display of the simulation object with the additive meets expectations, and if not, the offset data can be calculated according to a corresponding preset algorithm.
  • In practice, a separate avatar is usually not configured for each simulated object; the same avatar is generally adapted to different simulated objects. Because each simulated object has different bone sizes (taller or shorter) and a different skin (fatter or thinner, different clothes, etc.), adding the same avatar to different simulated objects may produce an uncoordinated display, so the avatar needs to be adjusted according to the actual situation of each simulated object.
  • From this, offset data, namely the offset data of the target bone corresponding to the additive, can be obtained by calculation.
  • A target bone can be understood as the bone or bones in the skeleton model of the simulated object to which the additive needs to be bound.
  • Each additive is bound to fixed bones of the simulated object's skeleton model according to the corresponding design.
  • For example, a cloak may be bound to one or several bones of the simulated object's shoulders, and a sword is bound to a bone of the simulated object's hand; those shoulder and hand bones are the target bones in the embodiment of the present invention.
  • Adjusting the first target data according to the pre-configured offset data of the target bone yields the first adjustment data, which can achieve effects such as moving the avatar's position, rotating the avatar, or changing the avatar's size (see the sketch below).
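  • A minimal sketch of this core step, assuming a simplified transform of translation plus uniform scale (real engines typically use full local-transform matrices); the function and field names are illustrative, not the patent's:

```python
import copy

def adjust_target_bone(first_data: dict, bone_name: str, offset: dict) -> dict:
    """first_data: bone name -> {'pos': [x, y, z], 'scale': s}; never mutated.
    offset: the pre-configured offset data for the target bone.
    Returns the first adjustment data, which acts only on the avatar display."""
    adjusted = copy.deepcopy(first_data[bone_name])   # copy the target bone data
    adjusted['pos'] = [p + d for p, d in zip(adjusted['pos'], offset['pos'])]
    adjusted['scale'] *= offset['scale']
    return adjusted

skeleton = {'Shoulder_L': {'pos': [0.2, 1.5, 0.0], 'scale': 1.0}}
adj = adjust_target_bone(skeleton, 'Shoulder_L',
                         {'pos': [0.0, 0.05, -0.02], 'scale': 1.1})
assert skeleton['Shoulder_L'] == {'pos': [0.2, 1.5, 0.0], 'scale': 1.0}  # unchanged
```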
  • After the first target data of the target bone is adjusted according to the offset data of the target bone and the first adjustment data is obtained, rendering is performed with the first adjustment data and the first data, and the synthesis yields a simulated object with the additive.
  • Specifically, the skin data of the simulated object and the skin data of the additive are obtained, and the skin data of the simulated object is bound with the first data to obtain the skin binding data of the simulated object.
  • Rendering the skin binding data yields the simulated object; rendering the first adjustment data with the additive's skin data yields the additive; finally, the simulated object and the additive are composited to obtain the simulated object with the additive synthesized onto it.
  • It should be noted that the avatar's skin can be bound while the animation is running, and the specific object it is bound to is the first adjustment data.
  • The skin of the simulated object includes a plurality of vertices, and so does the skin of the additive.
  • the first world coordinates of each vertex may be calculated according to the first adjustment data and the first data; and the first world coordinates of each vertex are rendered to obtain a simulation object synthesized with an additive.
  • Specifically, spatial transformation may be performed according to the first adjustment data and the first data (a weight-blending calculation; equations (1-2) and (1-3) above may be used) to obtain the first world coordinates of each vertex.
  • The first world coordinates of each vertex then go through the vertex-blending calculation and are finally rendered, yielding the simulated object with the additive (the sketch below makes the matrix composition of equations (1-3)/(1-4) concrete).
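  • As an illustration only, a numpy sketch of the transform chain in equations (1-3) and (1-4); all matrices are placeholders (identity or a simple translation) chosen just to make the composition order concrete:

```python
import numpy as np

def to_world(v, M_world, M_w_s, M_b_w, M_s_b):
    """Equation (1-3): Vfinal = Mworld * Mw-s * Mb-w * Ms-b * V."""
    v_h = np.append(v, 1.0)                      # homogeneous coordinates
    return (M_world @ M_w_s @ M_b_w @ M_s_b @ v_h)[:3]

def to_screen(v, M_proj, M_view, M_world, M_w_s, M_b_w, M_s_b):
    """Equation (1-4): prepend the view and projection matrices."""
    v_h = np.append(v, 1.0)
    clip = M_proj @ M_view @ M_world @ M_w_s @ M_b_w @ M_s_b @ v_h
    return clip[:3] / clip[3]                    # perspective divide

I = np.eye(4)
T = np.eye(4); T[:3, 3] = [0.0, 1.0, 0.0]        # a bone one unit up in world space
print(to_world(np.array([0.0, 0.0, 0.0]), I, I, T, I))  # -> [0. 1. 0.]
```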
  • The solution provided by this embodiment of the present invention performs the avatar adjustment on a simulated object that initially includes the additive.
  • The object adjusted is the first target data of the target bone, so the first data of the skeleton model of the simulated object does not change. Finally, rendering according to the first adjustment data and the first data yields and displays the simulated object with the additive, coordinated in appearance, giving the user good visual enjoyment and experience.
  • The embodiment is described for adjusting one simulated object, but the corresponding technical solution can be applied to other simulated objects, with the target bones corresponding to the additive acquired for each different simulated object.
  • The offset data may differ because each simulated object is generally different.
  • The embodiment of the invention makes the corresponding adjustment according to the offset data of the different target bones, so the same avatar can be applied to different simulation objects in a coordinated manner.
  • FIG. 11 is a schematic diagram of another embodiment of a method for image composition according to an embodiment of the present invention, including:
  • Steps 1101-1105 are the same as steps 701-705 in the embodiment shown in FIG. 7 and are not described here again.
  • In addition, the second data of the skeleton model of the simulated object after the animation update is acquired.
  • the skeleton model animation is updated, and the effect displayed is that the simulation object corresponding to the skeleton model changes, for example, the character simulation object takes a step, extends the arm, lifts the sword, runs, squats, and the like.
  • The animation system obtains the second data from the animation update of the skeleton model, and the skin data can change correspondingly while the simulated object is animated.
  • the second target data of the target bone is obtained from the second data.
  • the second target data of the target bone may be copied from the second data.
  • Specifically, the second data represents the change in the skeleton model's data; correspondingly, the data of the target bone corresponding to the additive also changes.
  • Therefore, the second target data of the target bone corresponding to the additive is obtained from the second data, and indicates the data change of that target bone.
  • Specifically, the second data of the skeleton model of the simulation object and the second target data of the target bone may be added in the corresponding nif file, where the second target data of the target bone may also be referred to as offset bone data.
  • The offset data of the target bone corresponding to the additive is pre-configured; the second target data of the target bone is adjusted according to that offset data to obtain the second adjustment data.
  • The second data of the skeleton model of the simulated object does not change, and the second adjustment data acts only on the display of the avatar.
  • Specifically, the data of the offset bone (such as the first target data and the second target data) can be modified according to the offset data of the target bone. Because the offset bone itself is not bound to a corresponding mesh, the animation system does not update it; the program must manually copy the data, modify the corresponding transform, and push the modified transform to the shader renderer, which then renders the avatar model (the additive) based on that data (a sketch of this per-frame path follows below).
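  • A minimal sketch of that per-frame path, reusing the simplified transform from the earlier sketch; `renderer.set_bone_transform` is a hypothetical engine hook, named here only for illustration:

```python
import copy

def update_offset_bone(second_data: dict, bone_name: str, offset: dict,
                       renderer) -> None:
    """Called once per animation update: the animation system skips the offset
    bone, so the program copies the animated target-bone transform, applies
    the offset, and hands the result to the shader renderer."""
    t = copy.deepcopy(second_data[bone_name])    # manual copy, not animated
    t['pos'] = [p + d for p, d in zip(t['pos'], offset['pos'])]
    t['scale'] *= offset['scale']
    renderer.set_bone_transform(bone_name + '_offset', t)  # hypothetical hook
```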
  • Once the second adjustment data is obtained and used for image synthesis, effects such as changing the avatar's position, rotating the avatar, or changing the avatar's size can be achieved.
  • After the second target data of the target bone is adjusted according to the pre-configured offset data of the target bone and the second adjustment data is obtained, rendering is performed according to the second adjustment data and the second data, yielding the animation-updated simulated object with the additive.
  • the skin of the simulated object includes a plurality of vertices
  • the skin of the additive also includes a plurality of vertices
  • the second world coordinates of each vertex are calculated according to the second adjustment data and the second data;
  • the second world coordinates of the vertex are rendered, and the simulated object with the additive is synthesized after the animation is updated.
  • The solution provided by this embodiment of the present invention performs the avatar adjustment when animating a simulated object that includes the additive.
  • The object adjusted is the second target data of the target bone: each animation update changes the data of the skeleton model of the simulated object, producing the second data, from which the second target data of the corresponding target bone is obtained.
  • During the adjustment process, the second data of the skeleton model of the simulated object does not change. Finally, the second adjustment data and the second data are rendered, displaying the simulated object with the additive, coordinated in appearance, giving the user good visual enjoyment and experience.
  • As with the previous embodiment, this description covers the adjustment of one simulated object; the same solution applies to other simulated objects, whose target-bone offset data may differ because each simulated object is generally different, and the corresponding adjustments allow the same avatar to be applied to different simulation objects in a coordinated manner.
  • In the embodiment of the present invention, the bone data copied from the first data and the second data may be stored in the art team's nif resource file, or in other types of configuration files.
  • The same avatar transform modifications can be applied to weapons with skinned animation, or to various parts of the character; for example, the hand or other parts can be enlarged during an attack.
  • In the following example, the bone data of three military commanders (A, B, and C) are set in advance; each commander's skin data is different, corresponding to skin data A, skin data B, and skin data C, respectively.
  • Hanging the cloak on a commander means hanging the cloak on the target bones corresponding to the commander's skeleton model; assume the target bones are the two shoulder bones of the commander.
  • Military commander A is riding a horse: if the cloak were simply hung on A's back, it would cover A's armor, making A look bloated. Military commander B is standing and relatively short: the cloak hung on B's back would drag on the ground. Military commander C poses leaning on a sledgehammer: the cloak hung on C's back would block C's head. In every case the display effect would be poor. Therefore, offset data A for commander A and the cloak, offset data B for commander B and the cloak, and offset data C for commander C and the cloak can be calculated.
  • The engine then calls the pre-acquired data: bone data A, skin data A, the cloak data, and offset data A for commander A; bone data B, skin data B, the cloak data, and offset data B for commander B; and bone data C, skin data C, the cloak data, and offset data C for commander C (a data-driven sketch follows below).
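  • A sketch of this per-commander configuration as a simple lookup table; the concrete offset values and field layout are invented for illustration, not taken from the patent:

```python
CLOAK_OFFSETS = {
    # commander -> offset applied to the cloak's target (shoulder) bones
    'general_A': {'pos': [0.0,  0.00, -0.12], 'scale': 1.00},  # pull cloak off the armor
    'general_B': {'pos': [0.0,  0.10,  0.00], 'scale': 0.85},  # shorten so it doesn't drag
    'general_C': {'pos': [0.0, -0.08, -0.05], 'scale': 1.00},  # keep it below the head
}

def cloak_offset_for(commander: str) -> dict:
    # one shared cloak resource, per-object offset data
    return CLOAK_OFFSETS[commander]
```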
  • Making a dedicated adjustment for the avatar in the skeleton model is very simple: several offset bones for adjusting the corresponding avatar are added in the corresponding nif file, as shown in FIG. 12; these are the target bones.
  • The bone "Spine1_301_offset", ending in "offset" in FIG. 12, is such an added bone for adjusting the avatar, that is, a target bone.
  • the corresponding avatar is adjusted according to the offset bone in the nif resource file.
  • The art team can adjust the local transform data of the target bone according to the specific situation; this transform data acts only on the character's avatar, as shown in FIG. 13, a schematic diagram of adjusting the first target data of the target bone, described in detail below.
  • For military commander A: the first target data corresponding to the target bone is copied from bone data A, bone data A is bound with skin data A, and the first target data corresponding to the target bone is adjusted according to offset data A.
  • The first adjustment data of the target bone is obtained and applied to the cloak; finally, rendering with the first adjustment data and the skin binding data produces a composite image in which the cloak fits commander A well.
  • The cloak hangs on the shoulders without blocking the armor, making military commander A look mighty and domineering, as shown in FIG. 14, a schematic diagram of commander A after the cloak adjustment.
  • For military commander B: the first target data corresponding to the target bone is copied from bone data B, bone data B is bound with skin data B, and the first target data corresponding to the target bone is adjusted according to offset data B.
  • The resulting first adjustment data is applied to the cloak; finally, rendering with the first adjustment data and the skin binding data produces a composite image in which the cloak fits military commander B well.
  • The cloak hangs on the shoulders, and because the cloak's size is adjusted to commander B's height, the commander looks well coordinated wearing it, as shown in FIG. 15, a schematic diagram of commander B after the cloak adjustment.
  • For military commander C: the first target data corresponding to the target bone is copied from bone data C, bone data C is bound with skin data C, and the first target data corresponding to the target bone is adjusted according to offset data C to obtain the first adjustment data, which is applied to the cloak; finally, rendering with the first adjustment data and the skin binding data produces a composite image in which the cloak fits commander C well, as shown in FIG. 16.
  • In this way, the same avatar can be worn by different military commanders without positional deviation; it fits each commander well, and lets the player feel mighty, domineering, and invincible on the battlefield after wearing it.
  • FIG. 17 is a schematic diagram of an embodiment of an image synthesizing apparatus according to an embodiment of the present invention, including:
  • An obtaining module 1701 configured to acquire first data of a skeleton model of the simulation object and bone data of an additive to be synthesized to the simulation object;
  • a determining module 1702 configured to determine, according to the first data and the bone data of the additive, a target bone corresponding to the additive on the skeleton model;
  • a copying module 1703 configured to acquire first target data corresponding to the target bone from the first data
  • the adjusting module 1704 is configured to adjust the first target data corresponding to the target bone according to the offset data of the target bone that is configured in advance, to obtain the first adjustment data;
  • the rendering module 1705 is configured to perform rendering according to the first adjustment data and the first data to obtain a simulation object synthesized with an additive.
  • the apparatus further includes:
  • a recording module 1706 configured to acquire, after the simulation object animation is updated, the second data after the skeleton model animation of the simulated object is updated;
  • the copying module 1703 is further configured to obtain second target data corresponding to the target bone from the second data;
  • the adjustment module 1704 is further configured to: adjust, according to the offset data of the target bone, the second target data corresponding to the target bone to obtain second adjustment data;
  • the rendering module 1705 is further configured to perform rendering according to the second adjustment data and the second data, to obtain a simulated object with an additive synthesized after the animation.
  • a determining module 1702 specifically configured to establish a correspondence between the bone data of the additive and the first data, so as to bind the additive to the skeleton model; and according to the correspondence between the bone data of the additive and the first data And determining a target bone corresponding to the additive.
  • the rendering module 1705 includes:
  • the obtaining unit 17051 is configured to acquire skin data of the simulated object and skin data of the additive;
  • the rendering unit 17052 is configured to bind the skin data of the simulation object and the first data, obtain the skin binding data of the simulation object, and render the skin binding data to obtain a simulation object; and the first adjustment data And the skin data of the additive is rendered to obtain an additive;
  • the synthesizing unit 17053 is configured to synthesize the simulation object and the additive to obtain a simulation object in which the additive is synthesized.
  • the skin data of the simulated object includes a plurality of vertices
  • the skin of the additive includes a plurality of vertices
  • the rendering module 1705 further includes:
  • the calculating unit 17054 is configured to calculate first world coordinates of each vertex according to the first adjustment data and the first data;
  • the rendering unit 17052 is configured to perform rendering according to the first world coordinates of each vertex to obtain a simulation object synthesized with an additive.
  • the calculating unit 17054 is further configured to calculate second world coordinates of each vertex according to the second adjustment data and the second data;
  • the rendering unit 17052 is further configured to perform rendering according to the second world coordinates of each vertex, and obtain a simulated object with an additive synthesized after the animation is updated.
  • The embodiment of the present invention further provides another image synthesizing device, shown in FIG. 23. For convenience of description, only the parts related to the embodiment are shown; for specific technical details that are not disclosed, refer to the method part of the embodiments of the present invention.
  • the image synthesizing device can be integrated into the terminal or can be a separate device connected to the terminal through a wired communication interface or a wireless communication interface.
  • the terminal may be any terminal device including a mobile phone, a tablet computer, a personal digital assistant (English full name: Personal Digital Assistant, English abbreviation: PDA), a sales terminal (English full name: Point of Sales, English abbreviation: POS), a car computer, and the like. Take the terminal as a mobile phone as an example:
  • FIG. 23 is a block diagram showing a partial structure of a mobile phone related to a terminal provided by an embodiment of the present invention.
  • The mobile phone includes: a radio frequency (RF) circuit 2310, a memory 2320, an input unit 2330, a display unit 2340, a sensor 2350, an audio circuit 2360, a wireless fidelity (WiFi) module 2370, a processor 2380, and a power supply 2390, among other components.
  • The structure of the handset shown in FIG. 23 does not constitute a limitation on the handset, which may include more or fewer components than those illustrated, combine some components, or arrange the components differently.
  • The RF circuit 2310 can be used for receiving and transmitting signals during the transmission or reception of information or during a call. Specifically, downlink information received from a base station is handed to the processor 2380 for processing, and uplink data is sent to the base station. Generally, the RF circuit 2310 includes, but is not limited to, an antenna, at least one amplifier, a transceiver, a coupler, a low noise amplifier (LNA), a duplexer, and the like. In addition, the RF circuit 2310 can also communicate with the network and other devices via wireless communication. The wireless communication may use any communication standard or protocol, including but not limited to Global System for Mobile communication (GSM), General Packet Radio Service (GPRS), Code Division Multiple Access (CDMA), Wideband Code Division Multiple Access (WCDMA), Long Term Evolution (LTE), e-mail, Short Messaging Service (SMS), and the like.
  • The memory 2320 can be used to store software programs and modules; the processor 2380 performs various functional applications and data processing of the mobile phone by running the software programs and modules stored in the memory 2320.
  • the memory 2320 may mainly include a storage program area and a storage data area, wherein the storage program area may store an operating system, an application required for at least one function (such as a sound playing function, an image playing function, etc.), and the like; the storage data area may be stored according to Data created by the use of the mobile phone (such as audio data, phone book, etc.).
  • The memory 2320 may include a high-speed random access memory, and may also include a non-volatile memory such as at least one magnetic disk storage device, a flash memory device, or another non-volatile solid-state storage device.
  • the input unit 2330 can be configured to receive input numeric or character information and to generate key signal inputs related to user settings and function controls of the handset.
  • the input unit 2330 may include a touch panel 2331 and other input devices 2332.
  • The touch panel 2331, also referred to as a touch screen, can collect the user's touch operations on or near it (such as operations performed on or near the touch panel 2331 with a finger, a stylus, or the like) and drive the corresponding connecting device according to a preset program.
  • the touch panel 2331 may include two parts: a touch detection device and a touch controller.
  • The touch detection device detects the user's touch orientation and the signal brought by the touch operation, and transmits the signal to the touch controller; the touch controller receives the touch information from the touch detection device, converts it into contact coordinates, sends them to the processor 2380, and can receive and execute commands sent by the processor 2380.
  • the touch panel 2331 can be implemented in various types such as resistive, capacitive, infrared, and surface acoustic waves.
  • the input unit 2330 may also include other input devices 2332.
  • other input devices 2332 may include, but are not limited to, one or more of a physical keyboard, function keys (such as volume control buttons, switch buttons, etc.), trackballs, mice, joysticks, and the like.
  • the display unit 2340 can be used to display information input by the user or information provided to the user as well as various menus of the mobile phone.
  • the display unit 2340 can include a display panel 2341.
  • the display panel 2341 can be configured in the form of a liquid crystal display (LCD), an organic light-emitting diode (OLED), or the like.
  • Further, the touch panel 2331 may cover the display panel 2341.
  • When the touch panel 2331 detects a touch operation on or near it, it transmits the operation to the processor 2380 to determine the type of the touch event, and the processor 2380 then provides a corresponding visual output on the display panel 2341 according to the type of the touch event.
  • Although in FIG. 23 the touch panel 2331 and the display panel 2341 are implemented as two separate components realizing the input and output functions of the mobile phone, in some embodiments the touch panel 2331 and the display panel 2341 can be integrated to implement the input and output functions.
  • the handset can also include at least one type of sensor 2350, such as a light sensor, motion sensor, and other sensors.
  • Specifically, the light sensor may include an ambient light sensor and a proximity sensor; the ambient light sensor may adjust the brightness of the display panel 2341 according to the brightness of the ambient light, and the proximity sensor may turn off the display panel 2341 and/or the backlight when the mobile phone is moved to the ear.
  • As one kind of motion sensor, the accelerometer can detect the magnitude of acceleration in all directions (usually three axes) and, when stationary, the magnitude and direction of gravity; it can be used for applications that recognize the phone's posture (such as landscape/portrait switching, related interactive applications, and magnetometer attitude calibration) and for vibration-recognition functions (such as a pedometer or tap detection). Other sensors that may be configured on the mobile phone, such as gyroscopes, barometers, hygrometers, thermometers, and infrared sensors, are not described here.
  • An audio circuit 2360, a speaker 2361, and a microphone 2362 can provide an audio interface between the user and the handset.
  • On one hand, the audio circuit 2360 can transmit the electrical signal converted from received audio data to the speaker 2361, which converts it into a sound signal for output; on the other hand, the microphone 2362 converts a collected sound signal into an electrical signal, which the audio circuit 2360 receives and converts into audio data. After the audio data is processed by the processor 2380, it is sent via the RF circuit 2310 to, for example, another mobile phone, or output to the memory 2320 for further processing.
  • WiFi is a short-range wireless transmission technology.
  • the mobile phone can help users to send and receive emails, browse web pages and access streaming media through the WiFi module 2370. It provides users with wireless broadband Internet access.
  • Although FIG. 23 shows the WiFi module 2370, it can be understood that it is not an essential part of the mobile phone and may be omitted as needed without changing the essence of the invention.
  • The processor 2380 is the control center of the handset: it connects the various parts of the entire handset using various interfaces and lines, and performs the phone's various functions and processes data by running or executing the software programs and/or modules stored in the memory 2320 and invoking the data stored in the memory 2320, thereby monitoring the phone as a whole.
  • the processor 2380 may include one or more processing units; preferably, the processor 2380 may integrate an application processor and a modem processor, where the application processor mainly processes an operating system, a user interface, an application, and the like.
  • the modem processor primarily handles wireless communications. It will be appreciated that the above described modem processor may also not be integrated into the processor 2380.
  • the mobile phone also includes a power supply 2390 (such as a battery) that supplies power to various components.
  • The power supply can be logically connected to the processor 2380 through a power management system, so that functions such as charging, discharging, and power management are managed through the power management system.
  • the mobile phone may further include a camera, a Bluetooth module, and the like, and details are not described herein again.
  • In the embodiment of the present invention, the processor 2380 included in the terminal further has the following functions: acquiring first data of a skeleton model of a simulated object and bone data of an additive to be synthesized to the simulated object; determining, according to the first data and the bone data of the additive, a target bone corresponding to the additive on the skeleton model; copying the first target data of the target bone from the first data; adjusting the first target data of the target bone according to the pre-configured offset data of the target bone to obtain first adjustment data; and rendering according to the first adjustment data and the first data to obtain a simulated object with the additive synthesized onto it.
  • the processor 2380 may also be configured to perform the steps in the foregoing FIG. 7 or FIG. 11 , and details are not described herein again.
  • simulations of objects or people are involved in virtual scenes. For example, 3D modeling of objects or characters in the field of computer graphics or computer animation.
  • a person's body may need to attach a pendant, such as a watch, a streamer, a helmet, or even a knife, a gun, a sword, and the like.
  • In the traditional method, the human body and the above-mentioned pendants are designed as a whole, with the pendant treated as a part of the human body; the disadvantage is poor adaptability.
  • For example, if N character objects need to be equipped with a helmet, each character must be modeled separately, with the human body and the helmet integrated in each model.
  • To this end, the present invention provides a matching implementation method that makes the human body and the pendant (such as the helmet) independent of each other while flexibly attaching the pendant to the character.
  • 3dsMax: computer software used to create 3D models, animations, special effects, etc.;
  • Skinning: in 3dsMax, attaching the model's faces and vertices to bones;
  • Bone: a virtual object created in 3dsMax that controls the animation of the model.
  • The matching implementation method and related device can be applied in the fields of virtual reality, computer graphics, computer animation, and the like.
  • it can be applied to the simulation of astronauts in the field of virtual reality.
  • it can be applied to the field of computer animation, especially character dressing in game scenes.
  • The above matching implementation device may be applied in the form of software to a terminal (such as a desktop computer, a mobile terminal, an iPad, or a tablet), or in the form of hardware (for example, as a controller/processor of the terminal) as a component of the above device.
  • the matching implementation device may be an application, such as a mobile phone APP, a terminal application, or the like, or may be a component of an application or an operating system.
  • Figure 1a shows a general computer system architecture of the above terminal.
  • the above computer system may include a bus, a processor 1, a memory 2, a communication interface 3, an input device 4, and an output device 5.
  • the processor 1, the memory 2, the communication interface 3, the input device 4, and the output device 5 are connected to each other through the bus, where:
  • the bus can include a path for transferring information between the various components of the computer system.
  • the processor 1 may be a general-purpose processor, such as a general-purpose central processing unit (CPU), a network processor (NP), or a microprocessor, or an application-specific integrated circuit (ASIC), or one or more integrated circuits for controlling the execution of the program of the present invention. It may also be a digital signal processor (DSP), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, or discrete hardware components.
  • the processor 1 may include a main processor (CPU) and may also include a graphics processing unit (GPU) on the graphics card.
  • the memory 2 stores a program for executing the technical solution of the present invention, and can also store an operating system and other key services.
  • the program can include program code, the program code including computer operating instructions.
  • the memory 2 may include a read-only memory (ROM) or other types of static storage devices that can store static information and instructions, a random access memory (RAM) or other types of dynamic storage devices that can store information and instructions, disk storage, flash memory, and so on.
  • Input device 4 may include means for receiving data and information input by a user, such as a keyboard, mouse, camera, scanner, light pen, voice input device, touch screen, pedometer or gravity sensor, and the like.
  • the output device 5 may include means for outputting information to the user, such as a display screen, a printer, speakers, etc.
  • Communication interface 3 may include devices that use any type of transceiver to communicate with other devices or communication networks, such as Ethernet, Radio Access Network (RAN), Wireless Local Area Network (WLAN), and the like.
  • the processor 1 executes the program stored in the memory 2 and calls other devices, and can thereby implement the various steps of the matching implementation method provided by the embodiments of the present invention.
  • the steps of the matching implementation method provided by the embodiment of the present invention may be implemented by the CPU by executing the program stored in the memory 2 and calling other devices.
  • alternatively, the steps in the matching implementation method provided by the embodiments of the present invention may be implemented by the CPU together with the GPU, invoking other devices.
  • FIG. 2a shows an exemplary flow of the above described matching implementation method.
  • the method shown in FIG. 2a is applied in the above-mentioned fields or application scenarios and is completed by the CPU (or CPU and GPU) in the terminal shown in FIG. 1a interacting with other components.
  • the above-mentioned hook correction information can be acquired by the processor 1 (CPU) of the terminal shown in FIG. 1.
  • the virtual pendant is independent of the virtual object.
  • the hook correction information is used to adapt the virtual pendant to the virtual object.
  • the virtual pendant can be an Avatar pendant (e.g., for a paper doll) selected by the player, or an Avatar pendant obtained by the player during the game.
  • the virtual object can be a game character.
  • the virtual object can also be an object or a human body part, for example, the virtual object can be a human face under the simulation scene.
  • the virtual pendant can be a pure hanging-point Avatar pendant or an Avatar pendant that participates in the skeleton skinning calculation.
  • the so-called hanging point type means that the virtual pendant moves and rotates simply following the character.
  • the virtual pendant is only displaced (or offset) and rotated relative to the virtual object, and does not deform, i.e., its shape is constant.
  • the Avatar pendants participating in the skeletal skinning calculation take on different shapes depending on the pose of the character's skeleton. For example, the form of a cloak involved in the skinning calculation changes as the character's standing posture changes, presenting different shapes.
  • when making a pure hanging-point pendant, the pendant can be modeled according to the requirements of a pure hanging point; when making a pendant that participates in bone skinning calculations, the pendant can be modeled according to the bone skinning requirements.
  • third party middleware such as Havok Cloth may also be attached as a separate pendant, and third party middleware is generally used for auxiliary calculation.
  • a pendant that integrates third-party middleware (a plug-in embedding the middleware) can also be made as a stand-alone pendant.
  • the portion 202 can be performed by the processor 1 (CPU or GPU) of the terminal shown in FIG. 1.
  • the shape of the virtual object may include displacement (or offset), rotation, scaling, and other morphological changes relative to the reference (eg, the character changes from standing to squatting).
  • 203 Adjust a form of the virtual pendant according to the form of the virtual object and the hook modification information.
  • the 203 portion can be executed by the processor 1 (CPU in the terminal) shown in FIG. 1.
  • the 203 portion is executed by the CPU in the processor 1 in cooperation with the GPU.
  • the form of the virtual pendant may include scaling, displacement (or offset) relative to a reference (eg, a skeleton of the virtual object), rotation, and the like.
  • the hook correction information is used to adapt the virtual pendant to the virtual object.
  • the hook correction information can support not only the adaptation of pure hanging-point Avatar pendants to virtual objects, but also the adaptation of Avatar pendants that need to participate in the character skeleton skinning calculation.
  • the hook correction information may include offset information, rotation information, and zoom information.
  • take as an example a sword whose initial orientation has the tip pointing upward.
  • the hook correction information of the sword with respect to character 1 can determine at which position on character 1 it is hung (determined by the offset information), and whether the tip of the sword points upward, downward, or obliquely (determined by the rotation information).
  • the size of the sword can also be reduced to fit a thinner character.
  • the size of the sword can also be enlarged to fit a larger character (determined by the scaling information).
  • for example, the hook correction information of the sword for character 1 may determine that it is attached at a certain position on the left shoulder of character 1 with the tip of the sword facing downward, while its hook correction information for character 2 may determine that it is attached at a position on the right shoulder of character 2 with the tip of the sword oriented differently.
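  • a minimal sketch of what such per-character hook correction records could look like is given below; the bone names, field names, and values are illustrative assumptions, not the disclosure's data format:

```python
import numpy as np

# Hypothetical per-character hook correction records for the sword example.
sword_hook_correction = {
    "character_1": {
        "bone": "left_shoulder",                 # where on the body it hangs
        "offset": np.array([0.0, -0.05, 0.12]),  # offset information
        "rotation_deg": (180.0, 0.0, 0.0),       # rotation information: tip downward
        "scale": 0.8,                            # scaling information: thinner character
    },
    "character_2": {
        "bone": "right_shoulder",
        "offset": np.array([0.0, -0.05, -0.12]),
        "rotation_deg": (0.0, 0.0, 0.0),         # tip kept in its initial orientation
        "scale": 1.2,                            # scaling information: larger character
    },
}
```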
  • the embodiment of the present invention adjusts the form of the virtual pendant in combination with the form of the virtual object and the hook correction information.
  • the display in portion 204 can be controlled by the processor 1 (e.g., the GPU) of the terminal shown in Figure 1a.
  • the form of the virtual object may be displayed first, and then the adjusted form of the virtual pendant is displayed; the process is similar to a person putting on clothes, so that the visual effect of the virtual pendant attached to the virtual object can be presented.
  • the virtual pendant and the virtual object are independent of each other and can be separately designed and developed, and the hook correction information can be used to adapt the independent virtual pendant to the virtual object, so that the same virtual pendant can adapt to different virtual objects.
  • the flexibility and adaptability of embodiments of the present invention are greatly improved compared to current implementations in which the pendant and the human body or object are designed as a single unit.
  • the implementation of skeletal animation requires at least a virtual skeleton and a mesh model (Mesh) bound to the virtual skeleton.
  • the skeleton is included in the virtual skeleton.
  • the virtual skeleton is equivalent to the skeleton in the human body
  • the mesh model bound to the virtual skeleton is equivalent to the skin of the human body.
  • the mesh model determines what the human body or object looks like. It can be understood that the shape of the mesh model (skin) is affected by the virtual skeleton pose.
  • the posture can also be understood as an action; for example, the T-pose, in which the arms are held out flat to the sides and the legs are together, is one such action.
  • Figure 3a illustrates an exemplary flow of applying a matching implementation in a skeletal animation scenario, with the CPU (or CPU and GPU) in the terminal shown in Figure 1a interacting with other components.
  • the exemplary process includes:
  • Model data can be stored on any storage medium that can be accessed by processor 1, such as memory.
  • the model data of the virtual pendant includes at least a first mesh model and a first virtual skeleton.
  • the first virtual skeleton is a virtual skeleton corresponding to the virtual pendant
  • the first mesh model is a deformable mesh model corresponding to the virtual pendant, and the shape thereof is affected by the first virtual skeleton posture.
  • the model data can include the first mesh model and the first virtual skeleton.
  • the form of the virtual pendant may include the form of the first mesh model, or the shape of the virtual pendant may be characterized by the form of the first mesh model. Therefore, the shape of the first mesh model will be calculated later.
  • virtual pendants can be avatar pendants such as helmets, streamers, cloaks, and sword accessories.
  • Model data can be stored on any storage medium that can be accessed by processor 1, such as memory.
  • the model data of the virtual object may include a second virtual skeleton, a second mesh model, and hook modification information of the virtual pendant for the virtual object.
  • the second virtual skeleton is a virtual skeleton corresponding to the virtual object.
  • the first virtual skeleton is a part of the second virtual skeleton.
  • for example, the (second) virtual skeleton of the virtual object is the entire virtual skeleton of a person. It is assumed that the (first) virtual skeleton corresponding to a cloak includes the shoulder bones and the bones on the back, and these shoulder and back bones are part of the person's entire skeleton.
  • the second mesh model is a deformable mesh model corresponding to the virtual object, and its shape is affected by the posture of the second virtual skeleton. It should be noted that the shape of the virtual object may include the shape of the second mesh model, or the shape of the virtual object may be characterized by the shape of the second mesh model. Therefore, the shape of the second mesh model will be calculated later.
  • virtual objects can be different game characters, for example, a military commander wearing heavy armor, a character in cloth garments, an exotic warrior, and so on.
  • the model data of a virtual object can be saved in the 3dsMax file (the character production file) in which the character model is stored.
  • the hook modification information of the virtual pendant may differ according to the virtual object; the hook modification information of the virtual pendant for a virtual object is created in the model of that virtual object and belongs to the model data of the virtual object.
  • the hook correction information can be added to the 3dsmax file of a character in the form of additional nodes.
  • the above additional nodes can be created by different max scripts according to the pendant classification (such as cloak, waist weapon, etc.), and the art designer can create all or part of these nodes as required (if some nodes are not created, default values are used). These nodes determine that the virtual pendant can be hooked to different locations according to different roles, with different rotation angles and scaling.
  • the posture of the second virtual skeleton may be represented by the state of the bones included in the second virtual skeleton.
  • Section 303 can include:
  • the shape of the second mesh model at the current time is determined according to the state of the bones included in the second virtual skeleton.
  • part 303 is the refinement of the above 202 part.
  • Section 303 can be performed by the processor 1 (CPU in the terminal) of the terminal shown in Fig. 1a.
  • alternatively, the 303 portion is executed by the CPU in the processor 1 in cooperation with the GPU.
  • the posture of the first virtual skeleton is also determined.
  • the shape of the first mesh model at the current time may be determined according to the hook correction information and the posture of the first virtual skeleton at the current time.
  • the posture of the first virtual skeleton may be represented by the state of the bones included in the first virtual skeleton.
  • the portion 304 may include: determining the morphology of the first mesh model at the current time according to the hook correction information and the state of the bone included in the first virtual skeleton at the current time.
  • the position, orientation (rotation) or even shape change of the pendant can be calculated according to the attachment correction information and the state of the bone contained in the first virtual skeleton at the current time;
  • for a pure hanging-point pendant, the offset and rotation of the pendant relative to the bone are calculated based on the hook correction information.
  • part 304 is the refinement of the above 203 part.
  • Part 304 can be performed by the processor 1 (CPU in the terminal) of the terminal shown in Figure 1a.
  • alternatively, the 304 portion is executed by the CPU in the processor 1 in cooperation with the GPU.
  • Section 305 is a refinement of Section 204 above.
  • the portion 305 can be performed by the processor 1 (the GPU in the terminal) shown in FIG. 1a.
  • Skeletal animations typically include a virtual skeleton (skeleton hierarchy), a mesh model (Mesh) bound to the virtual skeleton, and a series of keyframes.
  • a key frame corresponds to a new pose of the skeleton, and the skeleton pose between two key frames can be obtained by interpolation, because simply playing back the key frames may produce unsmooth motion.
  • the solution is inter-frame smooth interpolation.
  • in skeletal animation, the Mesh is not placed directly into the world coordinate system; it serves only as a skin attached to the bones. It is the skeleton that actually determines the position and orientation of the virtual object or virtual pendant in the world coordinate system.
  • the shape of the mesh model is characterized by the position (coordinates) of the mesh vertices.
  • Figures 4a and 4b exemplarily show a mesh model and a skeleton model of the arm. Assume that the position of a vertex on the mesh model is V, and the position of the vertex is affected by the position and orientation of the forearm bone.
  • Figure 4b shows that after the forearm bone has rotated an angle, the position of the vertex changes from V to W.
  • the virtual skeleton consists of a series of discrete joints that are linked by a parent-child relationship.
  • taking a virtual object as an example: setting the position and orientation of the virtual object actually sets the position and orientation of its root bone; the position and orientation of each bone is then calculated according to the transformation relationship between parent and child bones in the bone hierarchy, and the coordinates of each vertex in the world coordinate system are calculated according to the binding of bones to the vertices in the Mesh, so that the vertices can be rendered and the displayed virtual object finally obtained.
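  • a minimal Python sketch of this parent-to-child propagation is given below, assuming bones are ordered parent-before-child and a row-vector convention (world = local @ parent_world); the function name and conventions are illustrative assumptions, not the disclosure's implementation:

```python
import numpy as np

def world_matrices(parents, local_matrices):
    """Compose each bone's world (global) matrix from its parent's.

    parents        : parents[i] is the index of bone i's parent (-1 for the
                     root); bones are assumed ordered parent-before-child.
    local_matrices : per-bone 4x4 transforms in the parent bone's space.
    """
    world = [None] * len(local_matrices)
    for i, local in enumerate(local_matrices):
        local = np.asarray(local, dtype=float)
        if parents[i] == -1:                 # the root bone's parent space is world space
            world[i] = local
        else:                                # a child inherits its parent's world matrix
            world[i] = local @ world[parents[i]]
    return world
```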
  • each joint is defined in its parent space, and each joint itself defines a subspace.
  • the clavicle is a joint, which is the origin of the upper arm.
  • the elbow joint is the origin of the forearm.
  • the wrist joint is the origin of the finger bone. The joint determines both the position of the bone space and the center of rotation and scaling of the bone space.
  • a bone can be expressed as a 4×4 matrix: the translation component contained in the 4×4 matrix determines the position of the joint to which the bone is connected (the origin of the bone space), and the rotation and scaling components determine the rotation and scaling of the bone space.
  • for example, the origin of the forearm space (the elbow joint) is located somewhere in the coordinate space of the upper arm bone.
  • in other words, somewhere in the upper arm bone's coordinate space (namely, at the position of the elbow joint) there is a subspace, which is the space of the forearm bone.
  • when the forearm rotates, the forearm coordinate space is actually rotating, so that the subspace contained in it (the finger bone coordinate space) also rotates around the elbow joint.
  • the subspace will follow the parent space movement, just like a person follows the earth.
  • the bones are not the actual bones, so when the forearm bones rotate, the only change is the orientation of the coordinate space.
  • Objects can be translated in the coordinate system, as well as their own rotation and scaling.
  • a child bone can likewise be transformed within the coordinate system of its parent bone, changing its position and orientation in the parent skeleton's coordinate system.
  • TransformMatrix is generally used to describe the transformation of the child skeleton in its parent bone coordinate system.
  • this transformation matrix determines the position and orientation of the bone in the parent bone's coordinate system.
  • the Transform Matrix acts to transform vertices from a bone's space to its parent's space.
  • the transformation matrix of a bone in world space is also called the global matrix.
  • the transformation matrix of a bone in world space can transform coordinates from bone space to world space; the inverse of that matrix (the offset matrix) can transform coordinates in world space to the bone space of a particular bone.
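  • a sketch of constructing these offset matrices from the bind-pose global matrices, under the assumptions of the previous sketch (an illustration, not the disclosure's code):

```python
import numpy as np

def offset_matrices(bind_pose_world_matrices):
    """Offset matrix of each bone: the inverse of its bind-pose global
    matrix, mapping world-space coordinates into that bone's space."""
    return [np.linalg.inv(m) for m in bind_pose_world_matrices]
```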
  • the mesh model space coincides with the world space.
  • the world space is actually used as the Mesh space.
  • when adding bones, the bones are likewise placed into the world space, and the relative positions of the bones are adjusted to match the mesh, yielding the initial posture of the skeleton (for example, if the mesh model is made in an upright T-pose with both arms held flat to the sides, the bones should also fit this posture).
  • the above offset matrix can transform the coordinates of the mesh model vertex in the world space to the skeleton space of a certain bone.
  • for an animation keyframe, a transformation matrix of each bone relative to its parent bone's coordinate system is indicated.
  • an animation keyframe can record the rotation, translation, and scaling of each joint relative to the bound pose.
  • the game application scenario is taken as an example to introduce a matching implementation method.
  • FIG. 5a or FIG. 5b shows still another exemplary flow of the matching implementation method.
  • the method shown in FIG. 5a or 5b can be applied in a skeletal animation scenario and is completed by the CPU and GPU in the terminal shown in FIG. 1 interacting with each other.
  • the CPU loads the model data of the avatar pendant.
  • the model data of the avatar pendant may include mesh data (including vertex data) of the first mesh model and bone information of the first virtual skeleton.
  • Each mesh in the mesh model is typically a triangle or other polygon.
  • the mesh data is composed of vertex data (a vertex table) and index data.
  • each vertex in the vertex table carries information such as position, normal vector, material, and texture, and also indicates which bones affect the vertex and with what weights.
  • the bone information includes the number of all bones in the first virtual skeleton and the specific information of each bone.
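  • illustrative Python containers for this vertex table and index data might look as follows; the field names are assumptions, not the disclosure's on-disk format:

```python
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class Vertex:
    position: Tuple[float, float, float]
    normal: Tuple[float, float, float]
    uv: Tuple[float, float]                                   # texture coordinates
    bone_indices: List[int] = field(default_factory=list)    # which bones affect it
    bone_weights: List[float] = field(default_factory=list)  # with what weights (sum to 1)

@dataclass
class MeshData:
    vertices: List[Vertex]   # the vertex table
    indices: List[int]       # index data: consecutive triples form triangles
```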
  • Section 501 is similar to Section 301 above; for details, see the introduction in Section 301, which is not repeated here.
  • the CPU acquires model data of a role (ie, a virtual object).
  • the model data of the character may include bone information of the second virtual skeleton, vertex data of the second mesh model, and hook correction information of the virtual pendant for the virtual object.
  • the hook correction information can exist as an extra node (a hook point).
  • a role can have multiple avatar pendants attached; each avatar pendant can correspond to a set of mount points, and any set of mount points includes at least one mount point.
  • a mount point corresponds to (affects) a portion of the vertices in the grid model of the avatar pendant.
  • Each mount point declares which avatar pendant it corresponds to. For example, the prefix of the name of the mount point can indicate what avatar pendant the hook point is used to mount.
  • a hook point can be bound to one or more bones. There is a parent-child relationship between the hook point and the bone to which it is bound, and the hook point is a child node, similar to the parent-child relationship between the bones.
  • the hook point may include an offset value, a rotation angle, and a scaling.
  • the above offset values, rotation angles, and scaling ratios are typically saved in a 4x4 matrix or a 4x3/3x4 matrix.
  • the above offset value is the offset value of the hook point relative to one or some of the bones.
  • the offset value may also be regarded as an offset value of the partial mesh vertex relative to the bone in the first virtual skeleton.
  • for example, hook point A is bound to mesh vertices 0-100, and hook point A is bound to bone 1.
  • the above offset value can then be regarded as the offset value of mesh vertices 0-100 with respect to bone 1.
  • the above rotation angle is the rotation angle of the hook point relative to one or some of the bones in the first virtual skeleton; it can also be regarded as the rotation angle of a part of the mesh vertices of the first mesh model relative to one or some of the bones in the first virtual skeleton.
  • the above scaling is the scaling of the hook point relative to the original size of the component; it can also be regarded as the scaling of the mesh vertices of the first mesh model relative to the original size of the component.
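  • a minimal sketch of composing a hook point's scaling, rotation, and offset into one 4×4 matrix is given below, using a row-vector convention and a single Z-axis rotation for brevity; a real hook point would store a full orientation, and the function name is an assumption:

```python
import numpy as np

def hook_matrix(offset, rotation_z_deg, scale):
    """Compose a hook point's scaling, rotation and offset into one 4x4
    matrix (row-vector convention: v' = v @ M)."""
    c = np.cos(np.radians(rotation_z_deg))
    s = np.sin(np.radians(rotation_z_deg))
    S = np.diag([scale, scale, scale, 1.0])  # scaling ratio
    R = np.array([[ c, s, 0, 0],             # rotation angle (about Z)
                  [-s, c, 0, 0],
                  [ 0, 0, 1, 0],
                  [ 0, 0, 0, 1]], dtype=float)
    T = np.eye(4)
    T[3, :3] = offset                        # offset value (translation in the last row)
    return S @ R @ T                         # scale, then rotate, then translate
```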
  • the hook point may further include constraint information.
  • the constraint information is used as an additional parameter for component mounting, and different information can be saved according to the actual application.
  • the constraint information may include amplitude constraint information, which may be used to save an acceptable offset range of the hook point, so that the player can slightly adjust the offset position of the component (up, down, left, and right) when the component is worn.
  • the constraint information can include elastic constraint information, which helps calculate the position of a component that does not follow the body movement in a rigid manner.
  • the CPU calculates a transformation matrix set corresponding to the second virtual skeleton in the current frame.
  • a transformation matrix of each bone relative to the parent skeleton coordinate system in the keyframe can be indicated.
  • the animation keyframe may record the rotation, translation, and scaling of each joint relative to the bind pose (the aforementioned T-pose).
  • the transformation matrix corresponding to each bone in the second virtual skeleton can be calculated, and the set of the transformation matrix is the above-mentioned transformation matrix set.
  • matrix[i] is used to represent the transformation matrix, in world space or the original model space, of the i-th bone of the second virtual skeleton in the current frame.
  • the state of each bone contained in the second virtual skeleton can be characterized by a set of transformation matrices (transformation matrix sets) of all bones.
  • the transformation matrix set corresponding to the first virtual skeleton in the current frame may be extracted from the transformation matrix set corresponding to the second virtual skeleton in the current frame.
  • the hook point has a parent-child relationship with the bone it is bound to.
  • the hook point inherits the transformation matrix of its parent node in world space, and then multiplies it by the matrix containing the offset value, rotation angle, and scaling of the hook point.
  • a transformation matrix (or a modified transformation matrix) corresponding to the hook point can be obtained.
  • the set of modified transformation matrices of the set of mount points is the modified transformation matrix set corresponding to the first virtual skeleton in the current frame.
  • matrix[j] is used to represent the transform matrix in world space corresponding to the jth bone of the first virtual skeleton of the current frame.
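  • a minimal sketch of this computation, reusing the row-vector convention of the earlier sketches, might be as follows (the names and the exact multiplication order are assumptions):

```python
def modified_matrices(hook_points, bone_world):
    """Compute the modified transform matrix[j] for each hook point.

    hook_points : list of (parent bone index, 4x4 hook-local matrix) pairs,
                  the local matrix holding the hook point's offset value,
                  rotation angle and scaling (see hook_matrix above).
    bone_world  : world-space matrices matrix[i] of the character's bones
                  in the current frame (numpy arrays).

    Like a child bone, the hook point inherits its parent bone's world
    matrix and then applies its own local matrix.
    """
    return [local @ bone_world[parent] for parent, local in hook_points]
```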
  • the CPU transmits the rendering related data of the character and the avatar pendant to the GPU.
  • the above rendering related data may include vertex data of the character and the avatar pendant, matrix[i], matrix[j]_t, vertex texture data of the first mesh model, and vertex texture data of the second mesh model.
  • the above data can be transmitted in batches.
  • the vertex data of the character, matrix[i], the vertex texture data of the second mesh model, the vertex data of the avatar pendant, matrix[j]_t, and the vertex texture data of the first mesh model may be transmitted first.
  • the solution is to perform inter-frame smooth interpolation between two adjacent key frames.
  • interpolation: given a time t, find which two keyframes t lies between, say p and q, and then calculate the state of the bones at time t according to the joint states (or bone states) recorded in p and q and the time t.
  • there are many interpolation methods, such as linear interpolation, Hermite interpolation, and spherical interpolation.
  • Hermite interpolation can be selected for translation and quaternion spherical interpolation for rotation.
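  • a sketch of such interpolation is given below: quaternion spherical interpolation (slerp) is implemented as in the text, while plain linear interpolation stands in for the Hermite spline, which would additionally require tangents at the keys; the dict keys are assumptions:

```python
import numpy as np

def slerp(q0, q1, t):
    """Spherical linear interpolation between two unit quaternions."""
    q0, q1 = np.asarray(q0, dtype=float), np.asarray(q1, dtype=float)
    dot = float(np.dot(q0, q1))
    if dot < 0.0:                    # take the shorter arc
        q1, dot = -q1, -dot
    if dot > 0.9995:                 # nearly parallel: fall back to lerp
        q = q0 + t * (q1 - q0)
        return q / np.linalg.norm(q)
    theta = np.arccos(np.clip(dot, -1.0, 1.0))
    return (np.sin((1.0 - t) * theta) * q0 + np.sin(t * theta) * q1) / np.sin(theta)

def interpolate_joint(p, q, u):
    """State of one joint at fraction u (0..1) between keyframes p and q;
    p and q are dicts with 'translation', 'rotation' (unit quaternion)
    and 'scale' entries, all numpy arrays."""
    translation = (1.0 - u) * p["translation"] + u * q["translation"]
    rotation = slerp(p["rotation"], q["rotation"], u)
    scale = (1.0 - u) * p["scale"] + u * q["scale"]
    return translation, rotation, scale
```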
  • matrix[i] represents the transformation matrix in world space corresponding to the i-th skeleton of the current frame
  • the transformation matrix[i] includes translation, rotation, and scaling.
  • matrix[i]_t can be used to represent the transformation matrix in world space corresponding to the i-th bone at the current time t.
  • Section 507 is similar to Section 506 and will not be described here.
  • Bones can be used to represent the number of bones that affect the vertex s. The new position of the vertex s under the independent action of each bone is calculated, and all of these new positions are then weighted and summed according to the weight of each bone for the vertex s. Note that the sum of all the weights should be 1.
  • the coordinates of the vertex s can be calculated using the classic skeletal animation calculation formula.
  • the formula is as follows:
  • $v'_t = \sum_{p=1}^{Bones} Weight(p) \cdot v_{bindpose} \cdot Offset_p \cdot Matrix[p]_t$
  • $v'_t$ represents the position of the vertex s at the current time t;
  • $v_{bindpose}$ represents the initial position of the vertex s;
  • $Weight(p)$ represents the weight of the vertex s for the pth bone;
  • $Matrix[p]_t$ represents the transformation matrix in world space corresponding to the pth bone at the current time;
  • $Offset_p$ represents the offset matrix of the vertex s relative to the space of the pth bone; the purpose of the right multiplication is to map the vertex's transformation information into the space of the pth bone.
  • 509 Calculate, according to the modified transformation matrix set, the current coordinate value of each mesh vertex in the first mesh model.
  • the coordinate values of the vertex R can be calculated using the classic skeletal animation calculation formula.
  • the formula is as follows:
  • $w'_t = \sum_{q=1}^{Bones} Weight(q) \cdot w_{bindpose} \cdot Offset_q \cdot Matrix[q]_t$
  • $w'_t$ represents the position of the vertex R at the current time t;
  • $w_{bindpose}$ represents the initial position of the vertex R;
  • $Weight(q)$ represents the weight of the vertex R for the qth bone;
  • $Matrix[q]_t$ represents the transformation matrix in world space corresponding to the qth bone at the current time;
  • $Offset_q$ represents the offset matrix of the vertex R relative to the space of the qth bone; the purpose of the right multiplication is to map the vertex's transformation information into the space of the qth bone.
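  • a minimal Python sketch of this classic linear-blend skinning formula, under the same row-vector convention as the earlier sketches, might be:

```python
import numpy as np

def skin_vertex(v_bindpose, weights, offsets, matrices):
    """Linear-blend skinning of one vertex, as in the two formulas above.

    v_bindpose : the vertex's initial (bind-pose) position, shape (3,)
    weights    : dict {bone index p: Weight(p)}, weights summing to 1
    offsets    : per-bone offset matrices (inverse bind-pose globals), 4x4
    matrices   : per-bone world matrices Matrix[p]_t at the current time, 4x4

    Right-multiplying by offsets[p] maps the vertex into bone p's space;
    right-multiplying by matrices[p] then brings it back to world space.
    """
    v = np.append(np.asarray(v_bindpose, dtype=float), 1.0)  # homogeneous row vector
    out = np.zeros(4)
    for p, w in weights.items():
        out += w * (v @ offsets[p] @ matrices[p])
    return out[:3]
```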
  • 511 Render each mesh vertex in the first mesh model according to coordinate values and texture data of each mesh vertex in the first mesh model at the current time.
  • Avatar pendants are generally divided into two categories according to their application methods: simply follow the character to move in a hanging point manner; follow the character bones for skinning calculation.
  • the first type of avatar pendant is usually used as a stand-alone object; it is connected to a specific bone of the character at runtime and performs rigid-body calculation in a parent-child relationship.
  • since the avatar pendant of the pure hanging-point method is a separate object, there is no conflict between the avatar pendant and the character's body shape (i.e., fat, short, tall, thin). But it can only be hooked at the same position; constrained by the original character image design, a visual effect that character A can accept may not be applicable to character B because of the extra space that was originally left blank.
  • for the second type, the general approach is to divide the role into multiple parts, and then, at runtime, replace parts of the basic mesh model merged into the original character according to the game configuration or the player's selection, combining them into a complete model that participates in the skeleton skinning calculation.
  • an avatar component that participates in the character bone skinning calculation cannot resolve the occlusion effect that the extra model data in the original model has on the component.
  • the same virtual pendant can be specified to be hung in different positions according to different roles.
  • the component can be matched in the manner most suitable for this character's image.
  • Figure 7a shows the existing skeletal skinning mode.
  • compared with this, the CPU needs to additionally calculate the modified transformation matrix set of the current frame for the avatar pendant and then pass it to the GPU for use in computation, but the pre-processing of the role model data (model merge/texture merge) is no longer needed.
  • the technical solution solves the problem of allowing the avatar pendant to be used, with an acceptable visual effect, on various roles with different body types and seriously different original images, while ensuring the simplicity and convenience of data creation.
  • it has good integration compatibility with third-party middleware such as Havok Cloth.
  • the player can use the functions realized by the program to mix and match the character images of various styles, which increases the fun of the game.
  • FIG. 8a shows a possible structural diagram of a matching implementation device, including:
  • the first obtaining unit 801a is configured to acquire hook modification information of the virtual pendant for the virtual object
  • the virtual pendant is independent of the virtual object; the hook modification information is used to adapt the virtual pendant to the virtual object.
  • the first determining unit 802a is configured to determine a form of the virtual object.
  • the second determining unit 803a is configured to determine a form of the virtual pendant according to the form of the virtual object and the hook modification information;
  • the attaching unit 804a is configured to attach the virtual pendant, in the adjusted form, to the virtual object.
  • the first obtaining unit 801a can be used to execute the portion 201 of the embodiment shown in Fig. 2a, the portion 301-302 of the embodiment shown in Fig. 3a, and the portion 501-502 of the embodiment shown in Figs. 5a and 5b.
  • the first determining unit 802a can be used to perform the portion 202 of the embodiment shown in Figure 2a, the portion 303 of the embodiment shown in Figure 3a, and the portions 503 and 505 of the embodiment shown in Figures 5a and 5b (portion 505 may also be executed by the unit 803a), as well as portions 506 and 508.
  • the second determining unit 803 can be used to perform the portion 203 of the embodiment shown in Fig. 2a, the portion 304 of the embodiment shown in Fig. 3a, the portion 504, the portion 507, and the portion 509 of the embodiment shown in Figs. 5a and 5b.
  • the hooking unit 804a can be used to perform the portion 204 of the embodiment shown in Fig. 2a, the portion 305 of the embodiment shown in Fig. 3a, and the portion 510-511 of the embodiment shown in Figs. 5a and 5b.
  • the apparatus may further include:
  • a second acquiring unit configured to acquire model data of the virtual pendant;
  • the model data of the virtual pendant includes at least a first mesh model and a first virtual skeleton;
  • the first virtual skeleton is a virtual skeleton corresponding to the virtual pendant,
  • the first mesh model is a deformable mesh model corresponding to the virtual pendant, and a form of the first mesh model is affected by the first virtual skeleton posture;
  • a third acquiring unit configured to acquire model data of the virtual object;
  • the model data of the virtual object includes at least a second virtual skeleton and a second mesh model; and
  • the second virtual skeleton is a virtual skeleton corresponding to the virtual object;
  • the second mesh model is a deformable mesh model corresponding to the virtual object, and a form of the second mesh model is affected by a posture of the second virtual skeleton;
  • the first virtual skeleton is a part of the second virtual skeleton.
  • the form of the virtual object includes a form of the second mesh model
  • the first determining unit is specifically configured to: determine a shape of the second mesh model at a current time according to a posture of the second virtual skeleton at a current time.
  • the form of the virtual pendant includes a form of the first mesh model
  • the second determining unit is specifically configured to:
  • the posture of the first virtual skeleton at the current time is determined by the posture of the second virtual skeleton at the current time.
  • the posture of the second virtual skeleton is represented by a state of a bone included in the second virtual skeleton
  • the first determining unit is specifically configured to:
  • the shape of the second mesh model at the current time is determined according to the state of the bones included in the second virtual skeleton at the current time.
  • the posture of the first virtual skeleton is represented by a state of a bone included in the first virtual skeleton
  • the second determining unit is specifically configured to:
  • the virtual pendant includes a hanging pendant or a pendant that participates in the calculation of the bone skin.
  • the morphology of the second mesh model is represented by the position of each mesh vertex of the second mesh model
  • the state of the bones contained by the second virtual skeleton is characterized by a set of transformation matrices of the second virtual skeleton.
  • the first determining unit includes:
  • a first calculation subunit configured to calculate a transformation matrix set corresponding to the second virtual skeleton in the current frame
  • a first interpolation sub-unit configured to obtain, according to the transform matrix set corresponding to each of the previous key frame and the current frame, the interpolation matrix set of the second virtual skeleton at the current time;
  • a second calculating subunit configured to calculate a coordinate value of each mesh vertex in the second mesh model at the current time according to the transformation matrix set of the second virtual skeleton at the current time.
  • the state of the bones included in the first virtual skeleton is represented by a transformation matrix set of the first virtual skeleton
  • the morphology of the first mesh model is characterized by the position of each mesh vertex of the first mesh model.
  • the hook modification information includes:
  • an offset value of a mesh vertex of the first mesh model relative to a bone in the first virtual skeleton, a rotation angle of a mesh vertex of the first mesh model relative to a bone in the first virtual skeleton, and/or the scaling of the first mesh model.
  • the second determining unit includes:
  • an extracting subunit configured to extract the transformation matrix set corresponding to the first virtual skeleton in the current frame from the transformation matrix set corresponding to the second virtual skeleton in the current frame;
  • a third calculating subunit configured to calculate, according to the hook correction information and the extracted transform matrix set, a modified transform matrix set corresponding to the first virtual skeleton in a current frame;
  • a second interpolation sub-unit configured to obtain, according to the modified transformation matrix sets corresponding to the first virtual skeleton in each of the previous key frame and the current frame, the modified transformation matrix set corresponding to the first virtual skeleton at the current time;
  • a fourth calculating subunit configured to calculate a coordinate value of each mesh vertex in the first mesh model at a current time according to the modified transformation matrix set.
  • the model data of the virtual pendant further includes texture data of each mesh vertex of the first mesh model;
  • the model data of the virtual object further includes texture data of each mesh vertex of the second mesh model;
  • the hooking unit includes:
  • a first rendering subunit configured to render, according to the coordinate values and texture data of each mesh vertex in the second mesh model at the current time, the mesh vertices in the second mesh model
  • a second rendering subunit configured to render each mesh vertex in the first mesh model according to coordinate values and texture data of each mesh vertex in the first mesh model at the current time.
  • An embodiment of the present invention further provides a terminal, including any of the foregoing matching implementation devices.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Graphics (AREA)
  • Architecture (AREA)
  • Computer Hardware Design (AREA)
  • General Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The present invention relates to an image synthesis method and device, used to obtain and display, in a coordinated manner, a simulated object synthesized with an additive. The method comprises the following steps: acquiring first data of a skeleton model of a simulated object and bone data of an additive to be synthesized with the simulated object (701); determining, according to the first data and the bone data of the additive, a target bone corresponding to the additive on the skeleton model (702); acquiring first target data corresponding to the target bone from the first data (703); adjusting, according to pre-configured offset data of the target bone, the first target data corresponding to the target bone to obtain first adjustment data (704); and rendering according to the first adjustment data and the first data to obtain the simulated object synthesized with the additive (705). The present invention further relates to a matching implementation method and device for attaching a virtual pendant to a virtual object.
PCT/CN2017/111500 2016-11-24 2017-11-17 Procédé et dispositif de synthèse d'images, et procédé et dispositif d'implémentation de mise en correspondance Ceased WO2018095273A1 (fr)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US16/298,884 US10762721B2 (en) 2016-11-24 2019-03-11 Image synthesis method, device and matching implementation method and device

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
CN201611045200.4A CN106504309B (zh) 2016-11-24 2016-11-24 一种图像合成的方法以及图像合成装置
CN201611045200.4 2016-11-24
CN201611051058.4 2016-11-24
CN201611051058.4A CN106780766B (zh) 2016-11-24 2016-11-24 匹配实现方法及相关装置

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US16/298,884 Continuation US10762721B2 (en) 2016-11-24 2019-03-11 Image synthesis method, device and matching implementation method and device

Publications (1)

Publication Number Publication Date
WO2018095273A1 true WO2018095273A1 (fr) 2018-05-31

Family

ID=62195424

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2017/111500 Ceased WO2018095273A1 (fr) 2016-11-24 2017-11-17 Procédé et dispositif de synthèse d'images, et procédé et dispositif d'implémentation de mise en correspondance

Country Status (2)

Country Link
US (1) US10762721B2 (fr)
WO (1) WO2018095273A1 (fr)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110136232A (zh) * 2019-05-16 2019-08-16 北京迈格威科技有限公司 骨骼蒙皮动画的处理方法、装置、电子设备及存储介质
CN111710020A (zh) * 2020-06-18 2020-09-25 腾讯科技(深圳)有限公司 动画渲染方法和装置及存储介质
CN114169049A (zh) * 2021-11-25 2022-03-11 北京建筑大学 古木建筑斗拱模型的构建方法、装置及电子设备
CN114504825A (zh) * 2022-01-26 2022-05-17 网易(杭州)网络有限公司 调整虚拟角色模型的方法、装置、存储介质及电子装置

Families Citing this family (23)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11625457B2 (en) * 2007-04-16 2023-04-11 Tailstream Technologies, Llc System for interactive matrix manipulation control of streamed data
CN106970800B (zh) * 2017-04-01 2019-05-07 腾讯科技(深圳)有限公司 换装方法和装置
US10803647B1 (en) * 2019-04-04 2020-10-13 Dreamworks Animation Llc Generating animation rigs using scriptable reference modules
CN110288681B (zh) * 2019-06-25 2023-06-27 网易(杭州)网络有限公司 角色模型的蒙皮方法、装置、介质及电子设备
CN112237739A (zh) * 2019-07-17 2021-01-19 厦门雅基软件有限公司 游戏角色渲染方法、装置、电子设备及计算机可读介质
CN111223171B (zh) * 2020-01-14 2025-06-27 腾讯科技(深圳)有限公司 图像处理方法、装置、电子设备及存储介质
CN113538637A (zh) * 2020-04-21 2021-10-22 阿里巴巴集团控股有限公司 生成动画的方法、装置、存储介质和处理器
CN111672112B (zh) * 2020-06-05 2023-03-24 腾讯科技(深圳)有限公司 虚拟环境的显示方法、装置、设备及存储介质
CN112102453B (zh) * 2020-09-27 2021-06-18 完美世界(北京)软件科技发展有限公司 一种动画模型骨骼处理方法、装置、电子设备及存储介质
CN112270734B (zh) * 2020-10-19 2024-01-26 北京大米科技有限公司 一种动画生成方法、可读存储介质和电子设备
CN112634415B (zh) * 2020-12-11 2023-11-10 北方信息控制研究院集团有限公司 一种基于人体骨骼模型的人员动作实时仿真方法
CN113034651B (zh) * 2021-03-18 2023-05-23 腾讯科技(深圳)有限公司 互动动画的播放方法、装置、设备及存储介质
CN113362435B (zh) * 2021-06-16 2023-08-08 网易(杭州)网络有限公司 虚拟对象模型的虚拟部件变化方法、装置、设备及介质
CN113838170B (zh) * 2021-08-18 2025-02-25 网易(杭州)网络有限公司 目标虚拟对象的处理方法、装置、存储介质和电子装置
CN116524077B (zh) * 2022-01-21 2024-07-23 腾讯科技(深圳)有限公司 虚拟对象的编辑方法及相关设备
US11908058B2 (en) * 2022-02-16 2024-02-20 Autodesk, Inc. Character animations in a virtual environment based on reconstructed three-dimensional motion data
CN114241100B (zh) * 2022-02-25 2022-06-03 腾讯科技(深圳)有限公司 虚拟对象的蒙皮处理方法、装置、设备、介质及程序产品
CN114842155B (zh) * 2022-07-04 2022-09-30 埃瑞巴蒂成都科技有限公司 一种高精度自动骨骼绑定方法
CN115458128B (zh) * 2022-11-10 2023-03-24 北方健康医疗大数据科技有限公司 一种基于关键点生成数字人体影像的方法、装置及设备
CN120569961A (zh) * 2023-01-24 2025-08-29 Oppo广东移动通信有限公司 索引至不可分变换核的帧内预测模式的确定
US11954814B1 (en) * 2023-02-17 2024-04-09 Wombat Studio, Inc. Computer graphics production control system and method
US20250349059A1 (en) * 2024-05-10 2025-11-13 Microsoft Technology Licensing, Llc Texture joint animation
CN120654287A (zh) * 2024-08-20 2025-09-16 浙江凌迪数字科技有限公司 虚拟附件的仿真固定方法、装置、设备、介质和程序产品

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101620741A (zh) * 2009-04-13 2010-01-06 武汉数字媒体工程技术有限公司 基于部件库的真实感虚拟化身模型的交互式生成方法
CN104008557A (zh) * 2014-06-23 2014-08-27 中国科学院自动化研究所 一种服装与人体模型的三维匹配方法
CN104021584A (zh) * 2014-06-25 2014-09-03 无锡梵天信息技术股份有限公司 一种骨骼蒙皮动画的实现方法
CN105654334A (zh) * 2015-12-17 2016-06-08 中国科学院自动化研究所 虚拟试衣方法和系统
CN106504309A (zh) * 2016-11-24 2017-03-15 腾讯科技(深圳)有限公司 一种图像合成的方法以及图像合成装置
CN106780766A (zh) * 2016-11-24 2017-05-31 腾讯科技(深圳)有限公司 匹配实现方法及相关装置

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2010170279A (ja) * 2009-01-21 2010-08-05 Namco Bandai Games Inc 骨格動作制御システム、プログラムおよび情報記憶媒体
US8744121B2 (en) * 2009-05-29 2014-06-03 Microsoft Corporation Device for identifying and tracking multiple humans over time
US10489956B2 (en) * 2015-07-27 2019-11-26 Autodesk, Inc. Robust attribute transfer for character animation

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101620741A (zh) * 2009-04-13 2010-01-06 武汉数字媒体工程技术有限公司 基于部件库的真实感虚拟化身模型的交互式生成方法
CN104008557A (zh) * 2014-06-23 2014-08-27 中国科学院自动化研究所 一种服装与人体模型的三维匹配方法
CN104021584A (zh) * 2014-06-25 2014-09-03 无锡梵天信息技术股份有限公司 一种骨骼蒙皮动画的实现方法
CN105654334A (zh) * 2015-12-17 2016-06-08 中国科学院自动化研究所 虚拟试衣方法和系统
CN106504309A (zh) * 2016-11-24 2017-03-15 腾讯科技(深圳)有限公司 一种图像合成的方法以及图像合成装置
CN106780766A (zh) * 2016-11-24 2017-05-31 腾讯科技(深圳)有限公司 匹配实现方法及相关装置

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110136232A (zh) * 2019-05-16 2019-08-16 北京迈格威科技有限公司 骨骼蒙皮动画的处理方法、装置、电子设备及存储介质
CN110136232B (zh) * 2019-05-16 2023-10-03 北京迈格威科技有限公司 骨骼蒙皮动画的处理方法、装置、电子设备及存储介质
CN111710020A (zh) * 2020-06-18 2020-09-25 腾讯科技(深圳)有限公司 动画渲染方法和装置及存储介质
CN111710020B (zh) * 2020-06-18 2023-03-21 腾讯科技(深圳)有限公司 动画渲染方法和装置及存储介质
CN114169049A (zh) * 2021-11-25 2022-03-11 北京建筑大学 古木建筑斗拱模型的构建方法、装置及电子设备
CN114504825A (zh) * 2022-01-26 2022-05-17 网易(杭州)网络有限公司 调整虚拟角色模型的方法、装置、存储介质及电子装置

Also Published As

Publication number Publication date
US10762721B2 (en) 2020-09-01
US20190206145A1 (en) 2019-07-04

Similar Documents

Publication Publication Date Title
WO2018095273A1 (fr) Procédé et dispositif de synthèse d'images, et procédé et dispositif d'implémentation de mise en correspondance
CN112037311B (zh) 一种动画生成的方法、动画播放的方法以及相关装置
US10776981B1 (en) Entertaining mobile application for animating a single image of a human body and applying effects
JP6181917B2 (ja) 描画システム、描画サーバ、その制御方法、プログラム、及び記録媒体
US20220215583A1 (en) Image processing method and apparatus, electronic device, and storage medium
CN106780766B (zh) 匹配实现方法及相关装置
US12374067B2 (en) Layered clothing that conforms to an underlying body and/or clothing layer
CN108389247A (zh) 用于生成真实的带绑定三维模型动画的装置和方法
CN113318428A (zh) 游戏的显示控制方法、非易失性存储介质及电子装置
US11238667B2 (en) Modification of animated characters
CN106504309B (zh) 一种图像合成的方法以及图像合成装置
CN112843704B (zh) 动画模型处理方法、装置、设备及存储介质
CN108109209A (zh) 一种基于增强现实的视频处理方法及其装置
CN115526967A (zh) 虚拟模型的动画生成方法、装置、计算机设备及存储介质
CN108525306B (zh) 游戏实现方法、装置、存储介质及电子设备
US20250225708A1 (en) Virtual object control method and apparatus, electronic device, and storage medium
CN111009022B (zh) 一种模型动画生成的方法和装置
CN113610949B (zh) 虚拟手臂骨骼的蒙皮方法、装置、设备以及存储介质
CN115006847A (zh) 虚拟场景更新方法、装置、电子设备以及存储介质
US20250054257A1 (en) Automatic fitting and tailoring for stylized avatars
HK40043868B (en) Method and apparatus for processing animation model, device and storage medium
HK40043868A (en) Method and apparatus for processing animation model, device and storage medium
CN116958341A (zh) 一种虚拟角色的换装方法、相关装置、设备以及存储介质
HK40055250B (en) Skinning method, device, equipment and storage medium of virtual arm bone

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 17873524

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 17873524

Country of ref document: EP

Kind code of ref document: A1