WO2023029475A1 - Model perspective method, intelligent terminal and storage device - Google Patents
- Publication number
- WO2023029475A1 (PCT/CN2022/085105)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- perspective
- depth value
- model
- information
- current
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Ceased
Classifications
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F30/00—Computer-aided design [CAD]
- G06F30/10—Geometric CAD
- G06F30/12—Geometric CAD characterised by design entry means specially adapted for CAD, e.g. graphical user interfaces [GUI] specially adapted for CAD
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/048—Interaction techniques based on graphical user interfaces [GUI]
- G06F3/0481—Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/048—Interaction techniques based on graphical user interfaces [GUI]
- G06F3/0487—Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser
Definitions
- the invention relates to the field of model display, in particular to a model perspective method, an intelligent terminal and a storage device.
- when displaying these models, the terminal uses the occlusion relationships between different parts or entities in the model to select the unoccluded parts for display.
- the present invention proposes a model perspective method, an intelligent terminal and a storage device. The method calculates the current penetration surface according to the mouse coordinates and the view direction, obtains a candidate entity list and a culled face list, and determines the currently hidden and displayed faces from the acquired user depth value and the current depth value, so as to see through and restore the model. There is no need to manually hide the model and restore its display; the operation is simple and takes little time, which improves the user's interactive experience.
- a technical solution adopted by the present invention is a model perspective method, which includes: S101: acquire perspective selection information and calculate the current penetration surface according to it, where the perspective selection information includes the mouse coordinates and the view direction; S102: determine perspective information according to the current penetration surface, continuously acquire the user depth value, and see through and restore the model according to the user depth value, the depth value of the current face, and the perspective information, where the perspective information includes a candidate entity list and a culled face list.
- before the step of acquiring the perspective selection information, the method further includes: judging whether a perspective instruction is received; if yes, execute S101; if not, do not execute S101.
- the step of calculating the current penetration surface according to the perspective selection information specifically includes: obtaining, from the mouse coordinates and the view direction, the world coordinates of the selected point in the world coordinate system and the world view direction; generating a ray from the world coordinates and the world view direction; and obtaining the current penetration surface from the ray.
- the step of determining the perspective information according to the current penetration surface specifically includes: performing a line-plane intersection of the ray with the current penetration surface to obtain the coordinates of their intersection point; obtaining the depth value of the intersection point from its coordinates and the world view direction; and generating a candidate entity list and a culled face list based on the depth value.
- the step of generating the candidate entity list and the culled face list based on the depth values specifically includes: sorting the current penetration surfaces by the depth values of the intersection points on them, generating the candidate entity list from the sorted order, and putting the faces of the model that are in the culled state and intersected by the ray into the culled face list.
- the step of continuously acquiring the user depth value specifically includes: continuously detecting depth input information, and determining the user depth value according to the depth input information, current coordinate depth, and perspective information.
- the step of seeing through and restoring the model according to the user depth value, the current face's depth value and the perspective information specifically includes: judging whether the user depth value is smaller than the current face's depth value; if yes, determining the culled faces in the culled face list whose depth values are greater than that of the current face to be in a non-perspective state, modifying the perspective information according to the non-perspective state, and drawing the model based on the modified perspective information; if not, then when the user depth value is greater than the current face's depth value, determining the faces in the candidate entity list whose depth values are smaller than the user depth value to be in a perspective state, modifying the perspective information according to the perspective state, and drawing the model based on the modified perspective information.
- after the step of seeing through and restoring the model according to the user depth value, the current depth value and the perspective information, the method further includes: judging whether perspective-object change information is detected, the perspective-object change information including at least one of model rotation and mouse movement; if yes, execute S101; if not, display the model according to the user depth value.
- the present invention also proposes an intelligent terminal, which includes a processor and a memory; the memory stores a computer program, and the processor executes the above model perspective method according to the computer program.
- the present invention also proposes a storage device, the storage device stores program data, and the program data is used to execute the above-mentioned model perspective method.
- the present invention has the beneficial effects of calculating the current penetration surface according to the mouse coordinates and the view direction, obtaining the candidate entity list and the culled face list, and determining the currently hidden and displayed faces from the acquired user depth value and the current depth value so as to see through and restore the model, without manually hiding the model and restoring its display; the operation is simple and takes little time, and the user's interactive experience is improved.
- Fig. 1 is the flowchart of an embodiment of the model perspective method of the present invention
- FIG. 2 is a functional architecture diagram of an embodiment of a terminal applying the model perspective method of the present invention
- Fig. 3 is a flowchart of another embodiment of the model perspective method of the present invention.
- Fig. 4 is a flowchart of another embodiment of the model perspective method of the present invention.
- FIG. 5 is a structural diagram of an embodiment of an intelligent terminal of the present invention.
- FIG. 6 is a structural diagram of an embodiment of the storage device of the present invention.
- FIGS. 1-4 illustrate embodiments of the model perspective method of the present invention, which is described in detail below in conjunction with them.
- the smart terminal applying the model perspective method can be a mobile phone, a notebook computer, a desktop computer, an all-in-one computer, a tablet computer, a server, a cloud platform, and other devices capable of displaying a model and editing the model according to received instructions.
- the functional modules used by the smart terminal to execute the model perspective method include a response module, a display module, a selection module, a perspective technology module and a data module; the response module is located in the presentation layer, the display module, selection module and perspective technology module are located in the business layer, and the data module is located in the data layer.
- the processor in the smart terminal runs the above functional modules according to the computer program in the memory to realize the model perspective method.
- the response module includes the input and output of the intelligent terminal, including the display and the mouse.
- the input obtained by the response module is mouse coordinate information, mouse wheel information and view direction information, and the output is screen display information.
- the display module is used for model display, so as to realize the perspective function.
- the display module applies OpenGL technology.
- the occlusion display of the 3D entity in CAD is realized through the depth test of OpenGL.
- OpenGL stores all depth information in a Z buffer (Z-buffer), also known as the depth buffer (Depth Buffer).
- the OpenGL application framework GLFW automatically generates the Z buffer.
- the depth value of each pixel in the model is stored in each fragment of the Z buffer.
- when a fragment wants to output its pixel color, OpenGL compares the fragment's depth value with the Z buffer; if the current fragment is behind other fragments it is discarded, otherwise the stored value is overwritten. This process, called depth testing (Depth Testing), is performed automatically by OpenGL.
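The depth-test behavior described above can be illustrated with a minimal sketch. This is a plain-Python simulation of a Z-buffer, not actual OpenGL code; the one-pixel buffer and the far-plane initialization value of 1.0 are illustrative assumptions:

```python
# Minimal simulation of OpenGL-style depth testing: a fragment's color is
# written only if its depth value is closer than what the Z-buffer holds.
def depth_test(z_buffer, color_buffer, x, depth, color):
    """Write `color` at pixel `x` only if `depth` passes the test (GL_LESS)."""
    if depth < z_buffer[x]:          # current fragment is in front
        z_buffer[x] = depth          # overwrite the stored depth
        color_buffer[x] = color      # overwrite the stored color
        return True                  # fragment kept
    return False                     # fragment is behind others: discarded

# A one-pixel-wide "screen" initialized to the far plane (depth 1.0).
z_buffer = [1.0]
color_buffer = [None]

depth_test(z_buffer, color_buffer, 0, 0.8, "red")    # passes: buffer at far plane
depth_test(z_buffer, color_buffer, 0, 0.9, "blue")   # fails: behind "red"
depth_test(z_buffer, color_buffer, 0, 0.3, "green")  # passes: in front of "red"
```

In real OpenGL the same effect is obtained by enabling `GL_DEPTH_TEST`; GLFW creates the depth buffer when the window's framebuffer is set up.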
- the selection module is mainly used for model selection, so as to realize the function of surface selection in perspective.
- the selection function mainly uses technologies such as "ray method and surface strategy sorting" to sort entities in depth. By constructing candidate entities, entities of different depths can be selected.
- the perspective technology module stores some calculation methods and data maintenance methods in the perspective function, mainly including depth rules and elimination rules.
- the depth rule refers to the calculation method of the user depth value, which determines the ease of use of the function; the user depth value D_user is related to the scroll value m of the mouse wheel, the current number of candidate faces n_can, the current number of culled faces n_del, and the current depth value D_pre.
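The publication does not reproduce the concrete formula relating these quantities. One plausible depth rule consistent with the variables named above is sketched below; the mapping of one wheel tick to one face in the depth-sorted order, and the clamping at the nearest and farthest faces, are purely illustrative assumptions, not the inventors' actual rule:

```python
# Hypothetical depth rule: a wheel scroll of m ticks moves the user depth
# D_user through the depth-sorted faces, starting from the current depth
# D_pre. Culled faces lie in front (smaller depth), candidate faces behind.
def user_depth(m, d_pre, candidate_depths, culled_depths):
    """Return the new user depth value after a scroll of m wheel ticks."""
    all_depths = sorted(culled_depths + candidate_depths)
    # Current position: the first face at or beyond the current depth.
    pos = next((i for i, d in enumerate(all_depths) if d >= d_pre),
               len(all_depths) - 1)
    # Scrolling forward (m > 0) sees deeper; backward (m < 0) restores.
    new_pos = max(0, min(len(all_depths) - 1, pos + m))
    return all_depths[new_pos]
```

With culled depths `[0.2]`, candidate depths `[0.5, 0.8]` and `d_pre = 0.5`, one forward tick yields 0.8 and one backward tick yields 0.2.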
- Elimination rules refer to the criteria for judging and maintaining the data of eliminated surfaces during the perspective process. For the perspective function, it is necessary to consider the restoration of the perspective surface, so the data of the eliminated surfaces needs to be saved.
- the perspective technology module maintains a list of eliminated faces, corresponding to the list of candidate entities in the selection module (the list of candidate entities saves the data of all faces penetrated by the ray method, and is sorted by depth value). Note that the intersection of the culled face list and the candidate entity list is empty.
- the two lists are updated in real time as the model is seen through and restored, and the selection module is used to select and remove faces.
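A minimal sketch of how the two lists might be maintained so that their intersection stays empty, as the elimination rule requires. The bare string face identifiers and the function names are assumptions for illustration, not the patent's data structures:

```python
# Hypothetical bookkeeping for the candidate entity list and the culled
# face list: seeing through a face moves it from candidates to culled,
# and restoring moves it back, so the two lists never share a face.
def see_through(candidates, culled, face):
    """Move `face` from the candidate list to the culled face list."""
    candidates.remove(face)
    culled.append(face)

def restore(candidates, culled, face):
    """Move `face` back from the culled face list to the candidate list."""
    culled.remove(face)
    candidates.append(face)

candidates = ["front", "middle", "back"]
culled = []
see_through(candidates, culled, "front")
# Invariant stated in the patent: the intersection of the two lists is empty.
assert not set(candidates) & set(culled)
```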
- the data module is mainly used to save the data information in each module.
- when the smart terminal turns on the see-through function, it automatically obtains the device's input, mainly the mouse coordinates, view direction and mouse wheel value. From this input, the smart terminal constructs the corresponding data, mainly the candidate entity list, the culled face list and the user depth value. It then calls the display module and the selection module to see through or restore the current model. When the mouse moves or the view rotates, the smart terminal dynamically obtains new input and repeats the above process. Through coordinate transformation and view rotation, the faces to be seen through or displayed can be selected quickly, making the model perspective method convenient and flexible and greatly improving designers' efficiency.
- the model perspective method includes:
- S101 Obtain perspective selection information, and calculate a current penetration surface according to the perspective selection information, where the perspective selection information includes mouse coordinates and view directions.
- the smart terminal obtains the mouse coordinates and the view direction, and from them calculates the current penetration surfaces of the model that may be seen through. The current penetration surfaces provide reference information for the subsequent see-through and restoration of the model. The currently selected face and the 3D coordinates of the selected point on the model are obtained from the mouse coordinates.
- before the step of acquiring the perspective selection information, the method further includes: judging whether a perspective instruction is received; if yes, execute S101; if not, do not execute S101.
- when the smart terminal obtains the perspective selection information, it also automatically collects the user's mouse wheel value.
- after the smart terminal determines that an instruction to enable the see-through function has been received, it starts the see-through function and automatically acquires the user's perspective selection information and mouse wheel value.
- the steps of calculating the current penetration surface according to the perspective selection information include: obtaining the world coordinates of the selected point in the world coordinate system and the world view direction through the mouse coordinates and the view direction, and using the world coordinates and the world view direction to generate rays, and obtaining them according to the rays The current penetration face.
- the mouse coordinates are the current screen coordinates of the mouse on the screen; from the current screen coordinates p_0 and the view direction, the world coordinates p_1 of the current screen coordinate point in the world coordinate system and the corresponding world view direction are calculated.
- after obtaining the world coordinates and the world view direction, the ray method is used to calculate the current penetration surfaces: a ray is constructed from the world coordinates and the world view direction and made to penetrate the current model, and the faces it penetrates are the current penetration surfaces.
- S102 Determine the perspective information according to the current penetration surface, continuously obtain the user depth value, and perform perspective and restoration of the model according to the user depth value, current depth value and perspective information.
- the perspective information includes a list of candidate entities and a list of eliminated surfaces.
- the step of determining the perspective information according to the current penetration surface specifically includes: performing a line-plane intersection of the ray with the current penetration surface to obtain the coordinates of their intersection point; obtaining the depth value of the intersection point from its coordinates and the world view direction; and generating the candidate entity list and the culled face list based on the depth value.
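The line-plane intersection and the depth computation along the view direction can be sketched as follows. The patent does not give the formulas, so this is a standard parametric ray-plane intersection under the assumption that faces are planar and vectors are represented as plain 3-tuples:

```python
# Intersect the picking ray with a penetration face's plane, then take the
# depth of the intersection point as its signed projection onto the world
# view direction, measured from the ray origin.
def ray_plane_intersection(origin, direction, plane_point, plane_normal):
    """Intersection point of the ray origin + t*direction with a plane,
    or None if the ray is parallel to the plane."""
    denom = sum(d * n for d, n in zip(direction, plane_normal))
    if abs(denom) < 1e-9:
        return None  # ray parallel to the plane: no intersection
    t = sum((p - o) * n for p, o, n in
            zip(plane_point, origin, plane_normal)) / denom
    return tuple(o + t * d for o, d in zip(origin, direction))

def depth_of(point, origin, view_dir):
    """Depth value of an intersection point along the world view direction."""
    return sum((p - o) * v for p, o, v in zip(point, origin, view_dir))

# Example: a ray from the origin along +z hits the plane z = 2.
hit = ray_plane_intersection((0, 0, 0), (0, 0, 1), (0, 0, 2), (0, 0, 1))
```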
- the step of generating the candidate entity list and the culled face list based on the depth values specifically includes: sorting the current penetration surfaces by the depth values of the intersection points on them, generating the candidate entity list from the sorted order, and putting the faces of the model that are in the culled state and intersected by the ray into the culled face list.
- let the depth value of an intersection point be D; the current penetration surfaces are sorted by the depth values of their intersection points, the faces among them that are culled and seen through are added to the culled face list, and the faces that are not seen through are added to the candidate entity list.
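The sorting-and-partitioning step above can be sketched as follows. The dict-based face records (`"face"`, `"depth"`, `"culled"` keys) are an illustrative representation, not the patent's data structure:

```python
# Build the candidate entity list and the culled face list from the
# ray-face intersections: sort faces by intersection depth, then
# partition by whether each face is already in the culled state.
def build_lists(intersections):
    """intersections: list of {"face": id, "depth": D, "culled": bool}."""
    ordered = sorted(intersections, key=lambda f: f["depth"])
    candidates = [f for f in ordered if not f["culled"]]
    culled = [f for f in ordered if f["culled"]]
    return candidates, culled

faces = [
    {"face": "B", "depth": 0.7, "culled": False},
    {"face": "A", "depth": 0.2, "culled": True},
    {"face": "C", "depth": 0.4, "culled": False},
]
candidates, culled = build_lists(faces)
```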
- the step of continuously obtaining the user's depth value specifically includes: continuously detecting the depth input information, and determining the user's depth value according to the depth input information, current coordinate depth, and perspective information.
- the depth input information is the scrolling value of the mouse wheel.
- the depth input information may also be numbers, characters, special characters and other information that can be identified and based on which the user's depth value can be obtained.
- the step of seeing through and restoring the model according to the user depth value, the current depth value and the perspective information specifically includes: judging whether the user depth value is smaller than the current depth value; if yes, determining the culled faces whose depth values are greater than that of the current face to be in a non-perspective state, modifying the perspective information according to the non-perspective state, and drawing the model based on the modified perspective information; if not, then when the user depth value is greater than the current depth value, determining the faces in the candidate entity list whose depth values are smaller than the user depth value to be in a perspective state, modifying the perspective information according to the perspective state, and drawing the model based on the modified perspective information.
- the current face is the face selected by the current user, that is, the face where the mouse coordinates are located.
- the smart terminal compares the user depth value with the depth values of the candidate faces in the candidate entity list; when the user depth value is greater than the depth value of the current face, the current face should be seen through.
- for the faces that should be seen through, the smart terminal hides them by redrawing the display of the current model, then deletes them from the candidate entity list and saves them in the culled face list.
- the perspective surface is not visible on the screen and cannot be selected for operation.
- operations such as selecting surfaces displayed on the smart terminal and executing modeling commands can be performed.
- when the user depth value is smaller than the current depth value, it is compared with the depth value of the nearest culled face in the culled face list; when the user depth value is greater than that face's depth, the culled face is considered to have returned to the non-perspective state. Likewise, the current model is redrawn to display the culled face with the largest depth value among those whose depth values are less than or equal to the user depth value, and the candidate list and culled face list are updated synchronously. At this point, the currently displayed faces can be selected and modeling commands executed.
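The see-through and restore branches above can be summarized in one sketch. For simplicity this version recomputes both lists from the user depth value (every face nearer than the user depth is seen through, every face at or beyond it is shown); the `(name, depth)` pair representation and the single-function formulation are illustrative assumptions, merging the patent's separate hide and restore branches:

```python
# One update step of the see-through logic: faces nearer than the user
# depth are culled (seen through); faces at or beyond it are displayed.
def update(candidates, culled, user_depth):
    """Recompute the candidate and culled face lists for a new user depth."""
    all_faces = candidates + culled
    new_candidates = sorted((f for f in all_faces if f[1] >= user_depth),
                            key=lambda f: f[1])
    new_culled = sorted((f for f in all_faces if f[1] < user_depth),
                        key=lambda f: f[1])
    return new_candidates, new_culled

# Scrolling the user depth to 0.5 sees through the nearest face.
candidates, culled = update(
    [("near", 0.2), ("mid", 0.5), ("far", 0.9)], [], 0.5)
```

Scrolling the user depth back below 0.2 would move `"near"` out of the culled list again, restoring its display.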
- after the step of seeing through and restoring the model according to the user depth value, the current depth value and the perspective information, the method further includes: judging whether perspective-object change information is detected, the perspective-object change information including at least one of model rotation and mouse movement; if yes, execute S101; if not, display the model according to the user depth value.
- the change information of the perspective object may also be a model editing operation input by the user, such as replacing, hiding, dragging, etc., which can change the coordinates of the mouse.
- the model perspective method of the present invention calculates the current penetration surface according to the mouse coordinates and the view direction, obtains the candidate entity list and the culled face list, and determines the currently hidden and displayed faces from the acquired user depth value and the current depth value, so that the model can be seen through and restored without manually hiding it and restoring its display; the operation is simple and takes little time, which improves the user's interactive experience.
- FIG. 5 is a structural diagram of an embodiment of the intelligent terminal of the present invention.
- the smart terminal of the present invention will be described with reference to FIG. 5 .
- the smart terminal includes a processor and a memory
- the memory stores a computer program
- the processor executes the model perspective method as described in the above embodiments according to the computer program.
- FIG. 6 is a structural diagram of an embodiment of the storage device of the present invention.
- the storage device of the present invention will be described in conjunction with FIG. 6 .
- the storage device stores program data, and the program data is used to execute the model perspective method described in the above embodiments.
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Engineering & Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Human Computer Interaction (AREA)
- Geometry (AREA)
- Computational Mathematics (AREA)
- Mathematical Analysis (AREA)
- Mathematical Optimization (AREA)
- Pure & Applied Mathematics (AREA)
- Computer Hardware Design (AREA)
- Evolutionary Computation (AREA)
- Architecture (AREA)
- Processing Or Creating Images (AREA)
- User Interface Of Digital Computer (AREA)
Abstract
Description
The invention relates to the field of model display, and in particular to a model perspective method, an intelligent terminal and a storage device.
In a CAD product model, many part models occlude or contain one another, and a single model also contains entities that occlude each other. When displaying such models, the terminal uses the occlusion relationships between different parts or entities to select the unoccluded parts for display.
However, when viewing or manipulating a model, the user often needs to select an occluded part of the model for display. To achieve this, the occluding parts or entities in front must be selected and hidden manually one by one, until no part or entity that can occlude the target object remains in front of it. Moreover, when the user needs to view the hidden objects again, their display must be restored manually one by one. The operation is cumbersome and time-consuming, which reduces work efficiency and degrades the interactive experience.
Summary of the invention
To overcome the deficiencies of the prior art, the present invention proposes a model perspective method, an intelligent terminal and a storage device. The method calculates the current penetration surface according to the mouse coordinates and the view direction, obtains a candidate entity list and a culled face list, and determines the currently hidden and displayed faces from the acquired user depth value and the current depth value, so as to see through and restore the model. There is no need to manually hide the model and restore its display; the operation is simple and takes little time, which improves the user's interactive experience.
To solve the above problems, a technical solution adopted by the present invention is a model perspective method, which includes: S101: acquire perspective selection information and calculate the current penetration surface according to it, where the perspective selection information includes the mouse coordinates and the view direction; S102: determine perspective information according to the current penetration surface, continuously acquire the user depth value, and see through and restore the model according to the user depth value, the depth value of the current face, and the perspective information, where the perspective information includes a candidate entity list and a culled face list.
Further, before the step of acquiring the perspective selection information, the method includes: judging whether a perspective instruction is received; if yes, execute S101; if not, do not execute S101.
Further, the step of calculating the current penetration surface according to the perspective selection information specifically includes: obtaining, from the mouse coordinates and the view direction, the world coordinates of the selected point in the world coordinate system and the world view direction; generating a ray from the world coordinates and the world view direction; and obtaining the current penetration surface from the ray.
Further, the step of determining the perspective information according to the current penetration surface specifically includes: performing a line-plane intersection of the ray with the current penetration surface to obtain the coordinates of their intersection point; obtaining the depth value of the intersection point from its coordinates and the world view direction; and generating a candidate entity list and a culled face list based on the depth value.
Further, the step of generating the candidate entity list and the culled face list based on the depth values specifically includes: sorting the current penetration surfaces by the depth values of the intersection points on them, generating the candidate entity list from the sorted order, and putting the faces of the model that are in the culled state and intersected by the ray into the culled face list.
Further, the step of continuously acquiring the user depth value specifically includes: continuously detecting depth input information, and determining the user depth value according to the depth input information, the current coordinate depth, and the perspective information.
Further, the step of seeing through and restoring the model according to the user depth value, the current face's depth value and the perspective information specifically includes: judging whether the user depth value is smaller than the current face's depth value; if yes, determining the culled faces in the culled face list whose depth values are greater than that of the current face to be in a non-perspective state, modifying the perspective information according to the non-perspective state, and drawing the model based on the modified perspective information; if not, then when the user depth value is greater than the current face's depth value, determining the faces in the candidate entity list whose depth values are smaller than the user depth value to be in a perspective state, modifying the perspective information according to the perspective state, and drawing the model based on the modified perspective information.
Further, after the step of seeing through and restoring the model according to the user depth value, the current face's depth value and the perspective information, the method includes: judging whether perspective-object change information is detected, the perspective-object change information including at least one of model rotation and mouse movement; if yes, execute S101; if not, display the model according to the user depth value.
Based on the same inventive concept, the present invention also proposes an intelligent terminal, which includes a processor and a memory; the memory stores a computer program, and the processor executes the above model perspective method according to the computer program.
Based on the same inventive concept, the present invention also proposes a storage device, which stores program data used to execute the above model perspective method.
Compared with the prior art, the present invention has the beneficial effects of calculating the current penetration surface according to the mouse coordinates and the view direction, obtaining the candidate entity list and the culled face list, and determining the currently hidden and displayed faces from the acquired user depth value and the current depth value so as to see through and restore the model, without manually hiding the model and restoring its display; the operation is simple and takes little time, and the user's interactive experience is improved.
FIG. 1 is a flowchart of an embodiment of the model perspective method of the present invention;
FIG. 2 is a functional architecture diagram of an embodiment of a terminal applying the model perspective method of the present invention;
FIG. 3 is a flowchart of another embodiment of the model perspective method of the present invention;
FIG. 4 is a flowchart of yet another embodiment of the model perspective method of the present invention;
FIG. 5 is a structural diagram of an embodiment of the intelligent terminal of the present invention;
FIG. 6 is a structural diagram of an embodiment of the storage device of the present invention.
The present invention is further described below with reference to the accompanying drawings and specific embodiments. It should be noted that, provided they do not conflict, the embodiments or technical features described below may be combined arbitrarily to form new embodiments.
Please refer to FIGS. 1-4. FIG. 1 is a flowchart of an embodiment of the model perspective method of the present invention; FIG. 2 is a functional architecture diagram of an embodiment of a terminal applying the model perspective method of the present invention; FIG. 3 is a flowchart of another embodiment of the model perspective method of the present invention; FIG. 4 is a flowchart of yet another embodiment of the model perspective method of the present invention. The model perspective method of the present invention is described in detail below in conjunction with FIGS. 1-4.
In this embodiment, the intelligent terminal applying the model perspective method may be a mobile phone, a notebook computer, a desktop computer, an all-in-one computer, a tablet computer, a server, a cloud platform, or any other device capable of displaying a model and editing the model according to received instructions.
In this embodiment, the functional modules used by the intelligent terminal to execute the model perspective method include a response module, a display module, a selection module, a perspective technology module, and a data module. The response module is located at the presentation layer; the display module, the selection module, and the perspective technology module are located at the business layer; and the data module is located at the data layer. The processor in the intelligent terminal runs the above functional modules according to the computer program in the memory to implement the model perspective method.
The response module covers the input and output of the intelligent terminal, including the display and the mouse. The input acquired by the response module consists of mouse coordinate information, mouse wheel information, and view direction information, and its output is the screen display information. The display module is used for model display, thereby implementing the perspective function. The display module applies OpenGL technology: the occlusion-aware display of 3D entities in CAD is realized through OpenGL's depth test. OpenGL stores all depth information in a Z-buffer, also known as the depth buffer. The OpenGL application framework GLFW creates the Z-buffer automatically. The depth value of each pixel of the model is stored per fragment in the Z-buffer; when a fragment is about to output its pixel color, OpenGL compares the fragment's depth value against the Z-buffer. If the current fragment lies behind another fragment, it is discarded; otherwise it overwrites the stored value. This process is called depth testing and is performed automatically by OpenGL.
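The depth-test behavior described above can be illustrated with a minimal software Z-buffer sketch (plain Python rather than OpenGL; all names and values are illustrative):

```python
def depth_test(z_buffer, color_buffer, x, y, depth, color):
    """Mimic OpenGL's default GL_LESS depth test: a fragment is written
    only if it is nearer (smaller depth) than the stored value."""
    if depth < z_buffer[y][x]:
        z_buffer[y][x] = depth
        color_buffer[y][x] = color
        return True   # fragment passed the test and overwrote the buffer
    return False      # fragment lies behind an earlier one and is discarded

# A 1x1 "framebuffer": the far plane at depth 1.0, nothing drawn yet.
zbuf = [[1.0]]
cbuf = [[None]]
depth_test(zbuf, cbuf, 0, 0, 0.8, "far face")     # written
depth_test(zbuf, cbuf, 0, 0, 0.3, "near face")    # nearer: overwrites
depth_test(zbuf, cbuf, 0, 0, 0.6, "middle face")  # behind: discarded
print(cbuf[0][0])  # near face
```

This is the automatic mechanism the perspective function works against: a face hidden by the depth test stays hidden until the model is redrawn without the occluding faces.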
The selection module is mainly used for model selection, thereby implementing face selection under perspective. The selection function mainly applies techniques such as the ray-casting method and strategic face sorting to sort entities by depth; by constructing candidate entities, entities at different depths can be selected.
The perspective technology module stores the calculation methods and data maintenance schemes used by the perspective function, chiefly the depth rule and the culling rule. The depth rule specifies how the user depth value is computed, and this computation determines the usability of the function: the user depth value D_user is related to the scroll value m of the mouse wheel, the current number of candidate faces n_can, the current number of culled faces n_del, and the depth value D_pre of the current face, i.e. D_user = f(m, n_can, n_del, D_pre). The culling rule specifies the criterion for culling faces during the perspective process and how the culled-face data are maintained. Since the perspective function must allow faces that have been seen through to be restored, the culled-face data must be saved. The perspective technology module therefore maintains a culled-face list corresponding to the candidate entity list in the selection module (the candidate entity list stores the data of all faces penetrated by the ray, sorted by depth value). Note that the intersection of the culled-face list and the candidate entity list is empty. The two lists are updated in real time according to the perspective and restoration state of the model, and the selection module is used to perform face selection and culling.
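The two lists and their empty-intersection invariant can be sketched as follows (a simplified illustration; the source text does not prescribe concrete data structures):

```python
class PerspectiveState:
    """Candidate faces are visible and kept sorted by depth; culled faces
    have been seen through. The two lists never share a face."""

    def __init__(self, penetrated_faces):
        # penetrated_faces: list of (face_id, depth) hit by the ray
        self.candidates = sorted(penetrated_faces, key=lambda f: f[1])
        self.culled = []

    def cull_through(self, user_depth):
        """Move every candidate nearer than user_depth to the culled list."""
        pierced = [f for f in self.candidates if f[1] < user_depth]
        self.candidates = [f for f in self.candidates if f[1] >= user_depth]
        self.culled.extend(pierced)

    def restore_to(self, user_depth):
        """Return culled faces at or beyond user_depth to the candidates."""
        back = [f for f in self.culled if f[1] >= user_depth]
        self.culled = [f for f in self.culled if f[1] < user_depth]
        self.candidates = sorted(self.candidates + back, key=lambda f: f[1])

state = PerspectiveState([("front", 1.0), ("mid", 2.0), ("back", 3.0)])
state.cull_through(2.5)   # see through the two nearest faces
print([f[0] for f in state.candidates])  # ['back']
state.restore_to(1.5)     # scroll back: the middle face is restored
print([f[0] for f in state.candidates])  # ['mid', 'back']
```

Both operations preserve the invariant: a face is always in exactly one of the two lists.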
The data module is mainly used to save the data information of each module.
After the perspective function is enabled on the intelligent terminal, the terminal automatically acquires the device's response input, mainly including the mouse coordinates, the view direction, and the mouse wheel value. From this response input the intelligent terminal constructs the corresponding data, mainly including the candidate entity list, the culled-face list, and the user depth value. After obtaining the data, the intelligent terminal invokes the display module and the selection module again to perform perspective or restoration of the current model. When the mouse moves or the view is rotated, the intelligent terminal dynamically acquires new response input and repeats the above process. By quickly selecting the faces to be seen through or displayed via coordinate transformation and view rotation, the model perspective method becomes more convenient and flexible and greatly improves designers' efficiency.
In this embodiment, the model perspective method includes:
S101: Acquire perspective selection information and calculate the current penetrated faces according to the perspective selection information, the perspective selection information including mouse coordinates and a view direction.
The intelligent terminal acquires the mouse coordinates and the view direction and, from them, calculates the current penetrated faces of the model that may currently be seen through. The current penetrated faces provide reference information for the subsequent perspective and restoration of the model. The currently selected face and the 3D coordinates of the picked point on the model are obtained from the mouse coordinates.
In this embodiment, before the step of acquiring the perspective selection information, the method further includes: judging whether a perspective instruction has been received; if so, executing S101; if not, not executing S101.
When acquiring the perspective selection information, the intelligent terminal also automatically collects the user's mouse wheel value.
In a specific embodiment, after determining that an instruction to enable the perspective function has been received, the intelligent terminal enables the perspective function and automatically acquires the user's perspective selection information and mouse wheel value.
The step of calculating the current penetrated faces according to the perspective selection information specifically includes: obtaining, from the mouse coordinates and the view direction, the world coordinates of the picked point in the world coordinate system and the world view direction; generating a ray from the world coordinates and the world view direction; and obtaining the current penetrated faces according to the ray.
In this embodiment, the mouse coordinates are the current screen coordinate point of the mouse on the screen. From the current screen coordinate point p_0 and the view direction, the world coordinate p_1 of the current screen coordinate point in the world coordinate system and the world view direction corresponding to the view direction are calculated.
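How the screen point is mapped to world space depends on the camera model, which the source text does not detail. Under the simplifying assumption that the inverse view transform is available as a 4x4 matrix, the unprojection is a single matrix multiply followed by the homogeneous divide (a sketch with an illustrative placeholder matrix):

```python
import numpy as np

def screen_to_world(p0, inv_view):
    """Unproject a screen-space point p0 = (x, y) through an inverse view
    transform. Depth 0 places the point on the near plane; the homogeneous
    divide handles any projective component of the matrix."""
    p = np.array([p0[0], p0[1], 0.0, 1.0])
    w = inv_view @ p
    return w[:3] / w[3]

# Identity camera: screen coordinates equal world coordinates.
print(screen_to_world((3.0, 4.0), np.eye(4)))  # [3. 4. 0.]
```

In a real CAD viewport the matrix would be the inverse of the combined view-projection transform together with the viewport mapping; the identity matrix here only demonstrates the mechanics.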
In a specific embodiment, after the world coordinates and the world view direction are obtained, the ray-casting method is used to calculate the current penetrated faces: a ray is constructed from the world coordinates and the world view direction and made to penetrate the current model, yielding the faces of the model penetrated by the ray.
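Casting the ray against the model's faces can be sketched as follows, representing each face by the plane it lies in (a simplified illustration; a real CAD kernel would additionally test whether the hit point lies inside the face boundary):

```python
def ray_plane_hits(origin, direction, faces, eps=1e-9):
    """Cast a ray origin + t*direction (t > 0) against planar faces.
    Each face is (face_id, point_on_plane, unit_normal). Returns a
    (face_id, hit_point) pair for every face the ray penetrates."""
    hits = []
    for face_id, p, n in faces:
        denom = sum(d * c for d, c in zip(direction, n))
        if abs(denom) < eps:          # ray parallel to the plane: no hit
            continue
        t = sum((pc - oc) * nc for pc, oc, nc in zip(p, origin, n)) / denom
        if t > eps:                   # intersection in front of the origin
            hit = tuple(oc + t * dc for oc, dc in zip(origin, direction))
            hits.append((face_id, hit))
    return hits

# Ray from the viewer along +Z, two faces perpendicular to the view.
faces = [("near", (0, 0, 2), (0, 0, 1)), ("far", (0, 0, 5), (0, 0, 1))]
for fid, pt in ray_plane_hits((0, 0, 0), (0, 0, 1), faces):
    print(fid, pt)
```

The resulting hit points are exactly the intersection points whose depths are computed in the next step.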
S102: Determine perspective information according to the current penetrated faces, continuously acquire the user depth value, and perform perspective and restoration of the model according to the user depth value, the depth value of the current face, and the perspective information, the perspective information including a candidate entity list and a culled-face list.
In this embodiment, the step of determining the perspective information according to the current penetrated faces specifically includes: performing line-plane intersection between the ray and each current penetrated face to obtain the coordinates of the intersection point of the ray with that face; obtaining the depth value of the intersection point from its coordinates and the world view direction; and generating the candidate entity list and the culled-face list based on the depth values.
The step of generating the candidate entity list and the culled-face list based on the depth values specifically includes: sorting the current penetrated faces according to the depth values of the intersection points on them, generating the candidate entity list based on the sorting, and placing the faces of the model that are in the culled state and intersect the ray into the culled-face list.
In a specific embodiment, the coordinates of the intersection point in the world coordinate system are (p_x, p_y, p_z) and the world view direction is (v_x, v_y, v_z); the depth of the intersection point is D = p_x*v_x + p_y*v_y + p_z*v_z, where p_x, p_y, p_z are the coordinates of the intersection point on the X, Y, and Z axes of the world coordinate system, v_x, v_y, v_z are the components of the world view direction along those axes, and D is the depth value of the intersection point. The current penetrated faces are sorted according to the depth values of their intersection points; faces among the current penetrated faces that have been culled, i.e. already seen through, are added to the culled-face list, and faces not seen through are added to the candidate entity list.
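The depth formula and the sorting step can be expressed directly (face names and coordinates are illustrative):

```python
def intersection_depth(p, v):
    """Depth of an intersection point p along the world view direction v:
    D = p_x*v_x + p_y*v_y + p_z*v_z, i.e. the dot product of p and v."""
    return sum(pc * vc for pc, vc in zip(p, v))

view = (0.0, 0.0, 1.0)  # world view direction
hits = [("lid", (0.0, 0.0, 1.5)), ("base", (0.0, 0.0, 4.0)),
        ("shelf", (0.0, 0.0, 2.5))]
# Sort the penetrated faces by their intersection depth along the view.
candidates = sorted(hits, key=lambda h: intersection_depth(h[1], view))
print([fid for fid, _ in candidates])  # ['lid', 'shelf', 'base']
```

The sorted result is the depth-ordered candidate entity list used in the following steps.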
The step of continuously acquiring the user depth value specifically includes: continuously detecting depth input information, and determining the user depth value according to the depth input information, the depth of the current coordinates, and the perspective information.
In this embodiment, the depth input information is the scroll value of the mouse wheel; in other embodiments, the depth input information may also be numbers, characters, special characters, or any other information that can be recognized and from which the user depth value can be derived.
In a specific embodiment, the intelligent terminal detects scroll events of the mouse wheel. After a scroll event is captured, the user depth value is calculated from the scroll value of the mouse wheel, the current number of candidate entities, the current number of culled faces, and the depth of the current mouse coordinates, i.e. D_user = f(m, n_can, n_del, D_pre), where D_user is the user depth value, m is the scroll value of the mouse wheel, n_can is the current number of candidate faces, n_del is the current number of culled faces, and D_pre is the depth value of the current face. The scroll value is thereby converted into a depth value along the current view direction, which is the user depth value.
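The source text leaves the concrete form of f unspecified. One plausible sketch, entirely an assumption for illustration, steps the user depth one face boundary per wheel notch:

```python
def user_depth(m, candidate_depths, culled_depths, d_pre):
    """One *possible* realisation of D_user = f(m, n_can, n_del, D_pre):
    each forward notch (m > 0) steps past the next face boundary, each
    backward notch (m < 0) steps toward the nearest culled face.
    The actual function f is not specified by the source text."""
    boundaries = sorted(set(candidate_depths) | set(culled_depths))
    # index of the first boundary beyond the current depth
    i = sum(1 for d in boundaries if d <= d_pre)
    i = max(0, min(len(boundaries) - 1, i + m - 1))
    return boundaries[i]

# Scroll two notches forward from just in front of the model.
print(user_depth(2, [1.0, 2.0, 3.0], [], 0.0))  # 2.0
```

Any monotone mapping from wheel notches to depths would serve; snapping to face boundaries simply makes each notch pierce or restore exactly one face.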
In this embodiment, the step of performing perspective and restoration of the model according to the user depth value, the depth value of the current face, and the perspective information specifically includes: judging whether the user depth value is smaller than the depth value of the current face; if so, determining the culled faces in the culled-face list whose depth values are greater than that of the current face to be in a non-perspective state, modifying the perspective information according to the non-perspective state, and drawing the model based on the modified perspective information; if not, then when the user depth value is greater than the depth value of the current face, determining the faces in the candidate entity list whose depth values are smaller than the user depth value to be in a perspective state, modifying the perspective information according to the perspective state, and drawing the model based on the modified perspective information. Here the current face is the face currently selected by the user, i.e. the face at the mouse coordinates.
In a specific embodiment, the intelligent terminal compares the user depth value with the depth values of the candidate faces in the candidate entity list. When the user depth value is greater than the depth value of the current face, the current face should be seen through. Faces that should be seen through are hidden by redrawing the display of the current model; the candidate entity list is then updated to remove the seen-through faces, which are saved in the culled-face list. At this point the seen-through faces are invisible on the screen and cannot be selected, while the faces displayed on the intelligent terminal can be selected, used to execute modeling commands, and so on.
In another specific embodiment, when the user depth value is smaller than the depth value of the current face, it is compared with the depth value of the most recently culled face in the culled-face list. When the user depth value is greater than the depth of the most recently culled face, that culled face is considered to have returned to the non-perspective state. Likewise, the current model is redrawn to display the culled face with the largest depth value among those whose depth values are less than or equal to the user depth value, and the candidate entity list and the culled-face list are updated synchronously. The currently displayed faces can then be selected and modeling commands executed on them.
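Taken together, the two embodiments above amount to partitioning the pierced faces around the user depth value on each scroll. A declarative sketch of that decision (simplified; the source text maintains the partition incrementally via the two lists):

```python
def visible_after_scroll(user_depth, faces):
    """faces: all (face_id, depth) pierced by the ray, in any order.
    A face is in the perspective (hidden) state when its depth is smaller
    than the user depth; otherwise it is in the non-perspective (shown)
    state. Returns (hidden_ids, shown_ids)."""
    hidden = [f for f, d in faces if d < user_depth]
    shown = [f for f, d in faces if d >= user_depth]
    return hidden, shown

faces = [("cover", 1.0), ("frame", 2.0), ("core", 3.0)]
# Scroll deeper: the user depth passes the cover and the frame.
print(visible_after_scroll(2.5, faces))  # (['cover', 'frame'], ['core'])
# Scroll back: the frame returns to the non-perspective state.
print(visible_after_scroll(1.5, faces))  # (['cover'], ['frame', 'core'])
```

Recomputing the partition from scratch like this gives the same result as the incremental list updates, which avoid redrawing faces whose state did not change.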
In this embodiment, after the step of performing perspective and restoration of the model according to the user depth value, the depth value of the current face, and the perspective information, the method further includes: judging whether perspective-object change information is detected, the perspective-object change information including at least one of model rotation and mouse movement; if so, executing S101; if not, displaying the model according to the user depth value.
In other embodiments, the perspective-object change information may also be a user-input model editing operation capable of changing the mouse coordinates, such as replacement, hiding, or dragging.
Beneficial effects: the model perspective method of the present invention calculates the current penetrated faces according to the mouse coordinates and the view direction, obtains the candidate entity list and the culled-face list, and obtains the currently hidden faces and displayed faces according to the acquired user depth value and the depth value of the current face so as to perform perspective and restoration of the model. There is no need to manually hide the model and restore its display; the operation is simple and takes little time, which improves the user's interactive experience.
Based on the same inventive concept, the present invention further provides an intelligent terminal. Please refer to FIG. 5, which is a structural diagram of an embodiment of the intelligent terminal of the present invention; the intelligent terminal of the present invention is described with reference to FIG. 5.
In this embodiment, the intelligent terminal includes a processor and a memory; the memory stores a computer program, and the processor executes the model perspective method described in the above embodiments according to the computer program.
Based on the same inventive concept, the present invention further provides a storage device. Please refer to FIG. 6, which is a structural diagram of an embodiment of the storage device of the present invention; the storage device of the present invention is described in conjunction with FIG. 6.
In this embodiment, the storage device stores program data, and the program data is used to execute the model perspective method described in the above embodiments.
The embodiments in this specification are described in a progressive manner; each embodiment focuses on its differences from the other embodiments, and for identical or similar parts the embodiments may be referred to one another.
The above description of the disclosed embodiments enables those skilled in the art to make or use the present invention. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the general principles defined herein may be implemented in other embodiments without departing from the spirit or scope of the present invention. Therefore, the present invention is not limited to the embodiments shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.
Claims (10)
Applications Claiming Priority (2)

| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN202111037224.6 | 2021-09-06 | | |
| CN202111037224.6A | 2021-09-06 | 2021-09-06 | Model perspective method, intelligent terminal and storage device |

Publications (1)

| Publication Number | Publication Date |
|---|---|
| WO2023029475A1 | 2023-03-09 |
Family Applications (1)

| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| PCT/CN2022/085105 (Ceased) | Model perspective method, intelligent terminal and storage device | 2021-09-06 | 2022-04-02 |
Families Citing this family (1)

| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN113486415B | 2021-09-06 | 2022-01-07 | 广州中望龙腾软件股份有限公司 | Model perspective method, intelligent terminal and storage device |
Citations (5)

| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US20100303302A1 | 2009-05-29 | 2010-12-02 | Microsoft Corporation | Systems and methods for estimating an occluded body part |
| CN102440788A | 2010-08-24 | 2012-05-09 | | Stereoscopic image display method and device |
| CN103198517A | 2011-12-23 | 2013-07-10 | | Method for generating target perspective model and perspective model estimation device thereof |
| CN106203433A | 2016-07-13 | 2016-12-07 | | Method for automatically extracting and perspective-correcting the license plate position in vehicle surveillance images |
| CN113486415A | 2021-09-06 | 2021-10-08 | | Model perspective method, intelligent terminal and storage device |

Family Cites Families (1)

| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US7536410B2 | 2005-04-22 | 2009-05-19 | Microsoft Corporation | Dynamic multi-dimensional scrolling |
Also Published As

| Publication Number | Publication Date |
|---|---|
| CN113486415B | 2022-01-07 |
| CN113486415A | 2021-10-08 |
Legal Events

- 121: The EPO has been informed by WIPO that EP was designated in this application (ref document number 22862622, country EP, kind code A1)
- NENP: Non-entry into the national phase (ref country code: DE)
- 32PN: Public notification in the EP bulletin as the address of the addressee cannot be established (noting of loss of rights pursuant to Rule 112(1) EPC, EPO Form 1205A dated 16/07/2024)
- 122: PCT application non-entry into the European phase (ref document number 22862622, country EP, kind code A1)