Detailed Description
To enable those skilled in the art to better understand the present application, the technical solutions in the embodiments of the present application will be described clearly and completely below with reference to the accompanying drawings. It is apparent that the described embodiments are only some embodiments of the present application, not all embodiments. All other embodiments obtained by those skilled in the art based on the embodiments of the present application without inventive effort shall fall within the scope of the present application.
The terms "first," "second," and the like in the description, in the claims, and in the above-described figures are used for distinguishing between different objects and not necessarily for describing a sequential or chronological order. Furthermore, the terms "comprise" and "have," as well as any variations thereof, are intended to cover a non-exclusive inclusion. For example, a process, method, system, article, or apparatus that comprises a list of steps or elements is not limited to only those listed steps or elements, but may include other steps or elements not listed or inherent to such a process, method, system, article, or apparatus.
Reference herein to "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment may be included in at least one embodiment of the application. The appearances of such phrases in various places in the specification are not necessarily all referring to the same embodiment, nor are separate or alternative embodiments mutually exclusive of other embodiments. Those of skill in the art will appreciate, both explicitly and implicitly, that the embodiments described herein may be combined with other embodiments.
For a better understanding of aspects of embodiments of the present application, related terms and concepts that may be related to embodiments of the present application are described below.
The electronic device described in the embodiments of the present application may include a smartphone (such as an Android phone, an iOS phone, or a Windows Phone device), a tablet computer, a palmtop computer, a notebook computer, a video matrix, a monitoring platform, a mobile internet device (MID), a wearable device, and the like. These are merely examples and are not exhaustive; the electronic device includes but is not limited to the above devices. Of course, the electronic device may also be a server, for example, a cloud server.
In the following, an architecture of a data processing system according to an embodiment of the present application is described with reference to fig. 1, and fig. 1 is a schematic diagram of an architecture of a data processing system according to an embodiment of the present application, where a data processing system 100 includes a development device 110, a first cloud server 120, and a second cloud server 130.
The development device 110 may be an electronic device on which Unreal Engine 4 (UE4) is installed, and is configured to perform first processing on target Unreal Engine 4 model data, where the target Unreal Engine 4 model data may be a BIM model rendered by UE4, such as an underground garage model.
In one possible embodiment, the development device 110 first obtains building drawing data, which generally includes CAD drawings, and then generates a BIM model of the target building according to the building drawing data. Next, the BIM model is rendered and further processed by UE4 to obtain the target Unreal Engine 4 model data. Finally, the development device performs the first processing on the target Unreal Engine 4 model data, that is, packages the target Unreal Engine 4 model data into target engineering data, and then uploads the target engineering data to the first cloud server 120. Here, a preset compiler or a related plug-in may be called by the UE4 editor to package the target Unreal Engine 4 model data into an executable file in EXE format. It can be seen that this facilitates the first cloud server 120 launching the executable file.
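As a minimal sketch of this packaging step, the first processing might be driven by a RunUAT-style BuildCookRun invocation. The command name, flags, and project file below are illustrative assumptions, not the exact UE4 tooling on any particular installation:

```python
# Hypothetical sketch of the "first processing": assembling a RunUAT-style
# BuildCookRun command that packages a UE4 project into a standalone Win64
# executable (the target engineering data). The flags are illustrative and
# may differ from a real UE4 installation.
def build_package_command(uproject: str, archive_dir: str) -> list:
    """Return the command line that cooks and packages the project as an EXE."""
    return [
        "RunUAT", "BuildCookRun",
        "-project=" + uproject,
        "-platform=Win64",  # package for Windows so the EXE can run on the cloud GPU server
        "-build", "-cook", "-stage", "-pak", "-archive",
        "-archivedirectory=" + archive_dir,
    ]

cmd = build_package_command("UndergroundGarage.uproject", "dist/")
```

The resulting command list would then be handed to a process launcher, and the archived EXE uploaded to the first cloud server.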
The first cloud server 120 may be a cloud GPU server. After receiving the target engineering data, it may directly launch the executable file in EXE format, and then output the target engineering data to the second cloud server 130 in the form of a video stream according to a preset video stream protocol. Specifically, it may convert the target engineering data into target video stream data according to the preset video stream protocol, and then send the target video stream data to the second cloud server 130.
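The second processing can be pictured as launching the received executable headlessly and pointing its rendered output at a streaming endpoint. The flag names and URL scheme below are assumptions, loosely modeled on UE4 pixel-streaming-style launch options, and are not guaranteed to match any specific UE4 version:

```python
# Illustrative sketch of the "second processing" on the cloud GPU server:
# launch the received EXE without a local display and name the stream target.
# Flag names are invented for illustration.
def build_launch_command(exe_path: str, stream_url: str) -> list:
    """Return the command line that starts the EXE as a video-stream source."""
    return [
        exe_path,
        "-RenderOffscreen",          # no local display on the GPU server
        "-StreamURL=" + stream_url,  # hypothetical flag naming the stream target
    ]

launch = build_launch_command("dist/UndergroundGarage.exe",
                              "webrtc://front-end.example/garage")
```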
The second cloud server 130 may be a cloud front-end server, and is configured to receive and perform third processing on the target video stream data to output front-end page data. Specifically, after receiving the target video stream data from the first cloud server 120, the second cloud server 130 may generate a target interaction page and an interaction portal link according to the target video stream data, where the target interaction page is used for interaction between the target user and the target Unreal Engine 4 model, and the interaction portal link is used to jump to the target interaction page. The interaction portal link may be a uniform resource locator (URL), a two-dimensional code, or the like, which is not particularly limited herein.
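A toy version of this third processing would wrap the incoming video stream in a minimal interaction page and mint a portal link that points at it. The host name and page markup are illustrative placeholders:

```python
# Toy sketch of the "third processing" on the cloud front-end server:
# embed the live video stream in a page and generate the interaction
# portal link. Host name and layout are invented for illustration.
def build_front_end_page_data(stream_url: str, model_id: str) -> dict:
    """Return the front-end page data: interaction page plus portal link."""
    page = (
        "<html><body>"
        '<video autoplay src="' + stream_url + '"></video>'  # live model view
        "</body></html>"
    )
    link = "https://front-end.example/model/" + model_id  # interaction portal link
    return {"target_interaction_page": page, "interaction_portal_link": link}

data = build_front_end_page_data("webrtc://front-end.example/garage", "garage-001")
```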
Therefore, through this system architecture, a user can interact with the target Unreal Engine 4 model on a web page without downloading a client or building model data, so that interaction convenience and user experience are greatly improved.
In a possible embodiment, the data processing system 100 may further interact with a target device 140, where the target device 140 may be an electronic device used by a target user. The target user may click on the interaction portal link through the target device 140 to log in to the target interaction page; in this case, the second cloud server 130 may output the target interaction page to the target device in response to a click instruction of the target device on the interaction portal link. Further, the target user may view an arbitrary region of the target Unreal Engine 4 model by inputting an interaction instruction, such as zooming in, zooming out, or querying, on the target device 140; in this case, the second cloud server 130 may output a target region page of the target Unreal Engine 4 model conforming to the interaction instruction to the target device in response to the interaction instruction of the target device on the target interaction page, where the interaction instruction is used to view an arbitrary region of the target Unreal Engine 4 model.
For ease of understanding, take an underground garage scene as an example, in which the target Unreal Engine 4 model data is 3D model data of the underground garage. First, the development device packages the 3D model data of the underground garage and uploads it to the cloud in EXE format. The cloud GPU server in the cloud launches the file in EXE format, converts it into a video stream, and outputs the stream to the cloud front-end server. The cloud front-end server may generate a target interaction page and a URL link of the corresponding interaction portal according to the target video stream data. The target device may then log in to the target interaction page through the URL link, directly see the 3D model of the underground garage, and interact with it, for example, checking the position of a vacant parking space or navigating a garage path, which is not particularly limited herein.
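As a toy illustration of one such interaction, a "query vacant parking space" instruction might resolve against space-state data attached to the garage model. The space identifiers and states below are fabricated for illustration:

```python
# Toy lookup the cloud front-end might run when the user issues a
# "query vacant parking space" interaction instruction. Space IDs and
# occupancy states are invented for illustration.
GARAGE_SPACES = {
    "B1-001": "occupied",
    "B1-002": "vacant",
    "B2-014": "vacant",
}

def find_vacant_spaces(spaces: dict) -> list:
    """Return the sorted IDs of all vacant parking spaces."""
    return sorted(sid for sid, state in spaces.items() if state == "vacant")

print(find_vacant_spaces(GARAGE_SPACES))  # ['B1-002', 'B2-014']
```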
Therefore, through this system architecture of the cloud GPU service, the UE4 EXE engineering service, and the web front-end service, after the target Unreal Engine 4 model data is uploaded to the cloud server, a target user can directly interact with the target Unreal Engine 4 model without downloading a huge client, so that interaction convenience and user experience are greatly improved.
The electronic device in the embodiment of the present application is described below, and the electronic device may include a target device and a server required for data processing.
Referring to fig. 2, a schematic structural diagram of an electronic device 200 according to an exemplary embodiment of the present application is shown. The electronic device 200 may be a communication-capable device, and may include various handheld devices, vehicle-mounted devices, wearable devices, computing devices, or other processing devices connected to a wireless modem, as well as various forms of user equipment (UE), mobile stations (MS), terminal devices, and the like. The electronic device 200 of the present application may include one or more of a processor 210, a memory 220, and an input-output device 230.
Processor 210 may include one or more processing cores. The processor 210 uses various interfaces and lines to connect various portions of the overall electronic device 200, and performs various functions of the electronic device 200 and processes data by running or executing instructions, programs, code sets, or instruction sets stored in the memory 220 and invoking data stored in the memory 220. The processor 210 may include one or more processing units; for example, the processor 210 may include a central processing unit (CPU), an application processor (AP), a modem processor, a graphics processing unit (GPU), an image signal processor (ISP), a controller, a video codec, a digital signal processor (DSP), a baseband processor, and/or a neural-network processing unit (NPU), etc. The controller may be a neural hub and a command center of the electronic device 200. The controller can generate operation control signals according to instruction operation codes and timing signals to complete the control of instruction fetching and instruction execution. The CPU mainly handles the operating system, the user interface, application programs, and the like; the GPU is used for rendering and drawing display content; and the modem is used for processing wireless communication. The digital signal processor is used for processing digital signals; besides digital image signals, it can also process other digital signals. For example, when the electronic device 200 is selecting a frequency bin, the digital signal processor is used to perform a Fourier transform on the frequency bin energy, or the like. Video codecs are used to compress or decompress digital video. The electronic device 200 may support one or more video codecs.
Thus, the electronic device 200 may play or record video in a variety of encoding formats, such as moving picture experts group (MPEG) 1, MPEG2, MPEG3, MPEG4, and the like. The NPU is a neural-network (NN) computing processor that can rapidly process input information by drawing on the structure of biological neural networks, for example, the transmission mode between human brain neurons, and can also continuously perform self-learning. Applications such as intelligent cognition of the electronic device 200 can be implemented through the NPU.
A memory may be provided in the processor 210 for storing instructions and data. In some embodiments, the memory in the processor 210 is a cache memory. The memory may hold instructions or data that the processor 210 has just used or used cyclically. If the processor 210 needs to reuse the instructions or data, it may call them directly from the memory. This avoids repeated accesses, reduces the waiting time of the processor 210, and improves system efficiency.
The processor 210 may include one or more interfaces, such as an inter-integrated circuit (I2C) interface, an inter-integrated circuit sound (I2S) interface, a pulse code modulation (PCM) interface, a universal asynchronous receiver/transmitter (UART) interface, a mobile industry processor interface (MIPI), a general-purpose input/output (GPIO) interface, a subscriber identity module (SIM) interface, and/or a universal serial bus (USB) interface, among others.
The I2C interface is a bi-directional synchronous serial bus comprising a serial data line (SDA) and a serial clock line (SCL). The processor 210 may contain multiple sets of I2C interfaces, through which a touch sensor, a charger, a flash, a camera, and the like may each be coupled. For example, the processor 210 may couple the touch sensor through an I2C interface, so that the processor 210 communicates with the touch sensor through the I2C interface, implementing the touch functionality of the electronic device 200.
The I2S interface may be used for audio communication. The processor 210 may include multiple sets of I2S interfaces, coupled to the audio module via the I2S interfaces, to enable communication between the processor 210 and the audio module. The audio module can transmit audio signals to the wireless communication module through the I2S interface, so that the function of answering a call through the Bluetooth headset is realized.
PCM interfaces may also be used for audio communication to sample, quantize and encode analog signals. The audio module and the wireless communication module can be coupled through the PCM interface, and particularly can transmit audio signals to the wireless communication module through the PCM interface, so that the function of answering a call through the Bluetooth headset is realized. Both the I2S interface and the PCM interface may be used for audio communication.
The UART interface is a universal serial data bus for asynchronous communications. The bus may be a bi-directional communication bus. It converts the data to be transmitted between serial communication and parallel communication. UART interfaces are typically used to connect the processor 210 with the wireless communication module. For example, the processor 210 communicates with a bluetooth module in the wireless communication module through a UART interface to implement a bluetooth function. The audio module can transmit audio signals to the wireless communication module through the UART interface, so that the function of playing music through the Bluetooth headset is realized.
The MIPI interface may be used to connect the processor 210 with peripheral devices such as a display screen, a camera, and the like. MIPI interfaces include the camera serial interface (CSI), the display serial interface (DSI), and the like. In some embodiments, the processor 210 and the camera communicate through a CSI interface to implement the photographing function of the electronic device 200. The processor 210 communicates with the display screen via a DSI interface to implement the display functionality of the electronic device 200.
The GPIO interface may be configured by software. The GPIO interface may be configured as a control signal or as a data signal. In some embodiments, a GPIO interface may be used to connect the processor 210 with a camera, display screen, wireless communication module, audio module, sensor module, or the like. The GPIO interface may also be configured as an I2C interface, an I2S interface, a UART interface, an MIPI interface, etc.
The USB interface is an interface conforming to the USB standard specification, and specifically may be a Mini USB interface, a Micro USB interface, a USB Type C interface, or the like. The USB interface may be used to connect a charger to charge the electronic device 200, or may be used to transfer data between the electronic device 200 and a peripheral device. And can also be used for connecting with a headset, and playing audio through the headset. The interface may also be used to connect other electronic devices, such as AR devices, etc.
It is understood that the processor 210 may be mapped to a System on a Chip (SOC) in an actual product, and the processing unit and/or the interface may not be integrated into the processor 210, and the corresponding functions may be implemented by a single communication Chip or electronic component. The above-described interface connection relationship between the modules is merely illustrative, and does not constitute a unique limitation on the structure of the electronic device 200.
Memory 220 may include random access memory (RAM) or read-only memory (ROM). Optionally, the memory 220 includes a non-transitory computer-readable storage medium. Memory 220 may be used to store instructions, programs, code, code sets, or instruction sets. The memory 220 may include a stored-program area and a stored-data area. The stored-program area may store instructions for implementing an operating system, which may be an Android system (including a system developed based on the Android system), an iOS system developed by Apple Inc. (including a system developed based on the iOS system), or another system; instructions for implementing at least one function (such as a touch function, a sound playing function, or an image playing function); instructions for implementing the various method embodiments described below; and the like. The stored-data area may also store data created by the electronic device 200 in use (e.g., phonebook, audiovisual data, chat log data), and the like.
The input-output device 230 may include a touch display screen for receiving touch operations by a user using a finger, a stylus, or any other suitable object on or near it, and for displaying a user interface for each application. The touch display screen is typically disposed on the front panel of the electronic device 200. The touch display screen may be designed as a full screen, a curved screen, or a special-shaped screen. The touch display screen may also be designed as a combination of a full screen and a curved screen, or a combination of a special-shaped screen and a curved screen, which is not limited in this embodiment of the present application.
It should be understood that the electronic device 200 may be any device according to the embodiments of the present application, which is not specifically limited herein.
The following describes a data processing method in an embodiment of the present application with reference to fig. 3, and fig. 3 is a schematic flow chart of the data processing method provided in the embodiment of the present application, where the method is applied to a data processing system, and the data processing system includes a development device, a first cloud server and a second cloud server, and the method specifically includes the following steps:
in step 301, the development device performs first processing on the target Unreal Engine 4 model data to output target engineering data in a preset file format to the first cloud server.
The preset file format may be the EXE format, and the step of the first processing includes packaging the target Unreal Engine 4 model data into an executable file in EXE format, that is, the target engineering data.
And step 302, receiving and performing second processing on the target engineering data through the first cloud server so as to output target video stream data to a second cloud server.
The second processing step includes converting the target engineering data into target video stream data in the form of video stream and outputting the target video stream data to the second cloud server.
And step 303, receiving and performing third processing on the target video stream data through the second cloud server so as to output front-end page data.
Wherein the step of the third processing includes generating front-end page data from the target video stream data, the front-end page data including a target interaction page and an interaction portal link.
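The three steps above can be sketched end to end as a chain of placeholder functions; the string formats, URL schemes, and names below are invented for illustration and do not correspond to the actual services:

```python
# End-to-end sketch of steps 301-303 as placeholder functions. String
# formats and URL schemes are invented for illustration only.
def first_processing(model_name: str) -> str:
    """Development device: package the UE4 model data as an EXE (step 301)."""
    return model_name + ".exe"

def second_processing(engineering_data: str) -> str:
    """First cloud server: launch the EXE, expose it as a video stream (step 302)."""
    return "stream://gpu-server/" + engineering_data.rsplit(".", 1)[0]

def third_processing(stream_url: str) -> dict:
    """Second cloud server: produce the front-end page data (step 303)."""
    return {
        "target_interaction_page": '<video src="' + stream_url + '"></video>',
        "interaction_portal_link": "https://front-end.example/model/garage",
    }

page_data = third_processing(second_processing(first_processing("UndergroundGarage")))
```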
By the above method, the target Unreal Engine 4 model can be uploaded to the cloud server and processed correspondingly, so that a user can interact with the target Unreal Engine 4 model without downloading a huge client or an Unreal Engine 4 model file, and user experience is greatly improved.
Further, another data processing method in the embodiment of the present application is described below with reference to fig. 4, and fig. 4 is a schematic flow chart of another data processing method provided in the embodiment of the present application, where the method is applied to a data processing system, the data processing system includes a development device, a first cloud server and a second cloud server, and the method specifically includes the following steps:
in step 401, the development device performs first processing on the target Unreal Engine 4 model data, so as to output target engineering data in a preset file format to the first cloud server.
And step 402, receiving and performing second processing on the target engineering data through the first cloud server so as to output target video stream data to a second cloud server.
And step 403, receiving and performing third processing on the target video stream data through the second cloud server so as to output front-end page data.
And step 404, responding to a click command of target equipment aiming at the interaction entrance link through the second cloud server, and outputting the target interaction page to the target equipment.
The target device is an electronic device used by a target user, the interactive portal link can be a URL link, and the target user can jump to the target interactive page by clicking the interactive portal link through the target device.
And step 405, responding to an interaction instruction of the target device for the target interaction page through the second cloud server, and outputting a target region page of the target Unreal Engine 4 model conforming to the interaction instruction to the target device.
Wherein the interaction instruction is used for viewing any region of the target Unreal Engine 4 model.
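Steps 404 and 405 amount to dispatching on the incoming instruction. A minimal dispatcher, with invented instruction names, region identifiers, and page payloads, might look like:

```python
# Minimal sketch of the second cloud server's dispatch for steps 404-405.
# Instruction names, region identifiers, and page payloads are invented.
def handle_instruction(instruction: dict) -> dict:
    """Map a device instruction to the page the second cloud server returns."""
    action = instruction.get("action")
    if action == "click_portal":
        # Step 404: a click on the interaction portal link returns the page.
        return {"page": "target_interaction_page"}
    if action in ("zoom_in", "zoom_out", "query"):
        # Step 405: an interaction instruction returns the matching region page.
        region = instruction.get("region", "default")
        return {"page": "region:" + region, "action": action}
    raise ValueError("unsupported instruction: " + str(action))

result = handle_instruction({"action": "zoom_in", "region": "B1"})
```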
The target Unreal Engine 4 model can be uploaded to the cloud server by the above method and processed correspondingly, so that a user can interact with the target Unreal Engine 4 model without downloading a huge client or an Unreal Engine 4 model file, and user experience is greatly improved.
The steps not described in detail above may refer to all or part of the steps of the method in fig. 3, and are not described herein.
The foregoing description of the embodiments of the present application has been presented primarily in terms of a method-side implementation. It will be appreciated that the electronic device, in order to achieve the above-described functions, includes corresponding hardware structures and/or software modules that perform the respective functions. Those of skill in the art will readily appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as hardware or combinations of hardware and computer software. Whether a function is implemented as hardware or computer software driven hardware depends upon the particular application and design constraints imposed on the solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
The embodiment of the application can divide the functional units of the electronic device according to the method example, for example, each functional unit can be divided corresponding to each function, and two or more functions can be integrated in one processing unit. The integrated units may be implemented in hardware or in software functional units. It should be noted that, in the embodiment of the present application, the division of the units is schematic, which is merely a logic function division, and other division manners may be implemented in actual practice.
The embodiment of the application also provides a computer storage medium, wherein the computer storage medium stores a computer program for electronic data exchange, and the computer program causes a computer to execute part or all of the steps of any one of the method embodiments described above, the computer including an electronic device.
Embodiments of the present application also provide a computer program product comprising a non-transitory computer-readable storage medium storing a computer program operable to cause a computer to perform part or all of the steps of any one of the methods described in the method embodiments above. The computer program product may be a software installation package, said computer comprising an electronic device.
It should be noted that, for simplicity of description, the foregoing method embodiments are all described as a series of acts, but it should be understood by those skilled in the art that the present application is not limited by the order of acts described, as some steps may be performed in other orders or concurrently in accordance with the present application. Further, those skilled in the art will also appreciate that the embodiments described in the specification are all preferred embodiments, and that the acts and modules referred to are not necessarily required for the present application.
In the foregoing embodiments, the descriptions of the embodiments are emphasized, and for parts of one embodiment that are not described in detail, reference may be made to related descriptions of other embodiments.
In the several embodiments provided by the present application, it should be understood that the disclosed apparatus may be implemented in other manners. For example, the above-described apparatus embodiments are merely illustrative, such as the above-described division of units, merely a division of logic functions, and there may be additional manners of dividing in actual implementation, such as multiple units or components may be combined or integrated into another system, or some features may be omitted, or not performed. Alternatively, the coupling or direct coupling or communication connection shown or discussed with each other may be an indirect coupling or communication connection via some interfaces, devices or units, or may be in electrical or other forms.
The units described above as separate components may or may not be physically separate, and components shown as units may or may not be physical units, may be located in one place, or may be distributed over a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, each functional unit in the embodiments of the present application may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit. The integrated units may be implemented in hardware or in software functional units.
The integrated units described above, if implemented in the form of software functional units and sold or used as stand-alone products, may be stored in a computer-readable memory. Based on such understanding, the technical solution of the present application, in essence, or the part contributing to the prior art, or all or part of the technical solution, may be embodied in the form of a software product stored in a memory, including several instructions for causing a computer device (which may be a personal computer, a server, a network device, etc.) to perform all or part of the steps of the methods of the various embodiments of the present application. The memory includes various media that can store program code, such as a USB flash drive, a read-only memory (ROM), a random access memory (RAM), a removable hard disk, a magnetic disk, or an optical disk.
Those of ordinary skill in the art will appreciate that all or part of the steps in the various methods of the above embodiments may be implemented by a program instructing related hardware, where the program may be stored in a computer-readable memory, and the memory may include a flash disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, an optical disk, or the like.
The foregoing has described the embodiments of the present application in detail, and the principles and implementations of the present application have been explained herein using specific examples. The above description of the embodiments is provided solely to facilitate understanding of the method and core concepts of the present application. Meanwhile, those skilled in the art may make changes to the specific implementations and the application scope in accordance with the ideas of the present application. In view of the above, the content of this description should not be construed as limiting the present application.