Disclosure of Invention
In order to solve the technical problems in the prior art that projection-based immersive content interaction methods require an additional image acquisition device, incur high cost, and rely on computationally heavy algorithms such as image recognition that are unsuitable for low-cost devices with weak computing capability, the present invention provides the following technical solutions.
In one aspect, the invention features an interaction method for immersive content, comprising:
S1: acquiring initial attitude information of the projection equipment in space, wherein the attitude information comprises the spatial attitude and the orientation of the projection equipment;
S2: building a virtual 3D scene model according to a file to be projected, and establishing a virtual camera and a virtual head auditory model in the virtual 3D scene model, wherein the virtual 3D scene model comprises an object set to change state or position after receiving an interaction instruction;
S3: mapping the virtual camera and the virtual head auditory model to the projection equipment, and calculating the projection picture and audio data under the current attitude information and/or interaction instruction;
S4: outputting the projection picture and audio data under the current attitude information and/or interaction instruction.
Preferably, the 3D scene model includes foreground objects, background objects and audio sources, and the foreground objects and background objects change position and appearance over time. This scene model structure gives the projected content a more immersive effect.
Further preferably, the object set to change state or position after receiving the interaction instruction is one or more of the foreground objects. Setting a foreground object so that it can change state or position after receiving an interaction instruction enables interactive operation during projection.
Preferably, the attitude information in step S1 is obtained by performing attitude solution with an attitude sensor disposed on the projection equipment, the attitude solution uses a Kalman-filter-based sensor fusion algorithm, and the attitude sensor is a 6-axis or 9-axis sensor comprising an acceleration sensor, an angular velocity sensor and a geomagnetic sensor. With this algorithm, the attitude information of the projection equipment can be obtained quickly, and the multi-axis sensor yields a more accurate attitude and orientation.
Further preferably, the field of view of the virtual camera and the aspect ratio of its imaging plane are the same as those of the projection equipment, and the initial attitude information of the virtual camera and the virtual head auditory model is the same as that of the projection equipment. With the virtual camera and the virtual head auditory model configured in this way, the projection content can be calculated in advance and then output directly by the projection equipment.
Preferably, calculating the projection picture and/or audio data under the interaction instruction in step S3 is specifically:
in response to the axis of the virtual camera intersecting a foreground object that is set to change state or position after receiving an interaction instruction, recording the object as the virtual object to be operated;
in response to the user issuing an interaction instruction while a virtual object to be operated currently exists, switching the virtual object to be operated into an interactive state, wherein the interactive state comprises a form transformation, a position transformation, a size transformation or a combination thereof;
and in response to the user releasing the interaction instruction while a virtual object to be operated currently exists, restoring the virtual object to be operated to the normal state and clearing the record of the virtual object to be operated.
According to a second aspect of the invention, a computer-readable storage medium is proposed, on which one or more computer programs are stored, which when executed by a computer processor implement the above-mentioned method.
According to a third aspect of the invention, there is provided an interactive system for immersive content, the system comprising:
the projection equipment is used for receiving a picture to be projected and projecting the picture to a space surface;
the attitude sensor module is used for acquiring attitude information of the projection equipment in a current space, wherein the attitude information comprises a spatial attitude and an orientation of the projection equipment;
the interaction module is used for receiving an interaction instruction made by a user;
the processor module is used for building a virtual 3D scene model according to a file to be projected and establishing a virtual camera and a virtual head auditory model in the virtual 3D scene model, wherein the virtual 3D scene model comprises an object set to change state or position after receiving an interaction instruction; the processor module maps the virtual camera and the virtual head auditory model to the projection equipment and obtains the projection picture and audio data under the current attitude information and/or interaction instruction, so that when the projection equipment rotates in space, the projection picture and audio data of the projection equipment are determined according to the different attitude information and/or interaction instructions.
Preferably, the projection equipment further comprises an audio output module for outputting audio of at least one channel or stereo audio of at least two channels, the attitude sensor module comprises a 6-axis or 9-axis sensor measuring acceleration, angular velocity and geomagnetic data, the interaction module comprises a button, a rocker or a handle arranged on the system, and the interaction instruction comprises a click instruction, a toggle instruction or different key instructions. Multi-channel audio output improves the immersive effect, the multi-axis sensor yields a more accurate attitude and orientation, the varied interaction modules allow different projection equipment to be operated in different ways, and different interaction instructions can produce different interaction effects.
Preferably, the processor module is further configured to: record a foreground object as the virtual object to be operated in response to the axis of the virtual camera intersecting the foreground object and the foreground object being set to change state or position after receiving an interaction instruction; in response to the user issuing an interaction instruction while a virtual object to be operated currently exists, switch the virtual object to be operated into an interactive state, wherein the interactive state comprises a form transformation, a position transformation, a size transformation or a combination thereof; and in response to the user releasing the interaction instruction while a virtual object to be operated currently exists, restore the virtual object to be operated to the normal state and clear the record of the virtual object to be operated.
The invention provides an interaction method and system for immersive content that skillfully combine an attitude sensor, an interaction device and projection equipment: the attitude and orientation of the current projection are located by the attitude sensor, the position, state and other properties of a virtual object are changed according to the user operation received by the interaction device, and the content that changes with the attitude and orientation is calculated and projected. The projected content is coherent: the small pictures displayed at different attitudes and orientations can be spliced into a continuous 360-degree panoramic picture, so a 360-degree panorama can be displayed with a single small projector. The interaction device used is simple and is integrated with the projection equipment. The related algorithms are simple and use few system resources such as memory, making the solution suitable for low-cost devices with weak computing capability; the equipment is small, the hardware cost is low, no installation is needed, it can be used flexibly at any time, and it is suitable for various occasions such as homes and schools.
Detailed Description
The present application will be described in further detail with reference to the following drawings and examples. It is to be understood that the specific embodiments described herein are merely illustrative of the relevant invention and not restrictive of the invention. It should be noted that, for convenience of description, only the portions related to the related invention are shown in the drawings.
It should be noted that the embodiments in the present application and the features of those embodiments may be combined with each other when there is no conflict. The present application will be described in detail below with reference to the accompanying drawings and in conjunction with the embodiments.
Fig. 1 shows a flowchart of an interaction method for immersive content according to an embodiment of the present application. As shown in fig. 1, the method comprises the steps of:
S101: initial attitude information of a projection device in space is obtained, wherein the attitude information includes the spatial attitude and orientation of the projection device. Knowing the initial attitude information of the projection device makes it easier to determine what image needs to be output under the device's current attitude.
In a specific embodiment, the attitude information may be obtained by an attitude sensor disposed on the projection device. The attitude sensor may be a 6-axis attitude sensor comprising 3-axis acceleration plus 3-axis angular velocity, or a 3-axis magnetic sensor may be added to form a 9-axis sensor. With a 6-axis attitude sensor, the accurate attitude and the orientation (yaw angle) of the device relative to its initial state can be obtained; with a 9-axis attitude sensor, the accurate attitude of the device and its absolute orientation relative to the earth can be obtained.
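As an illustration of such sensor fusion, the following minimal Python sketch blends gyroscope integration with accelerometer-derived pitch and roll. The patent describes a Kalman-filter-based fusion; here a much simpler complementary filter stands in purely for illustration, and the function names (`update_attitude`, `accel_to_pitch_roll`), the gain value and the axis conventions are assumptions rather than the actual implementation.

```python
import math

def accel_to_pitch_roll(ax, ay, az):
    """Estimate pitch and roll (radians) from the gravity direction measured
    by a 3-axis accelerometer (valid when the device is roughly at rest)."""
    pitch = math.atan2(-ax, math.sqrt(ay * ay + az * az))
    roll = math.atan2(ay, az)
    return pitch, roll

def update_attitude(att, gyro, accel, dt, k=0.02):
    """One fusion step: integrate the 3-axis angular velocity, then blend the
    accelerometer pitch/roll back in.  `att` is (yaw, pitch, roll) in radians,
    `gyro` is (wx, wy, wz) in rad/s, `k` is the correction gain.  A 9-axis
    sensor would additionally correct yaw with magnetometer data."""
    yaw, pitch, roll = att
    wx, wy, wz = gyro
    # Dead-reckon with the gyroscope (small-angle approximation).
    yaw += wz * dt
    pitch += wy * dt
    roll += wx * dt
    # Drift correction of pitch/roll from the measured gravity vector.
    acc_pitch, acc_roll = accel_to_pitch_roll(*accel)
    pitch = (1 - k) * pitch + k * acc_pitch
    roll = (1 - k) * roll + k * acc_roll
    return yaw, pitch, roll

# Example: device held still and level, updated at 100 Hz for one second.
att = (0.0, 0.0, 0.0)
for _ in range(100):
    att = update_attitude(att, gyro=(0.0, 0.0, 0.0), accel=(0.0, 0.0, 9.81), dt=0.01)
print([round(math.degrees(a), 2) for a in att])   # ~[0.0, 0.0, 0.0]
```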
In a particular embodiment, a projection device generally includes a light source, a display assembly, and a set of optical mirrors. Projection technologies that may be used include single-chip LCD (liquid crystal display) projection, 3LCD projection, DLP (digital light processing) projection, LCOS (reflective micro-LCD) projection, and the like. A device using a suitable projection technology is chosen according to the projection environment and the picture requirements.
S102: a virtual 3D scene model is built according to a file to be projected, and a virtual camera and a virtual head auditory model are established in the virtual 3D scene model, wherein the virtual 3D scene model comprises an object set to change state or position after receiving an interaction instruction. Using the constructed virtual 3D scene model, the picture and audio output for the virtual camera and the virtual head auditory model, including the output under an interaction instruction, can be calculated in advance and then projected correspondingly by the projection device.
In a specific embodiment, a virtual 3D scene model is established according to the picture and audio information in the file to be projected, where the scene comprises background objects, foreground objects and sound sources; the background objects and foreground objects can change position, appearance and other properties over time. A virtual camera and a virtual head auditory model are established at the center of the virtual scene, with the initial attitude and orientation being the default attitude and orientation. Preferably, the object set to change state or position after receiving an interaction instruction is one or more of the foreground objects. Making the state or position of a foreground object changeable enables an interactive projection experience. It should be appreciated that a background object may also be given changeable state or position attributes in the same way and still achieve the technical effects of the present invention.
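A minimal sketch of how such a scene description could be organized in code is given below; the class and field names (`VirtualObject`, `VirtualScene`, `interactable`) are illustrative assumptions, not part of the invention, and the sample objects echo the letter example described later in the text.

```python
from dataclasses import dataclass, field

@dataclass
class VirtualObject:
    """One element of the virtual 3D scene: background object, foreground
    object or sound source."""
    name: str
    kind: str                      # "background" | "foreground" | "sound_source"
    position: tuple                # (x, y, z) in scene coordinates
    interactable: bool = False     # may change state/position on an interaction instruction
    state: str = "normal"          # "normal" or an interactive state such as "dragging"
    scale: float = 1.0

@dataclass
class VirtualScene:
    objects: list = field(default_factory=list)

    def interactable_foreground(self):
        return [o for o in self.objects if o.kind == "foreground" and o.interactable]

# A small scene resembling the example described later in the text.
scene = VirtualScene(objects=[
    VirtualObject("panorama_wall", "background", (0.0, 0.0, 0.0)),
    VirtualObject("letter_B", "foreground", (0.3, 0.2, 3.0), interactable=True),
    VirtualObject("narration", "sound_source", (0.0, 1.0, 2.0)),
])
print([o.name for o in scene.interactable_foreground()])   # ['letter_B']
```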
S103: the virtual camera and the virtual head auditory model are mapped to the projection device, and the projection picture and audio data under the current attitude information and/or interaction instruction are calculated. Through this mapping, the picture and audio information of the virtual camera and the virtual head auditory model can be obtained directly on the projection device, and the corresponding picture and audio are projected according to the different attitude information and interaction instructions.
In a specific embodiment, mapping the virtual camera and the virtual head auditory model to the projection device specifically means: the field of view of the virtual camera is the same as the projection field of view of the projection device, and the aspect ratio of the imaging plane of the virtual camera is the same as the aspect ratio of the projected picture of the projection device; the virtual head auditory model is established with the same initial position, attitude and orientation as the virtual camera.
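As an illustration of this mapping, the sketch below builds a standard perspective projection matrix from a projector's (assumed vertical) field of view and picture aspect ratio, so that the virtual camera renders exactly the frustum the projector throws onto the wall. The 30° field of view and 4:3 aspect ratio mirror the worked example later in the text; the function name and near/far planes are hypothetical.

```python
import math

def perspective_matrix(fov_y_deg, aspect, near=0.1, far=100.0):
    """OpenGL-style perspective projection matrix derived from the projector's
    field of view and picture aspect ratio."""
    f = 1.0 / math.tan(math.radians(fov_y_deg) / 2.0)
    return [
        [f / aspect, 0.0, 0.0, 0.0],
        [0.0, f, 0.0, 0.0],
        [0.0, 0.0, (far + near) / (near - far), 2 * far * near / (near - far)],
        [0.0, 0.0, -1.0, 0.0],
    ]

# A projector with a 30-degree field of view and a 4:3 picture.
proj = perspective_matrix(fov_y_deg=30.0, aspect=4 / 3)
print(round(proj[0][0], 3), round(proj[1][1], 3))
```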
In a specific embodiment, the imaging picture of the virtual scene in the virtual camera is calculated using standard computer 3D graphics methods, and the audio synthesized in the virtual head auditory model from the sound emitted by each sound source in the virtual scene is calculated, for example using a method based on head-related transfer functions (HRTF).
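A genuinely HRTF-based renderer convolves each source signal with measured transfer functions; the sketch below is only a crude stand-in that pans and attenuates a source according to its azimuth and distance relative to the virtual head, to show the kind of per-source computation involved. All names and the gain law are assumptions for illustration.

```python
import math

def stereo_gains(listener_pos, listener_yaw, source_pos):
    """Rough spatialization stand-in: derive left/right channel gains from the
    source azimuth relative to the virtual head, plus distance attenuation.
    A real implementation would use measured HRTF filters instead."""
    dx = source_pos[0] - listener_pos[0]
    dz = source_pos[2] - listener_pos[2]
    azimuth = math.atan2(dx, dz) - listener_yaw        # 0 = straight ahead
    distance = max(math.hypot(dx, dz), 1e-6)
    pan = math.sin(azimuth)                            # -1 = hard left, +1 = hard right
    attenuation = 1.0 / (1.0 + distance)
    left = attenuation * math.sqrt((1.0 - pan) / 2.0)
    right = attenuation * math.sqrt((1.0 + pan) / 2.0)
    return left, right

# A source 45 degrees to the right of the virtual head, 2 m away.
print(stereo_gains((0, 0, 0), 0.0, (math.sqrt(2), 0, math.sqrt(2))))
```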
In a specific embodiment, the calculation of the projection picture under the interactive instruction is specifically as follows:
in response to the axis of the virtual camera intersecting a foreground object that is set to change state or position after receiving an interaction instruction, the object is recorded as the virtual object to be operated. Whether the axis of the virtual camera intersects a foreground object can be judged as follows: a ray is cast from the position of the virtual camera along the camera axis, and it is calculated whether the ray intersects a virtual object in the virtual scene; the intersection test uses a bounding sphere of the virtual object, i.e. a sphere of preset radius centered at the position of the virtual object; the distance from the center of the bounding sphere to the ray along the camera axis is calculated, and if this distance is smaller than the radius of the bounding sphere the ray intersects the object, otherwise it does not.
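The intersection test just described reduces to comparing the distance from the bounding-sphere center to the camera-axis ray with the sphere radius. A minimal sketch (function name hypothetical):

```python
import math

def ray_hits_sphere(ray_origin, ray_dir, center, radius):
    """Ray/bounding-sphere test: the ray starts at the virtual camera position
    and runs along the camera axis; the object is hit when the distance from
    the sphere center to the ray is smaller than the sphere radius."""
    ox, oy, oz = ray_origin
    dx, dy, dz = ray_dir
    cx, cy, cz = center
    # Normalise the ray direction.
    norm = math.sqrt(dx * dx + dy * dy + dz * dz)
    dx, dy, dz = dx / norm, dy / norm, dz / norm
    # Vector from the ray origin to the sphere center, projected onto the ray.
    vx, vy, vz = cx - ox, cy - oy, cz - oz
    t = vx * dx + vy * dy + vz * dz
    if t < 0:                       # sphere is behind the camera
        return False
    # Closest point on the ray to the sphere center.
    px, py, pz = ox + t * dx, oy + t * dy, oz + t * dz
    dist = math.sqrt((cx - px) ** 2 + (cy - py) ** 2 + (cz - pz) ** 2)
    return dist < radius

# Camera at the origin looking along +z, object 3 m ahead, slightly off-axis.
print(ray_hits_sphere((0, 0, 0), (0, 0, 1), (0.2, 0.1, 3.0), 0.5))   # True
```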
In response to the user issuing an interaction instruction while a virtual object to be operated currently exists, the virtual object to be operated is switched into an interactive state, wherein the interactive state comprises a form transformation, a position transformation, a size transformation or a combination thereof. In a specific embodiment, the interactive state may be a dragging state, and the display effect of an object in the dragging state is that its size becomes 1.2 times the original size. In response to the user releasing the interaction instruction while a virtual object to be operated currently exists, the virtual object to be operated is restored to the normal state and the record of the virtual object to be operated is cleared.
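A compact sketch of this press/release logic, using the 1.2x dragging scale mentioned above; the class and function names are illustrative only.

```python
class SceneObject:
    """Minimal stand-in for a foreground object that can be operated."""
    def __init__(self, name):
        self.name = name
        self.state = "normal"
        self.scale = 1.0

def handle_button(pressed, was_pressed, target):
    """Apply the interaction logic described above to the currently recorded
    virtual object to be operated (`target`, possibly None).  Returns the
    updated record (None once the interaction instruction is released)."""
    if target is None:
        return None
    if pressed and not was_pressed:
        # Interaction instruction issued: enter the interactive (dragging) state.
        target.state = "dragging"
        target.scale = 1.2            # displayed 1.2 times larger while dragged
    elif was_pressed and not pressed:
        # Interaction instruction removed: restore the object and clear the record.
        target.state = "normal"
        target.scale = 1.0
        return None
    return target

letter_b = SceneObject("letter_B")
target = handle_button(True, False, letter_b)     # press: dragging, scale 1.2
print(letter_b.state, letter_b.scale)             # dragging 1.2
target = handle_button(False, True, target)       # release: restored, record cleared
print(letter_b.state, letter_b.scale, target)     # normal 1.0 None
```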
S104: the projection picture and audio data under the current attitude information and/or interaction instruction are output. Finally, the projection picture under the current attitude information is output onto a surface in space by the projection device together with the audio data, while the projection picture and audio output change according to the user's interaction instructions, providing an immersive interactive experience.
Fig. 2 shows an interaction method for immersive content according to a specific embodiment of the present invention. As shown in fig. 2, the method comprises the following steps:
Step 201, acquiring the initial attitude and orientation of the device and recording the initial orientation; and establishing a virtual 3D scene model and a virtual camera. After the system is started, the initial attitude and orientation of the device are acquired from the attitude sensor, and the initial orientation of the device is recorded;
in a specific embodiment, a virtual 3D scene model is established, where the scene comprises background objects and foreground objects whose position, appearance and other properties can change over time; some foreground objects are interactable and others are not. Constructing interactable objects makes interactive operation possible.
In a specific embodiment, a virtual camera is established at the center of the virtual scene, with the initial attitude and orientation being the default attitude and orientation; the field of view of the virtual camera is the same as the projection field of view of the projection device, and the aspect ratio of the imaging plane of the virtual camera is the same as the aspect ratio of the projected picture of the projection device.
Step 202, obtaining the current absolute attitude and orientation of the device and calculating the relative orientation. Based on the initial orientation recorded in step 201, the orientation of the device relative to its initial state is calculated.
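A minimal sketch of this relative-orientation step, wrapping the yaw difference into (-180°, 180°]; the sample values are those used in the Fig. 4 walk-through later in the text, and the function name is hypothetical.

```python
def relative_yaw(current_yaw_deg, initial_yaw_deg):
    """Orientation of the device relative to the orientation recorded at
    start-up, wrapped into (-180, 180] degrees."""
    delta = (current_yaw_deg - initial_yaw_deg) % 360.0
    if delta > 180.0:
        delta -= 360.0
    return delta

# Worked example from the figures: initial yaw -22.5 degrees.
print(relative_yaw(-22.5, -22.5))   # 0.0  -> virtual camera orientation (0, 10, 0)
print(relative_yaw(-5.0, -22.5))    # 17.5 -> virtual camera orientation (17.5, 10, 0)
```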
Step 203, setting the attitude and orientation of the virtual camera in the virtual 3D scene, and determining the virtual object to be operated pointed at by the virtual camera. The attitude and orientation of the virtual camera are specifically the attitude and orientation obtained in step 202.
In a specific embodiment, the virtual objects in the virtual 3D scene that intersect the direction pointed at by the axis of the virtual camera are calculated, and one of the interactable virtual objects is selected and recorded as the virtual object to be operated.
Step 204, acquiring the state of the interaction device and operating the virtual object to be operated in the virtual 3D scene. If the user performs an interactive operation on the interaction device and a virtual object to be operated currently exists, the state, position and other properties of the virtual object to be operated are changed.
Step 205, calculating an imaging picture of the virtual scene in the virtual camera, and projecting a current picture.
By repeating the execution from step 203, the projection apparatus of the present invention can change the projected picture in real time as the attitude and orientation change.
With this method, the projection of the virtual 3D scene and the interaction with it are completed on the same device: the interactive operation on virtual objects is calculated at the same time as the immersive projection content. The change of the projection content is therefore consistent; in essence, when the projection device rotates in space, the pictures projected in different directions can be spliced into a complete 360-degree surrounding panoramic picture. The projection lens projects only a partial area at a time, and the projection content is changed by changing the spatial attitude and orientation of the projection lens. While exploring the immersive panoramic picture with the projection device, the user operates the virtual object pointed at by the projection lens with the interaction device, so the interaction mode is simple.
With continued reference to fig. 3, fig. 3 shows a block diagram of an interactive system for immersive content according to a specific embodiment of the present invention. As shown in fig. 3, the projection system includes an attitude sensor 301, a processor 302, a projection device 303, and an interaction module 304. The attitude sensor 301 obtains the attitude and orientation of the system in space. The processor module 302 has a certain computing capability: it acquires the attitude and orientation of the system in space and judges which virtual object in the virtual 3D scene needs to be operated; it acquires the current state of the interaction module 304 and changes the virtual object accordingly; it also calculates the picture that the system needs to project under the current attitude and orientation and sends it to the projection device 303, and calculates the audio that the system needs to play under the current attitude and orientation and sends it to the projection device 303. The projection device 303 receives the picture to be projected and projects it onto a surface in real space using optical principles.
In a specific embodiment, the audio output module can output at least 1 channel of audio or at least 2 channels of stereo audio, and may be a speaker or a headphone. The system may also include a housing 305 that can be held in the hand or fixed to a movable carrier such as the user's body; when the system is used, the attitude and orientation of the attitude sensor 301 in space are changed by rotating and moving the housing. The handheld arrangement lets the user rotate the device toward the desired picture position and thus obtain the image and audio content corresponding to that attitude.
In a specific embodiment, the processor 302 is further configured to create a virtual 3D scene 311. The virtual 3D scene 311 has a certain spatial size and shape and contains a plurality of virtual objects 312, which specifically include background objects, foreground objects, sound sources and the like; they can change position, appearance and sound over time, and some of them are marked as interactable. The processor 302 is also used to change the attitude and orientation of the virtual camera 313 and the virtual head 314 located at the center of the virtual 3D scene 311, to calculate the picture and sound that the system needs to project at the current moment, and to determine the virtual object that currently needs to be interacted with.
In a specific embodiment, the processor module is further configured to: record a foreground object as the virtual object to be operated in response to the axis of the virtual camera intersecting the foreground object and the foreground object being set to change state or position after receiving an interaction instruction; in response to the user issuing an interaction instruction while a virtual object to be operated currently exists, switch the virtual object to be operated into an interactive state, wherein the interactive state comprises a form transformation, a position transformation, a size transformation or a combination thereof; and in response to the user releasing the interaction instruction while a virtual object to be operated currently exists, restore the virtual object to be operated to the normal state and clear the record of the virtual object to be operated.
In a specific embodiment, the attitude sensor 301 may be a 6-axis attitude sensor comprising 3-axis acceleration plus 3-axis angular velocity, or a 9-axis sensor formed by adding a 3-axis magnetic sensor. The projection device 303 generally includes components such as a light source, a display assembly, and optics; the available projection technologies include single-chip LCD (liquid crystal display) projection, 3LCD projection, DLP (digital light processing) projection, LCOS (reflective micro-LCD) projection, and the like, each using different specific components chosen according to the projection environment and picture requirements.
Compared with the prior art, in which many projectors and additional image acquisition equipment are needed to complete the interaction, the equipment is bulky and expensive, must be installed and calibrated in advance, is applicable to few occasions, and relies on computationally heavy computer vision algorithms unsuitable for devices with weak computing capability, the present invention achieves immersive interaction with a single small, low-cost device that needs no installation and uses only lightweight algorithms.
With continued reference to FIG. 4, FIGS. 4a-d are diagrams of the effects of an interactive system for immersive content according to a specific embodiment of the present application. As shown in FIGS. 4a-d, 411 is a wall surface in real space onto which a picture can be projected; the projection device 412 corresponds to the whole apparatus described above, includes a housing that can be held by hand, and is held by the user while moving and rotating in space; the projection device 412 has a button as the interaction means; 413 and 415 denote the pictures projected onto the wall 411 by the light emitted from the projection device 412.
Referring first to fig. 4a: 401, 402 and 403 are walls and the ground in the virtual 3D scene, which in total comprises the 6 inner surfaces of a cube, of which only 3 faces are shown for clarity. A 360-degree panorama is projected in advance into 6 square maps using cube mapping projection, and the 6 square maps are attached to the 6 inner surfaces of the cube as the background of the virtual scene; it should be appreciated that the virtual scene background is not limited to the cube mapping method, and various projection methods can achieve the technical effects of the invention. The 5 foreground objects 404 in the virtual 3D scene, three-dimensional letter models of "A" through "E", are arranged along a vertical arc; in a specific embodiment, the 5 objects are marked as interactable objects, and the supported interactive operation is movement along the vertical arc. The virtual camera 405 is located inside the cube, in this embodiment at coordinates (0, 0, 1.5); the field of view (FOV) of the virtual camera 405 is set to be the same as the projection field of view of the projection device 412, namely 30°; the aspect ratio of the imaging plane of the virtual camera 405 is set to be the same as the projection aspect ratio of the projection device 412, namely 4:3.
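For illustration, the sketch below shows one common cube-mapping convention: each texel of a cube face is turned into a viewing direction, and that direction is converted to the longitude/latitude coordinates of the source panorama to fill in the 6 square maps. The exact face orientations vary between renderers, so this particular layout and the function names are assumptions.

```python
import math

def cube_face_direction(face, u, v):
    """Direction vector for texel (u, v) in [0, 1]^2 of one cube-map face.
    Sampling the panorama along this direction for every texel of every face
    yields the 6 square maps applied to the cube's inner surfaces."""
    a, b = 2.0 * u - 1.0, 2.0 * v - 1.0   # face coordinates in [-1, 1]
    dirs = {
        "+x": (1.0, -b, -a), "-x": (-1.0, -b, a),
        "+y": (a, 1.0, b),   "-y": (a, -1.0, -b),
        "+z": (a, -b, 1.0),  "-z": (-a, -b, -1.0),
    }
    x, y, z = dirs[face]
    n = math.sqrt(x * x + y * y + z * z)
    return x / n, y / n, z / n

def direction_to_equirect(x, y, z):
    """Longitude/latitude texture coordinates (in [0, 1]) of the panorama
    pixel seen along direction (x, y, z)."""
    lon = math.atan2(x, z)                         # -pi .. pi
    lat = math.asin(max(-1.0, min(1.0, y)))
    return (lon / (2 * math.pi) + 0.5), (0.5 - lat / math.pi)

# The center texel of the +z face looks straight ahead, i.e. the panorama center.
print(direction_to_equirect(*cube_face_direction("+z", 0.5, 0.5)))   # (0.5, 0.5)
```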
With continued reference to FIG. 4b: the user holds the projection device 412, points it at the left-middle area of the wall 411, and activates the projection device 412. The processor obtains the current initial attitude and orientation of the projection device 412, expressed as orientation (yaw angle), pitch angle and roll angle (-22.5, 10, 0), and records the initial absolute orientation as -22.5 degrees. The relative orientation is calculated as -22.5 - (-22.5) = 0°, so the current relative attitude and orientation is (0, 10, 0); as shown in fig. 4a, the attitude and orientation of the virtual camera 405 is set to (0, 10, 0);
from the position of the virtual camera 405, a ray is cast along the camera axis, and it is calculated whether the ray intersects a virtual object in the virtual scene. In this embodiment, a bounding sphere of the virtual object is used for the intersection test, the bounding sphere being a sphere of preset radius centered at the position of the virtual object; the distance from the center of the bounding sphere to the ray along the camera axis is calculated, and the ray is judged to intersect the object if this distance is smaller than the radius of the bounding sphere;
in this specific embodiment, the calculation shows that the ray intersects the virtual object letter "B", and this virtual object is interactable, so it is recorded as the current virtual object to be operated. The user then presses the interaction button on the projection device 412; the system detects that the state of the interaction button has changed from released to pressed and that a virtual object to be operated currently exists, so the virtual object to be operated is switched into the dragging state. The display effect of a virtual object in the dragging state is that its size becomes 1.2 times the original size. Using 3D graphics techniques, the imaging picture of the virtual camera 405 is calculated as the "ABC" partial lettering 413 within the dashed box; the virtual object directly opposite the virtual camera 405 is the letter "B", the center of the projected picture is also the letter "B", and "B" is 1.2 times the size of "A" and "C". As shown in fig. 4b, the projection device 412 projects the "ABC" partial lettering picture 413 onto the middle-left area of the wall 411.
Referring to fig. 4c: the user holds the projection device 412 and rotates it to the right so that it points at the middle of the wall 411. The processor obtains the current attitude and orientation of the projection device 412 as (-5, 10, 0) and calculates the relative orientation as -5 - (-22.5) = 17.5°, so the current relative attitude and orientation is (17.5, 10, 0); as shown in fig. 4c, the attitude and orientation of the virtual camera 405 is set to (17.5, 10, 0). Because the virtual object "B" is currently in the dragging state, its position follows the virtual camera 405: the relative position of the axis of the virtual camera 405 and the virtual object "B" is kept unchanged, so "B" moves along the arc while rotating and ends up partially overlapping the virtual object "C". The user then releases the interaction button on the projection device 412; the system detects that the state of the interaction button has changed from pressed to released and that a virtual object to be operated currently exists, so the virtual object to be operated is switched back to the normal state, its size is restored to the original size, and the record of the virtual object to be operated is cleared;
the imaging picture of the virtual camera 405 is calculated as the "BC" lettering picture 415 within the dashed box; the center of the projected picture is still the letter "B", and "B" is now the same size as "C". As shown in fig. 4d, the projection device 412 projects the "BC" lettering picture 415 onto the middle area of the wall 411.
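A heavily simplified sketch of the drag-follow behavior described above: while an object is in the dragging state it is rotated by the same yaw change as the virtual camera, so the camera axis keeps pointing at it. The rotation center, sign convention and coordinate system here are assumptions for illustration only.

```python
import math

def follow_camera_yaw(obj_pos, camera_pos, yaw_change_deg):
    """Rotate a dragged object about the vertical axis through the virtual
    camera position by the camera's yaw change, keeping the relative position
    of the camera axis and the object unchanged (sign convention assumed)."""
    ang = math.radians(yaw_change_deg)
    x = obj_pos[0] - camera_pos[0]
    z = obj_pos[2] - camera_pos[2]
    xr = x * math.cos(ang) + z * math.sin(ang)
    zr = -x * math.sin(ang) + z * math.cos(ang)
    return (camera_pos[0] + xr, obj_pos[1], camera_pos[2] + zr)

# Letter "B" 3 m in front of the camera at (0, 0, 1.5); the device yaws 17.5 degrees.
print(follow_camera_yaw((0.0, 0.5, 4.5), (0.0, 0.0, 1.5), 17.5))
```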
As the user rotates the projection device 412, the picture projected on the wall 411 by the projection device 412 changes accordingly. In the user's mind, the projection pictures at each moment are spliced into a 360-degree panoramic image, restoring visual elements such as the background objects and foreground objects of the virtual 3D scene in fig. 4a. By operating the interaction button, the user drags a virtual object in the virtual 3D scene to change its position, and the change is reflected on the projected picture in real time.
Referring now to FIG. 5, shown is a block diagram of a computer system 500 suitable for use in implementing the electronic device of an embodiment of the present application. The electronic device shown in fig. 5 is only an example, and should not bring any limitation to the functions and the scope of use of the embodiments of the present application.
As shown in fig. 5, the computer system 500 includes a Central Processing Unit (CPU) 501 that can perform various appropriate actions and processes according to a program stored in a Read Only Memory (ROM) 502 or a program loaded from a storage section 508 into a Random Access Memory (RAM) 503. In the RAM 503, various programs and data necessary for the operation of the system 500 are also stored. The CPU 501, ROM 502, and RAM 503 are connected to each other via a bus 504. An input/output (I/O) interface 505 is also connected to the bus 504.
The following components are connected to the I/O interface 505: an input portion 506 including a keyboard, a mouse, and the like; an output portion 507 including a display such as a Liquid Crystal Display (LCD) and a speaker; a storage portion 508 including a hard disk and the like; and a communication section 509 including a network interface card such as a LAN card, a modem, or the like. The communication section 509 performs communication processing via a network such as the internet. The driver 510 is also connected to the I/O interface 505 as necessary. A removable medium 511 such as a magnetic disk, an optical disk, a magneto-optical disk, a semiconductor memory, or the like is mounted on the drive 510 as necessary, so that a computer program read out therefrom is mounted into the storage section 508 as necessary.
In particular, according to an embodiment of the present disclosure, the processes described above with reference to the flowcharts may be implemented as computer software programs. For example, embodiments of the present disclosure include a computer program product comprising a computer program embodied on a computer readable storage medium, the computer program containing program code for performing the method illustrated by the flow chart. In such an embodiment, the computer program may be downloaded and installed from a network through the communication section 509, and/or installed from the removable medium 511. The computer program performs the above-described functions defined in the method of the present application when executed by the Central Processing Unit (CPU) 501. It should be noted that the computer readable storage medium of the present application can be a computer readable signal medium or a computer readable storage medium or any combination of the two. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples of the computer readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the present application, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. In this application, however, a computer readable signal medium may include a propagated data signal with computer readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A computer readable signal medium may also be any computer readable storage medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a computer readable storage medium may be transmitted using any appropriate medium, including but not limited to: wireless, wire, fiber optic cable, RF, etc., or any suitable combination of the foregoing.
Computer program code for carrying out operations for aspects of the present application may be written in any combination of one or more programming languages, including an object oriented programming language such as Java, Smalltalk, C++, or the like, as well as conventional procedural programming languages, such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any type of network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet service provider).
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present application. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The modules described in the embodiments of the present application may be implemented by software or hardware.
As another aspect, the present application also provides a computer-readable storage medium, which may be included in the electronic device described in the above embodiments, or may exist separately without being assembled into the electronic device. The computer readable storage medium carries one or more programs which, when executed by the electronic device, cause the electronic device to: acquire initial attitude information of projection equipment in space, wherein the attitude information comprises the spatial attitude and the orientation of the projection equipment; build a virtual 3D scene model according to a file to be projected, and establish a virtual camera and a virtual head auditory model in the virtual 3D scene model, wherein the virtual 3D scene model comprises an object set to change state or position after receiving an interaction instruction; map the virtual camera and the virtual head auditory model to the projection equipment, and calculate the projection picture and audio data under the current attitude information and/or interaction instruction; and output the projection picture and audio data under the current attitude information and/or interaction instruction.
The above description is only a preferred embodiment of the application and is illustrative of the principles of the technology employed. It will be appreciated by those skilled in the art that the scope of the invention herein disclosed is not limited to the particular combination of features described above, but also encompasses other arrangements formed by any combination of the above features or their equivalents without departing from the spirit of the invention. For example, the above features may be replaced with (but not limited to) features having similar functions disclosed in the present application.