US20220044462A1 - Animation production system - Google Patents
Animation production system
- Publication number
- US20220044462A1 (U.S. application Ser. No. 17/506,420)
- Authority
- US
- United States
- Prior art keywords
- virtual space
- user
- controller
- camera
- character
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Abandoned
Classifications
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T13/00—Animation
- G06T13/20—3D [Three Dimensional] animation
-
- G—PHYSICS
- G02—OPTICS
- G02B—OPTICAL ELEMENTS, SYSTEMS OR APPARATUS
- G02B27/00—Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00
- G02B27/0093—Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00 with means for monitoring data relating to the user, e.g. head-tracking, eye-tracking
-
- G—PHYSICS
- G02—OPTICS
- G02B—OPTICAL ELEMENTS, SYSTEMS OR APPARATUS
- G02B27/00—Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00
- G02B27/01—Head-up displays
- G02B27/017—Head mounted
- G02B27/0172—Head mounted characterised by optical features
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/011—Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/011—Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
- G06F3/012—Head tracking input arrangements
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/011—Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
- G06F3/013—Eye tracking input arrangements
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T19/00—Manipulating 3D models or images for computer graphics
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T19/00—Manipulating 3D models or images for computer graphics
- G06T19/006—Mixed reality
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T19/00—Manipulating 3D models or images for computer graphics
- G06T19/20—Editing of 3D images, e.g. changing shapes or colours, aligning objects or positioning parts
-
- G—PHYSICS
- G02—OPTICS
- G02B—OPTICAL ELEMENTS, SYSTEMS OR APPARATUS
- G02B27/00—Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00
- G02B27/01—Head-up displays
- G02B27/0101—Head-up displays characterised by optical features
- G02B2027/0138—Head-up displays characterised by optical features comprising image capture systems, e.g. camera
-
- G—PHYSICS
- G02—OPTICS
- G02B—OPTICAL ELEMENTS, SYSTEMS OR APPARATUS
- G02B27/00—Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00
- G02B27/01—Head-up displays
- G02B27/0101—Head-up displays characterised by optical features
- G02B2027/014—Head-up displays characterised by optical features comprising information/image processing systems
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2200/00—Indexing scheme for image data processing or generation, in general
- G06T2200/24—Indexing scheme for image data processing or generation, in general involving graphical user interfaces [GUIs]
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2213/00—Indexing scheme for animation
- G06T2213/08—Animation software package
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- General Engineering & Computer Science (AREA)
- Software Systems (AREA)
- Computer Hardware Design (AREA)
- Computer Graphics (AREA)
- Optics & Photonics (AREA)
- Human Computer Interaction (AREA)
- Architecture (AREA)
- User Interface Of Digital Computer (AREA)
- Processing Or Creating Images (AREA)
- Position Input By Displaying (AREA)
- Controls And Circuits For Display Device (AREA)
Abstract
To enable animations to be captured in a virtual space, an animation production method comprises: a step of placing a virtual camera in a virtual space; a step of placing one or more objects in the virtual space; a step of detecting, by a user input detection unit, an input of a user from at least one of a head mounted display and a controller worn by the user; a step of accepting at least one choice of the object in response to the input; and a step of removing the object from the virtual space in response to the input.
Description
- This is a continuation application of U.S. patent application Ser. No. 17/008,387 filed Aug. 31, 2020, which claims the priority benefit of Japan Patent Application, Serial No. JP2020-128309, filed Jul. 29, 2020, the disclosure of which is incorporated herein by reference.
- The present invention relates to an animation production system.
- Virtual cameras are arranged in a virtual space (see Patent Document 1).
-
- [PTL 1] Japanese Patent Application Publication No. 2017-146651
- However, capturing animations in such a virtual space has not been considered.
- The present invention has been made in view of such a background, and is intended to provide a technology capable of capturing animations in a virtual space.
- The principal invention for solving the above-described problem is an animation production method comprising: placing a virtual camera in a virtual space; placing one or more objects in the virtual space; detecting, by a user input detection unit, an input of a user from at least one of a head mounted display and a controller worn by the user; accepting at least one choice of the object in response to the input; and removing the object from the virtual space in response to the input.
- Other problems disclosed in the present application and methods for solving them will become apparent from the description of the embodiments and the drawings.
- According to the present invention, animations can be captured in a virtual space.
- FIG. 1 is a diagram illustrating an example of a virtual space displayed on a head mounted display (HMD) worn by a user in an animation production system 300 of the present embodiment.
- FIG. 2 is a diagram illustrating an example of the overall configuration of an animation production system 300 according to an embodiment of the present invention.
- FIG. 3 is a diagram schematically illustrating the appearance of the HMD 110 according to the present embodiment.
- FIG. 4 is a diagram illustrating an example of a functional configuration of the HMD 110 according to the present embodiment.
- FIG. 5 is a diagram schematically illustrating the appearance of the controller 210 according to the present embodiment.
- FIG. 6 is a diagram illustrating an example of a functional configuration of the controller 210 according to the present embodiment.
- FIG. 7 is a diagram illustrating a functional configuration of an image producing device 310 according to the present embodiment.
- The contents of embodiments of the present invention will be described below with reference to the drawings. The present invention includes, for example, the following configurations.
- An animation production method comprising:
- a step of placing a virtual camera in a virtual space;
- a step of placing one or more objects in the virtual space;
- a step of detecting, by a user input detection unit, an input of a user from at least one of a head mounted display and a controller worn by the user;
- a step of accepting at least one choice of the object in response to the input; and
- a step of removing the object from the virtual space in response to the input.
- The animation production method according to item 1, the method further comprising
- a step of displaying in the virtual space a list of the objects disposed in the virtual space,
- wherein the step of accepting the choice includes accepting the choice of at least one of the objects from the list in response to the input.
- The animation production method according to item 1, the method further comprising
- a step of accepting a designation of an object to be locked,
- wherein the step of accepting the choice includes making the locked object unselectable.
- A specific example of an animation production system 300 according to an embodiment of the present invention will be described below with reference to the drawings. It should be noted that the present invention is not limited to these examples and is intended to include the appended claims and all modifications within the meaning and scope of equivalents thereof. In the following description, the same elements are denoted by the same reference numerals in the drawings, and overlapping descriptions are omitted.
- FIG. 1 is a diagram illustrating an example of a virtual space displayed on a head mounted display (HMD) worn by a user in an animation production system 300 according to the present embodiment. The animation production system 300 according to the present exemplary embodiment creates an animation by placing a virtual character 4 and a virtual camera 3 in a virtual space 1 and shooting the character 4 using the camera 3.
- A photographer 2 (a photographer character) is disposed in the virtual space 1, and the camera 3 is virtually operated by the photographer 2. In the animation production system 300 of this embodiment, as shown in FIG. 1, the user produces the animation by arranging the character 4 and the camera 3 while viewing the virtual space 1 from a bird's-eye view (third-person view), shooting the character 4 in first-person view (FPV) as the photographer 2, and performing as the character 4 in FPV. A plurality of characters 4 (in the example of FIG. 1, characters 4-1 and 4-2) can be disposed within the virtual space 1, and the user can perform while possessing the selected character 4 (setting it as the operation target). When a plurality of characters 4 are disposed, the user can switch the possessed object (the object to be operated) to one of the characters 4 (e.g., characters 4-1 and 4-2).
- In this manner, in the animation production system 300 of the present embodiment, one user can play a number of roles. In addition, since the camera 3 can be virtually operated as the photographer 2, natural camera work can be realized, and the representation of the movie to be shot can be enriched.
- FIG. 2 is a diagram illustrating an example of the overall configuration of an animation production system 300 according to an embodiment of the present invention. The animation production system 300 may comprise, for example, an HMD 110, a controller 210, and an image generating device 310 that functions as a host computer. An infrared camera (not shown) or the like can also be added to the animation production system 300 for detecting the position, orientation, and tilt of the HMD 110 or the controller 210. These devices may be connected to each other by wired or wireless means. For example, each device may be equipped with a USB port to establish communication by cable connection, or communication may be established by wired or wireless means such as HDMI, wired LAN, infrared, Bluetooth (TM), or WiFi (TM). The image generating device 310 may be a PC, a game machine, a portable communication terminal, or any other device having a calculation processing function.
- FIG. 3 is a diagram schematically illustrating the appearance of the HMD 110 according to the present embodiment. FIG. 4 is a diagram illustrating an example of a functional configuration of the HMD 110 according to the present embodiment.
- The HMD 110 is mounted on the user's head and includes a display panel 120 placed in front of the user's left and right eyes. Although the display panel 120 may be an optically transmissive or non-transmissive display, the present embodiment illustrates a non-transmissive display panel that can provide greater immersion. The display panel 120 displays a left-eye image and a right-eye image, which can provide the user with a three-dimensional image by utilizing the parallax between both eyes. As long as left-eye and right-eye images can be displayed, a left-eye display and a right-eye display may be provided separately, or an integrated display for the left eye and right eye may be provided.
- The housing portion 130 of the HMD 110 includes a sensor 140. The sensor 140 may comprise, for example, a magnetic sensor, an acceleration sensor, a gyro sensor, or a combination thereof, to detect movements such as the orientation or tilt of the user's head. When the vertical direction of the user's head is the Y-axis, the axis corresponding to the user's anteroposterior direction (connecting the center of the display panel 120 with the user) is the Z-axis, and the axis corresponding to the user's left-right direction is the X-axis, the sensor 140 can detect the rotation angle around the X-axis (the pitch angle), the rotation angle around the Y-axis (the yaw angle), and the rotation angle around the Z-axis (the roll angle).
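- By way of illustration only, a minimal sketch of how such head angles might be tracked is given below, using the axis convention described above; the HeadPose type, the degree units, and the naive gyro integration are assumptions made for the sketch and are not part of the disclosure.

```python
from dataclasses import dataclass

@dataclass
class HeadPose:
    pitch: float = 0.0  # rotation around the X-axis (degrees)
    yaw: float = 0.0    # rotation around the Y-axis (degrees)
    roll: float = 0.0   # rotation around the Z-axis (degrees)

def integrate_gyro(pose: HeadPose, rate_x: float, rate_y: float,
                   rate_z: float, dt: float) -> HeadPose:
    """Accumulate gyro angular rates (deg/s) over a time step dt (s).

    A naive sketch: it ignores drift correction (e.g., fusing the magnetic
    and acceleration sensors) and the non-commutativity of large rotations.
    """
    return HeadPose(
        pitch=pose.pitch + rate_x * dt,
        yaw=pose.yaw + rate_y * dt,
        roll=pose.roll + rate_z * dt,
    )
```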
- In place of or in addition to the sensor 140, the housing portion 130 of the HMD 110 may include a plurality of light sources 150 (e.g., infrared LEDs or visible-light LEDs). A camera (e.g., an infrared camera or a visible-light camera) installed outside the HMD 110 (e.g., indoors) can detect the position, orientation, and tilt of the HMD 110 in a particular space by detecting these light sources. Alternatively, for the same purpose, the HMD 110 may be provided with a camera installed in the housing portion 130 of the HMD 110 for detecting light sources.
- The housing portion 130 of the HMD 110 may also include an eye tracking sensor. The eye tracking sensor is used to detect the gaze directions and the gaze point of the user's left and right eyes. There are various types of eye tracking sensors. For example, the left eye and right eye are irradiated with weak infrared light, the position of the light reflected on the cornea is used as a reference point, the gaze direction of each eye is detected from the position of the pupil relative to that reflection, and the intersection point of the gaze directions of the left and right eyes is used as the focus point.
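- The focus point described above can be estimated geometrically. The following sketch is offered for illustration (the disclosure does not prescribe any particular algorithm): it takes the origin and direction of each eye's gaze ray and returns the midpoint of their closest approach, since two rays in 3D rarely intersect exactly.

```python
import numpy as np

def gaze_focus_point(origin_l, dir_l, origin_r, dir_r):
    """Midpoint of the shortest segment between the left and right gaze rays."""
    o1, o2 = np.asarray(origin_l, float), np.asarray(origin_r, float)
    d1, d2 = np.asarray(dir_l, float), np.asarray(dir_r, float)
    r = o1 - o2
    a, b, c = d1 @ d1, d1 @ d2, d2 @ d2
    d, e = d1 @ r, d2 @ r
    denom = a * c - b * b
    if abs(denom) < 1e-9:
        return None                      # gaze rays are (nearly) parallel
    t1 = (b * e - c * d) / denom         # parameter along the left gaze ray
    t2 = (a * e - b * d) / denom         # parameter along the right gaze ray
    return (o1 + t1 * d1 + o2 + t2 * d2) / 2.0
```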
- FIG. 5 is a diagram schematically illustrating the appearance of the controller 210 according to the present embodiment. FIG. 6 is a diagram illustrating an example of a functional configuration of the controller 210 according to this embodiment.
- The controller 210 enables the user to make predetermined inputs in the virtual space. The controller 210 may be configured as a set of a left-hand controller 220 and a right-hand controller 230. The left-hand controller 220 and the right-hand controller 230 may each have an operation trigger button 240, infrared LEDs 250, a sensor 260, a joystick 270, and a menu button 280.
- The operation trigger buttons 240 are positioned as 240a and 240b so that they can be pulled with the middle finger and index finger when the grip 235 of the controller 210 is gripped. A frame 245 formed in a ring extending downward from both sides of the controller 210 is provided with a plurality of infrared LEDs 250, and a camera (not shown) provided outside the controller can detect the position, orientation, and tilt of the controller 210 in a particular space by detecting the positions of these infrared LEDs.
- The controller 210 may also incorporate a sensor 260 to detect movements such as the orientation and tilt of the controller 210. The sensor 260 may comprise, for example, a magnetic sensor, an acceleration sensor, a gyro sensor, or a combination thereof. Additionally, the top surface of the controller 210 may include a joystick 270 and a menu button 280. The joystick 270 can be moved in any direction through 360 degrees around a reference point and is assumed to be operated with the thumb when the grip 235 of the controller 210 is gripped. The menu button 280 is likewise assumed to be operated with the thumb. In addition, the controller 210 may include a vibrator (not shown) for providing vibration to the hand of the user operating the controller 210. The controller 210 includes an input/output unit and a communication unit for outputting information such as the button and joystick states and the position, orientation, and tilt of the controller 210, and for receiving information from the host computer.
- Based on whether the user grips the controller 210 and manipulates the various buttons and joysticks, and on the information detected by the infrared LEDs and sensors, the system can determine the movement and attitude of the user's hand and can display and operate a virtual hand of the user in the virtual space.
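- As an illustration of how the host computer might represent the state reported by one controller 210, a minimal sketch follows; the ControllerState type, its field names, and the grab threshold are assumptions, not structures defined by this disclosure.

```python
from dataclasses import dataclass

@dataclass
class ControllerState:
    """Snapshot of one hand controller as received by the host computer."""
    position: tuple = (0.0, 0.0, 0.0)     # tracked position in space
    orientation: tuple = (0.0, 0.0, 0.0)  # pitch, yaw, roll in degrees
    trigger: float = 0.0                  # trigger button 240 pull amount (0.0 to 1.0)
    joystick: tuple = (0.0, 0.0)          # joystick 270 deflection (-1.0 to 1.0)
    menu_pressed: bool = False            # menu button 280
    grip_held: bool = False               # whether the grip 235 is held

def is_grab_gesture(state: ControllerState, threshold: float = 0.8) -> bool:
    # A mostly pulled trigger while the grip is held is treated as a "grab",
    # e.g., for picking up the lever 713 described later.
    return state.grip_held and state.trigger >= threshold
```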
- FIG. 7 is a diagram illustrating a functional configuration of an image producing device 310 according to the present embodiment. The image producing device 310 may be a device such as a PC, a game machine, or a portable communication terminal that has a function for storing the user input information and the information on the movement of the user's head and the movement or operation of the controller acquired by the sensors and transmitted from the HMD 110 or the controller 210, performing predetermined computational processing, and generating an image. The image producing device 310 may include an input/output unit 320 for establishing a wired connection with a peripheral device such as the HMD 110 or the controller 210, and a communication unit 330 for establishing a wireless connection such as infrared, Bluetooth, or WiFi (registered trademark). The information received from the HMD 110 and/or the controller 210 regarding the movement of the user's head or the movement or operation of the controller is detected in the control unit 340, through the input/output unit 320 and/or the communication unit 330, as input content including the user's position, line of sight, attitude, speech, operation, and the like, and a control program stored in the storage unit 350 is executed in accordance with the user's input content to perform processing such as controlling the character 4 and generating an image. The control unit 340 may be composed of a CPU; by further providing a GPU specialized for image processing, information processing and image processing can be distributed and overall processing efficiency can be improved. The image generating device 310 may also communicate with other computing devices so that they share the information processing and image processing.
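- The flow described above might be organized per frame roughly as follows; this is only a sketch, and every method name used here (poll, detect, update, render) is an illustrative assumption rather than an interface defined by the disclosure.

```python
def process_frame(io_unit, control_unit, storage, image_store):
    """One simplified update cycle of the image producing device 310."""
    # 1. Receive raw tracking and operation data from the HMD 110 / controller 210
    #    through the input/output unit 320 or the communication unit 330.
    raw = io_unit.poll()
    # 2. Detect it as user input content (position, line of sight, attitude, operation, ...).
    user_input = control_unit.user_input_detector.detect(raw)
    # 3. Execute the control program on the stored character data.
    control_unit.character_control.update(storage.character_data, user_input)
    control_unit.camera_control.update(user_input)
    control_unit.operation_processing.update(user_input)   # operation panel 7, etc.
    # 4. Render what the virtual camera 3 sees and store the result.
    image_store.append(control_unit.image_producer.render(storage.character_data))
```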
- The control unit 340 includes a user input detecting unit 410 that detects the information received from the HMD 110 and/or the controller 210 regarding the movement of the user's head and the movement or operation of the controller, a character control unit 420 that executes a control program stored in the control program storage unit 460 on the character data stored in the character data storage unit 450 of the storage unit 350, a camera control unit 440 that controls the virtual camera 3 disposed in the virtual space 1 in accordance with the character control, an operation processing unit 480 that processes the user's operations in the virtual space 1, and an image producing unit 430 that generates an image in which the camera 3 captures the virtual space 1 based on the character control. Here, the movement of the character 4 or the cameraman 2 is controlled by converting information such as the direction, tilt, and hand movement of the user detected through the HMD 110 or the controller 210 into the movement of each part of a bone structure created in accordance with the movements and constraints of the joints of the human body, and by applying the bone structure movement to previously stored character data. The camera 3 is controlled, for example, by changing various settings of the camera 3 (e.g., the position of the camera 3 within the virtual space 1, the shooting direction, the focus position, and the zoom) according to the movement of the character 4's hand.
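- Under simple assumptions about the data layout (a bone dictionary keyed by name and pose tuples of position and rotation; none of these names appear in the disclosure), the mapping from tracked devices to the character's bone structure and to the camera settings could be sketched as follows.

```python
def apply_tracking_to_character(bones: dict, hmd_pose, left_hand, right_hand) -> dict:
    """Map tracked device poses onto the character's bone structure.

    bones      -- bone name -> (position, rotation) in the virtual space 1
    hmd_pose   -- (position, rotation) of the HMD 110
    left_hand  -- (position, rotation) of the left-hand controller 220
    right_hand -- (position, rotation) of the right-hand controller 230
    """
    # The head follows the HMD and the hands follow the controllers. A real
    # system would also solve inverse kinematics for the elbows and shoulders
    # and clamp each joint to the range of motion of the human body.
    bones["head"] = hmd_pose
    bones["hand_l"] = left_hand
    bones["hand_r"] = right_hand
    return bones

def update_camera_from_hand(camera: dict, hand_pose) -> dict:
    """Let the possessed character's hand drive the settings of the camera 3."""
    position, rotation = hand_pose
    camera["position"] = position            # position within the virtual space 1
    camera["shooting_direction"] = rotation  # focus and zoom could be driven likewise
    return camera
```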
- The storage unit 350 includes the aforementioned character data storage unit 450, which stores information related to the character 4, such as attributes of the character 4, in addition to the image data of the character 4. The control program storage unit 460 stores programs for controlling the operation and expression of the character 4 in the virtual space and for controlling objects such as the camera 3. The image data storage unit 470 stores the image generated by the image producing unit 430. In this embodiment, the image stored in the image data storage unit 470 can be action data for generating a moving image. The action data may include, for example, 3D data for displaying the character 4 in the virtual space 1, pose data for identifying the bone structure of the 3D data, motion data for identifying the movement of the bone structure, and the like. In addition, the image producing unit 430 may create (render) a moving image based on the action data and register the resulting video data in the image data storage unit 470.
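- One possible in-memory shape for such action data is sketched below; the class and field names are assumptions chosen for illustration and are not defined by the disclosure.

```python
from dataclasses import dataclass, field

@dataclass
class PoseData:
    """Identifies the bone structure of the character's 3D data."""
    bone_names: list = field(default_factory=list)
    parent_index: list = field(default_factory=list)   # bone hierarchy

@dataclass
class MotionData:
    """Identifies the movement of the bone structure over time."""
    keyframes: dict = field(default_factory=dict)   # time -> {bone name: (position, rotation)}

@dataclass
class ActionData:
    """A unit stored in the image data storage unit 470 for generating a movie."""
    model_3d: str = ""                              # 3D data for displaying the character 4
    pose: PoseData = field(default_factory=PoseData)
    motion: MotionData = field(default_factory=MotionData)
```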
- As illustrated in FIG. 1, the operation processing unit 480 displays the operation panel 7 in the virtual space 1 and can activate various functions in the virtual space 1 according to operations performed on the operation panel 7 with, for example, the fingers 21. The operation panel 7 may appear in the virtual space 1, for example, when a button of the controller 210 is pressed or when a predetermined operation is performed with the controller 210. The operation panel 7 may be freely positioned in the virtual space 1 by the user, and may also be moved to the vicinity of the user's hand or fingers 21 in the virtual space 1, for example, when any button of the controller 210 is pressed or when a predetermined operation is performed with the controller 210. In this case, the operation panel 7 may be moved to a position that lies within the user's field of view (in the case of FIG. 1, for example, the field of view of the user operating the character 4) and is visible to the user, for example, to a position on the user's line of sight.
- The operation panel 7 may include a display unit 71 and an indication unit 72. Various types of information can be displayed on the display unit 71.
- For example, a button 721 may be disposed in the indication unit 72 of the operation panel 7. Pressing the button 721 can invoke various functions necessary for performing as the character 4 or shooting with the camera 3. For example, as shown in FIG. 1, when a button 721 is pressed, a list 711 of the objects disposed in the virtual space 1 can be displayed on the display unit 71. In addition, any function can be assigned to the buttons provided in the indication unit 72, such as, for example, a function to call an asset panel (not shown) that displays a list of assets (objects) that can be newly arranged, a function to display the image of the camera 3 on the display unit 71, or a function to display the field of view from the character 4 on the display unit 71.
- For example, in a case where a large number of objects are disposed in the virtual space 1 and a large amount of information is displayed on the display unit 71, the display unit 71 may include a scroll bar 712. Although the user may operate the scroll bar 712 using a finger 21 or the like, in this embodiment a lever 713 for manipulating the scroll bar 712 is provided on the display unit 71. The user may use a finger 21 or the like to grab and slide the lever 713, and the display unit 71 scrolls according to the slide of the lever 713. Rather than manipulating a planar control such as a scroll bar displayed in a typical operating-system windowing system, objects in the virtual space 1 that can be gripped by a hand (the lever 713, etc.) can be manipulated, which improves operability. In other words, a variety of three-dimensional controls may be employed, not limited to the lever 713. For example, the display unit 71 may be provided with a knob like that used for volume control on a mixer device, and the display unit 71 may be scrolled by rotating the knob. The sliding direction of the lever 713 is up and down in the example of FIG. 1, but it can be set to slide in any direction, such as a horizontal direction or a depth direction.
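- The lever-to-scroll mapping can be illustrated with a short sketch; the function and parameter names are assumptions, and the disclosure does not prescribe any particular formula.

```python
def scroll_from_lever(lever_y: float, lever_min: float, lever_max: float,
                      content_height: float, viewport_height: float) -> float:
    """Map the grabbed position of the lever 713 onto a scroll offset
    for the list 711 shown on the display unit 71."""
    travel = max(lever_max - lever_min, 1e-6)
    # Normalized lever position, clamped to the ends of its slot.
    t = min(max((lever_y - lever_min) / travel, 0.0), 1.0)
    max_scroll = max(content_height - viewport_height, 0.0)
    return t * max_scroll
```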
- A list 711 of the objects in the virtual space 1 is displayed on the display unit 71, and a delete icon 714 is provided for each item. If the user selects the delete icon 714 with a finger 21 or the like, the object pertaining to that item can be deleted from the virtual space 1. It is also possible to delete the object corresponding to a selected item by selecting the item in the list 711 and pressing the delete button 721 of the indication unit 72.
- In addition, in the virtual space 1, the user may select an object with a finger 21 or the like (in the example of FIG. 1, the object indicated by the cone 8 is shown as selected), and the selected object may be deleted in response to pressing the delete icon 714 of the display unit 71 or the button 721 of the indication unit 72. It is also possible to display a deletion mark near a selected object (e.g., near the cone 8 of the character 4-1 shown in FIG. 1) and to delete the selected object by selecting the deletion mark with a finger 21 or the like.
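- The deletion behaviour described above could be realized, for example, as follows; this is a sketch under the assumption that the scene is a dictionary of named objects, and the lock check anticipates the locking feature described next.

```python
def delete_object(scene: dict, list_711: list, name: str) -> bool:
    """Remove the object chosen via its delete icon 714 (or via a selected
    item plus the delete button 721) from the virtual space and the list."""
    obj = scene.get(name)
    if obj is None or obj.get("locked", False):
        return False               # missing or locked objects are not deleted
    del scene[name]                # remove from the virtual space 1
    if name in list_711:
        list_711.remove(name)      # keep the list 711 on the display unit 71 in sync
    return True
```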
- When deleting an object from the virtual space 1, an object to be locked may be specified. For example, in FIG. 1, the background image data 65 (a plate polygon to which an image is attached as a texture, etc.) can be locked so that it cannot be selected as a deletion target. In this case, the list 711 of the display unit 71 may not display the locked object, or the item corresponding to the locked object may not display the delete icon 714 or may render it unusable.
- For example, the object to be locked (e.g., the image data 65) is selected from the list 711 displayed on the display unit 71, and the selected object is locked. When an object to be deleted is then specified in the virtual space 1, the locked object cannot be selected with a finger 21 or the like in the virtual space 1. It should be noted that the object lock can be used not only for deletion but throughout the entire working time in the virtual space 1. For example, when the background image data 65 is locked in the virtual space 1 while a character 4 or another object disposed near the background is being selected, it is possible to prevent the background from being moved by mistake.
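- A minimal sketch of such a lock flag, under the same dictionary-based scene assumption as above, might look like this.

```python
def set_locked(scene: dict, name: str, locked: bool = True) -> None:
    """Lock or unlock an object, e.g., the background image data 65."""
    scene[name]["locked"] = locked

def try_select(scene: dict, name: str):
    """Selection with a finger 21: a locked object cannot be picked up,
    so the background cannot be moved or deleted by mistake."""
    obj = scene.get(name)
    if obj is None or obj.get("locked", False):
        return None
    return obj
```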
- As described above, according to the animation production system 300 of the present exemplary embodiment, a user can operate the camera 3 as the cameraman 2 in the virtual space 1 to shoot video images. Accordingly, since the camera 3 can be operated in the same way as a camera in the real world, natural camera work can be realized, and a richer representation of the animated video can be provided.
- Further, according to the animation production system 300 of the present embodiment, the user can perform each character 4 in turn while switching the character 4 to be possessed in the virtual space 1. Thus, animation can be captured efficiently.
- For example, in the present embodiment, the
image generating device 310 may be a single computer, but not limited to theHMD 110 or thecontroller 210 may be provided with all or some of the functions of theimage generating device 310. It may also include a function of a portion of theimage generating device 310 to other computers that are communicatively connected with theimage generating device 310. - In the present exemplary embodiment, a virtual space based on the virtual reality (VR; Virtual Reality) was assumed. However, the
animation production system 300 of the present exemplary embodiment is not limited to an extended reality (AR; Augmented Reality) space or a complex reality (MR; Mixed Reality) space, but theanimation production system 300 of the present exemplary embodiment is still applicable. - In the present embodiment, the second choice is whether the operation mode is a bird's-eye mode or not (S603). However, the rendering mode for creating the final animated image based on the action data may be provided.
-
-
- 1 virtual space
- 2 cameraman
- 3 cameras
- 4 characters
- 110 HMD
- 120 display panel
- 130 housing
- 140 sensor
- 150 light source
- 210 controller
- 220 left hand controller
- 230 right hand controller
- 235 grip
- 240 trigger button
- 250 Infrared LED
- 260 sensor
- 270 joystick
- 280 menu button
- 300 Animation Production System
- 310 Image Generator
- 320 I/O portion
- 330 communication section
- 340 controller
- 350 storage
- 410 User Input Detector
- 420 character control unit
- 430 Image Generator
- 440 Camera Control
- 450 character data storage section
- 460 Program Storage
- 470 Image Data Storage
- 480 Operation Processing Block
- 510 handle
- 520 display
- 530 rail
- 540 slider
- 550 recording button
Claims (1)
1. An animation production method comprising:
a step of placing a virtual camera in a virtual space;
a step of placing one or more objects in the virtual space;
a step of detecting, by a user input detection unit, an input of a user from at least one of a head mounted display and a controller worn by the user;
a step of accepting at least one choice of the object in response to the input; and
a step of removing the object from the virtual space in response to the input.
Priority Applications (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| US17/506,420 US20220044462A1 (en) | 2020-07-29 | 2021-10-20 | Animation production system |
Applications Claiming Priority (4)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| JP2020128309A JP7637481B2 (en) | 2020-07-29 | 2020-07-29 | Animation Production System |
| JP2020-128309 | 2020-07-29 | ||
| US17/008,387 US11182944B1 (en) | 2020-07-29 | 2020-08-31 | Animation production system |
| US17/506,420 US20220044462A1 (en) | 2020-07-29 | 2021-10-20 | Animation production system |
Related Parent Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| US17/008,387 Continuation US11182944B1 (en) | 2020-07-29 | 2020-08-31 | Animation production system |
Publications (1)
| Publication Number | Publication Date |
|---|---|
| US20220044462A1 true US20220044462A1 (en) | 2022-02-10 |
Family
ID=78703530
Family Applications (2)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| US17/008,387 Active US11182944B1 (en) | 2020-07-29 | 2020-08-31 | Animation production system |
| US17/506,420 Abandoned US20220044462A1 (en) | 2020-07-29 | 2021-10-20 | Animation production system |
Family Applications Before (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| US17/008,387 Active US11182944B1 (en) | 2020-07-29 | 2020-08-31 | Animation production system |
Country Status (2)
| Country | Link |
|---|---|
| US (2) | US11182944B1 (en) |
| JP (1) | JP7637481B2 (en) |
Families Citing this family (3)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US11017233B2 (en) | 2019-03-29 | 2021-05-25 | Snap Inc. | Contextual media filter search |
| WO2023044172A1 (en) | 2021-09-20 | 2023-03-23 | Idoru, Inc. | Systems and method for calculating liability of a driver of a vehicle |
| US12393734B2 (en) * | 2023-02-07 | 2025-08-19 | Snap Inc. | Unlockable content creation portal |
Citations (1)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US20130293468A1 (en) * | 2012-05-04 | 2013-11-07 | Kathryn Stone Perez | Collaboration environment using see through displays |
Family Cites Families (4)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US20130249947A1 (en) * | 2011-08-26 | 2013-09-26 | Reincloud Corporation | Communication using augmented reality |
| JP2017146651A (en) | 2016-02-15 | 2017-08-24 | 株式会社コロプラ | Image processing method and image processing program |
| JP6952065B2 (en) | 2017-07-21 | 2021-10-20 | 株式会社コロプラ | Programs and methods that are executed on the computer that provides the virtual space, and information processing devices that execute the programs. |
| JP6714633B2 (en) | 2018-03-23 | 2020-06-24 | パソナ・パナソニックビジネスサービス株式会社 | VR content management system |
-
2020
- 2020-07-29 JP JP2020128309A patent/JP7637481B2/en active Active
- 2020-08-31 US US17/008,387 patent/US11182944B1/en active Active
-
2021
- 2021-10-20 US US17/506,420 patent/US20220044462A1/en not_active Abandoned
Patent Citations (1)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US20130293468A1 (en) * | 2012-05-04 | 2013-11-07 | Kathryn Stone Perez | Collaboration environment using see through displays |
Non-Patent Citations (1)
| Title |
|---|
| Mendes et al., A Survey on 3D Virtual Object Manipulation: From the Desktop to Immersive Virtual Environments, April 14, 2018, COMPUTER GRAPHICS forum, 21-45 (Year: 2018) * |
Also Published As
| Publication number | Publication date |
|---|---|
| JP2022025476A (en) | 2022-02-10 |
| US11182944B1 (en) | 2021-11-23 |
| JP7637481B2 (en) | 2025-02-28 |
Similar Documents
| Publication | Publication Date | Title |
|---|---|---|
| US20220044462A1 (en) | Animation production system | |
| US20220351442A1 (en) | Animation production system | |
| US20220358704A1 (en) | Animation production system | |
| US20220035154A1 (en) | Animation production system | |
| US20220351448A1 (en) | Animation production system | |
| US20220351452A1 (en) | Animation production method | |
| US20220351444A1 (en) | Animation production method | |
| US20220351446A1 (en) | Animation production method | |
| US20220036616A1 (en) | Animation production system | |
| US20220036620A1 (en) | Animation production system | |
| US20230005205A1 (en) | Animation production method | |
| US20220036622A1 (en) | Animation production system | |
| US20220351447A1 (en) | Animation production system | |
| US11537199B2 (en) | Animation production system | |
| US11475619B2 (en) | Animation production method | |
| US20220351441A1 (en) | Animation production system | |
| US20220351443A1 (en) | Animation production system | |
| JP7218872B2 (en) | animation production system | |
| US20220358702A1 (en) | Animation production system | |
| US20220035442A1 (en) | Movie distribution method | |
| US20220351451A1 (en) | Animation production system |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| | STPP | Information on status: patent application and granting procedure in general | Free format text: DOCKETED NEW CASE - READY FOR EXAMINATION |
| | STPP | Information on status: patent application and granting procedure in general | Free format text: NON FINAL ACTION MAILED |
| | STPP | Information on status: patent application and granting procedure in general | Free format text: NOTICE OF ALLOWANCE MAILED -- APPLICATION RECEIVED IN OFFICE OF PUBLICATIONS |
| | STCB | Information on status: application discontinuation | Free format text: ABANDONED -- FAILURE TO PAY ISSUE FEE |