MXPA99004453A - Video data editing apparatus and computer-readable recording medium storing an editing program - Google Patents
Info
- Publication number: MXPA99004453A
- Authority: MX (Mexico)
- Prior art keywords: data, segment, audio, video, unit
- Prior art date
Abstract
During video editing, video data is recorded in continuous areas of at least a predetermined length on an optical disc to ensure that the display of video images is uninterrupted. A first segment, out of a plurality of segments recorded on an optical disc, whose consecutive area on the optical disc is shorter than the predetermined length is detected. Re-encoded data that is to be reproduced either immediately before or immediately after the detected first segment is recorded next to the first segment. If the combined continuous area of the first segment and the re-encoded data is still below the predetermined length, the segment that is to be reproduced on the other side of the re-encoded data from the first segment is recorded on the optical disc so as to be positioned on that other side of the re-encoded data, thereby increasing the continuous length of the recording area beyond the predetermined length.
Description
VIDEO DATA EDITING APPARATUS AND COMPUTER-READABLE RECORDING MEDIUM STORING AN EDITING PROGRAM
FIELD OF THE INVENTION
The present invention relates to an apparatus for editing video data recorded in video data files on an optical disc, and to a computer-readable recording medium that stores an editing program.
BACKGROUND OF THE INVENTION
Video editors in the film and broadcast industries make full use of their skill and experience when editing the wide variety of video productions that come to market. While movie buffs and home video producers may not possess this skill or experience, many are still inspired by professional editing to try to edit video on their own.
REF.: 30089
This creates a demand for a home video editing device that can perform video editing while still being easy to use. While video editing generally comprises a variety of operations, the domestic video editing devices that are likely to appear on the market in the near future will especially require an advanced scene link function. This function links a number of scenes to form an individual work. When scenes are linked using conventional domestic equipment, users connect two video cassette recorders to form a copy system. The operations performed when scenes are linked using this kind of copy system are described below. Figure 1A shows a video editing arrangement using video cassette recorders that are respectively capable of recording and reproducing video signals. The arrangement of Figure 1A includes the video cassette 301 which records the source material, the video cassette 302 for recording the editing result, and the two video cassette recorders 303 and 304 for reproducing and recording video images with the video cassettes 301 and 302. In this example, the user attempts to perform the editing operation shown in Figure 1B using the arrangement of Figure 1A. Figure 1B shows the relationship between the material that is edited and the result of editing. In this example, the user reproduces scene 505, which is located between time t5 and time t10 of the source material, scene 506, which is located between times t13 and t21, and scene 507, which is located between times t23 and t25, and attempts to produce an editing result that consists only of these scenes. With the arrangement of Figure 1A, the user sets the video cassette 301 containing the source material in the video cassette recorder 303 and the video cassette 302 for recording the editing result in the video cassette recorder 304.
After the video cassettes 301 and 302 are set, the user presses the fast-forward button on the operation panel of the video cassette recorder 303 (shown by ® in Figure 1A) to search for the start of scene 505. Then, the user presses the play button on the operation panel of the video cassette recorder 303 (shown by © in Figure 1A) to play scene 505. At the same time, the user presses the record button on the operation panel of the video cassette recorder 304 (shown by ® in Figure 1A) to start recording. When scene 505 is finished, the user stops the operation of both video cassette recorders 303 and 304. The user then fast-forwards the video cassette to the start of scene 506, and then simultaneously starts playback by the video cassette recorder 303 and recording by the video cassette recorder 304. After finishing the above process for scenes 506 and 507, the user has the video cassette recorders 303 and 304 rewind the video cassettes 301 and 302 respectively to complete the editing operation. If the scene link operation described above could be performed easily at home, users would be able to easily handle programs that have been recorded on a large number of magnetic tape cassettes. However, to perform a scene link operation, the user has to repeat the processes of locating the desired scene start in the source material and playing all the video images from the start to the end of the scene, for every scene that is to be linked. Video editing is therefore a cumbersome process. As a way of exceeding the potential of video cassettes, file systems that handle audio-video data (AV data), produced by multiplexing video data and audio data, in the same way as computer files have been attracting increasing attention for their ability to facilitate video editing.
The file systems referred to here are data constructs for managing the areas on a recording medium, such as a hard disk or an optical disc, that allows random access. A file system divides the entire area of the disc into blocks of data that are several dozen KB in size, with blocks that do not contain valid data being managed as empty areas. When a file is deleted, the blocks that store the file are registered as empty areas. Data is generated by an application program that operates on top of the file system, and when the user gives an indication to have this data recorded as a file on a recordable disc, the file system calculates the file size and judges whether there is a continuous empty area on the disc whose size is equal to, or larger than, the file size. If such an area is present on the disc, the file system records the file in this area. If there is no single empty area as large as the file, however, the file system searches for fragmented empty areas on the disc. The file system then divides the data to be recorded and stores the resulting data sets in different empty areas on the disc. The file system then generates management information to manage the divided data sets as parts of the same file, and has this management information written on the recordable disc to finish recording the file on the optical disc. Since the data recorded by the file system can be divided into a plurality of data sets and stored as fragments in different areas on the disc, it is not necessary for the recordable optical disc to include a continuous empty area that is as large as the file. Even if the data to be recorded is AV data, this data can still be recorded efficiently on the optical disc.
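The allocation behavior described above can be illustrated with a minimal sketch. The block size, class name, and method names below are illustrative assumptions, not part of the patent disclosure:

```python
# Sketch of the file-system allocation described above: the disc is
# managed as fixed-size blocks; a file that does not fit in one
# continuous empty run is split across several runs, with management
# information recording the fragments. All names/sizes are assumed.

BLOCK_SIZE = 32 * 1024  # assumed block size of several dozen KB

class SimpleFileSystem:
    def __init__(self, total_blocks):
        # True means the block holds no valid data (an empty area)
        self.empty = [True] * total_blocks
        self.files = {}  # name -> list of (start_block, length) extents

    def _empty_runs(self):
        """Yield (start, length) for every run of consecutive empty blocks."""
        start = None
        for i, free in enumerate(self.empty):
            if free and start is None:
                start = i
            elif not free and start is not None:
                yield (start, i - start)
                start = None
        if start is not None:
            yield (start, len(self.empty) - start)

    def record(self, name, size_bytes):
        """Record a file, preferring one continuous area, else fragments."""
        needed = -(-size_bytes // BLOCK_SIZE)  # blocks, rounded up
        runs = sorted(self._empty_runs(), key=lambda r: -r[1])
        for start, length in runs:          # single continuous area?
            if length >= needed:
                self._claim(name, [(start, needed)])
                return
        extents, remaining = [], needed     # else split across empty areas
        for start, length in runs:
            take = min(length, remaining)
            extents.append((start, take))
            remaining -= take
            if remaining == 0:
                self._claim(name, extents)
                return
        raise IOError("disc full")

    def _claim(self, name, extents):
        for start, length in extents:
            for b in range(start, start + length):
                self.empty[b] = False
        self.files[name] = extents          # the management information

    def delete(self, name):
        # Deleted blocks are simply managed as empty areas again.
        for start, length in self.files.pop(name):
            for b in range(start, start + length):
                self.empty[b] = True
```

After a few record/delete cycles, a new file can end up split across non-adjacent runs, which is exactly the fragmentation the invention addresses.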
However, when a plurality of AV data sets are recorded on a recordable optical disc under the control of a file system, if the continuous length of an area storing AV data is too short, there is a risk, when the AV data in this area is reproduced, that the display of the video images will be interrupted while the optical reader is jumping to the recording position of the following AV data. In more detail, the playback apparatus reads the AV data stored on the recordable optical disc into a buffer, and the AV decoder of the playback apparatus decodes the AV data read into the buffer. When the area recording the AV data has a sufficiently long continuous length, a sufficient amount of AV data can be accumulated in the buffer. When the optical reader then jumps to a different recording position, there will be enough data in the buffer for the decoder to continue its decoding process, meaning that the display of the video images can continue without interruption.
On the other hand, when video editing is performed and the operations that take parts of existing AV data and use them to create a new file are repeated a large number of times, many short sets of data will end up on the recordable disc. Since the continuous length of the areas recording these AV data sets is short, an insufficient amount of data will accumulate in the buffer when this data is reproduced. If the optical reader jumps to another recording position with only a small amount of data in the buffer, an underflow will occur in the buffer, so that the continuity of AV data decoding cannot be maintained by the buffer. This will result in an interruption in the video display.
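The underflow condition described above can be expressed numerically. The rates and jump time used here are assumed round figures for illustration, not values from the patent:

```python
# Illustrative underflow check: while the reader reads a continuous
# area, the buffer fills at (read_rate - decode_rate); during a jump
# it drains at decode_rate. The continuous area must be long enough
# that the data accumulated while reading it covers the jump.

def min_continuous_length(read_rate, decode_rate, jump_time):
    """Smallest continuous extent (bytes) that avoids buffer underflow.

    Data needed to survive the jump: decode_rate * jump_time.
    Reading N bytes accumulates N * (1 - decode_rate / read_rate)
    surplus bytes in the buffer, so solve for N.
    """
    surplus_ratio = 1.0 - decode_rate / read_rate
    needed = decode_rate * jump_time
    return needed / surplus_ratio

def playback_is_seamless(extent_bytes, read_rate, decode_rate, jump_time):
    return extent_bytes >= min_continuous_length(
        read_rate, decode_rate, jump_time)
```

With an assumed 11 Mbit-class read rate, a 5 Mbit-class decode rate, and a 1.5 s worst-case jump, extents shorter than roughly 13.75 Mbits would underflow; this is the "predetermined length" the invention guarantees.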
DESCRIPTION OF THE INVENTION
It is a main object of the present invention to provide a video data editing apparatus and a computer-readable recording medium storing an editing program that allow easy editing of video and can quickly deal with sections of audio-video (AV) data of insufficient length, no matter how such sections appear. The stated main object can be achieved by a video data editing apparatus for an optical disc, the optical disc recording at least one video data file divided into a plurality of segments, each segment being recorded in a consecutive area within a region on the optical disc, the video data editing apparatus including: a detection unit for detecting a first segment, from among the plurality of segments, whose consecutive area has a length below a predetermined length; and a link unit for linking the detected first segment with at least part of a second segment, and for making the total continuous length of the first segment and the linked part of the second segment at least equal to the predetermined length, by moving at least one of the first segment and the linked part of the second segment to a different area on the optical disc, the second segment including video data that is reproduced either immediately before or immediately after the reproduction of the video data in the first segment, the different area being located completely within one region on the optical disc. With the stated construction, fragmentation of the AV files can be avoided, and uninterrupted reproduction of the AV data in the AV files can be realized.
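What the detection unit checks can be sketched in a few lines; the function and parameter names are illustrative:

```python
# Minimal sketch of the detection unit: scan the extents of a video
# file (in reproduction order) and report every segment whose
# continuous recorded area falls below the predetermined length.

def detect_short_segments(extents, predetermined_length):
    """extents: list of (start, length) areas, in reproduction order.
    Returns indices of segments shorter than the predetermined length."""
    return [i for i, (_, length) in enumerate(extents)
            if length < predetermined_length]
```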
Here, the link unit may include: a first measurement unit for measuring the continuous length of an empty area on the optical disc on at least one side of the recording area of the first segment detected by the detection unit; a second measurement unit for measuring the continuous length of an empty area on the optical disc on at least one side of the recording area of the second segment; a first judgment unit for judging whether the continuous length of any empty area measured by the first measurement unit is greater than the data size of the second segment; a first movement unit for moving, when a judgment of the first judgment unit is affirmative, the second segment to the empty area judged to be larger than the data size of the second segment, so that the first segment and the second segment are recorded on the disc in the reproduction order; a second judgment unit for judging, when the judgment of the first judgment unit is negative, whether the continuous length of any empty area measured by the second measurement unit is greater than the data size of the first segment; and a second movement unit for moving, when a judgment of the second judgment unit is affirmative, the first segment to the empty area judged to be larger than the data size of the first segment, so that the first segment and the second segment are recorded on the disc in the reproduction order. With the stated construction, the first and second judgment units judge the continuous lengths of the empty areas adjacent to the first and second segments against the lengths of the second and first segments. Based on the judgment results, one of the segments is moved to a position adjacent to the other segment, so that AV data to be played back consecutively is recorded in consecutive areas on a recordable optical disc whenever possible. This increases the efficiency with which the recording areas of the optical disc are used.
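The two judgment units can be sketched as a simple decision procedure. All names are illustrative, and the measured empty lengths are passed in rather than read from a disc:

```python
# Sketch of the linking decision: try to move the second segment into
# an empty area adjacent to the first; failing that, move the first
# next to the second; otherwise a wider search is needed.

def link_segments(first_len, second_len,
                  empty_next_to_first, empty_next_to_second):
    """empty_next_to_first / empty_next_to_second: continuous lengths
    of the empty areas measured beside each segment's recording area.
    Returns which movement the link unit would perform."""
    # First judgment unit: does the second segment fit beside the first?
    if empty_next_to_first >= second_len:
        return "move_second_next_to_first"
    # Second judgment unit: does the first segment fit beside the second?
    if empty_next_to_second >= first_len:
        return "move_first_next_to_second"
    # Neither adjacent empty area suffices.
    return "search_elsewhere"
```

Either successful branch leaves the two segments in consecutive areas in reproduction order, which is the property the link unit is defined to establish.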
Here, the link unit may additionally include: a search unit for searching, when the judgments of both the first judgment unit and the second judgment unit are negative, the optical disc for an empty area whose continuous length is greater than a length L, where the length L is the total length of the first segment and the second segment; and a third movement unit for moving, when the search unit has found an empty area with a continuous length greater than the length L, the first segment and the second segment to the empty area found by the search unit. With the stated construction, the first and second segments can be moved to a different recording position when moving either of the first and second segments to an area adjacent to the other segment is not possible. As a result, potential buffer underflows for the first segment can be avoided. Here, the video data editing apparatus may additionally include: a third judgment unit for judging, when the search unit has found an empty area with a continuous length greater than the length L, whether the length L is below a maximum length S, the maximum length S being at least twice the predetermined length, wherein the third movement unit moves the first segment and the second segment to the empty area only when the length L is below the maximum length S, the link unit also including: a fourth movement unit for moving, when the length L is not below the maximum length S, the entire first segment and only the linked part of the second segment to the empty area found by the search unit.
With the stated construction, the third judgment unit judges whether the total length L of the first segment and the second segment is below a maximum length S that is at least twice the predetermined length. When L exceeds S, the fourth movement unit restricts the amount of data that is moved. As a result, it can be ensured that the total size of the data that needs to be rewritten will be within a given size, meaning that the elimination of fragmentation can be completed in a short time. Here, the video data editing apparatus may additionally include: a storage unit for storing re-encoded data obtained by re-encoding a video data section read by the video data editing apparatus during an editing operation; a fourth judgment unit for judging, when the judgment of the first judgment unit is affirmative, whether the first segment is a remaining part of a segment that was originally recorded on the optical disc but has had a section of data read by the video data editing apparatus during the editing operation; and a first recording unit for recording, when a judgment of the fourth judgment unit is affirmative, the re-encoded data stored by the storage unit in the empty area, the first movement unit moving the second segment to a position on the optical disc that follows immediately after the recording position of the re-encoded data. With the stated construction, when re-encoded data with a short continuous length has to be produced as a result of an editing operation performed freely by the user, the re-encoded data will be recorded in a position adjacent to the AV data that is to be reproduced before or after the re-encoded data. As a result, fragmentary recording of the re-encoded data can be prevented from the start, so that uninterrupted playback of the AV data in the AV files can be realized.
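The search unit together with the third and fourth movement units can be sketched as follows. The helper names and the choice of S as exactly twice the predetermined length are assumptions for illustration (the claim only requires S to be at least twice it):

```python
# Sketch of the wider relocation step: when neither adjacent empty
# area suffices, find an empty area of at least L = first + second.
# If L is below the cap S, move both segments whole; otherwise move
# the whole first segment plus only enough of the second to reach the
# predetermined length, bounding the amount of data rewritten.

def relocate(first_len, second_len, empty_runs, predetermined_length):
    L = first_len + second_len
    S = 2 * predetermined_length        # assumed: exactly the minimum cap
    # Search unit: any empty area with continuous length >= L?
    target = next((run for run in empty_runs if run >= L), None)
    if target is None:
        return None                     # no empty area large enough
    if L < S:
        # Third movement unit: move both segments in full.
        return ("move_both", L)
    # Fourth movement unit: cap the rewritten data.
    linked_part = max(predetermined_length - first_len, 0)
    return ("move_first_and_part", first_len + linked_part)
```

Capping the moved data at the predetermined length is what keeps each defragmentation step bounded in time, as the paragraph above argues.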
BRIEF DESCRIPTION OF THE DRAWINGS
These and other objects, advantages and features of the invention will become apparent from the following description thereof taken in conjunction with the accompanying drawings that illustrate a specific embodiment of the invention. In the drawings:
Figure 1A shows a conventional video editing arrangement using video cassette recorders that are capable of reproducing and recording video signals;
Figure 1B shows the relationship between the source materials and the editing result;
Figure 2A shows the outer appearance of a DVD-RAM disc which is the recordable optical disc used in the embodiments of the present invention;
Figure 2B shows the recording areas on a DVD-RAM;
Figure 2C shows the cross section and surface of a DVD-RAM cut in a sector header;
Figure 3A shows zones 0 to 23 on a DVD-RAM;
Figure 3B shows zones 0 to 23 arranged in a horizontal sequence;
Figure 3C shows the logical sector numbers (LSN) in the volume area;
Figure 3D shows the logical block numbers (LBN) in the volume area;
Figure 4A shows the contents of the recorded data in the volume area;
Figure 4B shows the hierarchical structure of the data definitions used in the MPEG standard;
Figure 5A shows a plurality of image data sets arranged in the display order and a plurality of image data sets arranged in the coding order;
Figure 5B shows the correspondence between the audio frames and the audio data;
Figure 6A shows a detailed hierarchy of the logical format in the data construction of a VOB (video object);
Figure 6B shows the partial deletion of a VOB;
Figure 6C shows the logical format of a video packet arranged at the start of a VOB;
Figure 6D shows the logical format of other video packets arranged in a VOB;
Figure 6E shows the logical format of an audio packet;
Figure 6F shows the logical format of a packet header;
Figure 6G shows the logical format of a system header;
Figure 6H shows the logical format of a group header;
Figure 7A shows a video frame and the occupation of the video buffer;
Figure 7B shows an audio frame and an ideal transition in the buffer state of the audio buffer;
Figure 7C shows an audio frame and the ideal transition in the buffer state of the audio buffer;
Figure 7D shows the detailed transfer period of each image data set;
Figure 8A shows how audio packets, which store audio data to be reproduced in a plurality of audio frames, and video packets, which store image data to be reproduced in a plurality of video frames, can be recorded;
Figure 8B shows a key to the annotation used in Figure 8A;
Figure 9 shows how the audio data to be reproduced in a plurality of audio frames and the video data storing the image data to be reproduced in a plurality of video frames can be recorded;
Figure 10A shows the transition in the buffer state during the first part of a video stream;
Figure 10B shows the transition in the buffer state during the last part of a video stream;
Figure 10C shows the transition in the buffer state across two VOBs, when the video stream whose last part causes the buffer state shown in Figure 10B is seamlessly linked to the video stream whose first part causes the buffer state shown in Figure 10A;
Figure 11A is a graph where the SCRs of the video packets included in a VOB are plotted in the order in which the video packets are arranged;
Figure 11B shows an example where the first SCR in section B corresponds to the last SCR in section A;
Figure 11C shows an example where the first SCR in section D is higher than the last SCR in section C;
Figure 11D shows an example where the last SCR in section E is higher than the first SCR in section F;
Figure 11E shows the continuity plot of the VOBs in Figure 11A for two specific VOBs;
Figure 12A shows a detailed expansion of the data hierarchy in the RTR administration file;
Figure 12B shows the format of the PTM descriptor;
Figure 12C shows the data construction of the audio separation location information;
Figure 13 shows the buffer occupancy for each of a previous VOB and a last VOB;
Figure 14A shows examples of audio frames and video frames;
Figure 14B shows the time difference g1 appearing at the end of the audio data and the image data when the playback time of the image data and the playback time of the audio data are aligned at the start of a VOB;
Figure 14C shows the audio packet G3 including the audio separation and the audio packet G4, the audio packet G3 including (i) the audio data sets y-2, y-1, and y, which are located at the end of VOB#1, and (ii) Relleno_Group, and the audio packet G4 including the audio data sets u, u+1, and u+2, which are located at the start of VOB#2;
Figure 14D shows in which of VOBU#1, VOBU#2, and VOBU#3 at the start of VOB#2 the audio packet G3 including the audio separation is arranged;
Figures 15A to 15D show the procedure for regenerating the audio separation when the VOBUs located at the start of VOB#2, out of VOB#1 and VOB#2 which are to be played back seamlessly, are deleted;
Figure 16 shows a sample configuration of the system using the video data editing apparatus of the first embodiment;
Figure 17 is a block diagram showing the hardware construction of the DVD recorder 70;
Figure 18 shows the construction of the MPEG encoder 2;
Figure 19 shows the construction of the MPEG decoder 4;
Figure 20 is a timing diagram showing the timing for switching the switches SW1 to SW4;
Figure 21 is a flow chart showing the seamless processing procedure;
Figure 22 is also a flow chart showing the seamless processing procedure;
Figures 23A and 23B show the analysis of the transition in the buffer state for the audio packets;
Figure 23C shows the area to be read from the previous VOB in step S106;
Figure 23D shows the area to be read from the last VOB in step S107;
Figure 24A shows the audio data in the audio stream corresponding to the audio frames x, x + 1, y, u, u + 1, u + 2 used in Figure 22;
Figure 24B shows the case when First_SCR + STC_Compensation corresponds to a boundary between audio frames in the previous VOB;
Figure 24C shows the case when the video playback start time VOB_V_S_PTM + STC_compensation corresponds to a boundary between audio frames in the previous VOB;
Figure 24D shows the case when the presentation end time of the video frame corresponds to a boundary between audio frames in the last VOB;
Figure 25 shows how audio packets that store audio data for a plurality of audio frames and video packets that store video data for each video frame are multiplexed;
Figure 26 shows an example of the section of a VOB that is specified using the time information for a pair of C_V_S_PTM and C_V_E_PTM;
Figure 27A shows the area to be read from the previous cell in step S 106;
Figure 27B shows the area to be read from the last cell in step S107;
Figure 28A shows an example of linking sets of cell information that are specified as the edit boundaries in a VOBU;
Figure 28B shows the processing for the three rules for GOP reconstruction when correcting the display order and the coding order;
Figure 29A shows the processing when an image type of the image data in the previous cell changes;
Figure 29B shows the procedure for measuring the change β in buffer occupancy when the image type in the previous cell changes;
Figure 30A shows the processing when the type of image in the last cell changes;
Figure 30B shows the procedure for measuring the change α in buffer occupancy when an image type in the last cell changes;
Figure 31 is a flow chart showing the procedure for seamless processing;
Figure 32 is also a flow chart showing the procedure for seamless processing;
Figure 33 is also a flow chart showing the procedure for seamless processing;
Figure 34 shows the audio frames in the audio stream corresponding to the audio frames x, x+1, and y used in the flow chart of Figure 31;
Figure 35 shows the hierarchical directory structure;
Figure 36 shows the information, in addition to the sector administration table and AV block management table shown in Figure 6, in the administration information for the file system;
Figure 37 shows the linked relationships shown by the arrows in Figure 6 within the directory structure;
Figure 38A shows the data construction of the file entries in greater detail;
Figure 38B shows the construction of data of the allocation descriptors;
Figure 38C shows the content recorded in the upper 2 bits of the data showing the extension length;
Figure 39A shows the detailed data construction of the file identification descriptor for a directory;
Figure 39B shows the detailed data construction of the file identification descriptor for a file;
Figure 40 is a model showing the buffering, in the buffer, of the AV data read from the DVD-RAM;
Figure 41 is a functional block diagram showing the construction of the DVD recorder apparatus 70 divided by function;
Figure 42 shows an example of an interactive screen displayed on the TV monitor 72 under the control of the recording-editing-playback control unit 12;
Figure 43 is a flow diagram showing the processing by the recording-editing-playback control unit 12 for a virtual edit and for a real edit;
Figures 44A to 44F show a complementary example for illustrating the processing of the AV data editing unit 15 in the flow diagram of Figure 43;
Figures 45A through 45E show a complementary example to illustrate the processing of the AV data editing unit 15 in the flow diagram of Figure 43;
Figures 46A to 46F show a complementary example for illustrating the processing of the AV data editing unit 15 in the flow diagram of Figure 43;
Figure 47A shows the relationship between the extensions and the data in memory, in terms of time;
Figure 47B shows the position relationship between the extensions in the Input area and the Output area;
Figure 48A is a flow diagram showing the processing by unit 11 of the AV file system when a "DIVIDE" command is executed;
Figure 48B is a flow chart showing the processing when a "SHORT" command is issued;
Figure 49 is a flow chart showing the processing when an "APPEND" command is issued;
Figure 50 is a flow chart for the case when the previous extension is below the length of the AV block but the last extension is at least equal to the length of the AV block;
Figures 51A-51B are a complementary example showing the processing of unit 11 of the AV file system in the flow chart of Figure 50;
Figures 52A-52C are a complementary example showing the processing of unit 11 of the AV file system in the flow diagram of Figure 50;
Figures 53A to 53D are a complementary example showing the processing of unit 11 of the AV file system in the flow diagram of Figure 50;
Figures 54A-54D are a complementary example showing the processing of unit 11 of the AV file system in the flow chart of Figure 50;
Figure 55 is a flow diagram for the case when the previous extension is at least equal to the length of the AV block but the last extension is below the length of the AV block;
Figures 56A-56B are a complementary example showing the processing of unit 11 of the AV file system in the flow diagram of Figure 55;
Figures 57A-57C are a complementary example showing the processing of the unit 11 of the AV file system in the flow diagram of Figure 55;
Figures 58A-58D are a complementary example showing the processing of unit 11 of the AV file system in the flow diagram of Figure 55;
Figures 59A-59D are a complementary example showing the processing of unit 11 of the AV file system in the flow diagram of Figure 55;
Figure 60 is a flow diagram for the case when both the previous extension and the last extension are below the length of the AV block;
Figures 61A-61D are a complementary example showing the processing of unit 11 of the AV file system in the flow diagram of Figure 60;
Figures 62A-62C are a complementary example showing the processing of unit 11 of the AV file system in the flow diagram of Figure 60;
Figures 63A-63C are a complementary example showing the processing of unit 11 of the AV file system in the flow diagram of Figure 60;
Figures 64A-64D are a complementary example showing the processing of unit 11 of the AV file system in the flow diagram of Figure 60;
Figure 65 is a flow chart for the case when both the previous extension and the last extension are at least equal to the length of the AV block;
Figures 66A-66D are a complementary example showing the processing of unit 11 of the AV file system in the flow diagram of Figure 65;
Figure 67 is a flow diagram showing the case when both the previous extension and the last extension are at least equal to the length of the AV block but the data sizes in the Input area and the Output area are insufficient;
Figures 68A-68E are a complementary example showing the processing of unit 11 of the AV file system in the flow diagram of Figure 67;
Figures 69A-69D are a complementary example showing the processing of the fragmentation unit 16;
Figure 70A shows the detailed hierarchical content of the RTRW administration file in the fourth embodiment;
Figure 70B shows the logical format of the original PGC information in the fourth embodiment;
Figure 70C shows the logical format of the user-defined PGC information in the fourth embodiment;
Figure 70D shows the logical format of the title search indicator;
Figure 71 shows the interrelationships between the AV file, the extensions, the VOBs, the VOB information, the original PGC information, and the user-defined PGC information, with the unified elements enclosed in boxes drawn with thick lines;
Figure 72 shows an example of a user-defined PGC and an original PGC;
Figure 73 shows a part corresponding to the cell to be erased using diagonal shading;
Figure 74A shows how ECC blocks are freed into empty areas by a real edit using the user-defined PGC information #2;
Figure 74B shows examples of the VOB, VOB information, and PGC information after a real edit;
Figure 75 is a functional block diagram showing the construction of the DVD recorder apparatus 70 divided according to function;
Figure 76 shows an example of the original PGC information that has been generated by the user-defined PGC information generator 25 when recording an AV file;
Figure 77A shows an example of graph data that is displayed on the TV monitor 72 under the control of the recording-editing-playback control unit 12;
Figure 77B shows an example of the PGC information and the cell information that is displayed as a list of operation objectives;
Figure 78A is a flow chart showing the processing during partial reproduction of a title;
Figure 78B shows how only the section between the presentation start time C_V_S_PTM and the presentation end time C_V_E_PTM is played, from among the VOBUs between VOBU(START) and VOBU(END);
Figures 79A, 79B show the user pressing the mark key while viewing the video images on the TV monitor 72;
Figures 80A, 80B show how data is entered and transferred between the components shown in Figure 75 when a marking operation is performed;
Figure 81 is a flow chart showing the processing of the multistage editing control unit 26 when defining the user-defined PGC information;
Figure 82 is a flow diagram showing the processing of the multistage editing control unit 26 when the user-defined PGC information is defined;
Figure 83 is a flowchart showing the processing of the recording-editing-playback control unit 12 during a virtual edit and a real edit;
Figure 84 is a flowchart showing the update processing for the PGC information after a real edit;
Figure 85 shows an example of the interactive screen that is displayed on the TV monitor 72 to have the user select cell information as an element of a set of user-defined PGC information during a virtual edit;
Figures 86A, 86B show the relationship between the operation of the user of the remote controller 71 and the display processing that accompanies the operation of the user;
Figures 87A, 87B show the relationship between the operation of the user of the remote controller 71 and the display processing that accompanies the operation of the user;
Figures 88A, 88B show the relationship between the operation of the user of the remote controller 71 and the display processing that accompanies the operation of the user;
Figures 89A, 89B show the relationship between the operation of the user of the remote controller 71 and the display processing that accompanies the operation of the user;
Figure 90 shows an example of the interactive screen on which the user selects a set of user-defined PGC information and either a preview (using the play key) or a real edit (using the real edit key);
Figure 91 shows an example of the original PGC information table and the user-defined PGC information table when user-defined PGC information #2, composed of CELL#2B, CELL#4B, CELL#10B, and CELL#5B, and user-defined PGC information #3, composed of CELL#3C, CELL#6C, CELL#8C, and CELL#9C, are defined;
Figures 92A-92B show the relationship between the operation of the user of the remote controller 71 and the display processing that accompanies the operation of the user;
Figures 93A-93C show the relationship between the operation of the user of the remote controller 71 and the display processing that accompanies the operation of the user;
Figures 94A-94C show the relationship between the operation of the user of the remote controller 71 and the display processing that accompanies the operation of the user; and
Figure 95 shows the original PGC information table and the user-defined PGC information table after the processing of the VOB in a real edit.
DESCRIPTION OF THE PREFERRED EMBODIMENTS
The following embodiments describe a video data editing apparatus and the optical disc that the video data editing apparatus uses as its recording medium. For ease of explanation, the description is divided into four embodiments that deal with the physical structure of the optical disc, its logical structure, the hardware structure of the video data editing apparatus, and the functional construction of the video data editing apparatus. The first embodiment explains the physical structure of the optical disc and the hardware structure of the video data editing apparatus, as well as the seamless linking of video objects as the first basic example of video editing. The second embodiment explains the seamless connection of partial sections of video objects as the second basic example. The third embodiment deals with the functional construction of the video data editing apparatus and the procedure for performing video editing within a file system. The fourth embodiment describes the data structures and processes of the video data editing apparatus when it performs a two-step editing process composed of virtual editing and real editing of two types of program chain, called a user-defined PGC and an original PGC.
(1-1) Physical Structure of a Rewritable Optical Disc
Figure 2A shows the external appearance of a DVD-RAM disc, which is a rewritable optical disc. As shown in this drawing, the DVD-RAM is loaded into a video data editing apparatus having been placed in a cartridge 75. This cartridge 75 protects the recording surface of the DVD-RAM, and has a shutter 76 that opens and closes to allow access to the DVD-RAM held inside. Figure 2B shows the recording area of the DVD-RAM disc. As shown in the figure, the DVD-RAM has a lead-in area at its innermost periphery and a lead-out area at its outermost periphery, with a data area between them. The lead-in area records the reference signals necessary for stabilizing a servo during access by an optical pickup, and identification signals to prevent confusion with other media. The lead-out area records the same types of reference signals as the lead-in area. The data area, meanwhile, is divided into sectors, which are the smallest units through which the DVD-RAM can be accessed. Here, the size of each sector is set at 2 KB. Figure 2C shows the cross section and the surface of a DVD-RAM cut across the header of a sector. As shown in the figure, each sector is composed of a pit sequence formed on the surface of a reflective film, such as a metal film, and a concave-convex part. The pit sequence is composed of 0.4 μm ~ 1.87 μm pits cut into the surface of the DVD-RAM to show the address of the sector. The concave-convex part is composed of a concave part called a "groove" and a convex part called a "land". Each groove and land has a recording mark composed of a metal film capable of phase change attached to its surface. Here, the expression "capable of phase change" means that the recording mark may be in a crystalline state or in a non-crystalline state depending on whether the metal film has been exposed to a light beam. Using this phase change characteristic, data can be recorded in this concave-convex part.
While it is possible to record data only on the land part of an MO (Magneto-Optical) disc, data can be recorded on both the land and the groove of a DVD-RAM, meaning that the recording density of a DVD-RAM exceeds that of an MO disc. Error correction information is provided on a DVD-RAM for each group of 16 sectors. In this specification, each group of 16 sectors that is given an ECC (Error Correction Code) is called an ECC block. In a DVD-RAM, the data area is divided into several zones to perform the rotation control called Z-CLV (Zoned Constant Linear Velocity) during recording and playback. Figure 3A shows the plurality of zones provided on a DVD-RAM. As shown in the figure, a DVD-RAM is divided into 24 zones, numbered zone 0 ~ zone 23. Each zone is a group of tracks that are accessed using the same angular velocity. In this embodiment, each zone includes 1888 tracks. The rotational angular velocity of the DVD-RAM is set separately for each zone, with this velocity being higher the closer the zone is to the inner periphery of the disc. The division of the data area into zones ensures that the optical pickup can move at a constant speed while performing access within an individual zone. By doing so, the recording density of the DVD-RAM is increased, and rotation control during recording and playback becomes easier. Figure 3B shows a horizontal arrangement of the lead-in area, the lead-out area, and zones 0 ~ 23 shown in Figure 3A. The lead-in area and the lead-out area each include a defect management area (DMA: Defect Management Area). This defect management area records position information showing the positions of sectors found to include defects, and replacement position information showing whether the sectors used to replace defective sectors are located in any of the replacement areas. Each zone has a user area, in addition to a replacement area and an unused area that are provided at the boundary with the next zone.
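The sector and ECC-block arithmetic described above can be sketched as follows. This is a minimal illustration; the function names are hypothetical, while the 2 KB sector size and the 16-sector ECC block grouping come from the text.

```python
SECTOR_SIZE = 2 * 1024          # 2 KB per sector on a DVD-RAM
ECC_BLOCK_SECTORS = 16          # one ECC block covers 16 sectors

def ecc_block_of(lsn: int) -> int:
    """Return the index of the ECC block containing the given sector."""
    return lsn // ECC_BLOCK_SECTORS

def ecc_block_span(lsn: int) -> tuple:
    """Return the first and last sector numbers of the ECC block holding lsn.
    Rewriting any one sector affects the error correction of this whole span."""
    first = ecc_block_of(lsn) * ECC_BLOCK_SECTORS
    return first, first + ECC_BLOCK_SECTORS - 1

# An ECC block therefore holds 16 x 2 KB = 32 KB of user data.
ECC_BLOCK_BYTES = ECC_BLOCK_SECTORS * SECTOR_SIZE
```

Because the error correction code spans 16 sectors, rewriting even a single 2 KB sector implies regenerating the ECC for the whole 32 KB block, which is one reason the later editing discussion works in ECC-block units.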
A user area is an area that the file system can use as a recording area. The replacement area is used to replace bad sectors when such bad sectors are found. An unused area is an area that is not used for recording data. Only two tracks are used as the unused area, with the unused area being provided to prevent erroneous identification of sector addresses. The reason for this is that while sector addresses are recorded at the same position on adjacent tracks within the same zone, under Z-CLV the recording positions of sector addresses differ for adjacent tracks at the boundaries between zones.
In this way, sectors that are not used for recording data exist at the boundaries between zones. On a DVD-RAM, logical sector numbers (LSN: Logical Sector Number) are assigned to the physical sectors of the user area so as to consecutively number, starting from the inner periphery, only the sectors used for data recording. As shown in Figure 3C, the area that records user data and is composed of sectors that have been assigned an LSN is called the volume area. The volume area is used to record AV files, which are each composed of a plurality of VOBs, and an RTRW (Real-Time Rewritable) administration file that is the management information for the AV files. These AV files and the RTRW administration file are actually recorded in a file system conforming to ISO/IEC 13346, although this will not be explained in this embodiment. The file system is discussed in detail in the third embodiment.
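The LSN assignment just described, which numbers only user-area sectors and skips the replacement and unused areas at each zone boundary, can be sketched as follows. The function name and the toy zone sizes are hypothetical; only the numbering rule itself comes from the text.

```python
def build_lsn_map(zones):
    """zones: list of (user_sectors, skipped_sectors) tuples, listed from the
    inner periphery outward. Returns a list mapping each LSN to the physical
    sector number it designates; skipped sectors receive no LSN."""
    lsn_to_psn = []
    psn = 0
    for user, skipped in zones:
        for _ in range(user):
            lsn_to_psn.append(psn)   # consecutive LSNs over user sectors only
            psn += 1
        psn += skipped               # replacement/unused sectors are passed over
    return lsn_to_psn

# Toy numbers, not real zone sizes: three zones of 100 user sectors,
# with 4 sectors skipped at the first two zone boundaries.
zones = [(100, 4), (100, 4), (100, 0)]
lsn_map = build_lsn_map(zones)
```

With this layout, LSN 100 (the first sector of the second zone) maps to physical sector 104, because the 4 skipped sectors at the zone boundary never receive logical numbers.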
(1-2) Recorded Data in the Volume Area
Figure 4A shows the content of the data recorded in the volume area of a DVD-RAM. The video stream and the audio stream shown at the fifth level of Figure 4A are divided into units of approximately 2 KB, as shown at the fourth level. The units obtained through this division are interleaved into VOB#1 and VOB#2 in the AV file shown at the third level as video packets and audio packets in accordance with the MPEG standard. The AV file is divided into a plurality of extensions as shown at the second level, according to ISO/IEC 13346, and these extensions are each stored in an empty area within a zone in the volume area, as shown at the first level of Figure 4A. The information for VOB#1 ~ VOB#3 is recorded in the RTRW administration file as the VOB#1 information, VOB#2 information, and VOB#3 information shown at the fifth level. In the same way as the AV file, this RTRW file is divided into a plurality of extensions that are recorded in empty areas in the volume area. The following explanation will deal with video streams, audio streams, and VOBs separately, having first explained the hierarchical structure of the MPEG standard and the DVD-RAM standard that defines the data structures of these elements. Figure 4B shows the hierarchical structure of the data definitions used under the MPEG standard. The data structure of the MPEG standard is composed of an elementary stream layer and a system layer. The elementary stream layer shown in Figure 4B includes a video layer defining the data structure of video streams, an MPEG-Audio layer that defines the data structure of an MPEG-Audio stream, an AC-3 layer that defines the data structure of an audio stream under the Dolby-AC3 method, and a linear PCM layer that defines the data structure of an audio stream under the linear PCM method. The presentation start time (Presentation_Start_Time) and the presentation end time (Presentation_End_Time) are defined within the elementary stream layer although, as shown by the separate frames for the video layer, the MPEG-Audio layer, the AC-3 layer, and the linear PCM layer, the data structures of the video stream and the audio stream are independent of each other. The presentation start time and presentation end time of a video frame and the presentation start time and presentation end time of an audio frame are likewise not synchronized. The system layer shown in Figure 4B defines the packs, packets, DTS, and PTS described below. In Figure 4B, the system layer is shown in a box separate from the video layer and the audio layer, showing that the packs, packets, DTS, and PTS are independent of the data structures of the video streams and the audio streams.
While the above layer structure is used for the MPEG standard, the DVD-RAM standard includes the system layer of the MPEG standard shown in Figure 4B and an elementary stream layer. In addition to the packs, packets, DTS, and PTS described above, the DVD-RAM standard defines the data structures of the VOBs shown in Figure 4A.
(1-2-1) Video Stream
The video stream shown in Figure 5A has a data structure that is defined by the video layer shown in Figure 4B. Each video stream is composed of an array of a plurality of image data sets, each corresponding to one frame of video images. This image data is a video signal in accordance with the NTSC (National Television Standards Committee) or PAL (Phase Alternation Line) standard that has been compressed using MPEG techniques. The image data sets produced by compressing a video signal under the NTSC standard are displayed as video frames that have a frame interval of approximately 33 milliseconds (1/29.97 seconds to be precise), while the image data sets produced by compressing a video signal under the PAL standard are displayed as video frames that have a frame interval of 40 milliseconds. The upper level of Figure 5A shows example video frames. In Figure 5A, the sections indicated between the symbols "<" and ">" are each a video frame, with the "<" symbol showing the presentation start time (Presentation_Start_Time) of each video frame and the ">" symbol showing the presentation end time (Presentation_End_Time) of each video frame. This notation for video frames is also used in the following drawings. The sections enclosed by these symbols each include a plurality of video fields. As shown in Figure 5A, the image data to be displayed for a video frame is entered into a decoder before the
Presentation_Start_Time of the video frame and must be taken from the buffer by the decoder at the Presentation_Start_Time. When compression is performed in accordance with MPEG standards, the spatial frequency characteristics within the image of a frame and the time-related correlation with the images displayed before or after the frame are used. In doing so, each set of image data becomes one of a bidirectionally predictive (B) picture, a predictive (P) picture, or an intra (I) picture. A B picture is used where compression is performed using the time-related correlation with images that are reproduced both before and after the present image. A P picture is used when performing compression using the time-related correlation with images that are reproduced before the present image. An I picture is used when performing compression using the spatial frequency characteristics within a frame, without using time-related correlation with other images. Figure 5A shows B pictures, P pictures, and I pictures as if they all had the same size, although it should be noted that in reality there is more variation in their sizes. When decoding a B picture or a P picture using the time-related correlation between frames, it is necessary to refer to the images to be reproduced before or after the image being decoded. For example, when a B picture is decoded, the decoder has to wait until the decoding of the next image has been completed. As a result, an MPEG video stream defines the coding order of the images as well as defining the display order of the images. In Figure 5A, the second and third levels respectively show the image data sets arranged in display order and in coding order. In Figure 5A, the reference target of one of the B pictures is shown by the dashed line drawn to the following I picture.
In the display order, this I picture follows the B picture, although since the B picture is compressed using the time-related correlation with the I picture, the decoding of the B picture has to wait until the decoding of the I picture has ended. As a result, the coding order places the I picture before the B picture. This rearrangement of the display order of the images when the coding order is generated is called "reordering". As shown at the third level of Figure 5A, each image data set is divided into units of 2 KB after it is arranged in the coding order. The resulting 2 KB units are stored as a sequence of video packets, as shown at the bottom level of Figure 5A. When a sequence of B pictures and P pictures is used, problems can be caused for special playback features that perform decoding starting from a midpoint of the video stream. To prevent these problems, an I picture is inserted into the video data at intervals of 0.5 seconds. Each sequence of image data that starts from an I picture and continues up to the next I picture is called a GOP (Group of Pictures), with GOPs being defined in the system layer of the MPEG standard as the unit for MPEG compression. At the third level of Figure 5A, the dotted vertical line shows the boundary between the present GOP and the next GOP. In each GOP, the image type of the image data arranged last in the display order is a P picture, while the image type of the image data arranged first in the coding order must be an I picture.
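The reordering described above can be illustrated with a short sketch. The function name is hypothetical; the rule it encodes, that each B picture is emitted after the reference (I or P) picture that follows it in display order, is the behavior the text describes.

```python
def display_to_coding_order(display):
    """Reorder picture labels from display order to coding order: every B
    picture waits until the reference (I or P) picture that follows it in
    display order has been emitted. Simplified sketch."""
    coding, pending_b = [], []
    for pic in display:
        if pic[0] == 'B':
            pending_b.append(pic)    # B needs the next reference picture first
        else:                        # I or P: emit it, then the waiting Bs
            coding.append(pic)
            coding.extend(pending_b)
            pending_b = []
    coding.extend(pending_b)         # trailing Bs (none in a well-formed GOP)
    return coding

gop_display = ['I1', 'B2', 'B3', 'P4', 'B5', 'B6', 'P7']
gop_coding = display_to_coding_order(gop_display)
# gop_coding == ['I1', 'P4', 'B2', 'B3', 'P7', 'B5', 'B6']
```

Note how the result begins with an I picture and defers every B picture until after its forward reference, matching the constraint stated above for the coding order of a GOP.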
(1-2-2) Audio Stream
An audio stream is data that has been compressed according to one of the Dolby-AC3 method, the MPEG method, or the linear PCM method. Like a video stream, an audio stream is generated using audio frames that have a fixed frame interval. Figure 5B shows the correspondence between the audio frames and the audio data. In detail, the playback period of an audio frame is 32 milliseconds for Dolby-AC3, 24 milliseconds for MPEG, and about 1.67 milliseconds (1/600 seconds to be precise) for linear PCM. The upper level of Figure 5B shows example audio frames. In Figure 5B, each section indicated between the symbols "<" and ">" is an audio frame, with the "<" symbol showing the presentation start time and the ">" symbol showing the presentation end time. This notation for audio frames is also used in the following drawings. The audio data to be played for an audio frame is entered into a decoder before the presentation start time of the audio frame and must be extracted from the buffer by the decoder at the presentation start time.
The bottom level of Figure 5B shows an example of how the audio data to be played in each frame is stored in audio packets. In this figure, the audio data to be played for audio frames f81 and f82 is stored in audio packet A71, the audio data for audio frame f84 is stored in audio packet A72, and the audio data for audio frames f86 and f87 is stored in audio packet A73. The audio data to be played for audio frame f83 is divided between audio packet A71, which comes first, and audio packet A72, which comes later. In the same way, the audio data to be played for audio frame f85 is divided between audio packet A72, which comes first, and audio packet A73, which comes later. The reason the audio data to be played for an audio frame may be stored divided between two audio packets is that the boundaries between audio frames do not correspond to the boundaries between packets. The reason these boundaries do not correspond is that the data structure of the packets under the MPEG standard is independent of the data structures of the video streams and the audio streams.
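The way fixed-length audio frames straddle packet boundaries can be sketched as follows. The packing routine, the payload size, and the frame size are all illustrative assumptions (real packet headers consume a varying part of each 2 KB packet); only the splitting behavior itself comes from the text.

```python
PACKET_PAYLOAD = 2000   # illustrative payload bytes per 2 KB audio packet
                        # (headers occupy the remainder; exact sizes vary)

def pack_audio_frames(frame_sizes):
    """Assign each audio frame's bytes to consecutive packets.
    Returns, per frame, the list of packet numbers holding part of it."""
    placement, packet_no, room = [], 0, PACKET_PAYLOAD
    for size in frame_sizes:
        holders = []
        while size > 0:
            holders.append(packet_no)
            used = min(size, room)   # fill the current packet as far as possible
            size -= used
            room -= used
            if room == 0:            # packet full: continue in the next one
                packet_no += 1
                room = PACKET_PAYLOAD
        placement.append(holders)
    return placement

# Three 1536-byte frames: the second and third each straddle two packets,
# like frames f83 and f85 in Figure 5B.
layout = pack_audio_frames([1536, 1536, 1536])
# layout == [[0], [0, 1], [1, 2]]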
(1-2-3) Data Structure of the VOB
The VOBs (Video Objects) #1, #2, #3 ... shown in Figure 4A are program streams under ISO/IEC 13818-1 that are obtained by multiplexing a video stream and an audio stream, although these VOBs do not have a program_end_code at their end. Figure 6A shows the detailed hierarchy of the logical construction of a VOB. This means that the logical format located at the highest level of Figure 6A is shown in more detail at the lower levels. The video stream located at the highest level of Figure 6A is shown divided into a plurality of GOPs at the second level, these GOPs having been shown in Figure 5A. As in Figure 5A, the image data in GOP units is divided into a large number of 2 KB units. Meanwhile, the audio stream shown to the left of the highest level in Figure 6A is divided into a large number of units of approximately 2 KB at the third level, in the same manner as in Figure 5B. The image data for a GOP unit, divided into 2 KB units, is interleaved with the audio stream, which is similarly divided into units of approximately 2 KB. This produces the sequence of packets at the fourth level of Figure 6A. This packet sequence forms a plurality of VOBUs (Video Object Units) that are shown at the fifth level, with the VOB (Video Object) shown at the sixth level being composed of a plurality of these VOBUs arranged in a time series. In Figure 6A, the guide lines drawn using dashed lines show the relationships between the data structures at adjacent levels. By referring to the guide lines in Figure 6A, it can be seen that the VOBUs at the fifth level correspond to the packet sequences at the fourth level and to the image data in GOP units shown at the second level. As can be seen by tracing the guide lines, each VOBU is a unit that includes at least one GOP composed of image data with a reproduction period of approximately 0.4 to 1.0 seconds and the audio data that has been interleaved with this image data.
At the same time, each VOBU is composed of an array of video packets and audio packets under the MPEG standard. The unit called a GOP under the MPEG standard is defined by the system layer, although when only the video data is specified by a GOP, as shown at the second level of Figure 6A, the audio data and other data (such as sub-picture data and control data) that are multiplexed with the video data are not indicated by the GOP. Under the DVD-RAM standard, the expression "VOBU" is used for a unit that corresponds to a GOP, this unit being a general name for at least one GOP composed of image data with a reproduction period of approximately 0.4 to 1.0 seconds and the audio data that has been interleaved with this image data. Here, it is possible for parts of a VOB to be erased, with the minimum unit being a VOBU. As an example, the video stream recorded on a DVD-RAM as a VOB may contain images for a commercial that is not wanted by the user. The VOBUs in this VOB include at least one GOP composing the commercial and the audio data interleaved with this image data, so that if only the VOBUs in the VOB that correspond to the commercial are deleted, the user will be able to watch the video stream without having to see the commercial. Here, even if a VOBU is erased, the VOBUs on either side of the deleted VOBU will each include a portion of the video stream in GOP units having an I picture located at its front. This means that a normal decoding and reproduction process is possible even after a VOBU has been erased. Figure 6B shows an example where part of a VOB is deleted. This VOB originally includes VOBU#1, VOBU#2, VOBU#3, VOBU#4 ... VOBU#7. When the deletion of VOBU#2, VOBU#4, and VOBU#6 is indicated, the areas that were originally occupied by these VOBUs are freed and are thus shown as empty areas at the second level of Figure 6B.
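The partial-erase behavior of Figure 6B, with the VOBU as the minimum erasable unit, can be sketched as follows (the function name is hypothetical; the example data mirrors the figure):

```python
def delete_vobus(vob, to_delete):
    """Partial erase of a VOB: the minimum erasable unit is one VOBU.
    Deleted VOBUs leave empty areas on the disc; the remaining VOBUs,
    each starting with an I picture, are played in their original order."""
    return [vobu for vobu in vob if vobu not in to_delete]

vob = ['VOBU#1', 'VOBU#2', 'VOBU#3', 'VOBU#4', 'VOBU#5', 'VOBU#6', 'VOBU#7']
remaining = delete_vobus(vob, {'VOBU#2', 'VOBU#4', 'VOBU#6'})
# remaining == ['VOBU#1', 'VOBU#3', 'VOBU#5', 'VOBU#7']
```

Because every surviving VOBU still begins with an I picture, decoding can resume at each one without reference to the erased material, which is the property the text relies on.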
When the VOB is subsequently played, the playback order is VOBU#1, VOBU#3, VOBU#5, and VOBU#7. The video packets and audio packets included in a VOBU each have a data length of 2 KB. This 2 KB size corresponds to the sector size of a DVD-RAM, so that each video packet and audio packet is recorded in a separate sector. The arrangement of video packets and audio packets corresponds to the arrangement of an equal number of consecutive logical sectors, and the data held within these packets is read from the DVD-RAM accordingly. That is, the arrangement of video packets and audio packets reflects the order in which these packets are read from the DVD-RAM. Since each video packet is approximately 2 KB in size, if the size of the video stream data for a VOBU is several hundred KB, for example, the
video stream will be stored divided across several hundred video packets.
(1-2-3-1) Data Structures of Video Packets and Audio Packets
Figures 6C to 6E show the logical format of the video packets and audio packets stored in a VOBU. Normally, a plurality of packets can be inserted into one pack in an MPEG system stream, although under the DVD-RAM standard the number of packets that can be inserted into a pack is restricted to one. Figure 6C shows the logical format of the video packet arranged at the start of a VOBU. As shown in Figure 6C, the first video packet in a VOBU is composed of a pack header, a system header, a packet header, and video data that is part of the video stream.
Figure 6D shows the logical format of the video packets that do not come first in a VOBU. As shown in Figure 6D, these video packets are each composed of a pack header, a packet header, and video data, without the system header. Figure 6E shows the logical format of the audio packets. As shown in Figure 6E, each audio packet is composed of a pack header, a packet header, a sub_stream_id that shows whether the compression method used for the audio stream included in the present packet is Linear PCM or Dolby-AC3, and audio data that is part of the audio stream and has been compressed according to the indicated method.
(1-2-3-2) Buffer Control within a VOB
The video stream and the audio stream are stored in video packets and audio packets as described above. However, in order to reproduce the VOBs seamlessly, it is not enough to simply store the video stream and the audio stream in video packets and audio packets; what is needed, in addition to the proper arrangement of the video packets and the audio packets, is uninterrupted buffer control. The buffers referred to here are input buffers for temporarily storing the video stream and the audio stream before input into a decoder. Hereinafter, the separate buffers are referred to as the video buffer and the audio buffer, with specific examples shown as the video buffer 4b and the audio buffer 4d in Figure 19. Uninterrupted buffer control refers to input control for the buffers which ensures that no overflow or underflow will occur in any input buffer. This is described in more detail later, but it is achieved primarily by assigning the time stamps (which show the correct times for data input, output, and display) that are standardized for an MPEG stream to the pack header and the packet header shown in Figure 6D and Figure 6E. If no underflows or overflows occur for the video buffer and the audio buffer, there will be no interruptions in the playback of the video stream and the audio stream. As will become clear from this specification, it is very important that buffer control is uninterrupted. There is a time limitation whereby each set of audio data needs to be transferred to the audio buffer and decoded by the presentation start time of the audio frame to be played using this data. Since audio streams are encoded using fixed-length coding with a relatively small amount of data, the data required for playback of each audio frame can be stored in the audio packets. These audio packets are transferred to the audio buffer during playback, meaning that the time limitation described above can be easily met.
Figure 7A is a graph showing the ideal buffer operation for the audio buffer. This figure shows how the buffer occupancy changes over a sequence of audio frames. In this specification, the term "buffer occupancy" refers to the extent to which the capacity of a buffer to store data is being used. The vertical axis of Figure 7A shows the occupancy of the audio buffer, while the horizontal axis represents time. This time axis is divided into sections of 32 milliseconds, which correspond to the reproduction period of each audio frame under the Dolby AC-3 method. By referring to this graph, it can be seen that the buffer occupancy changes over time to exhibit a sawtooth pattern.
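The ideal sawtooth of Figure 7A can be sketched numerically. The function below is a hypothetical model under the assumptions stated in the text: the data for each 32 ms audio frame accumulates at a constant rate during the preceding frame period and is removed instantaneously at the frame boundary (the frame size used in the test is illustrative).

```python
FRAME_MS = 32  # Dolby AC-3 audio frame period under this premise

def audio_occupancy(t_ms, bytes_per_frame):
    """Audio buffer occupancy at time t_ms for the ideal sawtooth:
    transfer of each frame's data starts at the previous frame
    boundary and completes exactly at the next one, where the
    accumulated data is removed instantaneously."""
    within = t_ms % FRAME_MS              # position inside the current tooth
    return bytes_per_frame * within / FRAME_MS
```

At each 32 ms boundary the occupancy returns to 0, reproducing the sawtooth described above.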
The height of each triangular tooth that makes up the sawtooth pattern represents the amount of data in the part of the audio stream that will be played in each audio frame. The gradient of each triangular tooth represents the transfer rate of the audio stream. This transfer rate is the same for all audio frames. During the period corresponding to a triangular tooth, audio data accumulates at a constant transfer rate during the display period (32 milliseconds) of the audio frame preceding the audio frame that is played using this audio data. At the presentation end time of the preceding audio frame (which is the time representing the decoding time for the present frame), the audio data for the present frame is transferred instantaneously out of the audio buffer. The reason a sawtooth pattern results is that this processing, from storage in the buffer to transfer out of the buffer, is continuously repeated. As an example, assume that the transfer of an audio stream into the audio buffer starts at time T1. This audio data is to be reproduced at time T2, so the amount of data stored in the audio buffer gradually increases between time T1 and time T2 due to the transfer of this audio data. However, because this transferred audio data is taken out at the presentation end time of the preceding audio frame, the audio buffer is cleared of the audio data at that point, so that the occupancy of the audio buffer returns to 0. In Figure 7A, the same pattern is repeated between time T2 and time T3, between time T3 and time T4, and so on. The buffer operation shown in Figure 7A is the ideal buffering operation under the premise that the audio data to be played in each audio frame is stored in one audio packet. In reality, however, it is normal for audio data to be played in several different audio frames to be stored in one audio packet, as shown in Figure 5B. Figure 7B shows a more realistic operation for the audio buffer.
In this figure, the audio pack A31 stores the audio data A21, A22, and A23 that must respectively be decoded by the presentation end times of the audio frames f21, f22, and f23. As shown in Figure 7B, only the decoding of the audio data A21 will be finished by the presentation end time of the audio frame f21, with the decoding of the other audio data sets A22 and A23 being finished respectively by the presentation end times of the following audio frames f22 and f23. Of the audio data included in this audio pack, the audio data A21 must be decoded first, with the decoding of this audio data needing to be finished by the presentation end time of the audio frame f21. Therefore, this audio pack must be read from the DVD-RAM during the playback period of the audio frame f21. Video streams are encoded with variable code length due to the large differences in code size between the different types of pictures (I pictures, P pictures, and B pictures) used in compression methods that exploit temporal correlation. Video streams also include a significant amount of data, so it is difficult to complete the transfer of the picture data for a video frame, especially the picture data for an I picture, by the presentation end time of the preceding video frame. Figure 7C is a graph showing the video frames and the occupancy of the video buffer. In Figure 7C, the vertical axis represents the occupancy of the video buffer, while the horizontal axis represents time. This horizontal axis is divided into sections of 33 milliseconds, each of which corresponds to the video frame reproduction period under the NTSC standard. By referring to this graph, it can be seen that the occupancy of the video buffer changes over time to exhibit a sawtooth pattern. The height of each triangular tooth that makes up the sawtooth pattern represents the amount of data in the part of the video stream that will be played in each video frame.
As mentioned above, the amount of data in each video frame is not equal, since the amount of code for each video frame is assigned dynamically according to the complexity of the frame. The gradient of each triangular tooth shows the transfer rate of the video stream. The approximate transfer rate of the video stream is calculated by subtracting the output rate of the audio stream from the output rate of the track buffer. This transfer rate is the same during each frame period. During the period corresponding to a triangular tooth in Figure 7C, the picture data accumulates at a constant transfer rate during the display period (33 milliseconds) of the video frame that precedes the video frame reproduced using this picture data. At the presentation end time of the preceding video frame (this time representing the decoding time for the present picture data), the picture data for the present picture is transferred instantaneously out of the video buffer. The reason a sawtooth pattern results is that this processing, from storage in the video buffer to transfer out of the video buffer, is constantly repeated. When the picture to be displayed in a given video frame is complex, a large amount of code needs to be assigned to this frame. When a large amount of code is assigned, this means that the storing of data in the video buffer needs to be started further in advance. The period from the transfer start time, at which the transfer of the picture data into the video buffer is started, to the decoding time for the picture data is normally called the VBV (Video Buffering Verifier) delay. In general, the more complex the picture, the greater the amount of code assigned and the greater the VBV delay. As can be seen from Figure 7C, the transfer of the picture data that is decoded at the presentation end time T16 of the preceding video frame is started at time T11.
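The two quantities just described can be sketched as small helper calculations. The rate values in the test are illustrative assumptions, not figures taken from the patent: the subtraction implements the approximate video transfer rate described above, and the second function shows why a large picture (such as an I picture) implies a longer VBV delay.

```python
def video_transfer_rate(track_rate_bps, audio_rate_bps):
    """Approximate rate available to the video stream: the track
    buffer output rate minus the audio stream output rate."""
    return track_rate_bps - audio_rate_bps

def vbv_delay_seconds(picture_bytes, video_rate_bps):
    """Time needed to pre-store a picture in the video buffer before
    its decode time; larger pictures give a longer VBV delay."""
    return picture_bytes * 8 / video_rate_bps
```

With these illustrative numbers, doubling the picture size doubles the VBV delay, matching the statement that more complex pictures must start transferring earlier.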
The transfer of the picture data that is decoded at the presentation end time T18 of the preceding video frame, meanwhile, is started at time T12. The transfer of picture data for other video frames can be seen to start at times T14, T15, T17, T19, T20, and T21. Figure 7D shows the transfer of picture data sets in more detail. Considering the situation in Figure 7C, the transfer of the picture data to be decoded at time T24 in Figure 7D needs to be completed in the period "Tf_Period" between the start time T23 of the "VBV delay" and the start of the transfer of the picture data for the next video frame to be played. The increase in buffer occupancy that occurs from this Tf_Period onward is caused by the transfer of the picture data for the picture to be displayed in the next video frame. The picture data accumulated in the video buffer waits for the time T24 at which the picture data is to be decoded. At the decoding time T24, the picture A, which constitutes part of the picture data stored in the video buffer, is decoded, thereby reducing the total occupancy of the video buffer. Considering the above situation, it can be seen that while it is sufficient for the transfer of the audio data to be played in a certain audio frame to be started around one frame in advance, the transfer of the picture data for a certain video frame needs to be started well before the decoding time of this picture data. In other words, the audio data to be played in a certain audio frame must be input into the buffer at around the same time as the picture data for a video frame that is well ahead of the audio frame. This means that when the audio stream and the video stream are multiplexed into an MPEG stream, the picture data needs to be multiplexed ahead of the audio data that is decoded at around the same time. As a result, a VOBU is actually composed of video data that will be played later together with audio data that will be played sooner.
The arrangement of the plurality of video packets and audio packets described above reflects the order in which the data included in the packets is transferred. Accordingly, to have the audio data to be played in an audio frame read at approximately the same time as the picture data to be played in a video frame that is well ahead of the audio frame, the audio packets and the video packets that store the audio data and the video data in question need to be arranged in the same part of the VOB. Figure 8A shows how the audio packets, which store the audio data to be played in each audio frame, and the video packets, which store the picture data to be played in each video frame, should be arranged. In Figure 8A, the rectangles marked "V" and "A" represent each video packet and each audio packet. Figure 8B shows the meaning of the width and height of each of these rectangles. As shown in Figure 8B, the height of each rectangle shows the bitrate used to transfer the packet. As a result, tall packets are transferred at a high bitrate, which means that the packet can be input into a buffer relatively quickly. Short packets, however, are transferred at a low bitrate, and so take a relatively long time to be transferred to the buffer. The picture data V11 that is decoded at time T11 in Figure 8B is transferred during the period k11. Since the transfer and decoding of the audio data A11 are performed during this period k11, the video packets storing the video data V11 and the audio packets storing the audio data A11 are arranged in similar positions, as shown in the lower part of Figure 8A. The picture data V12 that is decoded at time T12 in Figure 8A is transferred during the period k12.
Since the transfer and decoding of the audio data A12 are performed during this period k12, the video packets storing the video data V12 and the audio packets storing the audio data A12 are arranged in similar positions, as shown in the lower part of Figure 8A. In the same way, the audio data A13, A14, and A15 are arranged in similar positions to the picture data V13 and V14 whose transfer is started at the transfer times of these audio data sets. It should be noted that when video data with a large amount of assigned code, such as the picture data V16, accumulates in the buffer, a plurality of audio data sets A15, A16, and A17 are multiplexed during k16, which is the transfer period of the video data V16. Figure 9 shows how audio packets that store a plurality of audio data sets to be reproduced in a plurality of audio frames and video packets that store picture data to be reproduced in each video frame can be arranged. In Figure 9, the audio pack A31 stores the audio data A21, A22, and A23 to be played for the audio frames f21, f22, and f23. Of the audio data stored in the audio pack A31, the first audio data to be decoded is the audio data A21. Since the audio data A21 needs to be decoded by the presentation end time of the audio frame f20, this audio data A21 needs to be read from the DVD-RAM together with the picture data V11 that is transferred during the same period (period k11) as the audio frame f20. As a result, the audio pack A31 is arranged close to the video packets that store the picture data V11. When it is considered that an audio packet can store audio data that must be decoded over several audio frames, and that the audio packets are arranged in positions similar to the video packets composed of picture data to be decoded in the future, it can be seen that the audio data and the video data to be decoded at the same time may be stored in audio packets and video packets that are in different positions within a VOB.
However, there will be no cases where video packets storing picture data that will be decoded a second or more later are arranged alongside the audio data that must be decoded at a given time. This is because the MPEG standard defines an upper limit for the time that data can be accumulated in the buffer, with all data having to be transferred out of the buffer within one second of being input into the buffer. This restriction is called the "one second rule" of the MPEG standard. Due to the one-second rule, even if the audio data and the video data to be decoded at the same time are arranged in different positions, the audio packet that stores the audio data to be decoded at a given time will definitely be stored within a range of 3 VOBUs of the VOBU that stores the picture data to be decoded at the same time.
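The one-second rule can be expressed as a single comparison of time stamps. The sketch below is illustrative: it assumes the input time (SCR) and the decode time (DTS) of a packet are both expressed in ticks of the 90 kHz clock used for PTS/DTS values elsewhere in this document.

```python
CLOCK_HZ = 90_000  # PTS/DTS clock resolution (1/90,000 s)

def satisfies_one_second_rule(scr, dts):
    """A packet's data must leave the buffer within one second of
    entering it: 0 <= DTS - SCR <= 1 s, in 90 kHz ticks."""
    return 0 <= dts - scr <= CLOCK_HZ
```

A packet whose decode time is exactly one second after its input time is the boundary case permitted by the rule.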
(1-2-3-2-2) Buffer Control between VOBs
The following explanation deals with the buffer control that is performed when two or more VOBs are played back in succession. Figure 10A shows the buffer state for the first part of a video stream. In Figure 10A, the input of the packet that includes the picture data is started at the point indicated as First_SCR during the video frame f71, with the amount of data shown as BT2 having been transferred by the presentation end time of the video frame f72. Similarly, the amount of data BT3 has been accumulated in the buffer by the presentation end time of the video frame f73. This data is read from the video buffer by the video decoder at the presentation end time of the video frame f74, with this time indicated hereinafter by the notation First_DTS. In this way, the state of the buffer changes as shown in Figure 10A, with no data for a preceding video stream present at the start and the accumulated amount of data increasing gradually to trace a triangular shape. It should be noted here that Figure 10A is drawn on the premise that a video packet is input at the First_SCR time, although when the packet placed at the front of a VOB is a different kind of packet, the start of the increase in the amount of data stored in the buffer will not match the First_SCR time. Also, the reason the Last_SCR is placed partway through a video frame is that the data structure of the packets is not related to the data structure of the video data. Figure 10B shows the buffer state during the last part of a video stream. In this drawing, the input of data into the video buffer is completed by the Last_SCR time, which is located partway through the video frame f61. After this, only the amount of data Δ3 of the accumulated video data is taken from the video buffer at the presentation end time of the video frame f61. It can then be seen that only the amount of data Δ4 is taken from the video buffer at the presentation end time of the video frame f62, and only the amount of data Δ5 is taken at the presentation end time of the video frame f63. This last time is also called the Last_DTS. For the last part of a VOB, the input of the video packets and the audio packets is completed by the time shown as Last_SCR in Figure 10B, so that the amount of data stored in the video buffer subsequently decreases in steps with the decoding of the video frames f61, f62, f63, and f64. As a result, the buffer occupancy decreases in steps at the end of a video stream, as shown in Figure 10B. Figure 10C shows the buffer state across two VOBs. In more detail, this drawing shows the case where the last part of a video stream causing the buffer state shown in Figure 10B is seamlessly linked to the front of another video stream causing the buffer state shown in Figure 10A. When these two video streams are seamlessly linked, the First_DTS of the first part of the second video stream to be played needs to come one video frame after the Last_DTS of the last part of the first video stream. In other words, the decoding of the first video frame in the second video stream needs to be performed after the decoding of the video frame with the final decoding time in the first video stream. If the interval between the Last_DTS of the last part of the first video stream and the First_DTS of the first part of the second video stream is equivalent to one video frame, the picture data of the last part of the first video stream will coexist in the video buffer with the picture data of the first part of the second video stream, as shown in Figure 10C. In Figure 10C, it is assumed that the video frames f71, f72, and f73 shown in Figure 10A correspond to the video frames f61, f62, and f63 shown in Figure 10B. Under these conditions, at the presentation end time of the video frame f71, the picture data BE1 of the last part of the first video stream and the picture data BT1 of the first part of the second video stream are present in the video buffer.
At the presentation end time of the video frame f72, the picture data BE2 of the last part of the first video stream and the picture data BT2 of the first part of the second video stream are present in the video buffer. At the presentation end time of the video frame f73, the picture data BE3 of the last part of the first video stream and the picture data BT3 of the first part of the second video stream are present in the video buffer. As the decoding of the video frames progresses, the picture data of the last part of the first video stream decreases in steps, while the picture data of the first part of the second video stream increases gradually. These decreases and increases occur concurrently, so that the buffer state shown in Figure 10C exhibits a sawtooth pattern that closely resembles the buffer state shown for the VOBs in Figure 7C. It should be noted here that each of the total BT1 + BE1 of the amount of data BT1 and the amount of data BE1, the total BT2 + BE2 of the amount of data BT2 and the amount of data BE2, and the total BT3 + BE3 of the amount of data BT3 and the amount of data BE3 is below the capacity of the video buffer. If any of these totals BT1 + BE1, BT2 + BE2, or BT3 + BE3 exceeds the capacity of the video buffer, an overflow will occur in the video buffer. If the highest of these totals is expressed as Bv1 + Bv2, this value Bv1 + Bv2 must be within the capacity of the buffer.
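The capacity condition just stated can be sketched as a check over the paired occupancies. This is an illustrative model with made-up units: the two lists hold the amounts BE1..BEn of the ending stream and BT1..BTn of the starting stream at the successive presentation end times of Figure 10C.

```python
def seamless_capacity_ok(be_amounts, bt_amounts, capacity):
    """During a seamless link the tail of the first VOB (BE_i) and the
    head of the second VOB (BT_i) coexist in the video buffer; every
    paired total BE_i + BT_i must stay within the buffer capacity."""
    return all(be + bt <= capacity
               for be, bt in zip(be_amounts, bt_amounts))
```

Equivalently, only the largest total (the Bv1 + Bv2 of the text) needs to be compared against the capacity.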
(1-2-3-3) Packet Header, System Header, Group Header
The information for the buffer control described above is written as time stamps in the packet header, system header, and group header shown in Figures 6F–6H. Figures 6F–6H show the logical formats of the packet header, the system header, and the group header. As shown in Figure 6F, the packet header includes a Pack_Start_Code, an SCR (System Clock Reference), which shows the time at which the data stored in the present packet must be input into the video buffer or the audio buffer, and a Program_mux_rate. In a VOB, the first SCR is set as the initial value of the STC (System Time Clock) that is provided as a standard feature in a decoder under the MPEG standard. The system header shown in Figure 6G is attached only to the video packet that is located at the start of a VOBU. This system header includes maximum rate information (shown as "Rate_Limit_Info" in Figure 6G) that shows the transfer rate to be demanded of the playback apparatus when the data is input, and buffer size information (shown as "Buffer_Limit_Info" in Figure 6G) that shows the largest buffer size to be demanded of the playback apparatus when the data in the VOBU is input. The group header shown in Figure 6H includes a DTS (Decoding Time Stamp) that shows the decoding time and, for a video stream, a PTS (Presentation Time Stamp) that shows the time at which the decoded data must be output from the reorder buffer of the video decoder. The PTS and DTS are set based on the presentation start time of a video frame or audio frame. In the data construction, a PTS and a DTS can be set for every packet, although it is rare for this information to be given for the picture data of every video frame. It is common for this information to be designated once per GOP, which is to say once per roughly 0.5 seconds of playback time. Every video packet and every audio packet is assigned an SCR, however. For a video stream, it is common for a PTS to be assigned to each video frame in a GOP, while for an audio stream, it is common for a PTS to be assigned to every one or two audio frames. For an audio stream, there is no difference between the display order and the decoding order, so the DTS is not required. When an audio packet stores the audio data to be played for two or more audio frames, a PTS is written at the start of the audio packet. As an example, the audio pack A71 shown in Figure 5B can be given the presentation start time of the audio frame f81 as the PTS. On the other hand, the audio pack A72 that stores the divided audio frame f83 must be given the presentation start time of the audio frame f84, not the presentation start time of the audio frame f83, as the PTS.
This is also the case for the audio pack A73, which must be given the presentation start time of the audio frame f86, not the presentation start time of the audio frame f85, as the PTS.
(1-2-3-4) Continuity of Time Stamps
The following is an explanation of the values that are set as the PTS, DTS, and SCR of the video packets and the audio packets, as shown in Figures 6F to 6H.
Figure 11A is a graph showing the values of the SCR of the packets included in a VOB in the order in which the packets are arranged in the VOB. The horizontal axis shows the order of the video packets, with the vertical axis showing the value of the SCR assigned to each packet. The first value of the SCR in Figure 11A is not zero, but is instead a predetermined value shown as Init1. The reason the first value of the SCR is not zero is that VOBs processed by a video editing apparatus undergo many editing operations, so there are many cases where the first part of a VOB will already have been deleted. It should be obvious that the initial value of the SCR of a VOB that has just been encoded will be zero, although this embodiment assumes that the initial value of the SCR for a VOB is not zero, as shown in Figure 11A. In Figure 11A, the closer a video packet is to the start of the VOB, the lower the value of the SCR of that video packet, and conversely, the further a video packet is from the start of the VOB, the higher the value of the SCR of that video packet. This characteristic is referred to as the "continuity of time stamps", with the same continuity being exhibited by the DTS. Although the coding order of the video packets is such that a later video packet may actually be displayed before an earlier video packet, meaning that the PTS of the later packet has a lower value than that of the earlier packet, the PTS exhibits an approximate continuity in the same way as the SCR and the DTS. The SCR of the audio packets exhibits continuity in the same way as for the video packets. The continuity of the SCR, DTS, and PTS is a prerequisite for the proper decoding of the VOBs. The following is an explanation of the values used for the SCR to maintain this continuity. In Figure 11B, the straight line showing the values of the SCR in section B is an extension of the straight line showing the values of the SCR in section A.
This means that there is continuity in the SCR values between section A and section B. In Figure 11C, the first value of the SCR in section D is greater than the highest value on the straight line showing the SCR values in section C. However, in this case also, the closer a packet is to the start of the VOB, the lower the value of the SCR, and the further a video packet is from the start of the VOB, the higher the value of the SCR. This means that there is continuity of the time stamps between section C and section D. Here, when the difference between time stamps is large, these stamps are naturally non-continuous. Under the MPEG standard, the difference between pairs of time stamps, such as the SCRs, should not exceed 0.7 seconds, so that areas in the data where this value is exceeded are treated as non-continuous. In Figure 11D, the last value of the SCR in section E is higher than the first value on the straight line showing the values of the SCR in section F. In this case, the rule whereby the closer a packet is to the start of the VOB, the lower the value of the SCR, and the further a video packet is from the start of the VOB, the higher the value of the SCR, is no longer valid, so there is no continuity in the time stamps between section E and section F. When there is no continuity in the time stamps, as in the example of section E and section F, the earlier and later sections are managed as separate VOBs. It should be noted that the details of the buffer control between VOBs and of the multiplexing method are described in detail in the PCT applications WO 97/13367 and WO 97/13363.
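The continuity test described above can be sketched as a check over a sequence of SCR values. This is an illustrative sketch assuming the SCRs are expressed in 90 kHz ticks: successive values must increase, and under the 0.7-second constraint cited in the text, a larger jump between neighbouring stamps marks the sections as non-continuous (to be managed as separate VOBs).

```python
MAX_GAP_TICKS = 63_000  # 0.7 s expressed in 90 kHz clock ticks

def scrs_continuous(scrs):
    """True if each SCR is strictly greater than its predecessor and
    no gap between neighbours exceeds 0.7 s."""
    return all(0 < b - a <= MAX_GAP_TICKS
               for a, b in zip(scrs, scrs[1:]))
```

A gap of exactly 0.7 seconds is the boundary case still treated as continuous here; whether the boundary itself is continuous is an assumption of this sketch.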
(1-2-4) AV Files
An AV file is a file that records at least one VOB to be played consecutively. When a plurality of VOBs are held within an AV file, these VOBs are reproduced in the order in which they are stored in the AV file. In the example in Figure 4, the three VOBs, VOB#1, VOB#2, and VOB#3, are stored in an AV file, with these VOBs being played in the order VOB#1 → VOB#2 → VOB#3. When the VOBs are stored in this manner, the buffer state for the video stream placed at the end of the first VOB to be played and the video stream placed at the start of the next VOB to be played will be as shown in Figure 10C. Here, if the highest amount of data Bv1 + Bv2 stored in the buffer exceeds the capacity of the buffer, or if the first time stamp in the VOB to be played second is not continuous with the last time stamp of the VOB to be played first, there is a danger that seamless reproduction will not be possible for the first and second VOBs.
(1-3) Logical Construction of the RTRW Administration File
The following is an explanation of the RTRW administration file. The RTRW administration file is information that shows the attributes of each VOB stored in an AV file. Figure 12A shows the detailed hierarchical structure in which the data is stored in RTRW administration files. The logical format shown on the right in Figure 12A is a detailed expansion of the data shown on the left, with broken lines serving as guides to clarify which parts of the data structure are being expanded. Referring to the data structure in Figure 12A, it can be seen that the RTRW administration file records the VOB information for VOB#1, VOB#2, VOB#3, ... VOB#6, and that each set of VOB information is composed of general VOB information, stream attribute information, a time map table, and seamless link information.
(1-3-1) VOB General Information
"VOB general information" refers to the VOB-ID that is uniquely assigned to each VOB in an AV file and to the playback period information of each VOB.
(1-3-2) Stream Attribute Information
The stream attribute information is composed of video attribute information and audio attribute information. The video attribute information includes video format information indicating one of MPEG2 and MPEG1, and a display method indicating one of NTSC and PAL/SECAM. When the video attribute information indicates NTSC, an indication such as "720 x 480" or "352 x 240" can be given as the display resolution, and an indication such as "4:3" or "16:9" can be given as the aspect ratio. The presence/absence of copy protection control for an analog video signal can also be indicated, such as the presence/absence of a copy guard for a video cassette recorder that defeats the AGC circuit of a VTR by changing the signal amplitude during the blanking period of a video signal. The audio attribute information shows the coding method, which can be one of MPEG2, Dolby Digital, or Linear PCM, the sampling frequency (such as 48 KHz), and a bitrate when using a fixed bitrate, or a bitrate marked "VBR" when using a variable bitrate. The time map table shows the size of each VOBU that makes up the VOB and the reproduction period of each VOBU. To improve access capabilities, representative VOBUs are selected at a predetermined interval, such as a multiple of ten seconds, and the addresses and playback times of these representative VOBUs are given relative to the start of the VOB.
(1-3-3) Seamless Link Information
Seamless link information is information that allows the consecutive reproduction of the plurality of VOBs in the AV file to be made seamless. This seamless link information includes the seamless flag, the video presentation start time VOB_V_S_PTM, the video presentation end time VOB_V_E_PTM, the First_SCR, the Last_SCR, the audio gap start time A_STP_PTM, the audio gap length A_GAP_LEN, and an audio gap location A_GAP_LOC.
(1-3-3-1) Seamless Flag
The seamless flag is a flag that shows whether the VOB corresponding to the present seamless link information is to be reproduced seamlessly following the end of the playback of the VOB placed immediately before the present VOB in the AV file. When this flag is set to "01", the reproduction of the present (later) VOB is performed seamlessly, whereas when the flag is set to "00", the reproduction of the present VOB is not performed seamlessly. In order to perform the reproduction of a plurality of VOBs seamlessly, the relationship between the earlier VOB and the later VOB should be as follows: (1) Both VOBs must use the same display method (NTSC, PAL, etc.) for the video stream, as given in the video attribute information. (2) Both VOBs must use the same coding method (AC-3, MPEG, Linear PCM) for the audio stream, as given in the audio attribute information. Failure to meet the above conditions prevents seamless reproduction. When a different display method is used for a video stream or a different coding method is used for an audio stream, the video decoder and the audio decoder will have to interrupt their respective operations to switch the display method, the coding method, and/or the bitrate.
As an example, when two audio streams to be played consecutively are such that the earlier audio stream has been encoded according to the AC-3 method and the later one according to the MPEG method, an audio decoder will have to stop decoding in order to switch the stream attributes when the stream switches from AC-3 to MPEG. A similar situation also occurs for a video decoder when the video stream changes. The seamless flag is only set to "01" when both of the above conditions (1) and (2) are satisfied. If either of the above conditions (1) and (2) is not satisfied, the seamless flag is set to "00".
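The rule for setting the seamless flag can be sketched as follows. The attribute dictionary keys and example values are hypothetical names chosen for illustration; the two conditions themselves (same display method, same audio coding method) are those stated above.

```python
def seamless_flag(prev_attrs, cur_attrs):
    """Return the seamless flag value for the later VOB.
    attrs: dicts with illustrative keys 'display' (e.g. 'NTSC'/'PAL')
    and 'audio_coding' (e.g. 'AC-3'/'MPEG'/'Linear PCM')."""
    same_display = prev_attrs["display"] == cur_attrs["display"]
    same_audio = prev_attrs["audio_coding"] == cur_attrs["audio_coding"]
    return "01" if (same_display and same_audio) else "00"
```

A mismatch in either attribute forces "00", since the decoder would have to interrupt its operation to switch attributes.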
(1-3-3-2) Video Presentation Start Time VOB_V_S_PTM
The video presentation start time VOB_V_S_PTM shows the time at which the reproduction of the first video field in the video stream that makes up a VOB starts. This time is given in the PTM descriptor format. The PTM descriptor format is a format in which time is expressed with an accuracy of 1/27,000,000 seconds, or 1/90,000 seconds (= 300/27,000,000 seconds). The accuracy of 1/90,000 seconds is set in consideration of the common multiples of the frame rates of NTSC signals, PAL signals, Dolby AC-3, and MPEG audio, while the accuracy of 1/27,000,000 seconds is set in consideration of the frequency of the STC. Figure 12B shows the PTM descriptor format. In this drawing, the PTM descriptor format is composed of a base element (PTM_base) that shows the quotient when the presentation start time is divided by 1/90,000 seconds and an extension element (PTM_extension) that shows the remainder of this division, expressed to an accuracy of 1/27,000,000 seconds.
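Since 1/90,000 s equals 300 ticks of the 27 MHz clock, the PTM split described above is a simple division with remainder. The function names below are illustrative, not taken from the patent.

```python
def to_ptm(ticks_27mhz):
    """Split a time in 27 MHz ticks into (PTM_base, PTM_extension):
    the quotient and remainder of division by 300 ticks (1/90,000 s)."""
    return divmod(ticks_27mhz, 300)

def from_ptm(base, extension):
    """Recombine the PTM descriptor into 27 MHz ticks."""
    return base * 300 + extension
```

For example, one second (27,000,000 ticks) gives PTM_base = 90,000 with PTM_extension = 0, and the round trip is exact.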
(1-3-3-3) Video Presentation End Time VOB_V_E_PTM
The video presentation end time VOB_V_E_PTM shows the time at which the reproduction of the last video field in the video stream that makes up a VOB ends. This time is also given in the PTM descriptor format.
(1-3-3-4) Relationship between the Video Presentation Start Time VOB_V_S_PTM and the Video Presentation End Time VOB_V_E_PTM
The following is an explanation of the relationship between the VOB_V_E_PTM of a previous VOB and the VOB_V_S_PTM of a last VOB, when the previous VOB and the last VOB are to be played seamlessly. The last VOB will fundamentally be played after all the video packets included in the previous VOB, so that if the VOB_V_S_PTM of the last VOB is not equal to the VOB_V_E_PTM of the previous VOB, the time stamps will not be continuous, meaning that the previous VOB and the last VOB cannot be played seamlessly. However, when the two VOBs have been encoded completely separately, the encoder will have assigned an independent time stamp to each video packet and audio packet during encoding, so that the condition that the VOB_V_S_PTM of the last VOB be equal to the VOB_V_E_PTM of the previous VOB becomes problematic. Figure 13 shows the buffer state for the previous VOB and the last VOB. In the graphs in Figure 13, the vertical axis shows the buffer occupancy while the horizontal axis represents time. The times representing the SCR, the PTS, the video presentation end time VOB_V_E_PTM, and the video presentation start time VOB_V_S_PTM have been plotted. In Figure 11B, the image data that is reproduced last in the previous VOB is input into the video buffer at the time indicated as the Last_SCR of the video packet composed of this image data, with the output processing of this data waiting until the PTS, which is the presentation start time, is reached (if the last packet input into an MPEG decoder is an audio packet or the like, this condition is not valid). Here, the video presentation end time VOB_V_E_PTM shows the point where the display period h1 of this final video data has elapsed starting from this PTS. This display period h1 is the period taken to draw an image from the first field that composes a screen-sized image to the final field.
In the lower part of Figure 11B, the image data that is to be displayed first in the last VOB is input into the video buffer at the First_SCR time, with the reproduction of this data waiting until the PTS indicating the presentation start time. In this drawing, the video packets of the previous and last VOBs are respectively assigned an SCR with the first value "0", a video presentation end time VOB_V_E_PTM, and a video presentation start time VOB_V_S_PTM. For this example, it can be seen that the VOB_V_S_PTM of the last VOB < the VOB_V_E_PTM of the previous VOB. The following is an explanation of why seamless reproduction is possible even under the condition that the VOB_V_S_PTM of the last VOB < the VOB_V_E_PTM of the previous VOB. Under the DVD-RAM standard, an extended STD model (hereafter "E-STD") is defined as the standard model for the reproduction apparatus, as shown in Figure 19. In general, an MPEG decoder has an STC (System Time Clock) to measure a standard time, with video decoding and audio decoding referring to the standard time shown by the STC to smooth the decoding processing and reproduction processing. In addition to the STC, however, the E-STD has an adder to add an offset to the standard time output by the STC, so that either the standard time output by the STC or the addition result of the adder can be selected and supplied to the video decoder and the audio decoder. With this construction, even if the time stamps for different VOBs are not continuous, the output of the adder can be supplied to the decoder to make the decoder behave as if the VOB time stamps were continuous. As a result, seamless reproduction is still possible even when the VOB_V_E_PTM of the previous VOB and the VOB_V_S_PTM of the last VOB are not continuous, as in the previous example. The difference between the VOB_V_S_PTM of the last VOB and the VOB_V_E_PTM of the previous VOB can be used as the offset to be added by the adder. This is usually referred to as the "STC_compensation". As a result, a reproduction apparatus of the E-STD model finds the STC_compensation according to the formula shown below using the VOB_V_S_PTM of the last VOB and the VOB_V_E_PTM of the previous VOB. After finding the STC_compensation, the reproduction apparatus then sets the result in the adder.
STC_compensation = VOB_V_E_PTM of the previous VOB - VOB_V_S_PTM of the last VOB

The reason why the VOB_V_S_PTM of the last VOB and the VOB_V_E_PTM of the previous VOB are written in the seamless link information is to allow the decoder to perform the above calculation and set the STC_compensation in the adder. Figure 11E is a graph that has been plotted for two VOBs in each of which the time stamps are continuous, as shown in Figure 11A. The time stamp of the first packet in VOB#1 has the initial value Init1, with the packets that follow having increasingly higher values as their time stamps. In the same way, the time stamp of the first packet in VOB#2 has the initial value Init2, with the packets that follow having increasingly higher values as their time stamps. In Figure 11E, the final value of the time stamps in VOB#1 is greater than the first value of the time stamps in VOB#2, so that it can be seen that the time stamps are not continuous across the two VOBs. When the decoding of the first packet in VOB#2 is desired after the final packet in VOB#1 despite the non-continuity of the time stamps, an STC_compensation can be added to the time stamps in VOB#2, thereby changing the time stamps in VOB#2 from the solid line shown in Figure 11E to the broken line that continues as an extension of the time stamps in VOB#1. As a result, the changed time stamps in VOB#2 can be seen to be continuous with the time stamps in VOB#1.
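A minimal sketch of this calculation (function and variable names are my own, not from the standard): the offset is computed by the formula above, and adding it to the later VOB's time stamps makes them continue from the end of the earlier VOB, as in Figure 11E:

```python
def stc_compensation(prev_vob_v_e_ptm, last_vob_v_s_ptm):
    """STC_compensation = VOB_V_E_PTM of the previous VOB
                          - VOB_V_S_PTM of the last VOB."""
    return prev_vob_v_e_ptm - last_vob_v_s_ptm

def shift_timestamps(timestamps, offset):
    """Apply the adder's offset to each time stamp of the last VOB."""
    return [ts + offset for ts in timestamps]

# Previous VOB ends at 9000; last VOB's stamps start from Init2 = 1000.
offset = stc_compensation(9000, 1000)                 # 8000
print(shift_timestamps([1000, 1300, 1600], offset))   # [9000, 9300, 9600]
```

The shifted first stamp of the last VOB coincides with the end time of the previous VOB, which is exactly the "broken line" continuation shown in the figure.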
(1-3-3-5) First SCR
The First_SCR shows the SCR of the first packet in a VOB, written in the PTM descriptor format.
(1-3-3-6) Last SCR

The Last_SCR shows the SCR of the last packet in a VOB, written in the PTM descriptor format.
(1-3-3-7) Relationship between First SCR and Last SCR
As described above, since the reproduction of the VOBs is performed by a decoder of the E-STD type, the Last_SCR of the previous VOB and the First_SCR of the last VOB need not satisfy the condition that the Last_SCR of the previous VOB = the First_SCR of the last VOB. However, since an STC_compensation is used, the following relationship must be satisfied.
Last_SCR of the previous VOB + time required for the transfer of one packet < STC_compensation + First_SCR of the last VOB.
Here, if the Last_SCR of the previous VOB and the First_SCR of the last VOB do not satisfy the above relationship, this means that the packets that make up the previous VOB are transferred to the video buffer and the audio buffer at the same time as the packets that make up the last VOB. This violates the MPEG standard and the E-STD decoder model, where packets are transferred one at a time in the packet sequence. By referring to Figure 10, it can be seen that the Last_SCR of the previous VOB comes before the First_SCR of the last VOB + STC_compensation, so that the above relationship is satisfied. When VOBs are reproduced using a decoder of the E-STD type, of particular note is the time at which the switch is made between the standard time output by the STC and the standard time with the compensation added by the adder. Since no information for this switching is given in the time stamps of a VOB, there is a risk that improper timing will be used to switch to the output value of the adder. The First_SCR and the Last_SCR are effective for informing the decoder of the correct timing to switch to the output value of the adder. While the STC is counting, the decoder compares the standard time output by the STC with the First_SCR and the Last_SCR. When the standard time output by the STC matches the First_SCR or the Last_SCR, the decoder switches from the standard time output by the STC to the output value of the adder. When VOBs are played, normal playback plays the last VOB after playing the previous VOB, while "rewind playback" (backward skip playback) plays the previous VOB after the last VOB. Accordingly, the Last_SCR is used for the switch performed by the decoder during normal playback and the First_SCR is used for the switch performed by the decoder during rewind playback.
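The relationship above can be checked mechanically. A sketch (names are illustrative, not from the standard) that decides whether two separately encoded VOBs can be linked without their packet transfers overlapping:

```python
def transfers_do_not_overlap(last_scr_prev, first_scr_last,
                             stc_compensation, packet_transfer_time):
    """True when Last_SCR(previous) + one packet's transfer time comes
    before STC_compensation + First_SCR(last), i.e. the final packet of
    the previous VOB finishes transferring before the first packet of
    the last VOB begins, as the E-STD model requires."""
    return (last_scr_prev + packet_transfer_time
            < stc_compensation + first_scr_last)

# Last packet of the previous VOB at 8800, one packet takes 100 units,
# STC_compensation = 8000, First_SCR of the last VOB = 1000:
print(transfers_do_not_overlap(8800, 1000, 8000, 100))  # True
```

If the check fails, the two VOBs cannot be linked as they stand; one of them would have to be re-encoded so that the buffer transfers no longer overlap.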
During rewind playback, the last VOB is decoded starting from its last VOBU to its first VOBU, and when the first video packet in the last VOB has been decoded, the previous VOB is decoded starting from its last VOBU to its first VOBU. In other words, during rewind playback, the time at which the decoding of the first video packet in the last VOB is complete is the time at which the value used by the decoder needs to be switched. To inform a video data editing apparatus of the E-STD type of this time, the First_SCR of each VOB is provided in the RTRW management file. A more detailed explanation of the techniques used for the E-STD and the STC_compensation is given in PCT publication WO 97/13364.
(1-3-3-8) Audio Separation Start Time "A_STP_PTM"
When there is an audio reproduction separation in a VOB, the audio separation start time "A_STP_PTM" shows the time at which the audio decoder must stop its operation. This audio separation start time is given in the PTM descriptor format. Only one audio separation start time A_STP_PTM is indicated per VOB.
(1-3-3-9) Audio Separation Length "A_GAP_LEN"
The audio separation length A_GAP_LEN shows how long the audio decoder should stop its operation starting from the stop start time indicated as the audio separation start time "A_STP_PTM". This audio separation length A_GAP_LEN is restricted to being less than the length of one audio frame.
(1-3-3-10) Inevitability of Audio Separations
The following is an explanation of why a period where an audio separation occurs needs to be specified by the audio separation start time A_STP_PTM and the audio separation length A_GAP_LEN. Since video streams and audio streams with different frame cycles are reproduced, the total playback time of the video stream contained in a VOB does not correspond to the total playback time of the audio stream. For example, if the video stream is for the NTSC standard and the audio stream is for Dolby AC-3, the total playback time of the video stream will be an integer multiple of 33 msec while the total playback time of the audio stream will be an integer multiple of 32 msec, as shown in Figure 14A. If seamless reproduction of two VOBs is performed without considering these differences in total playback time, it will be necessary to align the playback time of a set of image data and the playback time of the audio data to synchronize the reproduction of the image data with the audio data. In order to align these reproduction times, a time difference appears at either the start or the end between the image data and the audio data. In Figure 14B, the reproduction time of the image data is aligned with the playback time of the audio data at the start of a VOB, so that the time difference g1 is present between the image data and the audio data. Since the time difference g1 is present at the end of VOB#1, when the seamless reproduction of VOB#1 and VOB#2 is attempted, the reproduction of the audio stream in VOB#2 would begin too early by the time difference g1, meaning that the reproduction of the audio stream in VOB#2 must instead start at the time g0. The audio decoder uses a fixed frame cycle when an audio stream is reproduced, so that the decoding of the audio stream is performed continuously with a fixed cycle. When VOB#2, which is to be played after VOB#1, has already been read from the DVD-RAM, the audio decoder can start the decoding of VOB#2 as soon as the decoding of the audio stream in VOB#1 has finished. To prevent the audio stream in the next VOB from being played back too soon during seamless playback, the audio separation information in the stream is managed on the host side of a playback apparatus, so that during the audio separation period the host needs to stop the operation of the audio decoder. This output stop period is the audio separation, and it starts from the audio separation start time A_STP_PTM and continues for the period indicated as A_GAP_LEN. Processing to specify audio separations within a stream is also conceivable. More specifically, the PTS of the audio frame immediately after an audio separation is written in the packet header of an audio packet, so that it is possible to specify when the audio separation ends. However, problems arise with this specification method when multiple sets of audio data to be played for multiple audio frames are stored in a single audio packet. In more detail, when several sets of audio data to be played for several audio frames are stored in a single audio packet, it is only possible to provide one PTS, for the first of the plurality of audio frames in the packet. In other words, a PTS cannot be provided for the remaining audio frames in the packet. If the audio data to be played for the audio frames located both before and after an audio separation are arranged in the same packet, it will not be possible to provide a PTS for the audio frame located immediately after the audio separation. As a result, it will not be possible to specify the audio separation, meaning that the audio separation will be lost. To avoid this, the audio frame located immediately after an audio separation is processed so as to be arranged at the front of the next audio packet, so that the PTS (the audio separation start time A_STP_PTM and the audio separation length A_GAP_LEN) of the audio frame immediately after the audio separation can be written within the stream.
Whenever necessary, a padding packet, as prescribed by the MPEG standard, can be inserted immediately after the audio data in an audio packet that stores the audio data to be played immediately before an audio separation. Figure 14C shows the audio packet G3, which includes the audio separation, containing the audio data y-2, y-1, and y that will be reproduced for the audio frames y-2, y-1, and y located in the last part of VOB#1 shown in Figure 14B, together with a padding packet. This drawing also shows the audio packet G4 that includes the audio frames u+1, u+2, and u+3 that are placed at the front of VOB#2. The audio packet G4 mentioned above is the packet that includes the audio data to be played for the audio frame immediately after the audio separation, while the audio packet G3 is the packet that is located immediately before this packet. If the audio data to be played for the audio frame located immediately after the audio separation is included in a packet, the packet located immediately before this packet is called an "audio packet that includes an audio separation". Here, the audio packet G3 is placed toward the end of the video packet sequence in a VOBU, with no image data of a later playback time being included in VOB#1. However, it is assumed that the reproduction of VOB#2 will follow the playback of VOB#1, so that included in VOB#2 is the image data to be reproduced corresponding to the audio data y-2, y-1, and y. This being the case, the audio packet G3 that includes the audio separation can be placed within any of the first three VOBUs in VOB#2 without violating the "one second rule". Figure 14D shows that this audio packet G3 that includes the audio separation can be placed within any of VOBU#1, VOBU#2, and VOBU#3 at the start of VOB#2. The operation of the audio decoder needs to be temporarily stopped during the audio separation period.
This is because the audio decoder will attempt to perform the decoding processing even during an audio separation, so that the host control unit performing the core control processing in a reproduction apparatus has to indicate an audio pause to the decoder once the playback of the image data and the audio data has stopped, thus temporarily stopping the audio decoder. This indication is shown as the ADPI (Audio Decoder Pause Information) in Figure 19. By doing so, the operation of the audio decoder can be stopped during the audio separation period. However, this does not mean that the audio output can be stopped regardless of how the audio separations appear in the data. This is because the control unit is normally composed of a general-purpose microcomputer and software, so that if audio separations occur repeatedly over a short period of time, there is the possibility that the control unit will not be able to issue the stop indication sufficiently early. As an example, when VOBs of about one second in length are played consecutively, it becomes necessary to give a stop indication to the audio decoder at intervals of about one second. When the control unit is composed of a general-purpose microcomputer and software, there is a possibility that the control unit will not be able to stop the audio decoder during the periods where the audio separations are present.
When VOBs are played back, the playback time of the image data and the playback time of the audio data have been aligned several times, so it would be necessary to provide the audio decoder with a stop indication each time. When the control unit is composed of a general-purpose microcomputer and software, there is a possibility that the control unit will not be able to stop the audio decoder during the periods where the audio separations are present. For this reason, the following restrictions are imposed so that audio separations will only occur once within a certain period. First, to allow the control unit to perform the stop operation with ease, the VOB playback period is set to 1.5 seconds or above, thus reducing the frequency with which audio separations may occur. Second, the alignment of the playback time of the image data and the playback time of the audio data is only performed once in each VOB. By doing so, there will be only one audio separation in each VOB. Third, the period of each audio separation is restricted to being less than one audio frame. Fourth, the audio separation start time VOB_A_STP_PTM is set with reference to the video presentation start time VOB_V_S_PTM of the following VOB as a rule, so that the audio separation start time VOB_A_STP_PTM is restricted to being within one audio frame of the next video presentation start time VOB_V_S_PTM. As a result:

VOB_V_S_PTM - the reproduction period of one audio frame < A_STP_PTM < VOB_V_S_PTM.

If an audio separation that satisfies the above formula occurs, only the first image in the next VOB will have been displayed, so even if there is no audio output at this time, this will not be particularly noticeable. By providing the above restrictions, when audio separations appear during seamless playback, the interval between audio separations will be at least "1.5 seconds - the reproduction period of two audio frames". More specifically, substituting real values, the reproduction period of one audio frame will be 32 msec when Dolby AC-3 is used, so that the minimum interval between audio separations is 1436 msec, which means that there is a high probability that the control unit will be capable of performing the stop control processing within the deadline for processing.
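The restrictions on the audio separation described above can be expressed as a simple validity check. A sketch under the assumption of Dolby AC-3 audio, whose frame period is 32 msec (2,880 units of 1/90,000 second); the function name and 90 kHz unit choice are illustrative:

```python
AUDIO_FRAME_90KHZ = 2880  # 32 msec for Dolby AC-3, in 1/90,000 s units

def audio_gap_is_valid(a_stp_ptm, a_gap_len, next_vob_v_s_ptm,
                       frame=AUDIO_FRAME_90KHZ):
    """Check the restrictions from the text: the separation is shorter
    than one audio frame, and A_STP_PTM lies within one audio frame of
    the next VOB's video presentation start time VOB_V_S_PTM."""
    within_window = next_vob_v_s_ptm - frame < a_stp_ptm < next_vob_v_s_ptm
    shorter_than_frame = a_gap_len < frame
    return within_window and shorter_than_frame

# A gap of 1000 units starting 2000 units before the next VOB's video:
print(audio_gap_is_valid(98_000, 1000, 100_000))  # True
```

A separation that fails this check would either be audible mid-VOB or be too long to hide behind the first displayed image of the next VOB.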
(1-3-3-11) Audio Separation Location Information
The audio separation location information "A_GAP_LOC" is a 3-bit value that shows in which of the three VOBUs located at the beginning of the last VOB the audio packet including the audio separation has been inserted. When the first bit in this value is "1", this shows that the audio separation is present in VOBU#1. In the same way, the second and third bits respectively show that the audio separation is present in VOBU#2 or VOBU#3. The reason why this mark is necessary is that it will be necessary to regenerate the audio separation when the last of the VOBs that are to be played seamlessly has been partially erased. The partial erasure of a VOB refers to the erasure of a plurality of VOBUs that are located at the start or end of a VOB. As an example, there are many cases during video editing when the user wishes to remove the opening credit sequence. The erasure of the VOBUs that include this opening credit sequence is called the "partial erasure of a VOB". When performing a partial erasure, audio packets that include an audio separation that have been moved into the last VOB require special attention. As described above, the audio separation is determined according to the video presentation start time VOB_V_S_PTM of the last VOB, so that when some of the VOBUs are erased from the last VOB, the image data having the video presentation start time VOB_V_S_PTM, which determines the audio separation, and the VOBUs for this image data may be erased. The audio separation is multiplexed into one of the first three VOBUs at the start of a VOB. Accordingly, when a part of a VOB, such as the first VOBU, is deleted, it will not otherwise be clear whether the audio separation has been destroyed as a result of this deletion. Since the number of audio separations that can be provided within a VOB is limited to one, it is also necessary to erase a previous audio separation that is no longer needed once a new audio separation has been generated.
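Reading this 3-bit value could look like the sketch below; the assignment of bit 0 to VOBU#1 is an assumption for illustration, as is the function name:

```python
def gap_vobu_number(a_gap_loc):
    """Return which of the first three VOBUs (1..3) holds the audio
    packet that includes the audio separation, or None if no bit is set.
    Assumption: bit 0 <-> VOBU#1, bit 1 <-> VOBU#2, bit 2 <-> VOBU#3."""
    for bit in range(3):
        if a_gap_loc & (1 << bit):
            return bit + 1
    return None

print(gap_vobu_number(0b100))  # 3: the gap packet is in VOBU#3
```

With this, the editing apparatus inspects a single VOBU instead of scanning all three.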
As shown in Figure 14D, the audio packet G3 that includes the audio separation needs to be inserted into one of VOBU#1 to VOBU#3 in VOB#2 to comply with the one second rule, so that the audio packet that includes this audio separation needs to be taken from the packets included in VOBU#1 to VOBU#3. While this comprises at most three VOBUs, the immediate extraction of only the audio packet G3 that includes the audio separation is technically very difficult, meaning that the packet must sometimes be searched for within the stream. Here, each VOBU includes several hundred packets, so that a significant amount of processing is required to refer to the contents of all the packets. The audio separation location information A_GAP_LOC uses a 3-bit mark to show in which of the three VOBUs at the start of the last VOB an audio packet that includes an audio separation has been inserted, so that only one VOBU needs to be searched when searching for the audio separation. This facilitates the extraction of the audio packet G3 that includes the audio separation. Figures 15A to 15E show a procedure for the regeneration of the audio separation by the video data editing apparatus when VOBUs located at the start of VOB#2 have been erased, for the two VOBs, VOB#1 and VOB#2, that are to be reproduced seamlessly. As Figure 15A shows, the VOBUs "VOBU#98", "VOBU#99", and "VOBU#100" are located at the end of VOB#1 and the VOBUs "VOBU#1", "VOBU#2", and "VOBU#3" are located at the beginning of VOB#2. In this example, the user instructs the video data editing apparatus to perform a partial erasure to erase VOBU#1 and VOBU#2 in VOB#2. In this case, the audio packet G3 including the audio separation is required for the audio data stored in VOBU#100, but it is known beforehand only that this audio packet G3 including the audio separation is arranged in one of VOBU#1, VOBU#2, and VOBU#3 in VOB#2.
To find the VOBU in which the packet G3 including the audio separation has been arranged, the video data editing apparatus refers to the audio separation location information A_GAP_LOC. When the audio separation location information A_GAP_LOC is set as shown in Figure 15B, it can be seen that the audio packet G3 including the audio separation is located in VOBU#3 of VOB#2. Once the video data editing apparatus knows that the audio packet G3 including the audio separation is located in VOBU#3, the video data editing apparatus knows whether the audio separation was multiplexed into the area that was subjected to the partial erasure. In the present example, the audio separation is not included in the deleted area, so that the value of A_GAP_LOC is merely amended by the number of VOBUs that were erased. This ends the explanation of the VOBs, video stream, audio stream, and VOB information that are stored on an optical disc for the present invention.
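The bookkeeping described here, locating the separation via A_GAP_LOC after a partial erasure and then either amending the value or flagging that the separation must be regenerated, can be sketched as follows (the bit assignment and the None return convention are assumptions for illustration):

```python
def update_gap_loc_after_erase(a_gap_loc, erased_leading_vobus):
    """After erasing `erased_leading_vobus` VOBUs from the start of the
    last VOB: if the VOBU holding the gap packet was erased, return None
    (the audio separation must be regenerated); otherwise return the
    A_GAP_LOC value amended by the number of erased VOBUs.
    Assumption: bit 0 <-> VOBU#1, bit 1 <-> VOBU#2, bit 2 <-> VOBU#3."""
    vobu = None
    for bit in range(3):
        if a_gap_loc & (1 << bit):
            vobu = bit + 1
            break
    if vobu is None or vobu <= erased_leading_vobus:
        return None  # gap packet was lost together with the erased VOBUs
    return 1 << (vobu - erased_leading_vobus - 1)

# Figure 15 example: gap in VOBU#3, VOBU#1 and VOBU#2 erased -> now VOBU#1.
print(bin(update_gap_loc_after_erase(0b100, 2)))  # 0b1
```

This mirrors the text: the separation in the present example survives the erasure, so only the location value is shifted.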
(1-4) System Construction of the Video Data Editing Apparatus
The video data editing apparatus of the present embodiment provides the functions of both a DVD-RAM playback/editing apparatus and a DVD-RAM recording apparatus. Figure 16 shows an example of the construction of a system including the video data editing apparatus of the present embodiment. As shown in Figure 16, this system includes a video data editing apparatus (hereafter, DVD recorder 70), a remote controller 71, a TV monitor 72 which is connected to the DVD recorder 70, and an antenna 73. The DVD recorder 70 is conceived as a device to be used in place of a conventional video cassette recorder for the recording of television broadcasts, but also incorporates editing functions. The system illustrated in Figure 16 shows the case when the DVD recorder 70 is used as a home video editing apparatus. The DVD-RAM described above is used by the DVD recorder 70 as the recording medium for recording television broadcasts. When a DVD-RAM is loaded into the DVD recorder 70, the DVD recorder 70 compresses a video signal received via the antenna 73 or a conventional NTSC signal and records the result on the DVD-RAM as VOBs. The DVD recorder 70 also decompresses the video streams and audio streams included in the VOBs recorded on a DVD-RAM and transfers the resulting video signal or NTSC signal and audio signal to the TV monitor 72.
(1-4-1) Hardware Construction of the DVD Recorder 70
Figure 17 is a block diagram showing the hardware construction of the DVD recorder 70. As shown in Figure 17, the DVD recorder 70 is comprised of a control unit 1, an MPEG encoder 2, a disc access unit 3, an MPEG decoder 4, a video signal processing unit 5, a remote controller 71, a bus 7, a remote control signal receiver 8, and a receiver 9. The arrows drawn with solid lines in Figure 17 show the physical connections that are achieved by the circuit wiring inside the DVD recorder 70. The dashed lines, meanwhile, show the logical connections that indicate the input and output of the various kinds of data over the connections shown with the solid lines during a video editing operation. The numbers (1) to (5) assigned to the dashed lines show how the VOBUs and the image data and audio data that make up the VOBUs are transferred over the physical connections when the DVD recorder 70 re-encodes the VOBUs. The control unit 1 is the host-side control unit that includes a CPU, a processor bus, a bus interface, main storage, and ROM. By executing programs stored in the ROM, the control unit 1 records, plays, and edits VOBs. The MPEG encoder 2 operates as follows. When the receiver 9 receives an NTSC signal via the antenna 73, or when a video signal transferred by a home video camera is received via the video input terminals provided on the back of the DVD recorder 70, the MPEG encoder 2 encodes the NTSC signal or the video signal to produce VOBs and transfers the generated VOBs to the disc access unit 3 via the bus 7. As a process that is particularly related to video editing, the MPEG encoder 2 receives an input of the decoding result of the MPEG decoder 4 from the connection line C1 via the bus 7, as shown by dashed line (4), and transfers the encoding result for this data to the disc access unit 3 via the bus 7, as shown by dashed line (5).
The disc access unit 3 includes a track buffer 3a, an ECC processing unit 3b, and a drive mechanism 3c for a DVD-RAM, and accesses the DVD-RAM according to control by the control unit 1. In more detail, when the control unit 1 gives an indication for recording on the DVD-RAM and the VOBs encoded by the MPEG encoder 2 have been transferred successively as shown by dashed line (5), the disc access unit 3 stores the received VOBs in the track buffer 3a and, once ECC processing has been performed by the ECC processing unit 3b, controls the drive mechanism 3c to successively record these VOBs on the DVD-RAM. On the other hand, when the control unit 1 indicates a data read from the DVD-RAM, the disc access unit 3 controls the drive mechanism 3c to successively read the VOBs from the DVD-RAM and, once the ECC processing unit 3b has performed ECC processing on these VOBs, stores the result in the track buffer 3a. The drive mechanism 3c mentioned here includes a tray for seating the DVD-RAM, a spindle motor for clamping and rotating the DVD-RAM, an optical pickup for reading a signal recorded on the DVD-RAM, and an actuator for the optical pickup. The read and write operations are achieved by controlling these components of the drive mechanism 3c, although this control is not part of the essence of the present invention. Since this control can be achieved using well-known methods, no further explanation will be given in this specification.
DVD-RAM by disk access unit 3 is transferred as shown by the line
* discontinuous (1), MPEG decoder 4 decodes these VOBs to obtain digital video data, uncompressed and an audio signal. The MPEG decoder 4 transfers the uncompressed digital video data to the video signal processing unit 5 and transfers the audio signal to the TV monitor 72. During a video editing operation, the MPEG encoder 4 transfers the result of the encoding for a video stream and an audio stream to the common bus 7 via the connecting lines C2, C3 , as shown by dashed lines (2) and (3) in Figure 17. The decoding result transferred to the common bar is transferred to the MPEG encoder 2 and to the connection line Cl, as shown by the dotted line ( 4) . The video signal processing unit 5 converts the image data transferred by the MPEG decoder 4 into a video signal for the TV monitor 72. In the reception of the graphic data from the outside, the video signal processing unit 5 converts the graphic data into an image signal and performs signal processing to combine this image signal with the video signal. The signal receiving unit 8 of the remote control receives a signal from the remote control in form to the control unit 1 of the key code included in the signal so that the control unit 1 can check the control according to the operations of the user of the remote control 71.
(1-4-1-1) Internal Construction of MPEG Encoder 2
Figure 18 is a block diagram showing the construction of the MPEG encoder 2. As shown in Figure 18, the MPEG encoder 2 is composed of a video encoder 2a, a video buffer 2b for storing the output of the video encoder 2a, an audio encoder 2c, an audio buffer 2d for storing the output of the audio encoder 2c, a stream encoder 2e for multiplexing the video stream stored in the video buffer 2b and the audio stream stored in the audio buffer 2d, an STC (System Time Clock) unit 2f for generating the synchronization clock of the MPEG encoder 2, and an encoder control unit 2g for controlling and managing these components of the MPEG encoder 2.
(1-4-1-2) Internal Construction of the MPEG Decoder 4
Figure 19 shows the construction of the MPEG decoder 4. As shown in Figure 19, the MPEG decoder 4 is composed of a demultiplexer 4a, a video buffer 4b, a video decoder 4c, an audio buffer 4d, an audio decoder 4e, a reordering buffer 4f, an STC unit 4g, switches SW1 to SW4, and a decoder control unit 4k. The demultiplexer 4a refers to the header of each pack that has been read from a VOB and judges whether the various packets are video packets or audio packets. The demultiplexer 4a transfers the video data in packets judged to be video packets to the video buffer 4b and the audio data in packets judged to be audio packets to the audio buffer 4d. The video buffer 4b is a buffer for accumulating the video data transferred by the demultiplexer 4a. Each image data set in the video buffer 4b is stored until its decoding time, when it is taken from the video buffer 4b.
The video decoder 4c takes the image data sets from the video buffer 4b at their respective decoding times and instantly decodes the data. The audio buffer 4d is an intermediate memory for accumulating the audio data transferred by the demultiplexer 4a. The audio decoder 4e successively decodes the audio data stored in the audio buffer 4d in frame units. On receiving the ADPI (Audio Decoding Pause Information) issued by the control unit 1, the audio decoder 4e pauses the decoding processing for the audio frames. The ADPI is output by the control unit 1 when the present time reaches the audio separation start time A_STP_PTM shown in the seamless link information. The reordering buffer 4f is a memory for storing the decoding result of the video decoder 4c when an I image or a P image has been decoded. The reason the decoding results for the I images and P images are stored is that the coding order was originally produced by rearranging the display order. Accordingly, after each B image that is to be displayed before them has been decoded, the reordering buffer 4f transfers the decoding results of the I images and P images stored so far as an NTSC signal. The STC unit 4g generates the synchronization clock that shows the normal time for use in the MPEG decoder 4. The adder 4h outputs a value produced by adding the STC_compensation to the normal time shown by the synchronization clock, as the normal compensation time. The control unit 1 calculates the STC_compensation by finding the difference between the video presentation start time VOB_V_S_PTM of the last VOB and the presentation end time VOB_V_E_PTM of the previous VOB given in the seamless link information, and sets the STC_compensation in the adder 4h. The switch SW1 supplies the demultiplexer 4a with either the normal time measured by the STC unit 4g or the normal compensation time transferred by the adder 4h. The switch SW2 supplies the audio decoder 4e with either the normal time measured by the STC unit 4g or the normal compensation time transferred by the adder 4h; the time supplied is used to check the decoding time and the presentation start time of each audio frame. The switch SW3 supplies the video decoder 4c with either the normal time measured by the STC unit 4g or the normal compensation time transferred by the adder 4h; the time supplied is used to check the decoding time of each image data set.
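The calculation performed for the adder 4h can be sketched as follows. This is a minimal illustration only: the names stc_compensation and Adder4h are hypothetical, and times are assumed to be integer ticks of the 90 kHz MPEG system clock.

```python
def stc_compensation(prev_vob_v_e_ptm: int, next_vob_v_s_ptm: int) -> int:
    """Difference that maps the last VOB's time axis onto the previous VOB's:
    the last VOB's first presentation time lands exactly on the previous
    VOB's presentation end time."""
    return prev_vob_v_e_ptm - next_vob_v_s_ptm


class Adder4h:
    """Models the adder 4h: adds the fixed STC_compensation to the normal
    time measured by the STC unit, yielding the normal compensation time."""

    def __init__(self, compensation: int):
        self.compensation = compensation

    def read(self, normal_time: int) -> int:
        return normal_time + self.compensation
```

With this offset in place, a time stamp of the last VOB plus the compensation coincides with the corresponding point on the previous VOB's time axis.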
The switch SW4 supplies the reordering buffer 4f with either the normal time measured by the STC unit 4g or the normal compensation time transferred by the adder 4h. The time supplied is used to check the presentation start time of each image data set. The decoder control unit 4k receives a decoding processing request from the control unit 1 for an integer multiple of VOBUs, which is to say an integer multiple of GOPs, and has the decoding processing performed by all the components from the demultiplexer 4a to the reordering buffer 4f. Also, on receiving a valid/invalid indication for the playback output of the decoding results, the decoder control unit 4k has the decoding results of the video decoder 4c and the audio decoder 4e transferred to the outside if the indication is valid, or prohibits the transfer of the decoding results of the video decoder 4c and the audio decoder 4e to the outside if the indication is invalid. The valid/invalid indication can be given in a unit smaller than a video stream, such as a video field. The information that indicates the valid playback output section in video field units is called valid reproduction section information.
(1-4-1-2-1) Timing for the Switching of Switches SW1~SW4
Figure 20 is a timing diagram for the switching of switches SW1 to SW4. This timing diagram shows the switching of switches SW1 to SW4 when seamless playback of VOB#1 and VOB#2 is performed. The upper part of Figure 20 shows the packet sequences that form VOB#1 and VOB#2, while the middle part shows the video frames and the bottom part shows the audio frames. The timing for the switching of switch SW1 is the point where the packet sequence transferred to the MPEG decoder 4 changes from VOB#1 to VOB#2. This time is indicated as the Last_SCR in the seamless link information of VOB#1. The timing for the switching of switch SW2 is the point where all the audio data in the VOB stored in the audio buffer 4d before the switching of switch SW1, which is to say VOB#1, has been decoded. The timing for the switching of switch SW3 is the point where all the video data in the VOB stored in the video buffer 4b before the switching time (T1) of switch SW1, which is to say VOB#1, has been decoded. The timing for the switching of switch SW4 is the point during the playback of VOB#1 where the last video frame has been played.
The programs stored in the ROM include modules that allow two VOBs that have been recorded on a DVD-RAM to be played back seamlessly.
(1-4-1-2-2) Procedure for Seamless Processing of VOBs
Figures 21 and 22 are flowcharts showing the procedure for seamlessly linking two VOBs in an AV file. Figures 23A and 23B show the analysis of the buffer state for each video pack. Figures 24A and 25 show the audio frames in the audio stream corresponding to the audio frames x, x+1, y-1, y, u+1, u+2 and u+3 mentioned in Figure 22. The following is an explanation of the re-encoding of the VOBs. In step S102 of Figure 21, the control unit 1 subtracts the VOB_V_S_PTM of the last VOB from the VOB_V_E_PTM of the previous VOB to obtain the STC_compensation. In step S103, the control unit 1 analyzes the changes in the buffer occupancy from the First_SCR of the previous VOB to the time at which the decoding of all the data in the previous VOB ends. Figures 23A and 23B show the analysis process for the buffer occupancy performed in step S103. When the video packs #1 and #2 are included in the previous VOB as shown in Figure 23A, the SCR#1, SCR#2 and DTS#1 included in these video packs are plotted on the same time axis. After this, the sizes of the data included in video pack #1 and video pack #2 are calculated. A line is plotted starting from SCR#1, with the bit rate given in the pack header as the gradient, until the data size of video pack #1 has been plotted. After this, the data size of video pack #2 is plotted in the same way starting from SCR#2. Then, the data size of the image data P1 to be decoded is removed at DTS#1. This data size of the image data P1 is obtained by analyzing the bitstream. By plotting the data sizes of the video packs and the image data in this manner, the buffer state of the video buffer 4b from the First_SCR to the Last_DTS can be drawn as a graph. By using the same procedure for all the video data and audio data in a VOB, a graph showing the state of the buffer can be obtained, as shown in Figure 23B.
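The buffer-occupancy analysis of step S103 can be sketched as follows. This is a simplified model under stated assumptions, not the patented implementation: each pack fills the buffer linearly at its header's bit rate starting at its SCR, and each picture's data is removed instantaneously at its DTS; the class and function names are illustrative.

```python
from dataclasses import dataclass


@dataclass
class VideoPack:
    scr: int     # buffer input start time (clock ticks)
    size: int    # bytes in the pack
    rate: float  # fill rate in bytes per tick (from the pack-header bit rate)


@dataclass
class Picture:
    dts: int     # decoding time (clock ticks)
    size: int    # bytes removed from the buffer at dts


def occupancy_at(t: int, packs: list, pictures: list) -> float:
    """Buffer occupancy at time t: linear fill from each SCR, minus the
    picture data already taken out at or before its DTS."""
    filled = 0.0
    for p in packs:
        if t <= p.scr:
            continue
        filled += min((t - p.scr) * p.rate, p.size)
    removed = sum(pic.size for pic in pictures if pic.dts <= t)
    return filled - removed
```

Evaluating this function over the whole time axis reproduces the sawtooth graph of Figures 23A and 23B.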
In step S104, the control unit 1 performs the same analysis as in step S103 for the last VOB, and in this way analyzes the changes in occupancy of the video buffer from the First_SCR of the last VOB to the end time of the presentation of all the data in the last VOB. In step S105, the control unit 1 analyzes the changes in occupancy of the video buffer from the First_SCR of the last VOB plus the STC_compensation to the Last_DTS of the previous VOB. This period, from the First_SCR of the last VOB plus the STC_compensation to the Last_DTS of the previous VOB, is when the first image data of the last VOB is being transferred to the video buffer 4b while the last image data of the previous VOB is still stored in the video buffer 4b. When the video data of the previous VOB and the last VOB coexist in the buffer, the buffer state will be as shown in Figure 10C. In Figure 10C, the video buffer 4b stores the video data of both the previous VOB and the last VOB during the period from First_SCR+STC_compensation to the Last_SCR, with Bv1+Bv2 representing the highest occupancy of the video buffer 4b during this period. In step S106, the control unit 1 controls the disk access unit 3 to read the three VOBUs that are located at the end of the previous VOB. After this, in step S107 the control unit 1 controls the disk access unit 3 to read the three VOBUs that are located at the front of the last VOB. Figure 23C shows the area to be read from the previous VOB in step S106. In Figure 23C, the previous VOB includes VOBU#98~#105, so that VOBU#103 to #105 are read as the VOBUs including the image data V_END that is to be decoded last. Figure 23D shows the area to be read from the last VOB in step S107. In Figure 23D, the last VOB includes VOBU#1~#7, so that, since VOBU#1 comes first, VOBU#1 to #3 are read as the VOBUs including the image data V_TOP.
According to the one-second rule, there is a possibility that the audio data and the image data that are to be reproduced within the space of one second are stored across three VOBUs, so that by reading the three VOBUs at the end of a VOB in step S106, all the image data and audio data to be reproduced between a point one second before the presentation end time of the image data V_END located at the end of the previous VOB and this presentation end time itself can be read together. Also, in step S107, all the image data and audio data to be reproduced between the presentation start time of the image data V_TOP located at the start of the last VOB and a point one second after this presentation start time can be read together. It should be noted that the reads in this flowchart are made in VOBU units, although the reads could instead be made for only the image data and audio data to be reproduced within one second, out of all the image data and audio data included in a VOBU. In this embodiment, the number of VOBUs corresponding to one second is three, although any number of VOBUs may be re-encoded. The reading can alternatively be performed for image data and audio data that will be reproduced in a period of no more than one second. Then, in step S108, the control unit 1 controls the demultiplexer 4a to separate the VOBUs for the first part and the last part into a video stream and an audio stream, and causes the video decoder 4c and the audio decoder 4e to decode these streams. During normal playback, the decoding results of the video decoder 4c and the audio decoder 4e would be output as video and audio. When re-encoding is performed, however, these decoding results must be input into the MPEG encoder 2, so that the control unit 1 causes the video stream and the audio stream of the decoding results to be transferred to the common bus 7, as shown by the arrows (2) and (3) drawn with dashed lines in Figure 17.
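The selection of the VOBUs to be read in steps S106 and S107 can be sketched as follows. This is an illustrative helper only (the name vobus_to_read is hypothetical); it simply takes the last n VOBUs of the previous VOB and the first n VOBUs of the last VOB, with n=3 reflecting the one-second rule described above.

```python
def vobus_to_read(prev_vobus: list, last_vobus: list, n: int = 3):
    """Under the one-second rule, the audio and video to be presented within
    one second of the link point fit in at most n=3 VOBUs, so only the last
    n VOBUs of the previous VOB and the first n VOBUs of the last VOB need
    to be read for re-encoding."""
    return prev_vobus[-n:], last_vobus[:n]
```

For the example of Figures 23C and 23D, a previous VOB of VOBU#98~#105 yields #103~#105, and a last VOB of VOBU#1~#7 yields #1~#3.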
The video stream and the audio stream that are the decoding results are transferred in order via the common bus 7 to the MPEG encoder 2, as shown by the dashed line (4). After this, the control unit 1 calculates the amount of code for the re-encoding of the video stream and the audio stream decoded by the MPEG decoder 4. First, in step S109, the control unit 1 judges whether the accumulated amount of data in the buffer exceeds the upper limit of the buffer at any point in the decoding when the previous VOB and the last VOB coexist in the buffer. In the present embodiment, this is achieved by judging whether the value Bv1+Bv2 calculated in step S105 exceeds the upper limit of the buffer. If this value does not exceed the upper limit, the processing proceeds to step S112; if the value exceeds the upper limit, in step S110 the control unit 1 subtracts the excess amount A from the calculated amount of code and assigns the resulting amount of code to the VOBU sequence to be re-encoded. If the amount of code is decreased, this means that the image quality of the video stream decreases during the playback of these VOBUs. However, overflows in the video buffer 4b must be prevented when two VOBs are seamlessly linked, so that this method that decreases the image quality is used. In step S111, the control unit 1 controls the MPEG encoder 2 to re-encode the decoding results of the video decoder 4c and the audio decoder 4e according to the amount of code assigned in step S110. Here, the MPEG decoder 4 performs decoding to temporarily convert the pixel values in the video data into digital data in a YUV coordinate system. The digital data in the YUV coordinate system is digital data for the signals (luminance signal (Y), chrominance signals (U, V)) that specify the colors for a color TV, with the video encoder 2a re-encoding this digital data to produce image data sets.
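The overflow check and code-amount assignment of steps S109 and S110 can be sketched as follows. This is a minimal illustration under the assumption that all quantities are in bytes; the function name is hypothetical.

```python
def assign_code_amount(base_amount: int, bv1: int, bv2: int,
                       buffer_limit: int) -> int:
    """Step S109/S110 sketch: if the peak combined occupancy Bv1+Bv2 would
    exceed the buffer's upper limit, subtract the excess amount A from the
    code amount assigned to the re-encoded VOBU sequence (trading image
    quality for freedom from buffer overflow)."""
    excess_a = (bv1 + bv2) - buffer_limit
    if excess_a <= 0:
        return base_amount          # no overflow risk: keep the planned amount
    return base_amount - excess_a   # reduced amount; image quality drops
```

A repeated pass (step S124 below) would call this again with a larger excess amount when SCR overlaps remain.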
The technique used for assigning an amount of code is that described in the MPEG DIS (Draft International Standard) Test Model 3. The re-encoding to reduce the amount of code is achieved by processes such as changing the quantization coefficients. It should be noted that the amount of code from which the excess amount A has been subtracted can be assigned to only the last VOB or to only the previous VOB. In step S112, the control unit 1 calculates which part of the decoding result for the audio data taken from the previous VOB corresponds to the audio frame x, which includes the time First_SCR+STC_compensation of the last VOB. In Figure 24A, the graph shows the buffer state for the previous VOB and the last VOB, while the lower part shows the audio frames of the audio data separated from the previous VOB and the audio frames of the audio data separated from the last VOB. The sequences of audio frames in the lower part of Figure 24A show the correspondence between each audio frame and the time axis of the graph in the upper part. The descending line drawn from the point shown as First_SCR+STC_compensation in the graph crosses an audio frame in the sequence of audio frames for the previous VOB.
The audio frame that this descending line crosses is the audio frame x, and the audio frame x+1 that immediately follows it contains the final audio data included in the previous VOB. It should be noted that the data in the audio frames x and x+1 is included in the audio data that is to be played during a period delimited by the points 1.0 seconds before and after the reproduction period of the final image data V_END, this data being included in the three VOBUs read in step S106. Figure 24B shows the case where First_SCR+STC_compensation corresponds to an audio frame boundary in the previous VOB. In this case, the audio frame immediately before the boundary is set as the audio frame x. In step S113, the control unit 1 calculates the audio frame y+1, which includes the time VOB_V_S_PTM+STC_compensation of the last VOB. In Figure 24A, the ascending line drawn from the video presentation start time VOB_V_S_PTM+STC_compensation in the graph crosses an audio frame in the audio frame sequence of the previous VOB. The audio frame that this ascending line crosses is the audio frame y+1. Here, the audio frames up to the preceding audio frame y are the valid audio frames, of the original audio data included in the previous VOB, that are still used after the editing has been performed. Figure 24C shows the case where the video presentation start time VOB_V_S_PTM+STC_compensation corresponds to an audio frame boundary in the previous VOB. In this case, the audio frame immediately before the presentation start time VOB_V_S_PTM+STC_compensation is set as the audio frame y. In step S114, the audio data from the audio frame x+2 to the audio frame y is taken from the audio data of the previous VOB. In Figure 24A, the audio frames from the audio frame y+1 onward have been drawn with a dashed line, showing the part that is not multiplexed in the previous VOB.
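The frame-location arithmetic of steps S112 and S113 can be sketched as follows. This is a simplified model under stated assumptions: audio frames are taken to have a fixed duration AF (2880 ticks of the 90 kHz clock, i.e. 32 ms, as for AC-3 at 48 kHz), and the function name is hypothetical.

```python
AF = 2880  # assumed audio frame duration in 90 kHz ticks (32 ms)


def frame_containing(t: int, stream_start: int) -> int:
    """Index of the audio frame whose presentation period contains time t,
    for a frame sequence starting at stream_start. When t falls exactly on
    a frame boundary, the frame immediately before the boundary is chosen,
    matching the boundary cases of Figures 24B and 24C."""
    k, rem = divmod(t - stream_start, AF)
    return k - 1 if rem == 0 else k
```

Applying this to First_SCR+STC_compensation gives the index of audio frame x, and applying it to VOB_V_S_PTM+STC_compensation gives the index of audio frame y+1.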
It should be noted that the audio frames that have been moved to the last VOB need time stamps for the last VOB, so that these audio frames are reassigned time stamps for the last VOB. In step S115, the audio frame immediately after the audio frame including the boundary between the audio frames y and y+1 is detected from the audio frame sequence of the last VOB. When a descending line is drawn from the boundary between the audio frames y and y+1, this line will cross one of the audio frames in the sequence of audio frames in the last VOB. The audio frame immediately after the audio frame crossed by this line is the audio frame u. Figure 24D shows the case where the presentation end time of the audio frame y corresponds to an audio frame boundary in the last VOB. In this case, the audio frame immediately after this presentation end time is set as the audio frame u. In step S116, the audio pack G4, which includes a sequence of audio data where the audio data reproduced for the audio frame u is arranged at the front, is generated from the audio stream of the last VOB. In Figure 24A, the audio frames preceding the audio frame u have been drawn with a dashed line, with the audio data shown using this dashed line not being multiplexed in the last VOB. As a result of the above steps S114~S116, the audio data from the first audio frame to the audio frame x+1 is multiplexed in the previous VOB. The audio data from the audio frame x+2 to the audio frame y and the audio data from the audio frame u to the final audio frame are multiplexed in the last VOB. By performing the multiplexing in this way, the audio frames for the audio data at the end of the previous VOB will be read from the DVD-RAM at the same time as the image data to be played later in the playback. At this point, when the audio data in the previous VOB is not present up to the frame y, which is to say the audio data is short, silent audio frame data is inserted to compensate for the insufficient number of frames.
In the same way, when the audio data of the last VOB is not present starting from the audio frame u, which is to say the audio data is short, silent audio frame data is inserted to compensate for the insufficient number of frames. When the audio frames x+2 to y of the previous VOB and the audio data from the audio frame u to the final audio frame of the last VOB are multiplexed in the last VOB, attention needs to be paid to the AV synchronization. As shown in Figure 24A, a reproduction gap occurs between the audio frame y and the audio frame u, and if the multiplexing is performed without considering this reproduction gap, a loss of synchronization will occur whereby the audio frame u will be played before the corresponding video frame. To prevent such time lags between audio and video from accumulating, a time stamp that shows the presentation time of the audio frame u can be assigned to the audio pack.
To do so, in step S117, a padding packet or stuffing bytes are inserted into the pack that includes the data of the audio frame y, so that the audio frame u is not stored in the pack that stores the audio frame y. As a result, the audio frame u is located at the beginning of the next pack. In step S118, the VOBU sequence that is located at the end of the previous VOB is generated by multiplexing the audio data up to the audio frame x+1, out of the audio data extracted from the VOBUs located at the end of this previous VOB, with the video data that has been re-encoded. In step S119, the audio data from the audio frame x+2 onward is multiplexed with the video data extracted from the VOBUs located at the start of the last VOB to generate the VOBUs that are to be arranged at the front of the last VOB. In detail, the control unit 1 has the audio pack G3, which includes the audio data sequence from the audio frame x+2 to the audio frame y together with the padding, and the audio pack G4, which includes the audio data sequence from the audio frame u onward in the last VOB, multiplexed with the re-encoded video data, and causes the stream encoder 2e to generate the VOBUs that will be placed at the start of the last VOB. As a result of this multiplexing, the audio frames at the end of the audio data of the previous VOB will be read from the DVD-RAM at the same time as sets of image data that will be played at a later time. Figure 25 shows how audio packs that store a plurality of audio data sets to be reproduced for a plurality of audio frames are multiplexed with video packs that store image data to be reproduced for a plurality of video frames. In Figure 25, the transfer of the V_TOP image data to be decoded at the start of the last VOB is completed within the period Tf_Period. The packet sequence arranged below this period Tf_Period in Figure 25 shows the packets that make up the image data V_TOP.
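The packing rule of step S117 can be sketched as follows. This is a simplified illustration, not the format's actual pack layout: pack payload size and all names are assumptions, and the only behavior modeled is that the pack holding the boundary frame y is closed (padded) so that the next frame u starts a fresh pack and can carry its own time stamp.

```python
PACK_PAYLOAD = 2018  # assumed usable bytes per 2-KB pack


def pack_frames(frame_sizes: list, boundary_index: int) -> list:
    """Greedily packs audio frames (given by their byte sizes) into packs,
    returning a list of packs as lists of frame indices. The pack containing
    frame `boundary_index` (= y) is closed early, as if padded, so the
    following frame (= u) begins a new pack."""
    packs, current, used = [], [], 0
    for i, size in enumerate(frame_sizes):
        must_close = current and (used + size > PACK_PAYLOAD
                                  or current[-1] == boundary_index)
        if must_close:
            packs.append(current)
            current, used = [], 0
        current.append(i)
        used += size
    if current:
        packs.append(current)
    return packs
```

Without the boundary rule, frames y and u could share a pack and the pack's single time stamp could not express the reproduction gap between them.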
In Figure 25, the audio pack G3 that includes the audio separation stores the audio data sets x+2, ..., y-1 and y that will be played back for the audio frames x+2, ..., y-1 and y. Of the audio data sets stored in this audio pack, the first one to be decoded is the audio data x+2. This audio data x+2 must be decoded at the presentation end time of the audio frame x+1, and must therefore be read from the DVD-RAM together with the image data V_TOP whose packet sequence is transferred during the same period (Tf_Period) as the audio frame x+1. As a result, this audio data is inserted between the video packet sequence P51, which stores the image data V_TOP, and the video packet sequence P52, as shown at the bottom of Figure 25. In the audio pack G4, which stores the audio data sets u, u+1 and u+2 to be played for the audio frames u, u+1 and u+2, the audio data u is to be decoded first. This audio data u must be decoded at the presentation end time of the audio frame u-1, so that this audio data u must be read from the DVD-RAM in conjunction with the image data V_NXT whose packet sequence is transferred during the same period. As a result, this audio data u is inserted between the video packet sequence P52, which stores the image data V_TOP, and the video packet sequence P53, which stores the image data V_NXT, as shown at the bottom of Figure 25. As shown above, the audio pack G3 including the audio separation is inserted between the video packet sequences P51 and P52, while the audio pack G4 is inserted between the video packet sequences P52 and P53, thus completing the multiplexing. After this, in step S120 the control unit 1 writes the First_SCR and Last_SCR of the previous VOB and of the last VOB, the seamless flag, the VOB_V_E_PTM and the VOB_V_S_PTM into the seamless link information for the previous VOB. In steps S121 and S122, the control unit 1 writes all the information that is related to the audio separation, which is to say the audio separation start time A_STP_PTM, the length of the audio separation A_GAP_LEN, and the audio separation location information A_GAP_LOC, into the seamless link information. After the above processing, the control unit 1 has the end of the previous VOB, the start of the last VOB, and the seamless link information written to the DVD-RAM. The video packs and audio packs that store the video data and the audio data obtained through the above re-encoding are assigned SCRs with ascending values.
The initial value of the assigned SCRs is the value of the SCR of the pack originally located at the beginning of the area subjected to the re-encoding. Since the SCRs show the times at which the video packs and the respective audio packs are to be input into the buffers of the MPEG decoder 4, if there is a change in the amount of data before or after the re-encoding, it becomes necessary to update the values of the SCRs again. Even in this case, however, the decoding process will still be performed correctly on the condition that the SCRs for the re-encoded first part of the last VOB are below the SCRs of the video packs in the remaining part of the last VOB that was not re-encoded. The PTS and the DTS are assigned according to the video frames and the audio frames, so that there will be no significant change in their values when the re-encoding is performed. As a result, the continuity of the DTS and PTS is maintained between the data not subjected to the re-encoding and the data in the re-encoded area. To reproduce two VOBs seamlessly, discontinuities in the time stamps must be avoided. To do so, the control unit 1 judges in step S123 of Figure 22 whether an overlap of the SCRs has appeared. If this judgment is negative, the processing of the flowchart of Figure 22 ends. If an overlap of the SCRs has appeared, the control unit 1 proceeds to step S124, where it calculates the excess amount A based on the number of packs whose SCRs overlap. The control unit 1 then returns to step S110 to repeat the re-encoding, basing the amount of code assigned for the repeated re-encoding on this excess amount A. As shown by the arrow (5) in Figure 17, the six VOBUs that have been re-multiplexed by the processing in Figure 22 are transferred to the disk access unit 3. The disk access unit 3 then writes the VOBU sequence to the DVD-RAM.
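The overlap check of step S123 can be sketched as follows. This is an illustrative helper only (the name is hypothetical): it counts the re-encoded packs whose SCRs collide with the SCR of the first pack of the untouched remainder of the last VOB, the count from which the excess amount A is derived in step S124.

```python
def scr_overlap(reencoded_scrs: list, first_remaining_scr: int) -> int:
    """Number of re-encoded packs whose SCR is not below the SCR of the
    first pack in the non-re-encoded remainder of the last VOB. A result
    of zero means the condition for correct decoding is satisfied."""
    return sum(1 for scr in reencoded_scrs if scr >= first_remaining_scr)
```

When the result is non-zero, the code amount is lowered and the re-encoding repeated until the re-encoded packs fit before the untouched area.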
It should be noted that while the flowcharts of Figures 21 and 22 describe the seamless linking of two VOBs, the same processing can be used to link two sections of the same VOB. For the example shown in Figure 6B, when VOBU #2, #4, #6 and #8 are deleted, the VOBU located before each erased part can be linked seamlessly to the VOBU located after the erased part by the processing of Figures 21 and 22. The following is a description of the reproduction procedure for seamlessly reproducing two VOBs that have been seamlessly linked by the processing described above. When the user indicates the seamless reproduction of two or more VOBs recorded in an AV file, the control unit 1 first refers to the seamless flag in the seamless link information of the last VOB. If this seamless flag is "on", the control unit 1 sets the time obtained by subtracting the video presentation start time VOB_V_S_PTM of the last VOB from the video presentation end time VOB_V_E_PTM of the previous VOB as the STC_compensation. The control unit 1 then causes the adder 4h to add the STC_compensation to the normal time measured by the STC unit 4g.
After this, the buffer input time First_SCR of the previous VOB indicated by the seamless link information is compared with the normal time measured by the STC unit 4g. When the normal time reaches this First_SCR, the control unit 1 controls the switch SW1 to switch to the normal compensation time transferred by the adder 4h instead of the normal time transferred by the STC unit 4g. After this, the control unit 1 switches the states of the switches SW2~SW4 according to the timing diagram in Figure 20. With the present embodiment, seamless reproduction of a plurality of VOBs can be achieved by reading and re-encoding only the respective ends and starts of the VOBs. Since the re-encoded data consists of only the VOBUs located at the start and end of each VOB, the re-encoding can be achieved in a very short time. It should be noted that while the present embodiment describes a case where the seamless link information is administered for each VOB, the information that is required for the seamless linking of the VOBs can instead be provided collectively. As an example, the video presentation end time VOB_V_E_PTM and the video presentation start time VOB_V_S_PTM that are used to calculate the STC_compensation are described as occurring in two separate sets of VOB information, although these may be given together as the seamless link information of the last VOB. When this is done, it is desirable that the VOB information include the information for the presentation end time of the previous VOB (PREV_VOB_V_E_PTM). In the same way, it is preferable that the information giving the final SCR in the previous VOB (PREV_VOB_LAST_SCR) be included in the seamless link information of the last VOB.
In the present embodiment, the DVD recorder apparatus 70 was described as a device that takes the place of a conventional (non-portable) domestic VCR, although when a DVD-RAM is used as a recording medium for a computer, the following system arrangement can be used. The disk access unit 3 can function as a DVD-RAM drive device, and can be connected to a computer bus via an interface that complies with the SCSI, IDE or IEEE 1394 standard. In this case, the DVD recorder apparatus 70 will include a control unit 1, an MPEG encoder 2, a disk access unit 3, an MPEG decoder 4, a video signal processing unit 5, a remote control 71, a common bus 7, a signal receiving unit 8 of the remote control, and a receiver 9. In the above embodiment, the VOBs were described as being a multiplexed combination of a video stream and an audio stream, although sub-picture data, produced by subjecting the data for subtitles to run-length encoding, can also be multiplexed into the VOBs. A video stream composed of still-image data sets can also be multiplexed.
In addition, the above embodiment describes the case where the re-encoding of the data is performed by the MPEG encoder 2 after the VOBs have been decoded by the MPEG decoder 4. However, during the re-encoding the VOBs can instead be input directly from the disk access unit 3 to the MPEG encoder 2 without being decoded beforehand. The present embodiment describes the case where an image is represented using one frame, although there are cases where an image is actually represented using 1.5 frames, such as for a video stream where a 3:2 pulldown is used, with images for 24 frames per second being subjected to compression, as with film materials. The processing of the modules represented by the flowcharts in this first embodiment (Figures 21-22) can be realized by a machine-language program that can be distributed and sold having been recorded on a recording medium. Examples of such a recording medium are an IC card, an optical disc, or a floppy disk. The machine-language program recorded on the recording medium can then be installed on an ordinary personal computer. By executing the installed machine-language programs, the ordinary personal computer can achieve the functions of the video data editing apparatus of the present embodiment.
Second Embodiment
While the first embodiment was premised on the seamless linking of whole VOBs, this second embodiment describes the seamless linking of a plurality of parts of VOBs. In this second embodiment, these parts of a VOB are specified using time information expressed in video fields. The video fields referred to here are units that are smaller than a video frame, with the time information for the video fields being expressed using the PTS of the video packets.
The parts of a VOB that are specified using the time information for the video fields are called cells, and the information used to indicate these cells is called cell information. The cell information is recorded in the RTRW management file as an element of the PGC information. The details of the data construction and the generation of the cell information and the PGC information are given in the fourth embodiment. Figure 26 shows examples of cells delimited by video fields for their start and end. In Figure 26, the time information sets C_V_S_PTM and C_V_E_PTM specify the video fields at the start and end of a cell. In Figure 26, the time information C_V_S_PTM is the presentation start time of the video field in which the P image in VOBU#100, which forms a part of the present VOB, is to be reproduced. In the same way, the time information C_V_E_PTM is the presentation end time of the video field in which the image B1 in VOBU#105, which forms a part of the same VOB, is to be reproduced. As shown in Figure 26, the time information C_V_S_PTM and C_V_E_PTM specifies a section from a P image to a B image as a cell.
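The cell information described above can be sketched as a small record. This is an illustrative data layout only, not the on-disc format: the field names mirror the time information sets named in the text, and the duration helper is an assumption for demonstration.

```python
from dataclasses import dataclass


@dataclass
class CellInfo:
    """A cell: a section of a VOB bounded by the presentation times of its
    first and last video fields (C_V_S_PTM and C_V_E_PTM in the text)."""
    vob_id: int
    c_v_s_ptm: int  # presentation start time of the first video field
    c_v_e_ptm: int  # presentation end time of the last video field

    def duration(self) -> int:
        """Playback length of the cell in clock ticks (hypothetical helper)."""
        return self.c_v_e_ptm - self.c_v_s_ptm
```

A PGC would then be expressed as an ordered list of such CellInfo records referencing one or more VOBs.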
(2-1) Reconstruction of GOPs
When seamlessly linking parts of a VOB that are indicated by time information, two processes become necessary that were not required in the first embodiment. First, the GOPs have to be reconstructed to convert the section indicated by the time information into a separate VOB, and second, the increase in buffer occupancy caused by the reconstruction of the GOPs has to be estimated. The reconstruction of the GOPs refers to a process that changes the construction of the GOPs so that the section indicated as a cell has an appropriate display order and coding order.
More specifically, when a section to be linked is indicated by the cell information, there may be cases where an edit boundary falls partway through a VOBU, as shown in Figure 28A. If this is the case, the two cells to be linked will not have an appropriate display order or coding order. To rectify the display order and the coding order, the reconstruction of the GOPs is performed using processing based on the three rules shown in Figure 28B. When the final image data in the display order of the previous cell is a B picture, processing based on the first rule re-encodes this image data to convert it into a P picture (or an I picture). The forward P picture that is referred to by this B picture is located before the B picture in the coding order. However, this P picture will not be displayed after editing, and is thus deleted from the VOB.
When the first image data in the coding order of the last cell is a P picture, processing based on the second rule re-encodes this image data to convert it into an I picture. When the first set or consecutive sets of image data in the display order of the last cell are B pictures, processing based on the third rule re-encodes this image data to convert it into image data whose display does not depend on correlation with other images that have been reproduced previously. Hereafter, pictures formed from image data that depends only on correlation with images that are still to be displayed will be called "B-forward" pictures.
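The three rules above can be sketched as simple re-labelling of picture types. This is a minimal illustration only: pictures are treated purely as type labels, 'Bf' is a name invented here for the B-forward type, and the actual re-encoding of MPEG picture data is of course far more involved.

```python
# Hedged sketch of the three GOP-reconstruction rules described above.

def reconstruct_previous_cell(display_order):
    """Rule 1: if the previous cell ends on a B picture, convert it to P."""
    types = list(display_order)
    if types and types[-1] == 'B':
        types[-1] = 'P'    # re-encode the final B picture as a P picture
    return types

def reconstruct_last_cell(display_order, coding_order):
    """Rules 2 and 3 applied to the last cell."""
    coding = list(coding_order)
    if coding and coding[0] == 'P':
        coding[0] = 'I'    # rule 2: first picture in coding order P -> I
    display = list(display_order)
    i = 0
    while i < len(display) and display[i] == 'B':
        display[i] = 'Bf'  # rule 3: leading B pictures -> B-forward
        i += 1
    return display, coding

print(reconstruct_previous_cell(['I', 'B', 'B', 'P', 'B']))
# -> ['I', 'B', 'B', 'P', 'P']
```

A usage note: after rule 1 fires, the forward P picture that the converted B picture used to reference is dropped from the VOB, exactly as the text describes.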
(2-2) Estimating the Increase in Buffer Occupancy
When the image types of certain pictures have been changed by processing based on the three rules described above, the processing for estimating the increase in buffer occupancy estimates the sizes of these converted sets of image data. When the reconstruction described above is performed for the previous cell, the final image data in the display order of the previous cell is converted from a B picture to a P picture or an I picture, thereby increasing the size of this data. When the reconstruction described above is performed for the last cell, the image data located at the start of the coding order of the last cell is converted from a P picture to an I picture, and the image type of the image data located at the front of the display order is converted to a B-forward picture. This also increases the size of the data. The following is an explanation of the procedure for estimating the increases in data size that accompany the conversion of the image type, using Figures 29A and 29B. In Figure 29A, the first cell continues until the B picture B3. According to the above rules, the video data editing apparatus has to convert this B picture B3 into the P picture P1'. Since the B picture B3 is dependent on the P picture P2, which is reproduced after the B picture B3, the conversion of the image type will incorporate the necessary information of the P picture P2 into the P picture P1' produced by the conversion. In view of this procedure, the video data editing apparatus can estimate the data size of the P picture P1' obtained by the conversion process as the sum of the size of the B picture B3 and the size of the P picture P2. This estimation method represents only one potential method, however, so other methods are equally possible.
When determining the amount of code to use in the encoding based on the estimated buffer occupancy, the video data editing apparatus can assign an optimal quantity of code to the previous cell and the last cell. Figures 30A and 30B show how the increases in buffer occupancy that accompany changes in image type within the last cell are estimated. In Figure 30A, the B picture B3 is located at the front of the last VOB. Each cell is determined based on the display time of the start of the cell, so that the B picture B3 is the image data located at the start of the display order of the last cell. As a result, the video data editing apparatus needs to convert the B picture B3 into the B-forward picture B' according to the rules given above. When this B picture B3 has an information component that is dependent on the previously reproduced P picture P2, this information component of the P picture P2 will be incorporated into the B-forward picture B' during the conversion of the image type.
In view of this procedure, the video data editing apparatus can estimate the data size of the B-forward picture B' obtained by the conversion process as the sum of the size of the B picture B3 and the size of the P picture P2. For the last VOB, the video data editing apparatus also needs to convert the image type of the image data located at the start of the coding order. Referring to the picture order of the last VOB in Figure 28A, it can be seen that the P picture P3 is the image data to be displayed immediately after the B picture B3. The P picture P3 is stored in the reordering buffer 4f of the video data editing apparatus until the decoding of the B picture B3 is completed, and is therefore only displayed after the decoding of the B picture B3 has been performed. Because the reordering buffer 4f reorders the image data in this manner, the P picture P3 precedes the B picture B3 in the coding order even though the P picture P3 is displayed after the B picture B3. According to the rules described above, the video data editing apparatus needs to convert the P picture P3, detected as the first image data in the coding order, into an I picture. When this P picture has an information component that depends on the I picture that is reproduced before the P picture P3, this information component of the I picture will be incorporated into the P picture P3 during the conversion of the image type. In view of this procedure, the video data editing apparatus can estimate the data size of the I picture I1' obtained by the conversion process as the sum of the size of the P picture P3 and the size of the preceding I picture. Based on the buffer occupancy estimated in this way, the video data editing apparatus can then allocate the optimal quantities of code to the previous and last cells to be used in the re-encoding.
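The estimation rule used above (the size of a converted picture is bounded by its own size plus the size of the reference picture it absorbs) can be sketched in a few lines. The byte counts below are invented for illustration; they are not figures from the patent.

```python
# Hedged sketch of the size-estimation rule: a picture produced by type
# conversion is estimated as the sum of the sizes of the converted picture
# and the reference picture whose information it absorbs.

def estimate_converted_size(picture_size, reference_size):
    """Upper-bound size estimate for a picture after type conversion."""
    return picture_size + reference_size

# Previous cell: B3 (40 KB, illustrative) absorbs its forward reference
# P2 (80 KB) when converted to P1'.
p1_prime = estimate_converted_size(40_000, 80_000)
# Last cell: P3 (80 KB) absorbs the preceding I picture (150 KB) when
# converted to I1'.
i1_prime = estimate_converted_size(80_000, 150_000)

print(p1_prime, i1_prime)  # -> 120000 230000
```

As the text notes, this is only one potential estimation method; it deliberately over-estimates so that the code allocation never under-provisions the buffer.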
(2-3) Procedure for Seamlessly Connecting Cells
Figures 31 to 33 are flowcharts showing the procedure that links two cells to allow their seamless reproduction. It should be noted that many of the steps in these flowcharts are the same as the steps in the flowcharts shown in Figures 21 and 22, with the term "VOB" replaced by the term "cell". These steps have been given the same reference numbers as in the first embodiment, and their explanation is omitted. Figure 34 shows the audio frames in the audio stream corresponding to the audio frame x, the audio frame x+1, and the audio frame y used in Figure 31. In step S102, the control unit 1 refers to the time information that specifies the end of the cell to be reproduced first (hereafter called the "previous cell") and the time information that specifies the start of the cell to be reproduced second (hereafter called the "last cell"), and subtracts the C_V_S_PTM of the last cell from the C_V_E_PTM of the previous cell to obtain the STC_offset. In step S103, the control unit 1 analyzes the changes in buffer occupancy from the First_SCR of the previous cell to the Last_DTS, the decoding completion time of all the data in the previous cell. In step S104, the control unit 1 performs the same analysis as in step S103 for the last cell, and thus analyzes the changes in buffer occupancy from the First_SCR of the last cell to the Last_DTS, the decoding completion time of all the data in the last cell. In step S130, the control unit 1 estimates the increase α in buffer occupancy that accompanies the changes in image type for the last cell, according to the procedure in Figures 30A and 30B. In step S131, the control unit 1 estimates the increase β in buffer occupancy that accompanies the changes in image type for the previous cell, according to the procedure shown in Figures 29A and 29B.
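The step-S102 computation can be sketched as follows, assuming the standard 90 kHz MPEG system clock: the offset is the difference between the presentation end time of the previous cell and the presentation start time of the last cell, so that adding it to the last cell's timestamps makes the last cell follow on directly from the previous cell.

```python
# Sketch of the step-S102 STC offset computation (90 kHz clock assumed).

CLOCK_HZ = 90_000

def stc_offset(prev_c_v_e_ptm, last_c_v_s_ptm):
    """Offset to add to the last cell's timestamps."""
    return prev_c_v_e_ptm - last_c_v_s_ptm

# Previous cell ends at 12 s; the last cell's own timeline starts at 500 s.
offset = stc_offset(12 * CLOCK_HZ, 500 * CLOCK_HZ)
# Adding the offset to the last cell's start makes it begin at 12 s:
print(500 * CLOCK_HZ + offset == 12 * CLOCK_HZ)  # -> True
```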
In step S132, the control unit 1 adds the estimated increases α and β to the buffer occupancies of the last and previous cells, respectively. In step S105, the control unit 1 analyzes the changes in buffer occupancy from the First_SCR of the last cell + STC_offset to the Last_DTS of the last cell. As shown in Figure 10C of the first embodiment, the highest occupancy Bv1 + Bv2 of the video buffer 4b occurs during the period where the video data for both the previous cell and the last cell are stored in the video buffer 4b. In step S106, the control unit 1 controls the disk access unit 3 to read, from the DVD-RAM, the three VOBUs that are thought to include the image data located at the end of the previous cell. After this, in step S107, the control unit 1 controls the disk access unit 3 to read the three VOBUs that are thought to include the image data located at the beginning of the last cell. Figure 27A shows the area to be read from the previous cell in step S106. Figure 27A shows a VOB that includes VOBU #98 to #107, with VOBU #99 to #105 indicated as the previous cell. When the image data to be reproduced last in the previous cell is the B picture B_end, this image data will be included in one of VOBU #103 to #105, which together correspond to the final second of reproduction, so that VOBU #103 to #105 are read as the VOBU sequence that includes the image data to be reproduced last. The VOB shown in Figure 27B includes VOBU #498 to #507, and of these, VOBU #500 to #506 are indicated as the last cell. When the image data to be displayed first in this last cell is the P picture P_top, this P picture will be included in one of VOBU #500 to #502, so VOBU #500 to #502 are read as the VOBU sequence that includes the image data to be displayed first.
These VOBUs include all the image data that depends on the P picture P_top or the B picture B_end, in addition to the audio data to be reproduced at the same time as the P picture P_top and the B picture B_end. As a result, all the image data that is required for the conversion of the image types is read by this operation. It should be noted that the reads in this flowchart are made in VOBU units, although these reads may instead be made for the image data and audio data to be reproduced in one second, out of all the image data and audio data included in a VOBU. In the present embodiment, the number of VOBUs corresponding to one second of reproduction is given as three, although any number of VOBUs may be used. The reads may alternatively be performed for the image data and audio data to be reproduced in a period longer than one second.
After these reads, in step S108, the control unit 1 controls the demultiplexer 4a to separate the video data and audio data of the VOBUs located at the end of the previous cell and the start of the last cell. In step S109, the control unit 1 judges whether the accumulated amount of data in the buffer exceeds the upper limit of the buffer at any point in the decoding when the previous cell and the last cell both exist in the buffer. More specifically, this is achieved by judging whether the value Bv1 + Bv2 calculated in step S105 exceeds the upper limit of the video buffer. If this value does not exceed the upper limit, the processing proceeds to step S133; if the value exceeds the upper limit, the control unit 1 allocates a quantity of code based on the excess amount A to the previous cell and the last cell in step S110. It should be noted that the re-encoding performed in this case may be performed for only one of the previous cell and the last cell, or for both. In step S111, the video data obtained from the two cells is re-encoded according to the amount of code assigned in step S110. In step S133, the First_SCR that has been newly assigned to the re-encoded video data in the last cell is obtained. In this last cell, the first image data in the display order and the first image data in the coding order will have been converted into image types with large amounts of image data, so the value First_SCR + STC_offset will indicate an earlier time than before.

In step S112, the control unit 1 finds, from the audio data separated from the previous cell, the audio data corresponding to the audio frame x, which includes the sum of the STC_offset and the First_SCR that is newly assigned to the video data in the last cell. In Figure 34, the upper and lower graphs respectively show the transitions in buffer occupancy due to the video data in the previous cell and the last cell. The sequence of audio frames below the lower graph in Figure 34 shows the audio frames of the audio data separated from the previous cell, drawn against the time axis of the graph above it. The buffer occupancy for the new last cell obtained as a result of the re-encoding is increased by the amount α. It should be noted that this amount differs from the increase that was estimated in step S132. Because of this amount, the First_SCR that is newly assigned to the last cell's video data indicates an earlier time. As can be seen in the lower graph in Figure 34, the new value of First_SCR + STC_offset is placed at an earlier time than before. In Figure 34, the line traced downward from the new value of First_SCR + STC_offset crosses an audio frame in the sequence of audio frames of the previous cell. This intersected audio frame is the audio frame x, with the following audio frame x+1 being the final audio frame in the previous cell. Since the value of the sum of the STC_offset and the new First_SCR of the last cell indicates an earlier time, an earlier frame is indicated as the audio frame x. As a result, when reading is started for the video data in the last cell, the audio data to be read from the previous cell together with this video data is comparatively larger than in the first embodiment. Subsequently, the processing of steps S113 to S119 is performed so that the stream encoder 2e performs the multiplexing shown in Figure 25. After this, in step S120, the First_SCR, the Last_SCR, the seamless flag, the C_V_E_PTM, and the C_V_S_PTM for the previous and last cells are written into the seamless link information of the previous cell. The control unit 1 then performs the processing in steps S121 and S122. Of the data of the six VOBUs obtained through the re-encoding, the three VOBUs arranged at the beginning (the first VOBUs) originally formed part of the previous cell, and are therefore appended to the end of the previous cell. Similarly, the three VOBUs arranged at the end (the last VOBUs) originally formed part of the last cell, and are therefore inserted at the beginning of the last cell. While one of the previous and last cells that has been given the re-encoded data is administered as having been assigned the same identifier as the VOB from which it was taken, the other of the two cells is administered as having been assigned a different identifier from the VOB from which it was taken. This means that after this division, the previous cell and the last cell are administered as separate VOBs. This is because there is a high possibility that the timestamps are not continuous at the boundary between the previous cell and the last cell.
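The determination of audio frame x described above can be sketched as follows: it is the audio frame of the previous cell whose display period contains First_SCR + STC_offset of the re-encoded last cell. The 90 kHz clock is standard; the 32 ms frame duration (one AC-3 frame at 48 kHz) is an assumption made for illustration.

```python
# Hedged sketch of locating "audio frame x" in the previous cell.

CLOCK_HZ = 90_000
AUDIO_FRAME_TICKS = CLOCK_HZ * 32 // 1000   # 2880 ticks per 32 ms frame

def audio_frame_x(first_scr_last, stc_offset, first_audio_pts=0):
    """Index of the previous-cell audio frame crossed by the boundary."""
    boundary = first_scr_last + stc_offset
    return (boundary - first_audio_pts) // AUDIO_FRAME_TICKS

x = audio_frame_x(first_scr_last=450_000, stc_offset=630_000)
print(x)  # -> 375; frame x+1 (376) is then the final frame of the previous cell
```

Because re-encoding moves First_SCR earlier, the boundary in this computation also moves earlier, which is why more of the previous cell's audio must be read than in the first embodiment.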
As in the first embodiment, in step S123, the control unit 1 judges whether the values of the SCRs are continuous. If so, the control unit 1 ends the processing in the flowcharts of Figures 31 to 33. If not, the control unit 1 calculates the excess amount A based on the number of packets given for the overlapping SCRs, determines a code amount based on the excess amount A, and returns to step S109 to repeat the re-encoding. As a result of the above processing, the cells are re-encoded, with the cells indicated by the cell information being set as separate VOBs. This means that the VOB information for the newly generated VOBs needs to be provided in the RTRW management file. The following is an explanation of how this VOB information is defined for the cells. The "video stream attribute information" includes the compression mode information, the TV system information, the aspect ratio information, and the resolution information; this information can be set to correspond to the information for the VOB(s) from which the cells were taken. The "audio stream attribute information" includes a coding mode, the presence/absence of dynamic range control, a sampling frequency, and the number of channels; this information can likewise be set to correspond to the information for the VOB(s) from which the cells were taken. The "time map table" is composed of the size of each VOBU that composes the VOB and the display period of each VOBU; a corresponding part of the information given for the VOB(s) from which the cells were taken can be used, with the sizes and display periods amended only for the VOBUs that have been re-encoded. The following is an explanation of the "seamless link information" that was generated in step S120.
This seamless link information is composed of a seamless flag, a video presentation start time VOB_V_S_PTM, a video presentation end time VOB_V_E_PTM, a First_SCR, a Last_SCR, an audio gap start time A_STP_PTM, and an audio gap length A_GAP_LEN. These elements are written into the seamless link information as follows. Only when the relationship between the previous cell and the last cell satisfies conditions (1) and (2) is the seamless flag set to "01". If either condition is not satisfied, the seamless flag is set to "00". (1) Both cells must use the same display method (NTSC, PAL, etc.) for the video stream, as given in the video attribute information. (2) Both cells must use the same coding method (AC-3, MPEG, Linear PCM) for the audio stream, as given in the audio attribute information. The "video presentation start time VOB_V_S_PTM" is updated to the presentation start time after re-encoding. The "video presentation end time VOB_V_E_PTM" is updated to the presentation end time after re-encoding. The "First_SCR" is updated to the SCR of the first pack after re-encoding. The "Last_SCR" is updated to the SCR of the final pack after re-encoding. The "audio gap start time A_STP_PTM" is set to the presentation end time of the audio frame y, which is the final audio frame to be reproduced using the audio data moved to the last cell in Figure 34. The "audio gap length A_GAP_LEN" is set as the period from the presentation end time of the audio frame y, the final audio frame to be reproduced using the audio data moved to the last cell in Figure 34, to the presentation start time of the audio frame u.
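The two conditions governing the seamless flag can be expressed as a small predicate. The dictionary keys below are illustrative names, not the patent's on-disc attribute layout.

```python
# Sketch of the two seamless-flag conditions listed above.

def seamless_flag(prev, last):
    video_ok = prev["tv_system"] == last["tv_system"]        # condition (1)
    audio_ok = prev["audio_coding"] == last["audio_coding"]  # condition (2)
    return "01" if (video_ok and audio_ok) else "00"

a = {"tv_system": "NTSC", "audio_coding": "AC-3"}
b = {"tv_system": "NTSC", "audio_coding": "AC-3"}
c = {"tv_system": "PAL",  "audio_coding": "AC-3"}
print(seamless_flag(a, b), seamless_flag(a, c))  # -> 01 00
```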
Once the VOB information has been generated as described above, an RTRW management file including this new VOB information is recorded on the DVD-RAM. By doing so, the two cells indicated by the cell information can be recorded on the DVD-RAM as two VOBs to be reproduced seamlessly. As described above, this second embodiment can process cells in a VOB or VOBs so that the cells are reproduced seamlessly, by reading and re-encoding only the end of the previous cell and the start of the last cell. Since only the VOBUs located at the end and start of the respective cells are re-encoded, this re-encoding of the cells can be achieved in a very short time. It should be noted that while the present embodiment describes the case where video fields are used as the unit when cells are indicated, video frames may be used instead.
The processing represented by the flowcharts in this second embodiment (Figures 31-33) can be realized by a machine language program that can be distributed and sold having been recorded on a recording medium. Examples of such a recording medium are an IC card, an optical disc, or a floppy disk. The machine language program recorded on the recording medium can then be installed on a standard personal computer. By executing the installed machine language program, the standard personal computer can achieve the functions of the video data editing apparatus of the present embodiment.
Third Embodiment
The third embodiment of the present invention manages AV files in a file system and allows greater freedom in video editing.
(3-1) Directory Structure on a DVD-RAM
The RTRW management file and the AV files of the first embodiment are arranged in the directories shown in Figure 35 within a file system that complies with ISO/IEC 13346. In Figure 35, ovals represent directories and rectangles represent files. The root directory includes one directory, "RTRW", and two files called "File1.DAT" and "File2.DAT". The RTRW directory includes three files called "Movie1.VOB", "Movie2.VOB", and "RTRWM.IFO".
(3-1-1) File System Management Information in the Directory Structure
The following is a description of the management information used to manage the RTRW management file and the AV files in the directory structure shown in Figure 35. Figure 36 shows the file system management information for the directory structure of Figure 35.
Figure 36 shows the volume area shown in Figure 3D, the sectors, and the stored contents of the sectors in a hierarchy. The numbered arrows in this drawing show the order in which the storage locations of the "Movie1.VOB" file are specified by this management information. The first level in the hierarchy in Figure 36 shows the volume area shown in Figure 3D. The second level in the hierarchy shows the file set descriptor, the terminating descriptor, the file entries, and the directories that make up the management information. The information in this second level complies with the file system standardized under ISO/IEC 13346. File systems standardized under ISO/IEC 13346 manage directories in a hierarchy. The management information in Figure 36 is arranged according to the directory structure. However, a recording region is shown only for the AV file "Movie1.VOB". The file set descriptor (LBN 80) in the second level shows information such as the LBN of the sector that stores the file entry for the root directory. The terminating descriptor (LBN 81) shows the end of the file set descriptor. A file entry (such as LBN 82, 584, or 3585) is stored for each file (or directory) and shows a storage location for that file or directory. The file entries for files and the file entries for directories have the same format, so that files and directories can be freely constructed into a hierarchy. A directory (such as LBN 83 or 585) shows the storage locations of the file entries of the files and directories included in that directory. Three file entries and two directories are shown at the third level of the hierarchy. File entries and directories have a data construction that allows the storage position of a specified file to be indicated regardless of the construction of the hierarchy in the directory structure. Each file entry includes an allocation descriptor that shows the storage position of a file or directory. When the data recorded in a file is divided into a plurality of extents, the file entry includes an allocation descriptor for each extent. The expression "extent" refers to a section of data included in a file that should preferably be stored in consecutive regions. When, for example, the size of a VOB to be recorded in an AV file is large but there are no consecutive regions large enough to store it, the AV file cannot be written to the DVD-RAM as a single extent. However, when a plurality of small consecutive regions are distributed across the partition area, the VOB to be recorded in the AV file can be divided, and the resulting sections of the VOB can be stored in the distributed consecutive regions. By dividing VOBs in this way, the probability of being able to store a VOB as an AV file is increased, even when the number and length of the consecutive regions in the partition area are limited.
To improve the efficiency with which data is recorded on a DVD-RAM, the VOBs recorded in an AV file are divided into a plurality of extents, with each extent recorded in a separate consecutive region on the disc, without regard to the positional relationship between the extents. It should be noted that the expression "consecutive region" refers here to a region composed of ECC blocks that are logically and physically consecutive. As an example, the file entries with LBN 82 and 584 in Figure 36 each include a single allocation descriptor, which means that each of these files is not divided into a plurality of extents (that is, each is composed of a single extent). The file entry at LBN 3585 meanwhile has two allocation descriptors, which means that the data stored in that file is composed of two extents. Each directory includes a file identification descriptor that shows the storage position of a file entry for each file and each directory included in that directory. When a route is traced through the file entries and directories, the storage position of "root/RTRW/Movie1.VOB" can be found by following the order: file set descriptor → file entry (root) → directory (root) → file entry (RTRW) → directory (RTRW) → file entry (Movie1.VOB) → file (extents #1 and #2 of Movie1.VOB). Figure 37 shows the links between the file entries and directories on this route in another format that maps the construction of the directories. In this drawing, the root directory includes the file identification descriptors for the parent directory (the parent of the root being the root itself), the RTRW directory, File1.DAT, and File2.DAT. The RTRW directory includes the file identification descriptors for each of the parent directory (root), the file Movie1.VOB, the file Movie2.VOB, and the file RTRWM.IFO. In the same way, the storage position of the Movie1.VOB file is specified by tracing this route through the management information.
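The route traced above can be sketched as a walk that alternates between file entries and directories. The dictionaries below stand in for sectors addressed by LBN; the actual on-disc encoding (ISO/IEC 13346 descriptors) is far more detailed, so this is an illustration of the lookup order only.

```python
# Hedged sketch of resolving "root/RTRW/Movie1.VOB" through the
# management information of Figure 36.

file_entries = {                       # LBN -> simplified file entry
    82:   {"dir": 83},                 # root directory file entry
    584:  {"dir": 585},                # RTRW directory file entry
    3585: {"extents": [(5000, 1000), (9000, 500)]},  # Movie1.VOB
}
directories = {                        # LBN -> {name: file-entry LBN}
    83:  {"RTRW": 584},
    585: {"Movie1.VOB": 3585},
}
file_set_descriptor = {"root_file_entry": 82}

def resolve(path):
    """Follow file entries and directories down to a file's extents."""
    entry = file_entries[file_set_descriptor["root_file_entry"]]
    for name in path.split("/"):
        entry = file_entries[directories[entry["dir"]][name]]
    return entry["extents"]

print(resolve("RTRW/Movie1.VOB"))  # -> [(5000, 1000), (9000, 500)]
```

Note how the lookup works regardless of the depth of the hierarchy: each directory only ever points at file entries, and each file entry at either a directory or the file's extents.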
(3-1-2) Data Construction of a File Entry
Figure 38A shows the data construction of a file entry in more detail. As shown in Figure 38A, a file entry includes a descriptor tag, an ICB tag, an allocation descriptor length, expanded attributes, and allocation descriptors. In this figure, the legend "BP" represents "byte position", while the legend "RBP" represents "relative byte position". The descriptor tag is a tag showing that the present entry is a file entry. For a DVD-RAM, a variety of tags are used, such as the file entry descriptor and the space bitmap descriptor. For a file entry, a value of "261" is used as the descriptor tag, indicating a file entry. The ICB tag shows attribute information for the file entry itself. The expanded attributes are information showing attributes with a higher-level content than the content specified by the attribute information fields in the file entry. The allocation descriptor field stores as many allocation descriptors as there are extents that make up the file. Each allocation descriptor shows the logical block number (LBN) indicating the storage position of an extent of a file or directory. The data construction of an allocation descriptor is shown in Figure 38B. The allocation descriptor in Figure 38B includes data showing the extent length and a logical block number showing the storage position of the extent. However, the two upper bits of the data indicating the extent length show the storage state of the extent's storage area. The meanings of the various values are as shown in Figure 38C.
(3-1-3) Data Construction of the File Identification Descriptors for Directories and Files
Figures 39A and 39B show the detailed data construction of the file identification descriptors for directories and files. These two types of file identification descriptors have the same format, and thus each includes management information, identification information, the length of the directory or file name, an address showing the logical block number that stores the file entry for the directory or file, extension information, and the directory or file name itself. In this way, the address of a file entry is associated with a directory name or a file name.
(3-1-4) Minimum Size of an AV Block
When a VOB to be recorded in an AV file is divided into a plurality of extents, the data length of each extent must exceed the data length of an AV block. The expression "AV block" refers here to the minimum amount of data for which there is no danger of underflow in the track buffer 3a when a VOB is read from the DVD-RAM. To guarantee uninterrupted playback, the minimum size of an AV block is defined in relation to the track buffer provided in the playback apparatus. The following explains how the minimum size of an AV block is determined.
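The constraint stated above (every extent must be at least one AV block long, or reading it could underflow the track buffer) can be illustrated with a toy allocator. The 168-ECC-block minimum used below is an assumed figure, and the greedy strategy is for illustration only; real allocation schemes differ.

```python
# Hedged sketch: splitting a VOB into extents that each satisfy the
# minimum AV block size, using a greedy largest-region-first strategy.

AV_BLOCK_BYTES = 168 * 16 * 2048   # assumed minimum: 168 ECC blocks

def split_into_extents(vob_size, free_regions):
    """Assign `vob_size` bytes to consecutive free regions (all in bytes)."""
    extents = []
    remaining = vob_size
    for region in sorted(free_regions, reverse=True):
        if remaining == 0:
            break
        take = min(region, remaining)
        if take < AV_BLOCK_BYTES:
            continue   # too short: reading it could underflow the buffer
        extents.append(take)
        remaining -= take
    if remaining:
        raise ValueError("not enough usable consecutive space")
    return extents

# A 20 MB VOB over three free regions; the 3 MB region is unusable.
print(split_into_extents(20_000_000, [12_000_000, 9_000_000, 3_000_000]))
# -> [12000000, 8000000]
```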
(3-1-5) Minimum Size of an AV Block Area
First, the reason why a minimum size of an AV block must be determined to ensure uninterrupted playback is described. Figure 40 shows a model of how a playback apparatus that plays video objects buffers the AV data read from the DVD-RAM in the track buffer. This model shows the minimum requirements for a playback apparatus to guarantee uninterrupted playback. In the upper part of Figure 40, the playback apparatus reads the AV data from the DVD-RAM, subjects it to ECC processing, temporarily accumulates the resulting data in the track buffer, which is a FIFO memory, and then transfers the data from the track buffer to the decoder. In the illustrated example, Vr is the input transfer rate of the track buffer (in other words, the rate at which data is read from the optical disc), and V0 is the output transfer rate of the track buffer (the decoder input rate), where Vr > V0. In the present model, Vr = 11 Mbps. The lower part of Figure 40 is a graph showing the changes in the amount of data in the track buffer for the present model. In this graph, the vertical axis represents the amount of data in the buffer, while the horizontal axis represents time. This graph assumes that AV block #k, which includes a defective sector, is read after AV block #j, which does not include defective sectors. The period T1 shown on the time axis is the time required to read all the AV data in AV block #j, which does not include defective sectors. During this period T1, the amount of data in the track buffer increases at the rate (Vr - V0). Period T2 (hereafter referred to as the "jump period") is the time required for the optical pickup to jump from AV block #j to AV block #k. This jump period includes the seek time of the optical pickup and the time taken for the rotation of the optical disc to stabilize.
In the worst-case scenario of a jump from the inner periphery to the outer periphery of the optical disc, the jump time is assumed to be around 1500 ms for the present model. During the jump period T2, the amount of data in the track buffer decreases at the rate V0. The periods T3 to T5 show the time taken to read all the AV data in AV block #k, which includes a defective sector. Of these periods, the period T4 shows the time taken to jump from a present ECC block that includes a defective sector to the next ECC block. This jump operation skips the whole of the present ECC block when one or more of its 16 sectors are defective, and jumps to the next ECC block. This means that within an AV block, instead of logically replacing each defective sector in an ECC block with a replacement sector (or a replacement ECC block), the use of the entire ECC block (all 16 sectors) containing a defective sector is stopped. This method is called the ECC block skipping method. Period T4 is the disk rotation waiting time, which, in the worst case, is the time taken for one revolution of the disc. This is assumed to be around 105 ms for the present model. In periods T3 and T5, the amount of data in the buffer increases at the rate (Vr - V0), while during period T4, the amount decreases at the rate V0. When "N_ecc" represents the total number of ECC blocks in an AV block, the size of an AV block is given by the formula N_ecc * 16 * 8 * 2048 bits. The following describes how the minimum value of N_ecc that guarantees uninterrupted playback is found.
In period T2, AV data is only read out of the track buffer, with no concurrent replenishment of AV data. If the amount of data in the buffer reaches zero during this period T2, an underflow will occur in the decoder. In this case, uninterrupted playback of the AV data cannot be guaranteed. As a result, the relationship shown as Equation 1 below needs to be satisfied to ensure uninterrupted reproduction of the AV data (that is to say, to ensure that no underflow occurs).
Equation 1
(amount of data B in buffer) ≥ (amount of data R consumed)
The amount of data B in buffer is the amount of data stored in the track buffer at the end of period T1. The amount of data R consumed is the amount of data output from the track buffer during period T2.
The amount of data B in buffer is given by Equation 2 below.
Equation 2
(amount of data B in buffer) = (N_ecc * 16 * 8 * 2048) * (1 - V0/Vr)
The amount of data R consumed is given by Equation 3 below.
Equation 3
(amount of data R consumed) = T2 * V0
Substituting Equations 2 and 3 into the respective sides of Equation 1 gives Equation 4 below.
Equation 4
(N_ecc * 16 * 8 * 2048) * (1 - V0/Vr) ≥ T2 * V0
By rearranging Equation 4, it can be seen that the number N_ecc of ECC blocks that guarantees consecutive reproduction must satisfy Equation 5 below.
Equation 5
N_ecc ≥ T2 * V0 / ((16 * 8 * 2048) * (1 - V0/Vr))
In Equation 5, T2 is the jump period described above, which has a maximum of 1.5 s. Meanwhile, Vr has a fixed value, which for the model in the upper part of Figure 40 is 11 Mbps. V0 is expressed by Equation 6 below, which takes into account the variable bit rate of the AV block composed of the N_ecc ECC blocks under consideration. It should be pointed out that V0 is not the maximum value of the logical transfer rate for transfers from the track buffer, but is given by the following equation as the effective input rate of the variable-rate AV data into the decoder. The length of the AV block here is given as the number of packets N_packet in a block composed of N_ecc ECC blocks ((N_ecc - 1) * 16 < N_packet ≤ N_ecc * 16).
Equation 6
V0 = (AV block length (bits)) * (1 / (AV block playback time (seconds)))
   = (N_packet * 2048 * 8) * (27,000,000 / (SCR_first_next - SCR_first_current))
In the above equation, SCR_first_next is the SCR of the first packet in the next AV block, while SCR_first_current is the SCR of the first packet in the present AV block. Each SCR shows the time at which the corresponding packet must be transferred from the track buffer to the decoder. The unit for the SCR is 1/27,000,000 of a second (a 27 MHz clock). As shown in Equations 5 and 6, the minimum size of an AV block can theoretically be calculated according to the actual bit rate of the AV data. Equation 5 applies to the case where there are no defective sectors on the optical disk. When such sectors are present, the number of ECC blocks N_ecc required to ensure uninterrupted playback is as described below. It is assumed here that the AV block area includes ECC blocks with defective sectors, the number of which is represented as "dN_ecc". AV data is not recorded in these dN_ecc ECC blocks because of the ECC block jump described above. The time Ts lost in jumping over the dN_ecc defective ECC blocks is represented as "T4 * dN_ecc", where "T4" is the ECC block jump time for the model shown in Figure 40. To ensure uninterrupted playback of the AV data when defective sectors are included, the AV block area needs to include at least the number of ECC blocks given by Equation 7.
Equation 7
N_ecc ≥ dN_ecc + V0 * (T2 + Ts) / ((16 * 8 * 2048) * (1 - V0/Vr))
As described above, the size of the AV block area is calculated from Equation 5 when no defective sectors are present, and from Equation 7 when defective sectors are present.
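The calculations above can be sketched as follows. This is a minimal illustration of Equations 5 to 7, not part of the embodiment; the 6 Mbps value for V0 is an assumed example, while T2 = 1.5 s, T4 = 105 ms, and Vr = 11 Mbps come from the model described in the text.

```python
import math

SECTOR_BYTES = 2048
ECC_BLOCK_BITS = 16 * SECTOR_BYTES * 8  # the "16 * 8 * 2048" factor in the equations
SCR_HZ = 27_000_000                     # SCR ticks per second (1/27,000,000 s units)

def effective_v0(n_packet, scr_first_next, scr_first_current):
    """Equation 6: effective decoder input rate (bps) of a variable-rate AV block."""
    block_bits = n_packet * SECTOR_BYTES * 8
    playback_s = (scr_first_next - scr_first_current) / SCR_HZ
    return block_bits / playback_s

def min_ecc_blocks(t2_s, v0_bps, vr_bps):
    """Equation 5: smallest integer N_ecc when no defective sectors are present."""
    bound = t2_s * v0_bps / (ECC_BLOCK_BITS * (1.0 - v0_bps / vr_bps))
    return math.ceil(bound)

def min_ecc_blocks_with_defects(dn_ecc, t2_s, t4_s, v0_bps, vr_bps):
    """Equation 7: as above, but dN_ecc ECC blocks are defective and each skip
    costs up to T4 (one disk revolution), so the lost time is Ts = T4 * dN_ecc."""
    ts = t4_s * dn_ecc
    bound = dn_ecc + v0_bps * (t2_s + ts) / (ECC_BLOCK_BITS * (1.0 - v0_bps / vr_bps))
    return math.ceil(bound)

# Model values: T2 = 1.5 s worst-case jump, T4 = 0.105 s, Vr = 11 Mbps;
# V0 = 6 Mbps is an illustrative assumption for the stream rate.
print(min_ecc_blocks(1.5, 6e6, 11e6))                       # 76
print(min_ecc_blocks_with_defects(2, 1.5, 0.105, 6e6, 11e6))  # 89
```

With these example values, an AV block must span at least 76 ECC blocks, and two defective ECC blocks raise the requirement to 89, showing how the skip time Ts and the unusable dN_ecc blocks both enlarge the minimum.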
It should be noted here that when the AV data is composed of a plurality of AV blocks, the first and last AV blocks need not satisfy Equation 5 or 7. This is because the timing at which decoding starts for the first AV block can be delayed, which is to say, the data supply to the decoder can be delayed until enough data has accumulated in the buffer, thus ensuring uninterrupted playback between the first and second AV blocks. The last AV block, meanwhile, is not followed by any further AV data, meaning that playback can simply end with this last AV block.
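Putting Equations 1 to 3 together, the no-underflow condition for an interior AV block can be checked directly. This is an illustrative sketch of the Figure 40 model; the block sizes and rates passed in are assumed example values.

```python
def survives_jump(n_ecc, t2_s, v0_bps, vr_bps):
    """Figure 40 model: at the end of T1 the buffer holds
    B = (N_ecc * 16 * 8 * 2048) * (1 - V0/Vr) bits (Equation 2), and the jump
    consumes R = T2 * V0 bits (Equation 3); playback is uninterrupted iff B >= R
    (Equation 1)."""
    size_bits = n_ecc * 16 * 8 * 2048
    b = size_bits * (1.0 - v0_bps / vr_bps)
    r = t2_s * v0_bps
    return b >= r

# A 76-ECC-block AV block survives a 1.5 s jump at V0 = 6 Mbps, Vr = 11 Mbps,
# while a 10-ECC-block AV block does not (values illustrative).
print(survives_jump(76, 1.5, 6e6, 11e6))  # True
print(survives_jump(10, 1.5, 6e6, 11e6))  # False
```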
(3-2) Functional Blocks of the DVD Recorder 70
Figure 41 is a functional block diagram showing the construction of the DVD recorder 70 divided into functions. Each function in Figure 41 is performed by the CPU in the control unit 1 executing a program in the ROM to control the hardware shown in Figure 17. The DVD recorder of Figure 41 includes the disk recording unit 100, the disk reading unit 101, the common file system unit 10, the AV file system unit 11, the recording-editing-playback control unit 12, the AV data recording unit 13, the AV data reproduction unit 14, and the AV data editing unit 15.
(3-2-1) Disk Recording Unit 100 and Disk Reading Unit 101
The disk recording unit 100 operates as follows. Upon receiving, from the common file system unit 10 or the AV file system unit 11, the logical sector number from which recording is to start and the data to be recorded, the disk recording unit 100 moves the optical reader to the appropriate logical sector number and causes the optical reader to record the data in ECC block units (16 sectors) in the indicated sectors on the disk. When the amount of data to be recorded is below 16 sectors, the disk recording unit 100 first reads the existing data, subjects the combined data to ECC processing, and records it on the disk as an ECC block. The disk reading unit 101 operates as follows. Upon receiving, from the common file system unit 10 or the AV file system unit 11, the logical sector number from which data is to be read and a number of sectors, the disk reading unit 101 moves the optical reader to the appropriate logical sector number and causes the optical reader to read the data in ECC block units from the indicated logical sectors. The disk reading unit 101 performs ECC processing on the read data and transfers only the data of the required sectors to the common file system unit 10. As with the disk recording unit 100, the disk reading unit 101 reads VOBs in units of 16 sectors for each ECC block, thereby reducing overhead.
(3-2-2) Common File System Unit 10
The common file system unit 10 provides the recording-editing-playback control unit 12, the AV data recording unit 13, the AV data reproduction unit 14, and the AV data editing unit 15 with the normal functions for accessing data in the format standardized under ISO/IEC 13346. These normal functions provided by the common file system unit 10 control the disk recording unit 100 and the disk reading unit 101 to read or write data to or from the DVD in directory units and in file units. Representative examples of the normal functions provided by the common file system unit 10 are as follows.
1. Making the disk recording unit 100 record a file entry, and transferring the file identification descriptor to the recording-editing-playback control unit 12, the AV data recording unit 13, the AV data reproduction unit 14, or the AV data editing unit 15.
2. Converting a recorded area of the disk that includes a file into an empty area.
3. Controlling the disk reading unit 101 to read the file identification descriptor of a specified file from a DVD-RAM.
4. Controlling the disk recording unit 100 to record data present in memory on the disk as a non-AV file.
5. Controlling the disk reading unit 101 to read an extent that composes a file recorded on the disk.
6. Controlling the disk reading unit 101 to move the optical reader to a desired position in the extents that make up a file.
To use any of the functions (1) to (6), the recording-editing-playback control unit 12 through the AV data editing unit 15 may issue a command to the common file system unit 10, indicating the file to be read or recorded as a parameter. These commands are called common-file-system-oriented commands. Several types of common-file-system-oriented commands are available: "(1) CREATE", "(2) ERASE", "(3) OPEN/CLOSE", "(4) WRITE", "(5) READ", and "(6) SEARCH". These commands are assigned respectively to functions (1) to (6). In the present embodiment, the assignment of the commands to the normal functions is as follows. To use function (1), the recording-editing-playback control unit 12 through the AV data editing unit 15 can issue a "CREATE" command to the common file system unit 10. To use function (2), they can issue an "ERASE" command to the common file system unit 10. In the same way, to use functions (3), (4), (5), and (6) respectively, they can issue an "OPEN/CLOSE", "WRITE", "READ", or "SEARCH" command to the common file system unit 10.
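The assignment of commands to functions can be pictured as a dispatch table. The following is a hypothetical sketch for illustration only: the command names come from the text, but the handler descriptions merely paraphrase functions (1) to (6) and none of this is an actual firmware interface of the embodiment.

```python
# Hypothetical dispatch table: keys are the common-file-system-oriented
# commands named in the text; values paraphrase the bound functions (1)-(6).
COMMON_FS_COMMANDS = {
    "CREATE":     "record a file entry and return the file identification descriptor",
    "ERASE":      "turn the recorded area holding a file back into empty area",
    "OPEN/CLOSE": "read the file identification descriptor of a specified file",
    "WRITE":      "record in-memory data on the disc as a non-AV file",
    "READ":       "read an extent composing a file recorded on the disc",
    "SEARCH":     "move the optical reader to a position within a file's extents",
}

def issue_command(name: str) -> str:
    """Look up the function bound to a common-file-system-oriented command."""
    if name not in COMMON_FS_COMMANDS:
        raise ValueError(f"unknown common-file-system command: {name}")
    return COMMON_FS_COMMANDS[name]

print(issue_command("ERASE"))  # turn the recorded area holding a file back into empty area
```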
(3-2-3) AV File System Unit 11
The AV file system unit 11 provides the AV data recording unit 13, the AV data reproduction unit 14, and the AV data editing unit 15 with extended functions that are only necessary when an AV file is recorded or edited. These extended functions cannot be provided by the common file system unit 10. The following are representative examples of these extended functions. (7) Writing a VOB that is encoded by the MPEG encoder 2 onto a DVD-RAM as an AV file.
(8) Cutting an indicated part of the VOB recorded in an AV file and setting that part as a different file. (9) Deleting an indicated part of the VOB recorded in an AV file. (10) Linking two AV files present on the DVD-RAM with VOBUs that have been re-encoded according to the procedures of the first and second embodiments. To use the extended functions (7) to (10), the AV data recording unit 13 through the AV data editing unit 15 can issue a command to the AV file system unit 11 indicating the file to be recorded, linked, or cut. These commands are called AV-file-system-oriented commands. Here, the AV-file-system-oriented commands "AV-WRITE", "DIVIDE", "SHORTEN", and "ANNEX" are available, assigned respectively to functions (7) to (10). In the present embodiment, the assignment of the commands to the extended functions is as follows. To use function (7), the AV data recording unit 13 through the AV data editing unit 15 can issue an "AV-WRITE" command. To use function (8), they can issue a "DIVIDE" command. Similarly, to use function (9) or (10), they can issue a "SHORTEN" or "ANNEX" command. With function (10), the extent of the file after linking is as long as or longer than an AV block.
(3-2-4) Recording-Editing-Playback Control Unit 12
The recording-editing-playback control unit 12 issues an OPEN/CLOSE command indicating directory names as parameters to the common file system unit 10, and in so doing causes the common file system unit 10 to read a plurality of file identification descriptors from the DVD-RAM. The recording-editing-playback control unit 12 then analyzes the directory structure of the DVD-RAM from the file identification descriptors and receives an indication from the user of a file or directory to be operated on. On receiving the user's indication of the target file or directory, the recording-editing-playback control unit 12 identifies the content of the desired operation based on the user operation identified by the remote control signal receiving unit 8, and issues instructions to cause the AV data recording unit 13, the AV data reproduction unit 14, or the AV data editing unit 15 to perform the appropriate processing for the file or directory indicated as the operation target. To let the user indicate the operation target, the recording-editing-playback control unit 12 transfers graphic data, which visually represents the directory structure, the total number of AV files, and the sizes of the empty areas on the present disc, to the video signal processing unit 5. The video signal processing unit 5 converts this logical data into an image signal and displays it on the TV monitor 72. Figure 42 shows an example of the graphic data displayed on the TV monitor 72 under the control of the recording-editing-playback control unit 12. During the display of this graphic data, the display color of one of the files or directories may change to show a potential operation target. This change in color is used to focus the user's attention, and is thus called the "focus state". Display using the normal color, meanwhile, is called the "normal state". When the user presses a cursor key on the remote control 71, the display of the file or directory that is currently in the focus state returns to the normal state, and a different, newly indicated file or directory is displayed in the focus state.
When one of the files or directories is in the focus state, the recording-editing-playback control unit 12 waits for the user to press the "confirm" key on the remote control 71. When the user presses the confirm key, the recording-editing-playback control unit 12 identifies the file or directory that is currently in the focus state as the operation target. In this way, the recording-editing-playback control unit 12 can identify the file or directory that is the operation target. To identify the operation content, the recording-editing-playback control unit 12 determines what operation content has been assigned to the key code received from the remote control signal receiving unit 8. As shown on the left side of Figure 41, keys with the legends "PLAY", "REWIND", "STOP", "FAST FORWARD", "RECORD", "MARK", "VIRTUAL EDIT", and "REAL EDIT" are present on the remote control 71. In this manner, the recording-editing-playback control unit 12 identifies the operation content indicated by the user according to the key code received from the remote control signal receiving unit 8.
(3-2-4-1) Operation Contents That Can Be Received by the Recording-Editing-Playback Control Unit 12
The operation contents are classified into operation contents that are provided in conventional domestic AV equipment and operation contents that are provided especially for video editing. As specific examples, "play", "rewind", "stop", "fast forward", and "record" all fall into the former category, while "mark", "virtual edit", and "real edit" all fall into the latter category. A "play" operation causes the DVD recorder 70 to play a VOB that is recorded in an AV file specified as the operation target.
A "rewind" operation causes the DVD recorder 70 to rapidly reproduce the currently playing VOB in reverse. A "stop" operation causes the DVD recorder 70 to stop playback of the present VOB. A "fast forward" operation causes the DVD recorder 70 to rapidly reproduce the present VOB in the forward direction. A "record" operation causes the DVD recorder 70 to generate a new AV file in the directory indicated as the operation target and write the VOB to be recorded into the new AV file. The operations in the former category are well known to users as functions of conventional domestic AV equipment, such as video cassette recorders and CD players. The operations in the latter category are performed by users when, to use the analogy of editing a conventional film, sections of the film are cut and spliced together to produce a new film sequence. A "mark" operation causes the DVD recorder 70 to play a VOB included in the AV file indicated as the operation target and mark the desired images among the video images reproduced from the VOB. To use the analogy of editing a movie, this "mark" operation involves marking the points where the film will be cut. A "virtual edit" operation causes the DVD recorder 70 to select a plurality of pairs of points indicated by mark operations as playback start points and playback end points, and then define a logical playback path by assigning a playback order to these pairs of points. In a virtual edit operation, the section defined by a pair consisting of a playback start point and a playback end point selected by the user is called a "cell". The playback path defined by assigning a playback order to the cells is called a "program chain".
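The relationship between mark points, cells, and a program chain can be pictured with simple data structures. The following is a hypothetical illustration; the class and field names are not taken from the embodiment, and the times are example values.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Cell:
    """A section of an AV file bounded by a playback start point and a
    playback end point (mark points, in seconds from the start of the file)."""
    start_s: float
    end_s: float

@dataclass
class ProgramChain:
    """A logical playback path: cells listed in their assigned playback order."""
    cells: List[Cell]

    def total_duration_s(self) -> float:
        return sum(c.end_s - c.start_s for c in self.cells)

# Pairing marks (t1, t2) and (t3, t4) yields two cells; listing them in
# order defines the program chain (times illustrative).
pgc = ProgramChain([Cell(10.0, 70.0), Cell(95.0, 150.0)])
print(pgc.total_duration_s())  # 115.0
```

A virtual edit only writes this kind of path information to the disc as a non-AV file; the AV data itself is untouched until a real edit is performed.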
A "real edit" operation causes the DVD recorder 70 to cut each section indicated as a cell from an AV file recorded on a DVD-RAM, set the cut sections as separate files, and link a plurality of cut sections according to the playback order shown by a program chain. These editing operations are analogous to cutting a film at the marked positions and splicing the cut sections together. In these editing operations, the extent of the linked files is equal to or greater than the length of an AV block. The recording-editing-playback control unit 12 controls which of the AV data recording unit 13 through the AV data editing unit 15 are used when performing the operation contents described above. In addition to specifying the operation target and the operation content, the recording-editing-playback control unit 12 chooses the appropriate component(s) for the operation content from the AV data recording unit 13 through the AV data editing unit 15 and transfers instructions informing those components of the operation content. The following is a description of example instructions given by the recording-editing-playback control unit 12 to the AV data recording unit 13, the AV data reproduction unit 14, and the AV data editing unit 15, using combinations of an operation target and an operation content. In Figure 42, the "DVD_Video" directory is in the focus state, so that if the user presses the "RECORD" key, the recording-editing-playback control unit 12 identifies the "DVD_Video" directory as the operation target and "record" as the operation content. The recording-editing-playback control unit 12 selects the AV data recording unit 13 as the component capable of performing a recording operation, and instructs the AV data recording unit 13 to generate a new AV file in the directory indicated as the operation target.
When the file "AV_FILE#1" is in the focus state and the user presses the "PLAY" key on the remote control 71, the recording-editing-playback control unit 12 identifies the file "AV_FILE#1" as the operation target and "play" as the operation content. The recording-editing-playback control unit 12 selects the AV data reproduction unit 14 as the component capable of performing a playback operation, and instructs the AV data reproduction unit 14 to play the AV file indicated as the operation target. When the file "AV_FILE#1" is in the focus state and the user presses the "MARK" key on the remote control 71, the recording-editing-playback control unit 12 identifies the file "AV_FILE#1" as the operation target and "mark" as the operation content. The recording-editing-playback control unit 12 selects the AV data editing unit 15 as the component capable of performing a mark operation, and instructs the AV data editing unit 15 to perform a mark operation for the AV file indicated as the operation target.
(3-2-5) AV Data Recording Unit 13
The AV data recording unit 13 controls the encoding operations of the MPEG encoder 2 while issuing common-file-system-oriented commands and AV-file-system-oriented commands in a predetermined order to the common file system unit 10 and the AV file system unit 11. By doing so, the AV data recording unit 13 makes use of the functions (1) to (10) and performs the recording operations.
(3-2-6) AV Data Reproduction Unit 14
The AV data reproduction unit 14 controls the decoding operations of the MPEG decoder 4 while issuing common-file-system-oriented commands and AV-file-system-oriented commands in a predetermined order to the common file system unit 10 and the AV file system unit 11. By doing so, the AV data reproduction unit 14 makes use of the functions (1) to (10) and performs the "play", "rewind", "fast forward", and "stop" operations.
(3-2-7) AV Data Editing Unit 15
The AV data editing unit 15 controls the encoding operations of the MPEG encoder 2, while issuing common-file-system-oriented commands and AV-file-system-oriented commands in a predetermined order to the common file system unit 10 and the AV file system unit 11. In doing so, the AV data editing unit 15 makes use of the functions (1) to (10) and performs the "mark", "virtual edit", and "real edit" operations. In more detail, upon receiving instructions from the recording-editing-playback control unit 12 to mark the AV file indicated as the operation target, the AV data editing unit 15 causes the AV data reproduction unit 14 to play the indicated AV file and monitors when the user presses the "MARK" key on the remote control 71. When the user presses the "MARK" key during playback, the AV data editing unit 15 writes information called a "mark point" onto the DVD-RAM as a non-AV file. This mark point information shows the time in seconds from the start of playback of the AV file to the point where the user pressed the "MARK" key. Upon receiving instructions from the recording-editing-playback control unit 12 for a virtual edit operation, the AV data editing unit 15 generates information defining a logical playback path according to the user's key operations on the remote control 71. The AV data editing unit 15 then controls the common file system unit 10 so that this information is written onto the DVD-RAM as a non-AV file. Upon receiving instructions from the recording-editing-playback control unit 12 for a real edit operation, the AV data editing unit 15 cuts the sections of the DVD-RAM indicated as cells, sets the cut sections as separate files, and links them to form a sequence of cells. When linking a plurality of files, the AV data editing unit 15 performs processing so that seamless reproduction of the images is achieved.
This means there will be no interruptions in the display of images when a linked AV file is played. The AV data editing unit 15 links the extents so as to make all extents, except for the last extent to be played, equal to or greater than the AV block length.
(3-2-7-1) Processing for Virtual Edits and Real Edits by the AV Data Editing Unit 15
Figure 43 is a flowchart for the processing of real edit and virtual edit operations. Figures 44A to 44F show a complementary example of the processing by the AV data editing unit 15 according to the flowchart of Figure 43. The following describes the editing processes of the AV data editing unit 15 with reference to the flowchart of Figure 43 and the example in Figures 44A to 44F. The AV file shown in Figure 44A is already stored on the DVD-RAM. With this AV file indicated as the operation target, the user presses the "PLAY" key on the remote control 71. The recording-editing-playback control unit 12 detects the key operation, and the AV data editing unit 15 causes the AV data reproduction unit 14 to start playback of the AV file in step S1. After the start of playback, playback continues until time t1 in Figure 44B, when the user presses the "MARK" key. In response to this, the AV data editing unit 15 sets mark point #1, which expresses a relative time code for time t1, in the present AV file. The user subsequently presses the "MARK" key a total of seven more times at times t2, t3, t4, ... t8. In response, the AV data editing unit 15 sets the mark points #2, #3, #4, #5, ... #8, which express the relative time codes for times t2, t3, t4, ... t8, in the present AV file, as shown in Figure 44B. After the execution of step S1, the processing proceeds to step S2, where the AV data editing unit 15 has the user indicate pairs of mark points. The AV data editing unit 15 then determines the cells to be played within the present AV file according to the selected pairs of mark points. In Figure 44C, the user indicates that mark points #1 and #2 form pair (1), mark points #3 and #4 form pair (2), mark points #5 and #6 form pair (3), and mark points #7 and #8 form pair (4).
In this way, the AV data editing unit 15 sets the AV data within each pair of points as a separate cell, and in the present example sets the four cells Cell#1, Cell#2, Cell#3, and Cell#4. It should be pointed out that in the present example the AV data editing unit 15 could alternatively set the pair of Mark#2 and Mark#3 as a cell, and the pair of Mark#4 and Mark#5 as another cell. Then, in step S3, the AV data editing unit 15 generates a program chain by assigning a playback order to the cells to be reproduced. In Figure 44D, Cell#1 is first in the playback path (shown by the legend "1st" in the drawing), Cell#2 is second in the playback path (shown by the legend "2nd" in the drawing), and Cell#3 and Cell#4 are respectively third and fourth in the playback path (shown by the legends "3rd" and "4th" in the drawing). In doing so, the AV data editing unit 15 treats the plurality of cells as a program chain, based on the chosen playback order. It should be noted that Figure 44D shows the simplest playback order of the cells, with the setting of other orders, such as Cell#3, Cell#1, Cell#2, Cell#4, being equally possible. In step S4, the AV data editing unit 15 checks whether the user has indicated playback of the program chain. In step S5, the AV data editing unit 15 checks whether the user has indicated an editing operation for the program chain. When the user indicates playback, the AV data editing unit 15 instructs the AV data reproduction unit 14 to play the program chain indicated for playback.
Upon receiving the playback instructions from the AV data editing unit 15, the AV data reproduction unit 14 causes the optical reader to search for Mark#1, which is the playback start position for Cell#1, as shown in Figure 44E. Once the optical reader has moved to Mark#1 in the AV file according to the SEARCH command, the AV data editing unit 15 causes the section between Mark#1 and Mark#2 to be read by issuing a READ command to the common file system unit 10. In this way, the VOBUs in Cell#1 are read from the DVD-RAM, before being sequentially decoded by the MPEG decoder 4 and displayed as images on the TV monitor 72. Once the VOBUs have been decoded up to Mark#2, the AV data editing unit 15 performs the same processing for the remaining cells. In doing so, the AV data editing unit 15 has only the sections indicated as Cells #1, #2, #3, and #4 reproduced.
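The SEARCH-then-READ sequence used to play one cell can be sketched as follows. The `FakeFileSystem` stand-in and its method names are hypothetical illustrations, not the embodiment's actual interfaces; it merely records the commands so the ordering is visible.

```python
class FakeFileSystem:
    """Records the commands it receives, standing in for units 10 and 11."""
    def __init__(self):
        self.log = []

    def search(self, av_file, position):
        # Corresponds to a SEARCH command: move the optical reader to a mark.
        self.log.append(("SEARCH", av_file, position))

    def read(self, av_file, start, end):
        # Corresponds to a READ command: read the VOBUs between two marks.
        self.log.append(("READ", av_file, start, end))
        return f"VOBUs[{start}..{end}]"

def play_cell(fs, av_file, mark_start, mark_end):
    """Position the reader at the cell's first mark, then read to the second;
    in the apparatus the returned data would be fed to the MPEG decoder 4."""
    fs.search(av_file, mark_start)
    return fs.read(av_file, mark_start, mark_end)

fs = FakeFileSystem()
data = play_cell(fs, "AV_FILE#1", 1, 2)
print([op[0] for op in fs.log])  # ['SEARCH', 'READ']
```

Repeating `play_cell` for each cell in program-chain order reproduces only the marked sections, skipping everything between the cells.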
The AV file shown in Figure 44A is a movie that was broadcast on television. Figure 44F shows the image content of the different sections in this AV file. The section between time t0 and time t1 is the credit sequence V1 that shows the cast and the director of the film. The section between time t1 and time t2 is the first broadcast sequence V2 of the film itself. The section between time t2 and time t3 is a commercial sequence V3 that was inserted into the TV broadcast. The section between time t3 and time t4 is the second broadcast sequence V4 of the movie. The section between time t5 and time t6 is the third broadcast sequence V5 of the film. Here, times t1, t2, t3, t4, t5, and t6 are set as Mark#1, Mark#2, Mark#3, Mark#4, Mark#5, and Mark#6, and pairs of marks are set as cells. The display order of the cells is set as a program chain. When a reading is made as shown in Figure 44E, the AV data editing unit 15 causes the credit sequence V1 to be skipped, so that reproduction starts with the first movie sequence V2 between time t1 and time t2. After this, the AV data editing unit 15 causes the commercial sequence V3 to be skipped, and causes the second movie sequence V4 between time t3 and time t4 to be reproduced. The following is a description of the operation of the AV data editing unit 15 when the user indicates a real edit operation, with reference to Figures 45A to 45E and Figures 46A to 46F. Figures 45A to 45E show a complementary example of the processing of the AV data editing unit 15 in the flowchart of Figure 43. The variables mx and Af in the flowchart of Figure 43 and in Figures 45A to 45E indicate a position in the AV file. The following explanation deals with the processing of the AV data editing unit 15 for a real edit operation. First, in step S8, the AV data editing unit 15 determines at least two sections to be cut from the present AV file according to the program chain that was generated during a virtual edit operation.
The "source AV file" in Figure 45A has been given the mark points Mark#1, Mark#2, Mark#3, ... #8. The cells that have been set for this source AV file are defined by the pairs of mark points #1, #2, #3, ... #8, so that the AV data editing unit 15 treats the mark points in each pair as an edit start point and an edit end point, respectively. As a result, the AV data editing unit 15 treats the pair of Marks #1 and #2 as the edit start point "In(1)" and the edit end point "Out(1)". The AV data editing unit 15 similarly treats the pair of Marks #3 and #4 as the edit start point "In(2)" and the edit end point "Out(2)", the pair of Marks #5 and #6 as the edit start point "In(3)" and the edit end point "Out(3)", and the pair of Marks #7 and #8 as the edit start point "In(4)" and the edit end point "Out(4)". The period between Mark#1 and Mark#2 corresponds to the first movie sequence V2 between time t1 and time t2 shown in Figure 44F. Similarly, the period between Mark#3 and Mark#4 corresponds to the second movie sequence V4 between time t3 and time t4 shown in Figure 44F, and the period between Mark#5 and Mark#6 corresponds to the third movie sequence V5 between time t5 and time t6. Therefore, by indicating this real edit operation, the user obtains an AV file that includes only the movie sequences V2, V4, and V5. Then, in step S9, the AV data editing unit 15 issues a DIVIDE command to the AV file system unit 11 to divide the determined regions into mx AV files (where mx is an integer not less than 2). The AV data editing unit 15 treats each closed area indicated by a pair of an edit start point and an edit end point in Figure 45A as an area to be cut, and thus cuts out the four AV files shown in Figure 45B. The AV data editing unit 15 subsequently specifies one of the mx cut AV files using the variable Af, with the cut files numbered AV file Af1, Af2, Af3, ... Afmx. In step S10, the AV data editing unit 15 sets the variable Af to "1" to initialize it. In step S11, the AV data editing unit 15 issues READ commands to the AV file system unit 11 for the VOBUs (hereinafter referred to as the "last part") located at the end of the AV file Af and the VOBUs (hereinafter referred to as the "first part") located at the beginning of the AV file Af+1. After issuing these commands, in step S12, the AV data editing unit 15 uses the same procedure as the second embodiment to re-encode the last part of the AV file Af and the first part of the AV file Af+1.
After re-encoding, the AV data editing unit 15 issues a SHORTEN command to unit 11 of the AV file system for the last part of the file Af and the first part of the file Af+1. In Figure 45C, the last part of AV file Af1 and the first part of AV file Af2 are read as a result of the READ command and are re-encoded. As a result of the re-encoding process, the encoded data produced by re-encoding the read data accumulates in the memory of the DVD recorder 70. In step S13, the AV data editing unit 15 issues a SHORTEN command, which results in the deletion of the area previously occupied by the last part and first part that were read. It should be noted that the deletion performed in this manner results in one of the following two cases. The first case is where, even though one of AV file Af and AV file Af+1, whose sections were deleted for re-encoding, has a continuous length that is equal to or greater than the length of an AV block, the continuous length of the other AV file is below the data size of an AV block.
Since the length of an AV block is set to a length that prevents underflows from occurring, if AV file Af or Af+1 is played in a state where its continuous length is shorter than the length of an AV block, an underflow will occur in the track buffer. The second case is where the data size of the data (the in-memory data) that has been re-encoded and stored in the memory is below the data size (length) of an AV block. When the data size of the in-memory data is large enough to occupy a region on the DVD-RAM that is equal to or greater than an AV block, the data can be stored in a different position on the DVD-RAM away from AV files Af and Af+1. However, when the size of the in-memory data is smaller than an AV block, the data cannot be stored in a different position on the DVD-RAM away from AV files Af and Af+1. This is for the following reason. During a read performed for in-memory data that is smaller than the size of an AV block but stored in a separate position, a sufficient amount of data cannot accumulate in the track buffer. If the jump from the in-memory data to AV file Af+1 takes a relatively long time, an underflow will occur in the track buffer while the jump is taking place. In Figure 45D, broken lines show that the last part of AV file Af1 and the first part of AV file Af2 have been erased. This results in the length of AV file Af1 being below the length of an AV block, and in the length of the in-memory data being below the length of an AV block. If AV file Af1 is left as it is, there is a risk that an underflow will occur when jumping from AV file Af1 to AV file Af2. To prevent the occurrence of these underflows, in step S14 the AV data editing unit 15 issues an APPEND command for the AV file Af and the AV file Af+1.
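The underflow argument above can be illustrated with a toy track-buffer model: while a contiguous extent is read, the buffer fills at the difference between the read rate and the playback rate, and during a jump it drains at the playback rate. This is a sketch of the reasoning only; the rates and jump time are illustrative assumptions, not values from this specification.

```python
def survives_jump(extent_bytes, read_rate, play_rate, jump_seconds):
    """True if reading a contiguous extent buffers enough data to keep
    playback going for jump_seconds of seeking (toy model)."""
    gained = extent_bytes / read_rate * (read_rate - play_rate)  # filled while reading
    drained = play_rate * jump_seconds                           # consumed while jumping
    return gained >= drained

def min_extent_bytes(read_rate, play_rate, jump_seconds):
    """Smallest continuous length that survives a worst-case jump;
    this is the role the AV block length plays in the text."""
    return play_rate * jump_seconds * read_rate / (read_rate - play_rate)
```

With, say, an 11 Mbit/s read rate, a 5 Mbit/s playback rate and a 1.5 s worst-case jump, any extent shorter than `min_extent_bytes(...)` underflows, which is why every continuous length must be kept at or above one AV block.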
As shown in Figure 45E and Figure 46A, this processing results in the linking of AV file Af1 and the re-encoded VOBUs, so that the continuous length of the recording region for every extension that makes up AV file Af1 ends up equal to or longer than the length of an AV block. After issuing the APPEND command, the AV data editing unit 15 judges in step S15 whether the variable Af corresponds to the number of AV files mx−1. If the numbers do not correspond, the AV data editing unit 15 increments the variable Af in step S16 and returns to step S11. In this way, the AV data editing unit 15 repeats the processing in steps S11 to S14. After the variable Af has been incremented to become "2", the AV data editing unit 15 issues a READ command so that the last part of AV file Af2 (after the previous linking) and the first part of AV file Af3 are read, as shown in Figure 46B. Once the VOBUs in this last part and first part have been re-encoded, the resulting re-encoded data is stored in the memory of the DVD recorder 70. The regions on the DVD-RAM that were originally occupied by the first part and last part are erased as a result of the SHORTEN command that the AV data editing unit 15 issues in step S13. As a result, the remaining AV file Af3 has a continuous length that is below the length of an AV block. The AV data editing unit 15 issues an APPEND command to unit 11 of the AV file system for the AV files Af2 and Af3, as shown in Figures 46D and 46E. This procedure is repeated until the variable Af equals the value mx−1. As a result of the above processing, the extensions in the storage area contain only the movie sequences V2, V4 and V5. These extensions each have a continuous length that is above the length of an AV block, so as to ensure that there will be no interruptions to the image display during playback of these AV files.
The period between Mark #1 and Mark #2 corresponds to the first movie sequence V2. The period between Mark #3 and Mark #4 corresponds to the second movie sequence V4, and the period between Mark #5 and Mark #6 corresponds to the third movie sequence V5. As a result, when performing an editing operation, the user can obtain a sequence composed of AV files only for the movie sequences V2, V4 and V5.
(3-2-7-1-2) Processing of Unit 11 of the AV File System When a DIVIDE Command is Issued
The following discussion deals with the processing details of unit 11 of the AV file system when it provides extended functions in response to a DIVIDE command. Figure 48A shows the operation of unit 11 of the AV file system when extended functions are provided in response to a DIVIDE command. In this flowchart, one of the mx pairs of an edit start point (entry point) and an edit completion point (exit point) is indicated using the variable h. In step S22, the value "1" is substituted for the variable h so that the first pair of an entry point and an exit point is processed. Unit 11 of the AV file system generates a file entry (h) in step S31, and adds the file identifier (h) for the file entry (h) to a directory file of a temporary directory. In step S33, unit 11 of the AV file system calculates the first address s of the sequence of u logical blocks (where u ≥ 1) from the logical block corresponding to the entry point (h) to the logical block corresponding to the exit point (h), and the number r of occupied blocks. In step S34, unit 11 of the AV file system generates allocation descriptors within the file entry (h). In step S35, unit 11 of the AV file system records the first address s of the sequence of u logical blocks and the number r of occupied blocks in each of the u allocation descriptors. In step S36, unit 11 of the AV file system judges whether the variable h has reached the value mx−1. If the variable h has not reached this value, unit 11 of the AV file system increments the variable h and returns to step S31. By doing so, unit 11 of the AV file system repeats the processing in steps S31 to S35 until the variable h reaches the value mx−1, and thus cuts the closed sections within each of the pairs of an entry point and an exit point as AV files.
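As a rough sketch of the address arithmetic in steps S33-S35, the following computes the first logical block address s and the occupied block count r for one (entry, exit) pair. The 2048-byte logical block is the standard DVD sector size; treating the edit points as byte offsets is a simplification assumed here, not something the flowchart states.

```python
LOGICAL_BLOCK = 2048  # bytes per DVD-RAM logical block (sector)

def allocation_descriptor(entry_offset, exit_offset):
    """Map a closed section [entry_offset, exit_offset] (byte offsets,
    assumed) to the (first address s, occupied block count r) that
    step S35 records in an allocation descriptor."""
    s = entry_offset // LOGICAL_BLOCK      # block holding the entry point
    last = exit_offset // LOGICAL_BLOCK    # block holding the exit point
    r = last - s + 1                       # number of occupied blocks
    return s, r
```

For example, a section from byte 4096 to byte 10240 occupies logical blocks 2 through 5, giving s = 2 and r = 4.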
(3-2-7-1-3) Processing of Unit 11 of the AV File System When a SHORTEN Command is Issued
The following explanation deals with the processing of unit 11 of the AV file system when it provides the extended functions in response to a SHORTEN command. Figure 48 is a flowchart showing the content of this processing. In step S38, unit 11 of the AV file system calculates both the first address c of the logical block sequence between the deletion start address and the deletion end address that specify the area to be deleted, and the number d of occupied blocks. In step S45, unit 11 of the AV file system accesses the allocation descriptors of the AV file whose first or last part is to be deleted. In step S46, unit 11 of the AV file system judges whether the area to be deleted is the first part of an extension of the AV file. If the area to be erased is the first part of an extension ("Yes" in step S46), unit 11 of the AV file system proceeds to step S47 and updates the first storage address p of the extension, given in the allocation descriptor, to p + c*d. After this, in step S48, unit 11 of the AV file system updates the data size q of the extension, given as the number of occupied blocks in the allocation descriptor, to q − c*d. On the other hand, if in step S46 unit 11 of the AV file system finds that the area to be deleted is the last part of an AV file, unit 11 of the AV file system proceeds directly to step S48 and updates the data size q of the extension, given as the number of occupied blocks in the allocation descriptor, to q − c*d.
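The descriptor update in steps S46-S48 can be sketched as follows, writing `deleted` for the amount the text denotes c*d; the function name and signature are mine, not the specification's.

```python
def shorten_extent(p, q, deleted, first_part):
    """Update one allocation descriptor after a SHORTEN command.
    p: first storage address of the extension, q: its data size,
    deleted: size of the erased area (the text's c*d),
    first_part: True when the head of the extension was erased."""
    if first_part:
        p = p + deleted   # step S47: the extension now starts later
    q = q - deleted       # step S48: the extension shrinks either way
    return p, q
```

Deleting ten blocks from the head of an extension starting at block 100 with size 50 yields (110, 40); deleting the same amount from its tail yields (100, 40).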
(3-2-7-1-4) Processing of Unit 11 of the AV File System When an APPEND Command is Issued
The following discussion deals with the processing content of unit 11 of the AV file system when it provides extended functions in response to an APPEND command. The following explanation is intended to clarify the procedure used to process the areas enclosed by the dot-dash lines y3, y4 in Figure 45E and Figure 46D. In response to an APPEND command, unit 11 of the AV file system records AV files Af and Af+1, which were partially erased as a result of the DIVIDE and SHORTEN commands, and the re-encoded data (the in-memory data), which is present in the memory of the DVD recorder 70 as a result of the re-encoding, on the DVD-RAM in a way that allows seamless playback of AV file Af, the in-memory data, and AV file Af+1 in that order. Figure 47A shows an example of AV data processed by unit 11 of the AV file system when it provides extended functions in response to an APPEND command. In Figure 47A, the AV files x and y have been processed according to a DIVIDE command. The virtual edit has defined a playback path by which the AV data is played in the order AV file x → in-memory data → AV file y. Figure 47A shows an example playback path for the AV data in the AV files x and y. In Figure 47A, the horizontal axis represents time, so that the playback path can be seen to set the display order as AV file x → in-memory data → AV file y. Of the AV data in AV file x, the data part m located at the end of AV file x is stored in a consecutive area of the DVD-RAM, this being called the "previous extension". Of the AV data in AV file y, the data part n located at the beginning of AV file y is also stored in a consecutive area of the DVD-RAM, this being called the "last extension". As a result of the DIVIDE command, the AV files x and y are obtained with certain sections of the AV data having been cut. Nevertheless, while the file system manages the areas on the disk that correspond to the cut data as if they were empty, the data of the original AV file is actually left as-is in the logical blocks of the DVD-RAM. It is assumed that when the playback path is set by the user, the user need not consider the manner in which the AV blocks on the DVD-RAM store the cut AV files. As a result, there is no way in which the positions on the DVD-RAM that store the previous and last extensions can be identified with certainty. Even if the playback path specifies the order as AV file x → AV file y, there is a possibility that AV data that is not related to the playback path is present on the disk between the previous extension and the last extension. In view of the above consideration, the linking of the AV files cut by the DIVIDE command must not assume that the previous extension and the last extension are recorded in consecutive positions on the DVD-RAM, and must instead assume that the previous extension and the last extension are recorded in completely unrelated positions on the DVD-RAM. Here, it must be assumed that at least one "extension of a different file", which is not related to the playback path indicated by the AV files x and y, is present between the storage regions of the previous extension and the last extension.
Figure 47B shows a representation of the positional relationship of the storage areas on the DVD-RAM of the previous extension and the last extension, in view of the above consideration. The AV file x that includes the previous extension is partially cut as a result of the DIVIDE command, and thus includes an empty area where the cut data was formerly present. This area is called the Output area. As described above, this Output area actually still includes the data of AV file x that was cut, although unit 11 of the AV file system treats the area as an empty area since the DIVIDE command has already been issued. The AV file y that includes the last extension is partially cut as a result of the DIVIDE command, and thus includes an empty area where the cut data was formerly present. This area is called the Input area. As described above, this Input area actually still includes the data of AV file y that was cut, even though unit 11 of the AV file system treats the area as an empty area since the DIVIDE command has already been issued. In Figure 47B, the previous extension is stored in a position preceding the last extension, although this illustrates only one example, so it is perfectly possible for the last extension to be stored in a position preceding the previous extension. In the present example, the extension of another file is present between the previous extension and the last extension. While the Input area and the Output area are ideal for recording the in-memory data, the continuous length of the Input area and the Output area is restricted due to the presence of the extension of another file between the previous extension and the last extension. In step S62 in the flowchart of Figure 49, unit 11 of the AV file system calculates the data size of the Output area and the data size of the Input area.
Having found the data sizes of the Input area and the Output area, unit 11 of the AV file system refers to the data size m of the previous extension and the data size n of the last extension, and thus judges whether the previous extension may cause an underflow in the track buffer during playback.
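The case analysis that the flowchart of Figure 49 performs in steps S63-S66 can be summarized as a small classifier; the returned labels are names of mine for the four subsections that follow, not terms from the specification.

```python
def append_case(m, n, B):
    """Classify the APPEND situation from the previous extension size m,
    the last extension size n, and the AV block length B."""
    if m < B and n >= B:
        return "previous-short"   # handled by the Figure 50 processing
    if m >= B and n < B:
        return "last-short"       # handled by the Figure 55 processing
    if m < B and n < B:
        return "both-short"       # handled by the Figure 60 processing
    return "both-long"            # handled by the Figure 65 processing
```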
(3-2-7-1-4-1) Processing When the Previous Extension m is Less than the AV Block Length
When the previous extension m is shorter than the length of the AV block and the last extension n is at least equal to the length of the AV block, an underflow may occur for the previous extension m. Processing proceeds to step S70 in Figure 50. Figure 50 is a flowchart for when the previous extension m is shorter than the length of the AV block and the last extension n is at least equal to the AV block length. The processing of unit 11 of the AV file system in Figure 50 is explained with reference to Figures 51, 52 and 53. Figures 51, 52 and 53 show the relationships between the data sizes of the extensions m and n, the Output area i and the Input area j, the in-memory data k, and the AV block B, as well as the areas in which each piece of data is recorded and the areas to which the data moves. The previous extension is shorter than the length of the AV block. As a result, an underflow will occur if no corrective action is taken. Accordingly, the flowchart in Figure 50 shows the processing to determine the appropriate storage location for the previous extension and the in-memory data. In step S70, it is judged whether the sum of the sizes of the previous extension and the in-memory data is equal to or greater than the length of the AV block. If so, the processing proceeds to step S71, and it is judged whether the Output area is at least as large as the in-memory data. When the Output area is at least as large as the in-memory data, the in-memory data is written into the Output area so that the consecutive length of the previous extension becomes at least equal to the length of the AV block. Figure 51A shows an arrangement of the previous extension, the last extension, the Input area and the Output area on the DVD-RAM in the relation i ≥ k, m + k ≥ B. In Figure 51B, when the in-memory data is recorded in the Output area, the consecutive length of the previous extension becomes at least equal to the length of the AV block. On the other hand, when the Output area is smaller than the in-memory data, the data is moved. Figure 52A shows an arrangement of the previous extension, the last extension, the Input area and the Output area on the DVD-RAM in the relation i < k, m + k ≥ B. In Figure 52A, the previous extension is first read into memory, and in Figure 52B the previous extension is written into an empty area in the same area as the previous extension. 
After the previous extension has been moved, the in-memory data is written immediately after the moved previous extension, as shown in Figure 52C. When the sum of the sizes of the previous extension and the in-memory data is less than the length of the AV block, the processing proceeds to step S72. In step S72, it is judged whether the sum of the sizes of the previous extension, the last extension, and the in-memory data is at least equal to two AV block lengths. When the sum of the sizes is less than the length of the AV block, the size remains smaller than the length of the AV block even if the data is moved, and as a result an underflow occurs. When the sum of the sizes is less than two AV block lengths, even if the previous extension, the in-memory data, and the last extension are all written into one continuous sequence of logical blocks, the recording time will not be too long. In the flowchart in Figure 50, when the sum of the sizes of the in-memory data, the previous extension, and the last extension is less than two AV block lengths, the processing proceeds from step S72 to step S73, and the previous extension and the last extension are moved. Figure 53A shows an arrangement of the previous extension, the last extension, the Input area and the Output area on the DVD-RAM in the relation i < k, m + k < B, B ≤ m + n + k < 2B. In this case, a search is performed for an empty area in the same area as the previous extension and the last extension. When an empty area is found, the previous extension is read into memory and written into the empty area to move the previous extension to the empty area, as shown in Figure 53B. After the move, the in-memory data is written immediately after the moved previous extension, as shown in Figure 53C. After the in-memory data has been written, the last extension is read into memory and written immediately after the area occupied by the in-memory data to move the last extension to the empty area, as shown in Figure 53D. 
When the sum of the sizes of the in-memory data, the previous extension, and the last extension is at least equal to two AV block lengths, the processing proceeds from step S72 to step S74. When the sum of the sizes is equal to or greater than two AV block lengths, it would take a long time to write all the data into one continuous sequence of logical blocks. Meanwhile, a simple method in which the previous extension is moved and the in-memory data is written immediately after the moved previous extension should not be adopted, in view of the access speed. Here, it should be noted especially that the processing proceeds from step S72 to step S74 because the sum of the sizes of the in-memory data and the previous extension is less than the length of the AV block. The reason the sum of the sizes of the in-memory data and the previous extension can be less than the length of the AV block even though the sum of the sizes of the in-memory data, the previous extension, and the last extension is at least equal to two AV block lengths is that the size of the last extension is relatively large, with the difference between the size of the last extension and the length of the AV block being large. As a result, when the sum of the sizes of the previous extension and the in-memory data is less than the length of the AV block, part of the data in the last extension can be added to the sum, with no risk that the size of the remaining data of the last extension becomes insufficient. When the sum of the sizes of the in-memory data, the previous extension, and the last extension is at least equal to two AV block lengths, the processing proceeds from step S72 to step S74, and the data is linked in the manner shown in Figures 54A to 54D. Figure 54A shows an arrangement of the previous extension, the last extension, the Input area and the Output area on the DVD-RAM in the relation m + k < B, m + n + k ≥ 2B. In this case, a search is performed for an empty area in the same area as the previous extension and the last extension. 
When this empty area is found, the previous extension is read into memory and then written into the empty area to move the previous extension, as shown in Figure 54B. Then, the in-memory data is written immediately after the moved previous extension, as shown in Figure 54C. When the in-memory data has been written, a data set that is just large enough to make the size of the data in this empty area equal to the AV block length is moved from the beginning of the last extension to immediately after the in-memory data, as shown in Figure 54D. After the previous extension, the in-memory data, and the front part of the last extension are linked by the procedure described above, the file entries of the AV file Af that includes the previous extension and of the AV file Af+1 are integrated. An integrated file entry is obtained, and the processing is finished.
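The branching in steps S70-S74 above reduces to the following placement decision. This is one reading of the flowchart of Figure 50, using the m, n, k, i, B notation already introduced; the returned strings merely name the moves of Figures 51-54.

```python
def previous_short_plan(m, n, k, i, B):
    """Previous extension m < B and last extension n >= B.
    k: in-memory data size, i: Output area size, B: AV block length."""
    if m + k >= B:                # step S70
        if i >= k:                # step S71
            return "write memory into Output area"            # Figure 51
        return "move previous extension, then append memory"  # Figure 52
    if m + n + k < 2 * B:         # step S72
        return "move previous, memory and last together"      # Figure 53 (S73)
    return "move previous, memory and the head of last"       # Figure 54 (S74)
```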
(3-2-7-1-4-2) Processing When the Last Extension n is Shorter than the AV Block Length
When the "No" judgment is given in step S63 in the flow diagram of Figure 49, processing proceeds to step S64 where it is judged whether the former extension m is at least equal to the length of the AV block but the last extension n is shorter than the length of the AV block. In other words, in step S63, it is judged whether a flow for the last extension can occur. Figure 55 is a flow chart when the last extension is shorter than the length of the AV block and the previous extension is at least equal to the length of the AV block. The processing with the unit 11 of the AV file system in the flow diagram in Figure 55 is explained with reference to Figures 56, 57, 58 and 59. Figures 56, 57, 58 and 59 show the relationships between the sizes data of the extensions myn, the input area and the output area i and j, the data in memory k, and the block AV B, as well as the areas in which each piece of data is recorded and the areas to which the data moves. In step S75, it is judged whether the sum of the sizes of the last extension and the data in memory is at least equal to the length of the block 'AV. If so, the processing proceeds from step S76 to step S76, where it is judged whether the input area is greater than the data in memory. Figure 56A shows an array of the previous extension, the last extension, the input area, and the output area on the DVD-RAM in a relation j = k, n + k > B. In Figure 56B, recording the data in memory in the input area results in the consecutive length of the last extension that becomes at least equal to the length of the AV block. On the other hand, when the input area is smaller than the data in memory, the data is moved. Figure 57A shows an array of the previous extension, the last extension, the input area and the output area in the DVD-RAM in a j < k, n + k = B. In this case, a search is performed for an empty area in the same area as the previous extension and the last extension. 
When this empty area is found, the in-memory data is written into the empty area, as shown in Figure 57B. The last extension is then read into memory and written immediately after the area occupied by the in-memory data, as shown in Figure 57C. When the sum of the sizes of the last extension and the in-memory data is less than the length of the AV block, processing proceeds from step S75 to step S77. In step S77, it is judged whether the sum of the sizes of the previous extension, the last extension, and the in-memory data is at least equal to two AV block lengths. When the sum of the sizes is less than two AV block lengths, processing proceeds to step S78. Figure 58A shows an arrangement of the previous extension, the last extension, the Input area and the Output area on the DVD-RAM in the relation j < k, n + k < B, m + n + k < 2B. In step S78, unit 11 of the AV file system searches for an empty area in the same area as the previous extension and the last extension. When this empty area is found, the previous extension is read into memory and written into the empty area to move the previous extension to the empty area, as shown in Figure 58B. Then, the in-memory data is written immediately after the moved previous extension, as shown in Figure 58C. When the in-memory data has been written, the last extension is read into memory and written immediately after the area occupied by the in-memory data to move the last extension to the empty area, as shown in Figure 58D. When the sum of the sizes of the in-memory data, the previous extension, and the last extension is at least equal to two AV block lengths, the processing proceeds from step S77 to step S79, and the data is linked in the manner shown in Figures 59A to 59D. Figure 59A shows an arrangement of the previous extension, the last extension, the Input area and the Output area on the DVD-RAM in the relation n + k < B, m + n + k ≥ 2B. 
In this case, a search is performed for an empty area in the same area as the previous extension and the last extension. When this empty area is found, data with a size of (the AV block length − (n + k)) is moved from the end of the previous extension to the empty area, as shown in Figure 59B. As shown in Figure 59C, the in-memory data is written immediately after this data moved from the previous extension. When the in-memory data has been written, the last extension is moved to immediately after the area occupied by the in-memory data, as shown in Figure 59D. When the "No" judgment is given in step S64 in the flowchart in Figure 49, the processing proceeds to step S65, where it is judged whether both the previous extension m and the last extension n are shorter than the length of the AV block. In other words, it is judged whether an underflow can occur for both the previous extension m and the last extension n. Figure 60 is a flowchart for when both the previous extension and the last extension are shorter than the length of the AV block. The processing by unit 11 of the AV file system in the flowchart in Figure 60 is explained with reference to Figures 61, 62, 63, and 64. Figures 61, 62, 63 and 64 show the relationships between the data sizes of the extensions m and n, the Output area i and the Input area j, the in-memory data k, and the AV block B, as well as the areas in which each piece of data is recorded and the areas to which the data moves. In step S80 in this flowchart, it is judged whether the sum of the sizes of the in-memory data, the previous extension, and the last extension is at least equal to the length of the AV block. If not, the processing proceeds to step S81. In this case, the sum of the sizes of the previous extension, the in-memory data, and the last extension is shorter than the length of the AV block. As a result, it is judged whether there is an extension that follows the last extension. 
When no extension follows the last extension, the last extension is the end of the AV file that is created by linking the data, so no additional processing is needed. When an extension follows the last extension, an underflow may occur, since the sum of the sizes of the previous extension, the in-memory data, and the last extension is less than the length of the AV block. To avoid this underflow, the extension following the last extension is linked to the last extension by the linking processing shown in Figures 61A-61D. Figure 61A shows an arrangement of the previous extension, the last extension, the Input area and the Output area on the DVD-RAM in the relation m + n + k < B. In step S81, unit 11 of the AV file system writes the in-memory data into the Input area, as shown in Figure 61B. When the in-memory data has been written into the Input area, unit 11 of the AV file system reads the last extension into memory and writes the read last extension immediately after the area occupied by the in-memory data to move the last extension, as shown in Figure 61C. Then, as shown in Figure 61D, unit 11 of the AV file system takes data whose size is (the AV block length − (previous extension + in-memory data + last extension)) from the extension that follows the last extension. Unit 11 of the AV file system links this data with the previous extension, the in-memory data, and the last extension. When the sum of the sizes of the previous extension, the last extension, and the in-memory data is at least equal to the length of the AV block, the processing proceeds to step S82. In step S82, unit 11 of the AV file system judges whether the data size of the Output area following the previous extension is smaller than the sum of the sizes of the last extension and the in-memory data. If not, processing proceeds to step S83. 
Figure 62A shows an arrangement of the previous extension, the last extension, the Input area and the Output area on the DVD-RAM in the relation i ≥ n + k, m + n + k ≥ B. In step S83, unit 11 of the AV file system writes the in-memory data into the Output area, as shown in Figure 62B. After writing the in-memory data, unit 11 of the AV file system reads the last extension into memory and writes the last extension immediately after the area occupied by the in-memory data to move the last extension. When the data size of the Output area following the previous extension is smaller than the sum of the sizes of the last extension and the in-memory data, the processing proceeds from step S82 to step S84. In step S84, it is judged whether the data size of the Input area preceding the last extension is smaller than the sum of the sizes of the previous extension and the in-memory data. If not, the processing proceeds to step S85. Figure 63A shows an arrangement of the previous extension, the last extension, the Input area, and the Output area on the DVD-RAM in the relation i < n + k, j ≥ m + k, m + n + k ≥ B. In step S85, unit 11 of the AV file system writes the in-memory data into the Input area, as shown in Figure 63B. After writing the in-memory data, unit 11 of the AV file system reads the previous extension into memory and writes the previous extension into a storage area immediately before the area occupied by the in-memory data to move the previous extension to the Input area, as shown in Figure 63C. When the "No" judgment is given in step S84, processing proceeds to step S86. Figure 64A shows an arrangement of the previous extension, the last extension, the Input area and the Output area on the DVD-RAM in the relation i < n + k, j < m + k, m + n + k ≥ B. In step S86, it is judged whether the sum of the sizes of the previous extension, the last extension, and the in-memory data is more than two AV block lengths. 
If not, unit 11 of the AV file system searches for an empty area in the same area as the previous extension. When an empty area is found, unit 11 of the AV file system reads the previous extension into memory and writes the read previous extension into the empty area to move the previous extension to the empty area, as shown in Figure 64B. After the move, unit 11 of the AV file system writes the in-memory data into a storage area immediately after the moved previous extension, as shown in Figure 64C. After writing the in-memory data, unit 11 of the AV file system reads the last extension into memory and writes the last extension into a storage area immediately after the area occupied by the in-memory data to move the last extension to the empty area, as shown in Figure 64D. When the combined size of the previous extension, the last extension, and the in-memory data exceeds two AV block lengths, it is judged which of the Input area and the Output area is larger. When the Output area is the larger, a part of the in-memory data is recorded in the Output area to make the continuous length equal to the length of the AV block. The remaining part of the in-memory data is recorded in a different empty area, and the last extension is moved to a position directly after this remaining part of the in-memory data.
When the input area is larger, unit 11 of the AV file system moves the previous extension to an empty area and records a first part of the data in memory after it so as to make the continuous length equal to the AV block length. After this, the remaining part of the data in memory is recorded in the input area. As a result of the above processing to move the extensions, the total amount of data written can be kept at or below two AV block lengths. After the previous extension, the data in memory, and the front of the last extension have been linked by the processing described above, the file entries of the AV file Af that includes the previous extension and of the AV file Af+1 are integrated. An integrated file entry is obtained, and the processing ends.
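The decision sequence of steps S82 through S86 can be sketched as follows. This is an illustrative outline only, not the patented implementation; the parameter names i, j, m, n, k, and B follow the notation used in the text (input area, output area, previous extension, last extension, data in memory, and AV block length), and the returned strings are stand-ins for the actual recording operations.

```python
def choose_link_layout(i, j, m, n, k, B):
    """Decide where the data in memory (size k) is recorded when linking
    the previous extension (size m, input area i) to the last extension
    (size n, output area j); B is the AV block length.  A sketch of the
    judgments in steps S82-S86, not the patented implementation."""
    if j >= n + k:
        # Figure 62: the output area holds the data plus the moved last
        # extension, so record the data there and shift the last extension
        return "record data in output area, move last extension after it"
    if i >= m + k:
        # Figure 63: the input area holds the moved previous extension plus
        # the data, so record the data there and shift the previous extension
        return "record data in input area, move previous extension before it"
    if m + n + k <= 2 * B:
        # Figure 64: move everything into one sufficiently large empty area
        return "move previous extension, record data, move last extension"
    # Combined size exceeds two AV block lengths: use the larger area for a
    # first part of the data and a separate empty area for the remainder
    if j >= i:
        return "fill output area to one AV block, remainder to empty area"
    return "move previous extension, fill to one AV block, rest to input area"
```

The ordering of the tests reflects the preference stated in the text: use an existing gap when one is large enough, and only fall back to moving both extensions when neither gap suffices.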
(3-2-7-1-4-3) Processing When Both the Previous Extension and the Last Extension are at Least Equal to the AV Block Length
When the "No" judgment is given in step S65 in the flow diagram of Figure 49, the processing proceeds to step S66, where it is judged whether the data in memory is at least equal to the AV block length. If so, the data in memory is recorded in an empty area and the processing ends. When the "No" judgment is given in step
S66 in the flowchart of Figure 49, unit 11 of the AV file system judges whether the previous extension m is at least equal to the AV block length and the last extension n is at least equal to the AV block length, but the data in memory is less than the combined size of the input area i and the output area j. Figure 65 is a flowchart showing the processing when both the previous extension and the last extension are at least equal to the AV block length. Figures 66A-66D show a complementary example for the processing of unit 11 of the AV file system in Figure 65. In Figure 66A, the previous extension and the last extension are both at least equal to the AV block length. Figures 66B-66D show how the data in memory and the extensions are recorded in the input area, the output area, and other empty areas as a result of the steps in Figure 65. In this case, there is no risk of an underflow occurring for either the previous or the last extension. However, it is ideal if the data in memory can be recorded in at least one of the output area that follows the AV file Af and the input area that precedes the AV file Af+1, without moving either the previous or the last extension. In step S87 of the flowchart of Figure 65, it is judged whether the size of the output area exceeds the data size of the data in memory. If so, the data in memory is simply recorded in the output area in step S88, as shown in Figure 66B.
If the size of the output area is below the size of the data in memory, the processing proceeds to step S89, where it is judged whether the size of the input area exceeds the data size of the data in memory. If so, the data in memory is simply recorded in the input area in step S90, as shown in Figure 66C. If the data in memory cannot be recorded in either the input area or the output area alone, the processing proceeds to step S91, where the data in memory is divided into two parts that are respectively recorded in the input area and the output area, as shown in Figure 66D. After the previous extension, the data in memory, and the front of the last extension have been linked by the procedure described above, the file entries of the AV file Af that includes the previous extension and of the AV file Af+1 are integrated. An integrated file entry is obtained, and the processing ends.
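Steps S87 through S91 above can be sketched as a small function. This is a simplified illustration, assuming i, j, and k are the sizes of the input area, output area, and data in memory; the dictionary return format is an assumption made here for clarity, not part of the patent.

```python
def record_in_memory_data(i, j, k):
    """Both extensions are already at least one AV block long, and the data
    in memory (size k) fits in the input and output areas combined
    (k <= i + j).  A sketch of steps S87-S91."""
    assert k <= i + j
    if j >= k:            # S87 -> S88: output area alone suffices (Fig. 66B)
        return {"output": k, "input": 0}
    if i >= k:            # S89 -> S90: input area alone suffices (Fig. 66C)
        return {"output": 0, "input": k}
    # S91: split the data between the two areas (Fig. 66D)
    return {"output": j, "input": k - j}
```

Because both extensions already meet the AV block length, no extension ever needs to be moved in this case; only the in-memory data is placed.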
(3-2-7-1-4-4) Processing When Both the Previous Extension and the Last Extension are at Least Equal to the AV Block Length but the Data in Memory Exceeds the Combined Size of the Input Area and the Output Area
In step S69 in the flow diagram of Figure 49, it is judged whether the previous extension m is at least equal to the AV block length and the last extension n is at least equal to the AV block length, but the size k of the data in memory exceeds the combined size of the output area j and the input area i. Figure 67 is a flowchart showing the processing when both the previous extension and the last extension are at least equal to the AV block length but the combined size of the input area and the output area is below the data size of the data in memory. Figures 68A-68E show complementary examples for the processing of unit 11 of the AV file system in the flowchart of Figure 67. In Figure 68A, both the previous extension and the last extension are at least equal to the AV block length. Figures 68B-68E show how the extensions and the data in memory are recorded in the input area, the output area, and other empty areas as a result of the steps in Figure 67. In this case, both the previous extension and the last extension are at least equal to the AV block length, so there is no risk of an underflow occurring, although the recording area of the data in memory must have a continuous length that is at least equal to the AV block length. In step S92, it is judged whether the total size of the previous extension and the data in memory is at least equal to two AV block lengths. If the total size is at least two AV block lengths, the processing proceeds to step S93, where data whose size is (AV block length - size k of the data in memory) is read from the end of the previous extension and moved to an empty area, where the data in memory is also recorded. This results in the recording state of this empty area being equal to the AV block length, with both extensions remaining at least equal to the AV block length, as shown in Figure 68B.
If the "No" judgment is given in step S92, the processing proceeds to step S94, where it is judged whether the total size of the last extension and the data in memory is at least equal to two AV block lengths. If so, the processing follows the same pattern as for step S92, since an excessively long write to the logical blocks is to be avoided and since a relatively large amount of data can be moved from the last extension without any risk of the last extension ending up shorter than the AV block length. When the total size of the last extension and the data in memory is at least equal to two AV block lengths, the processing proceeds to step S95, where data whose size is (AV block length - size k of the data in memory) is read from the beginning of the last extension and moved to an empty area in the same zone as the previous and last extensions, where the data in memory is also recorded. This results in the recording state of this empty area and both extensions being equal to the AV block length, as shown in Figure 68C. If the total size of the previous extension and the data in memory is below two AV block lengths, and the total size of the last extension and the data in memory is below two AV block lengths, the total amount of data written to the logical blocks will be less than two AV block lengths, so that the move processing can be performed without concern for the time taken by the write processing involved. Accordingly, when the total size of the previous extension and the data in memory is below two AV block lengths, and the total size of the last extension and the data in memory is below two AV block lengths, the processing proceeds to step S96, where the larger of the previous extension and the last extension is found. In this situation, either the previous or the last extension can be moved, although in the present embodiment it is preferable to move the smaller of the two; hence the judgment in step S96. 
When the previous extension is the smaller of the two, in step S97 the previous extension is moved, with the data in memory then recorded in a position immediately after the moved previous extension. When this is done, the continuous length of the data recorded in this empty area will be below two AV block lengths, as shown in Figure 68D. When the last extension is the smaller of the two, in step S98 the last extension is moved, with the data in memory then recorded in a position immediately before the moved last extension. When this is done, the continuous length of the data recorded in this empty area will be below two AV block lengths, as shown in Figure 68E. After the previous extension, the data in memory, and the front of the last extension have been linked by the above procedure, the file entries of the AV file Af that includes the previous extension and of the AV file Af+1 are integrated. An integrated file entry is obtained, and the processing ends. The flowcharts for the "ANNEX" processing in a variety of circumstances have now been explained, whereby it is possible to limit the size of the data that is moved and recorded to two AV block lengths in the worst case. However, this does not mean that there are no cases where data exceeding two AV block lengths needs to be written, with the following two cases describing the exceptions where data exceeding two AV block lengths needs to be written. In the first exception, an empty area with a continuous length of two AV block lengths is required, although only separate empty areas of one AV block length are available. In this case, to create an empty area with a continuous length of two AV block lengths, AV data amounting to one AV block length must be moved. In the second exception, in step S81 of Figure 60, the movement of the data from the last extension results in the remaining part of the last extension falling below the AV block length. 
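The branch structure of steps S92 through S98 can be summarized in a short sketch. The function below is an illustration under the stated preconditions (both extensions at least one AV block long, data too large for the gaps); the returned tuples, naming the action and the amount written to the empty area, are an assumption made here for readability.

```python
def plan_two_block_write(m, n, k, B):
    """Choose how to record the data in memory (size k) so that the total
    written stays within two AV block lengths; m and n are the sizes of the
    previous and last extensions, B the AV block length.  A sketch of steps
    S92-S98 for the case m >= B, n >= B."""
    if m + k >= 2 * B:
        # S93: move (B - k) bytes from the end of the previous extension into
        # the empty area holding the data, giving exactly one AV block there
        return ("take from end of previous extension", B - k)
    if n + k >= 2 * B:
        # S95: the same pattern, taking (B - k) bytes from the start of the
        # last extension instead
        return ("take from start of last extension", B - k)
    # S96-S98: both totals fall short of two AV block lengths, so move the
    # smaller extension next to the data; the amount written stays below 2*B
    if m <= n:
        return ("move previous extension, record data after it", m + k)
    return ("move last extension, record data before it", n + k)
```

In each branch the amount of data written in one continuous operation stays below two AV block lengths, which is the guarantee the text relies on for bounding the editing time.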
In this case, an additional move operation becomes necessary, with the total amount of data moved in the complete processing exceeding two AV block lengths. While the above explanation deals only with the linking of two AV files and the data in memory, an "ANNEX" command can also be executed to link one AV file and the data in memory. This case is the same as when data is added to the final extension of an AV file, so that the total size after this editing needs to be at least equal to the AV block size. As a result, the data in memory is recorded in the output area following this final extension. When the output area is too small to record all of the data in memory, the remaining part of the data in memory can be recorded in a separate, empty AV block. The above linking procedure has been explained on the premise of seamless playback within a file, although it can also be used for seamless playback across files. Seamless playback across files refers to a branch in the playback from one AV file to another AV file. In the same way as described above, when two AV files and data in memory are linked, the continuous length of each extension must be at least equal to the AV block length, so a careful linking procedure must be used. This ends the explanation of the linking procedure used by unit 11 of the AV file system.
(3-2-7-1-5) Updating the VOB Information and the PGC Information
The following is an explanation of the updating of the VOB information (time map table, seamless link information) and the PGC information (cell information) when a DIVIDE command or an ANNEX command is executed.
First, the procedure when a DIVIDE command has been executed will be explained. Of the plurality of AV files that are obtained by executing the DIVIDE command, one AV file is assigned the same AV_File_ID as the AV file in which the VOB from which it was divided was recorded. The other AV files produced by dividing the AV file, however, need to be assigned new AV_File_ID values. The VOBs that were originally recorded as an AV file will lose several sections due to the execution of a DIVIDE command, so that the marks that indicated the lost sections need to be deleted. In the same way, the cell information that gives these marks as the start points and the end points needs to be deleted from the RTRW administration file. In addition to deleting mark points, it is necessary to generate new cell information indicating the video presentation start frame of the AV file as C_V_S_PTM and the video presentation end frame of the AV file as C_V_E_PTM, and to add this new cell information to the RTRW administration file. The VOB information that includes the seamless link information and the time map table is divided into a plurality of parts when the corresponding VOB is divided. In more detail, when m VOBs are produced by the division, the VOB information is divided to give m time map tables and m sets of seamless link information. The video display start time VOB_V_S_PTM and the video display end time VOB_V_E_PTM of a VOB generated by the processing accompanying the execution of the DIVIDE command are respectively adjusted based on the C_V_S_PTM and C_V_E_PTM indicated by the start point and the end point in the cell information used by the DIVIDE command. The Last_SCR and the First_SCR in the seamless link information are also updated. The following is a description of how the information is updated when an ANNEX command has been executed. 
The execution of an ANNEX command results in an AV file that is produced from a plurality of AV files, so that the VOBs included in this plurality of AV files will be composed of frame data sets that do not interrelate, which is to say that the time stamps across the AV files will not be continuous. Since these are managed as VOBs that differ from the plurality of VOBs that were originally included in the different AV files, separate VOB_IDs are assigned to these VOBs. The other necessary processing is as described in the second embodiment. However, the C_V_E_PTM in the cell information specifying a division area needs to be incremented by the number of frames included in the part of the previous VOBU that has been encoded. Similarly, the C_V_S_PTM in the cell information specifying a division area in a last AV file needs to be decremented by the number of frames included in the part of the last VOBU that has been encoded.
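The division of one VOB information set into m parts described above for the DIVIDE command can be sketched as follows. This is a simplified illustration: the field names and the list-of-pairs time map are assumptions made here, and the seamless link information is omitted for brevity.

```python
from dataclasses import dataclass

@dataclass
class VobInfo:
    vob_v_s_ptm: int   # video display start time of the VOB
    vob_v_e_ptm: int   # video display end time of the VOB
    time_map: list     # (start_ptm, address) entries, one per VOBU

def divide_vob_info(info, cut_points):
    """Split one VOB information set at the boundaries used by the DIVIDE
    command, producing one set per resulting VOB.  Illustrative only; the
    real structure also carries seamless link information."""
    bounds = [info.vob_v_s_ptm, *cut_points, info.vob_v_e_ptm]
    parts = []
    for s, e in zip(bounds, bounds[1:]):
        # keep only the time map entries whose VOBUs start inside [s, e)
        tm = [(p, a) for p, a in info.time_map if s <= p < e]
        parts.append(VobInfo(vob_v_s_ptm=s, vob_v_e_ptm=e, time_map=tm))
    return parts
```

Each resulting part takes its VOB_V_S_PTM and VOB_V_E_PTM from the cell start and end points used by the command, mirroring the adjustment described in the text.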
(3-2-8) The Fragmenting Unit 16
The fragmenting unit 16 is connected to a fixed magnetic disk apparatus. This fragmenting unit 16 reads an extension that has an empty area on either side of its recording area, out of the extensions recorded on the DVD-RAM that have been subjected to link processing or other processing, and writes this extension to the fixed magnetic disk apparatus to generate backup data in the fixed magnetic disk apparatus. After writing such an extension into the fixed magnetic disk apparatus, the fragmenting unit 16 reads the generated backup data and writes the backup data for the backed-up extension into the empty area adjacent to the extension. Here, extensions having an empty area adjacent to their recording area are extensions that have been generated by unit 11 of the AV file system executing a "DIVIDE" command or a "SHORTEN" command. These empty areas match the areas that have been freed and are not used as the recording area of the data in memory or as the area moved to for an extension when an ANNEX command has been executed. Figures 69A-69D show an example illustrating the operation of the fragmenting unit 16. In Figure 69A, extension #x is shown as an extension with empty areas i, j on both sides of its recording area. As shown in Figure 69A, the fragmenting unit 16 detects this extension, reads it from the DVD recorder apparatus 70, and writes it to the fixed magnetic disk apparatus. As a result of this writing operation, the backup data is generated in the fixed magnetic disk apparatus, as shown in Figure 69B. After this, the fragmenting unit 16 reads the backup data from the fixed magnetic disk apparatus, as shown in Figure 69C, and writes the extension to the DVD-RAM so as to use both the current recording area of extension #x and the empty area j that follows this recording area. This creates a continuous empty area of length i + j before extension #x, as shown in Figure 69D. 
When this procedure is then performed for extension #y, the continuous length of the empty area can be increased further. The recording performed by the fragmenting unit 16 is achieved by first storing an extension in the fixed magnetic disk apparatus, so that even if a power failure occurs for the DVD recorder apparatus 70 during the writing of the backed-up extension onto the DVD-RAM, this write processing can still be re-executed. By generating the backup data before moving the extensions to the large continuous empty areas on the DVD-RAM, there is no risk of data loss in an extension even when there is a power failure for the DVD recorder 70. With the present embodiment described above, the editing of a plurality of AV files can be performed freely by the user. Even if a plurality of fragmentary AV files with short continuous lengths are generated, the DVD recorder 70 will be able to link these short AV files to generate AV files with continuous lengths that are at least equal to the AV block length. As a result, problems caused by the fragmentation of AV files can be handled, and uninterrupted playback can be performed for the AV data that is recorded in these AV files. During link processing, it is judged whether the total size of the data to be written is at least equal to two AV block lengths, and if so, the amount of pre-recorded AV data that is moved is restricted. As a result, it can be guaranteed that the total size of the data to be written is below two AV block lengths, so that the linking can be completed in a short amount of time. Even when it is necessary, as a result of the user's editing operations on a plurality of files, to record re-encoded data with a short continuous length, the DVD recorder 70 will record this re-encoded data in a recording position that allows the re-encoded data to be linked to the AV data that precedes or follows it during playback. 
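The backup-then-rewrite sequence of Figures 69A through 69D can be sketched with a minimal in-memory disc model. Everything here is illustrative: the Disc class, the backup store, and the read/write interfaces are assumptions made for the sketch, not the patented implementation.

```python
class Disc:
    """Minimal in-memory stand-in for a randomly writable disc."""
    def __init__(self, size):
        self.data = bytearray(size)
    def read(self, start, length):
        return bytes(self.data[start:start + length])
    def write(self, start, payload):
        self.data[start:start + len(payload)] = payload

def defragment_extent(dvd, backup_store, extent_id, start, length, gap_after):
    """Move an extent to the end of its own area plus the empty area that
    follows it, taking a backup first so a power failure during the rewrite
    cannot lose data (Figures 69A-69D).  Sketch only."""
    data = dvd.read(start, length)
    backup_store[extent_id] = data      # Fig. 69B: backup on the hard disk
    restored = backup_store[extent_id]  # Fig. 69C: read the backup back
    new_start = start + gap_after       # extent now ends where the gap ended
    dvd.write(new_start, restored)      # Fig. 69D: empty area i+j now in front
    return new_start
```

Because the copy on the fixed magnetic disk survives any failure of the DVD write, the rewrite can simply be repeated, which is the safety property the text emphasizes.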
This means that fragmented recording of the re-encoded data is prevented from the outset, so that uninterrupted playback of the AV data that is recorded in this AV file will be possible. It should be noted here that the movement of data can also be performed to eliminate excessive separation on the disc of two AV data sets that have been linked together. In this case, the data produced by linking data sets that are physically separated on the disc is arranged in a way that makes it possible to ensure uninterrupted playback of the two AV data sets. Nevertheless, when performing special playback such as fast forward, excessive separation of the data on the disc will result in jerky reproduction of the data. To ensure smooth reproduction in this case, when two sets of AV data are linked, if one of the data sets has a consecutive length that is several times a predetermined amount and an empty block of the appropriate size is located between the two data sets, the data can be moved to this empty block. By doing so, smooth reproduction can be ensured for both normal and special playback. It should be noted here that the time information can be taken from the mark points in the cell information and managed, together with information such as the addresses taken from the time map table, in the form of a table. By doing so, this information can be presented to the user as potential selections on a screen that shows the initial pre-editing status. It is also possible to generate small images (known as "thumbnails") for each mark point and store them as separate files, with pointer information also produced for each thumbnail. When the cell information is displayed in the pre-editing stage, these thumbnails can be displayed to show the potential selections that can be made by the user.
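The mark-point table suggested above, combining time information, addresses from the time map table, and thumbnail file pointers, might look like the following. The field names, the address-lookup callback, and the thumbnail file naming are all assumptions made for this sketch; the patent does not define the table's format.

```python
from dataclasses import dataclass

@dataclass
class MarkPoint:
    ptm: int            # time information taken from the mark point
    address: int        # address taken from the time map table
    thumbnail: str      # path of the thumbnail file for this mark point

def build_mark_table(marks, ptm_to_address, thumb_dir="thumbs"):
    """Combine mark-point times with addresses resolved through the time
    map table and with per-mark thumbnail files.  Illustrative only."""
    return [MarkPoint(ptm, ptm_to_address(ptm), f"{thumb_dir}/mark_{i}.jpg")
            for i, ptm in enumerate(marks)]
```

Keeping the resolved address alongside each mark means the pre-editing screen can show the thumbnails and jump to the corresponding VOBU without consulting the time map table again.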
Also, while the present embodiment describes a case where video and audio data are handled, this is not a limitation on the techniques of the present invention. As on a DVD-ROM, sub-picture data for subtitles that has been run-length encoded, and still images, can be handled as well. The processing of unit 11 of the AV file system (Figures 48A, 48B, 49-50, 55, 60, 65, 67) that was described in this third embodiment using flowcharts can be achieved by a machine language program. This machine language program can be distributed and sold having been recorded on a recording medium. Examples of this recording medium are an IC card, an optical disc, or a flexible disk. The machine language program recorded on the recording medium can be installed on a standard personal computer. By executing the installed machine language programs, the standard personal computer can achieve the functions of the video data editing apparatus of this third embodiment.
Fourth Embodiment
The fourth embodiment of the present invention performs a two-stage editing process composed of virtual edits and real edits using two types of program chain, namely user-defined PGCs and original PGCs. To define the user-defined PGCs and the original PGCs, a new table is added to the RTRW administration file in this fourth embodiment.
(4-1) RTRW Administration File
The following is a description of the construction of the RTRW administration file in this fourth embodiment. In the fourth embodiment, the RTRW administration file is written into the same directory as the AV files (the RTRW directory), and has the content shown in Figure 70A.
Figure 70A shows a detailed expansion of the stored content of the RTRW administration file in the fourth embodiment. That is, the logical format located on the right side of Figure 70A shows the logical format located on the left side in more detail, with the broken lines in Figure 70A showing the correspondence between the left and right sides. As the logical format shown in Figure 70A indicates, the RTRW administration file can be seen to include an original PGC information table, a user-defined PGC information table, and a title search pointer, in addition to the VOB information of the first embodiment.
(4-1-2) Contents of the Original PGC Information
The original PGC information table is composed of a plurality of original PGC information sets. Each set of original PGC information is information that indicates the VOBs that are stored in an AV file present in the RTRW directory, or sections within these VOBs, according to the order in which they are arranged in the AV file. Each set of original PGC information corresponds to one of the VOBs recorded in an AV file present in the RTRW directory, so that when an AV file is recorded in the RTRW directory, sets of original PGC information are generated by the video data editing apparatus and recorded in the RTRW administration file. Figure 70B shows the data format of an original PGC information set. Each set of original PGC information is composed of a plurality of cell information sets, with each set of cell information being composed of a cell ID (CELL#1, #2, #3, #4, ... in Figure 70B), which is a unique identifier assigned to the cell information set, an AV file ID (AVF_ID in Figure 70B), a VOB_ID, a C_V_S_PTM, and a C_V_E_PTM. The AV file ID is a column for writing the identifier of the AV file that corresponds to the cell information set. The VOB_ID is a column for writing the identifier of a VOB that is included in the AV file. When a plurality of VOBs are included in the AV file corresponding to the cell information set, this VOB_ID indicates which of the plurality of VOBs corresponds to the present set of cell information. The cell start time C_V_S_PTM
(abbreviated to C_V_S_PTM in the drawings) shows the start time of the cell indicated by the present cell information, and so has a column for writing the PTS that is assigned to the start time of the first video field in the section, using the PTM descriptor format. The cell end time C_V_E_PTM (abbreviated to C_V_E_PTM in the drawings) shows the end time of the cell indicated by the present cell information, and so has a column for writing the end time of the final video field in the section, using the PTM descriptor format.
The time information given as the cell start time C_V_S_PTM and the cell end time C_V_E_PTM shows the start time and the end time of a coding operation by the video encoder, with these corresponding to the mark points inserted by the user. The cell end time C_V_E_PTM in each set of cell information in an original PGC information set corresponds to the cell start time C_V_S_PTM of the next set of cell information in the given order. Since this relationship is established between the cell information sets, a PGC indicates all the sections in a VOB without omitting any of them. As a result, an original PGC is unable to indicate the sections of a VOB in an order where the sections are exchanged.
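The contiguity rule just described, each cell's end time matching the next cell's start time, can be expressed as a simple validity check. The field names below follow Figure 70B, but the Python structure itself is only an illustrative stand-in for the on-disc format.

```python
from dataclasses import dataclass

@dataclass
class CellInfo:
    cell_id: int
    avf_id: str      # AV file identifier (AVF_ID in Figure 70B)
    vob_id: int
    c_v_s_ptm: int   # start PTM of the first video field in the section
    c_v_e_ptm: int   # end PTM of the final video field in the section

def is_valid_original_pgc(cells):
    """An original PGC must cover its VOB without gaps or reordering: each
    cell's C_V_E_PTM equals the next cell's C_V_S_PTM.  Illustrative check."""
    return all(a.c_v_e_ptm == b.c_v_s_ptm for a, b in zip(cells, cells[1:]))
```

A user-defined PGC, described next, is exactly a cell list for which this check is allowed to fail, since its cells may skip or reorder sections.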
User-defined PGC information does not need to indicate every section in a VOB, so that one or more parts of a VOB may be left unindicated. While the original PGCs have strict limitations on their playback order, the user-defined PGCs are not subject to these limitations, so that the reproduction order of the cells can be defined freely. As a specific example, the order of reproduction of the cells in a user-defined PGC may be the reverse of the order in which the cells are arranged. Also, a user-defined PGC can indicate sections of VOBs that are recorded in different AV files. The original PGCs indicate the partial sections in an AV file or a VOB according to the order in which the AV file or the VOBs are arranged, so that the original PGCs can be said to respect the arrangement of the indicated data. The user-defined PGCs, however, do not have this restriction, and in this way are able to indicate the sections in the order desired by the user. As a result, these user-defined PGCs are ideal for storing the playback orders that are provisionally determined by the user to link a plurality of sections in the VOBs during the course of a video data editing operation. The original PGCs are associated with the AV files and the VOBs in the AV files, and the cells in an original PGC only indicate sections in these VOBs. The user-defined PGCs, meanwhile, are not limited to being associated with particular VOBs, so that the cell information sets included in user-defined PGC information may indicate sections in different VOBs. As another difference, an original PGC is generated when an AV file is recorded, whereas a user-defined PGC can be generated at any point after the recording of an AV file.
(4-1-4) Interrelation of PGC Information, VOB Information, and AV Files
The following is an explanation of the interrelation of AV files, VOBs, and PGC information sets. Figure 71 shows the interrelation of the AV files, the VOBs, the time map tables, and the PGC information sets, with the elements that form a unified body being enclosed within frames drawn using thick lines. It is noted that in Figure 71, the term "PGC information" has been abbreviated to "PGC I". In Figure 71, the AV file #1, the VOB information #1, and the original PGC information #1 composed of the cell information sets #1 to #3 have been arranged within the same frame, while the AV file #2, the VOB information #2, and the original PGC information #2 composed of the cell information sets #1 to #3 have been arranged within a different frame. These combinations of an AV file (or VOB), VOB information, and original PGC information that are present in the same frame in Figure 71 are called an "original PGC" under the DVD-RAM standard. A video data editing apparatus that complies with the DVD-RAM standard treats these original PGC units as a management unit called a video title. For the example in Figure 71, the combination of the AV file #1, the VOB information #1, and the original PGC information #1 is called the original PGC #1, while the combination of the AV file #2, the VOB information #2, and the original PGC information #2 is called the original PGC #2. When recording an original PGC, in addition to recording the encoded VOBs on the DVD-RAM, it is necessary to generate the VOB information and the original PGC information for these VOBs. The recording of an original PGC is therefore considered complete when all three of the AV file, the VOB information table, and the original PGC information have been recorded on the DVD-RAM. Put another way, merely recording the encoded VOBs on a DVD-RAM as an AV file is not considered to complete the recording of an original PGC on the DVD-RAM. This is also the case for deletion, so that the original PGCs are erased as a whole. Put another way, when any one of an AV file, VOB information, and original PGC information is deleted, the other elements in the same original PGC are also erased. The reproduction of an original PGC is performed by the user indicating the original PGC information. This means that the user does not give direct indications for the reproduction of a certain AV file or VOB. It should be noted here that an original PGC can also be reproduced in part. This partial reproduction of an original PGC is performed by the user indicating sets of cell information that are included in the original PGC, although the reproduction of a section smaller than a cell, such as a VOBU, cannot be indicated. The following describes the reproduction of a user-defined PGC. In Figure 71, it can be seen that the user-defined PGC information #3, composed of cells #1 to #4, is included in a table separate from the original PGCs #1 and #2 described above. This shows that under the DVD-RAM standard, the user-defined PGC information is not actually AV data, and is instead managed as a separate title. As a result, a video data editing apparatus defines the user-defined PGC information in the RTRW administration file, and in doing so is capable of completing the generation of a user-defined PGC. For the user-defined PGC, there is a relation whereby the production of a user-defined PGC is equivalent to the definition of a set of user-defined PGC information. When a user-defined PGC is deleted, it is sufficient to delete the user-defined PGC information from the RTRW administration file, with the user-defined PGC information then being considered as no longer existing.
The units for the reproduction of a user-defined PGC are the same as for an original PGC. This means that the reproduction of a user-defined PGC is performed by the user indicating the user-defined PGC information. It is also possible for user-defined PGCs to be reproduced partially. This partial reproduction of a user-defined PGC is achieved by the user indicating the cells that are included in the user-defined PGC. The original PGCs and the user-defined PGCs differ as described above, but from the user's point of view there is no need to take these differences into account. This is because the complete reproduction or partial reproduction of both types of PGC is performed in the same way, by indicating PGC information or cell information respectively. As a result, both classes of PGC are managed in the same way using a unit called a "video title".
The following is an explanation of the reproduction of original PGCs and user-defined PGCs. The arrows drawn with thick dashed lines in Figure 71 show how certain data sets refer to other data. The arrows y2, y4, y6, and y8 show the relationship between each VOBU in a VOB and the time codes included in the time map table in the VOB information, while y1, y3, y5, and y7 show the relationship between the time codes included in the time map table in the VOB information and the cell information sets. Here, it is assumed that the user has indicated one of the PGCs so that a video title will be reproduced. When the indicated PGC is the original PGC #1, the cell information set #1 located at the front of the original PGC information #1 is extracted by the reproduction apparatus. Then, the reproduction apparatus refers to the AV file and VOB identifiers included in the extracted cell information set #1, and specifies the AV file #1, the VOB #1, and the time map table #1 for this VOB as the AV file and the VOB corresponding to this cell information. The specified time map table #1 includes the size of each VOBU composing the VOB and the reproduction period of each VOBU. To improve data accessibility, the specified time map table #1 also includes the address and the elapsed time relative to the start of the VOB for representative VOBUs selected at a constant interval, such as a multiple of 10 seconds. As a result, by referring to the time map table using the cell start time C_V_S_PTM, as shown by the arrow y1, the reproduction apparatus can specify the VOBU in the AV file corresponding to the cell start time C_V_S_PTM included in cell information set #1, and can thus specify the first address of this VOBU. By doing so, the reproduction apparatus can determine the first address of the VOBU corresponding to this cell start time C_V_S_PTM, can access the VOBU #1 as shown by the arrow y2, and can thus start reading the VOBU sequence that starts from VOBU #1. Since the cell information set #1 also includes the cell end time C_V_E_PTM, the reproduction apparatus can access the time map table using this cell end time C_V_E_PTM, as shown by the arrow y3, to specify the VOBU in the AV file corresponding to the cell end time C_V_E_PTM included in cell information set #1. As a result, the reproduction apparatus can determine the first address of the VOBU corresponding to the cell end time C_V_E_PTM. When the VOBU corresponding to the cell end time C_V_E_PTM is VOBU #10, for example, the reproduction apparatus will stop reading the VOBU sequence on reaching VOBU #10, as shown by the arrow y4. By accessing the AV file via cell information #1 and VOB information #1, the reproduction apparatus can read only the section indicated by cell information #1, out of the data in the VOB #1 included in the AV file #1. If reading is likewise performed for cell information #2, #3, and #4, all VOBUs included in VOB #1 can be read and reproduced. When reproduction is performed for an original PGC as described above, the sections in the VOB are reproduced in the order in which they are arranged in the VOB. The following is an explanation for when the user indicates the reproduction of a video title indicated by one of the user-defined PGCs. When the indicated PGC is the user-defined PGC #1, the reproduction apparatus extracts the cell information set #1 placed at the front of the user-defined PGC information #1 for this user-defined PGC #1. Then, the reproduction apparatus refers to the time map table #1 using the cell start time C_V_S_PTM included in this cell information #1, as shown by the arrow y5, and specifies the VOBU in VOB #1 that corresponds to this cell start time C_V_S_PTM included in cell information #1. In this case, the reproduction apparatus specifies VOBU #11 as the VOBU corresponding to the cell start time C_V_S_PTM, accesses VOBU #11 as shown by the arrow y6, and starts reading a VOBU sequence that starts from VOBU #11. The cell information #1 included in the user-defined PGC #1 also includes the cell end time C_V_E_PTM, so that the reproduction apparatus refers to the time map table using this cell end time C_V_E_PTM, as shown by the arrow y7, and specifies the VOBU in the VOB #1 that corresponds to the cell end time C_V_E_PTM included in cell information #1. When the VOBU corresponding to the cell end time C_V_E_PTM is VOBU #1, for example, the reproduction apparatus terminates the reading of the VOBU sequence on reaching VOBU #1, as shown by the arrow y8.
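The time map lookup described above can be sketched as follows. This is a minimal illustration under stated assumptions: the time map is modeled as a list of per-VOBU (size, duration) pairs in VOB order, and the representative entries at fixed intervals (which the real table uses to speed up the search) are omitted. The function name locate_vobu is illustrative.

```python
# Sketch: resolve a presentation time (such as C_V_S_PTM) to the VOBU
# containing it and that VOBU's first address, by accumulating the
# per-VOBU sizes and reproduction periods stored in the time map table.
def locate_vobu(time_map, ptm):
    """time_map: list of (size_bytes, duration) per VOBU, in VOB order.
    Returns (vobu_index, first_address) for the VOBU containing ptm."""
    address, elapsed = 0, 0
    for index, (size, duration) in enumerate(time_map):
        if elapsed <= ptm < elapsed + duration:
            return index, address
        address += size
        elapsed += duration
    raise ValueError("ptm outside the VOB")

# Three VOBUs of about 0.5 s each (durations in 90 kHz clock ticks).
time_map = [(2048 * 100, 45045), (2048 * 80, 45045), (2048 * 120, 45045)]
index, address = locate_vobu(time_map, 50000)   # falls in the second VOBU
```

The same lookup serves both the cell start time (to find where reading begins) and the cell end time (to find where reading stops).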
As described above, after accessing the AV file via cell information #1 of the VOB information #1, the reproduction apparatus performs the same processing for the cell information #2, #3, and #4 included in the user-defined PGC information #1. After extracting cell information #2, which is located in the position following cell information #1, the reproduction apparatus refers to the AV file identifier included in the extracted cell information #2 and in this way determines that the AV file #2 corresponds to this cell information and that the time map table #2 corresponds to this AV file. The specified time map table #2 includes the size of each VOBU that makes up the VOB and the reproduction period of each VOBU. To improve data accessibility, the specified time map table #2 also includes the address and the elapsed time relative to the start of the VOB for representative VOBUs selected at a constant interval, such as a multiple of 10 seconds. As a result, by referring to the time map table using the cell start time C_V_S_PTM, as shown by the arrow y9, the reproduction apparatus can specify the VOBU in the AV file corresponding to the cell start time C_V_S_PTM included in cell information set #2, and can thus specify the first address of this VOBU. By doing so, the reproduction apparatus can determine the first address of the VOBU corresponding to this cell start time C_V_S_PTM, can access the VOBU #2 as shown by the arrow y10, and can thus start reading the VOBU sequence that starts from VOBU #2. Since the cell information set #2 also includes the cell end time C_V_E_PTM, the reproduction apparatus can access the time map table using this cell end time C_V_E_PTM, as shown by the arrow y11, to specify the VOBU in the AV file that corresponds to the cell end time C_V_E_PTM included in the cell information set #2. As a result, the reproduction apparatus can determine the first address of the VOBU corresponding to the cell end time C_V_E_PTM. When the VOBU corresponding to the cell end time C_V_E_PTM is VOBU #11, the reproduction apparatus will stop reading the VOBU sequence on reaching VOBU #11, as shown by the arrow y12. When the user-defined PGC information is reproduced in this way, the desired sections in the VOBs included in the two AV files can be reproduced in the given order. This ends the explanation of the AV file unit, the VOB information, and the PGC information. The following is a description of the title search indicators shown in Figure 70.
(4-1-5) Contents of the Title Search Indicators.
The title search indicators are the information for managing the VOB information, the time map table, the PGC information, and the AV files recorded on a DVD-RAM in the units called video titles that were described above. Each title search indicator is composed of the PGC number assigned to an original PGC information set or a user-defined PGC information set, a title type, and a title recording history. Each title type corresponds to one of the PGC numbers, and is set to the value "00" to show that the title with the corresponding PGC number is an original PGC, or is set to the value "01" to show that the title with the corresponding PGC number is a user-defined PGC. The title recording history shows the date and time at which the corresponding PGC information was recorded on the DVD-RAM. When the RTRW directory on a DVD-RAM is indicated, a reproduction apparatus that complies with the DVD-RAM standard reads the title search indicators in the RTRW administration file and in this way can instantly know how many original PGCs and user-defined PGCs exist in each directory on the DVD-RAM and when each of these video titles was recorded in the RTRW administration file.
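The scan of the title search indicators just described can be sketched as below. The tuple layout (PGC number, title type, recording date) is an assumption made for illustration; the actual on-disc encoding is defined by the format, not by this sketch.

```python
# Sketch: scan title search indicators to report how many original and
# user-defined PGCs a directory holds and when each title was recorded.
def summarize_titles(indicators):
    """indicators: list of (pgc_number, title_type, recorded_at), where
    title_type is "00" (original PGC) or "01" (user-defined PGC)."""
    originals = sum(1 for _, t, _ in indicators if t == "00")
    user_defined = sum(1 for _, t, _ in indicators if t == "01")
    history = {num: when for num, _, when in indicators}
    return originals, user_defined, history

indicators = [
    (1, "00", "1998-05-10 09:30"),
    (2, "00", "1998-05-11 20:00"),
    (3, "01", "1998-05-12 18:45"),
]
counts = summarize_titles(indicators)   # 2 originals, 1 user-defined
```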
(4-1-6) Interchangeability of User-Defined PGCs and Original PGCs in a Real Edition.
The user-defined PGC information defined in a virtual edition can be used to indicate the linking order for the cells in a real edition, as shown in this fourth embodiment. Also, once a real edition has been performed as described in the fourth embodiment, if a set of user-defined PGC information is converted into an original PGC information set, the original PGC information can easily be generated for the VOB obtained by this linking.
This is because the data construction of user-defined PGC information and original PGC information differs only in the value given as the title type, and because the sections of a VOB obtained by a real edition are the sections that were indicated by the user-defined PGC information before the real edition. The following is an explanation of the procedure for a real edition in this fourth embodiment, and of the process for updating user-defined PGC information to original PGC information. Figure 72 shows an example of a user-defined PGC and an original PGC.
In Figure 72, the original PGC information #1 includes only cell #1, and forms an original PGC with VOB #1 and the VOB information. On the other hand, the user-defined PGC information #2 forms a user-defined PGC using only cell #1, cell #2, and cell #3.
In Figure 72, cell #1 indicates the section from VOBU #1 to VOBU #i, as shown by the dashed arrows y51 and y52, while cell #2 indicates the section from VOBU #i+1 to VOBU #j+1, as shown by the dashed arrows y53 and y54, and cell #3 indicates the section from VOBU #j+1 to VOBU #k+2, as shown by the dashed arrows y55 and y56. In the following example, cell #2 is deleted from the user-defined PGC information, and the user indicates a real edition using the user-defined PGC information #2 composed of cells #1 and #3. In Figure 73, the area corresponding to the deleted cell is shown using shading. The cell #2 that is deleted here indicates one of the video frames, of the plurality of image data sets included in the VOBU #i+1 shown inside the box w11, using the cell start time C_V_S_PTM. Cell #2 also indicates one of the video frames, of the plurality of image data sets included in the VOBU #j+1 shown inside the box w12, using the cell end time C_V_E_PTM. If a real edition is performed using the user-defined PGC information #2, the VOBUs #i-1, #i, and #i+1 located at the end of cell #1 and the VOBUs #j, #j+1, and #j+2 located at the beginning of cell #3 will be re-encoded. This re-encoding is performed according to the procedure described in the first and second embodiments, and the linking of the extents is then performed according to the procedure described in the third embodiment. Figure 74A shows the ECC blocks on the DVD-RAM that are freed by a real edition performed using the user-defined PGC information #2. As shown on the second level of Figure 74A, the VOBUs #i, #i+1, and #i+2 are recorded in the AV block #m, and the VOBUs #j, #j+1, and #j+2 are recorded in the AV block #n. As shown in Figure 73, cell #2 indicates the image data included in the VOBU #i+1 as the C_V_S_PTM, and the image data included in the VOBU #j+1 as the C_V_E_PTM. As a result, a DIVIDE command and a SHORTEN command of the second embodiment are issued to free the area from the ECC block occupied by the VOBU #i+2 to the ECC block occupied by the VOBU #j, as shown by the boxes w13 and w14 of Figure 74A. However, the ECC blocks occupied by the VOBUs #i and #i+1 and the ECC blocks occupied by the VOBUs #j+1 and #j+2 are not freed. Figure 74B shows an example of the VOB, the VOB information, and the PGC information after a real edition. Since the area corresponding to cell #2 has been erased, VOB #1 is divided into the (new) VOB #1 and VOB #2. When the DIVIDE command is issued, the VOB information for VOB #1 is divided into the VOB information #1 and the VOB information #2. The time map tables included in this information are also divided into time map table #1 and time map table #2. Although not illustrated, the seamless link information is also divided.
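The freed range in the example above can be computed as follows. This is an illustrative sketch: when the deleted cell's start frame lies in VOBU #i+1 and its end frame in VOBU #j+1, only the whole VOBUs strictly between those boundary VOBUs (#i+2 through #j) can be freed, because the boundary VOBUs still carry frames belonging to the neighbouring cells and are re-encoded instead. The function name is an assumption.

```python
# Sketch: given the VOBUs containing the deleted cell's boundary frames,
# return the inclusive range of whole VOBUs whose ECC-block area is freed.
def freed_vobu_range(start_boundary_vobu, end_boundary_vobu):
    """start_boundary_vobu: VOBU containing the deleted cell's start frame
    (#i+1 in the example); end_boundary_vobu: VOBU containing its end
    frame (#j+1). Returns (first, last) freed VOBU numbers, or None."""
    first_freed = start_boundary_vobu + 1
    last_freed = end_boundary_vobu - 1
    if first_freed > last_freed:
        return None                      # nothing can be freed
    return first_freed, last_freed

i, j = 4, 9                              # hypothetical VOBU numbers
released = freed_vobu_range(i + 1, j + 1)   # VOBUs #6..#9, i.e. #i+2..#j
```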
The VOBUs in VOB #1 and VOB #2 are referred to by a reproduction apparatus via these divided time map tables. The user-defined PGC information and the original PGC information have the same data construction, with only the value of the title type differing. The sections of the VOB obtained after a real edition were originally indicated by the user-defined PGC information #2 before the real edition, so that the user-defined PGC information #2 becomes the original PGC information. Since this user-defined PGC information #2 is used as-is to define the original PGC information, there is no need for a separate process to generate the new original PGC data after a real edition.
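Because only the title type differs, the promotion from user-defined PGC information to original PGC information is a one-field update, as the following sketch shows. The dictionary keys are illustrative, not part of the format.

```python
# Sketch: promote user-defined PGC information ("01") to original PGC
# information ("00") after a real edition; the cell list is kept as-is.
def promote_to_original(pgc_info):
    """pgc_info: dict with a "title_type" of "01" (user-defined).
    Returns the same information re-labelled as an original PGC."""
    if pgc_info["title_type"] != "01":
        raise ValueError("only user-defined PGC information is promoted")
    promoted = dict(pgc_info)
    promoted["title_type"] = "00"
    return promoted

user_pgc = {"pgc_number": 2, "title_type": "01",
            "cells": [("cell#1", 0, 300), ("cell#3", 700, 1000)]}
original_pgc = promote_to_original(user_pgc)   # same cells, type "00"
```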
(4-2) Functional Blocks of the DVD Recorder 70
Figure 75 is a functional block diagram showing the construction of the DVD recorder 70 in this fourth embodiment. Each function shown in Figure 75 is realized by the CPU executing the programs in the ROM and controlling the hardware shown in Figure 17. The DVD recorder shown in Figure 75 is composed of a disk rewriting unit 100, a disk reading unit 101, a common file system unit 10, an AV file system unit 11, and a recording-editing-reproduction control unit 12, in the same way as the video data editing apparatus described in the third embodiment. The present embodiment differs from the third embodiment, however, in that the AV data recording unit 13 is replaced with the title recording control unit 22, the AV data reproduction unit 14 is replaced with the title reproduction control unit 23, and the AV data editing unit 15 is replaced with the multi-stage editing control unit 26. This DVD recorder also includes a PGC information table work area 21, an RTRW administration file work area 24, and a user-defined PGC information generator 25, in place of the fragmentation unit 16.
(4-2-1) Recording-Editing-Reproduction Control Unit 12
The recording-editing-reproduction control unit 12 of this fourth embodiment receives an indication from the user of a directory in the directory structure on the DVD-RAM as the operation target. Upon receiving the user's indication of the operation target, the recording-editing-reproduction control unit 12 specifies the operation content according to the user operation reported by the remote control signal receiving unit 8. At the same time, the recording-editing-reproduction control unit 12 gives instructions so that the processing corresponding to the operation content is performed for the directory that is the operation target by the title recording control unit 22, the title reproduction control unit 23, or any of the other components. Figure 77A shows an example of the graphical data displayed on the TV monitor 72 under the control of the recording-editing-reproduction control unit 12. When any of the directories is set to the focus state, the recording-editing-reproduction control unit 12 waits for the user to press the enter key. When the user does so, the recording-editing-reproduction control unit 12 specifies the directory currently in the focus state as the current directory.
(4-2-2) Work Area 21 of the PGC Information Table
The PGC information table work area 21 is a memory area having a standardized logical format so that PGC information sets can be defined successively. This PGC information table work area 21 has internal regions that are administered as a matrix. The plurality of PGC information sets present in the PGC information table work area 21 are arranged in different columns, while a plurality of cell information sets are arranged in different rows. In the PGC information table work area 21, any of the cell information in a stored set of PGC information can be accessed using a combination of a row number and a column number. Figure 76 shows examples of original PGC information sets stored in the PGC information table work area 21. It should be noted here that when the recording of an AV file has just been completed, the user-defined PGC information table is empty (shown as "NULL" in Figure 76). In Figure 76, the original PGC information #1 includes the cell information set #1 showing the section between the start time t0 and the end time t1, the cell information set #2 showing the section between the start time t1 and the end time t2, the cell information set #3 showing the section between the start time t2 and the end time t3, and the cell information set #4 showing the section between the start time t3 and the end time t4.
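The matrix addressing of the work area can be sketched as follows. This is an illustrative model under stated assumptions: PGC information sets occupy columns, cell information sets occupy rows, and None stands in for the "NULL" state of the still-empty user-defined table; the class name is hypothetical.

```python
# Sketch: a (row, column) addressed table of cell information, as in the
# PGC information table work area 21.
class PgcInfoTable:
    def __init__(self, columns):
        self.columns = columns           # column index -> list of cell info

    def cell(self, row, column):
        """Return the cell information at (row, column), or None (NULL)."""
        col = self.columns.get(column)
        if col is None or row >= len(col):
            return None
        return col[row]

# Original PGC #1 in column 0: four sections t0-t1 .. t3-t4.
table = PgcInfoTable({0: [(0, 1), (1, 2), (2, 3), (3, 4)]})
section = table.cell(2, 0)      # cell information #3: section t2 to t3
missing = table.cell(0, 1)      # user-defined table is still NULL
```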
(4-2-3) Title Recording Control Unit 22
The title recording control unit 22 records VOBs on the DVD-RAM in the same manner as the AV data recording unit 13 in the third embodiment, although in doing so the title recording control unit 22 also stores a time map table in the RTRW administration file work area 24, generates the VOB information, and generates the original PGC information that is stored in the PGC information table work area 21. When generating the original PGC information, the title recording control unit 22 follows the procedure described below. First, upon receiving notification from the recording-editing-reproduction control unit 12 that the record key was pressed, the title recording control unit 22 secures a row area in the PGC information table work area 21. Then, after the AV data recording unit 13 has assigned an AV file identifier and a VOB identifier to the VOB to be newly recorded, the title recording control unit 22 receives these identifiers and stores them in the secured row area corresponding to a newly released PGC number. Then, when the encoding of the VOB is started, the title recording control unit 22 instructs the MPEG encoder 2 to transfer the PTS of the first video frame. When the encoder control unit 2g has transferred this PTS for the first video frame, the title recording control unit 22 stores this value and waits for the user to perform a marking operation.
Figure 80A shows how data input and output are performed between the components shown in Figure 75 when a marking operation is performed. While viewing the video images displayed on the TV monitor 72, the user presses the mark key on the remote control 71. This marking operation is reported to the title recording control unit 22 via the route shown as (1), (2), (3) in Figure 80A. The title recording control unit 22 then obtains the PTS for the point at which the user pressed the mark key from the encoder control unit 2g, as shown by (4) in Figure 80A, and stores this as the time information. The title recording control unit 22 repeatedly performs the above processing while a VOB is being encoded. If the user presses the stop key during the generation of the VOB, the title recording control unit 22 instructs the encoder control unit 2g to transfer the presentation end time of the last video frame to be encoded. Once the encoder control unit 2g has transferred this presentation end time of the last video frame to be encoded, the title recording control unit 22 stores this as the time information. By repeating the above processing until the encoding of a VOB is completed, the title recording control unit 22 ends up storing the AV file identifier, the VOB identifier, the presentation start time of the first video frame, the presentation start time of each video frame corresponding to a point where a marking operation was performed, and the presentation end time of the final video frame. From this stored time information, the title recording control unit 22 sets the start time and the end time of each section and the corresponding AV file identifier and VOB identifier as a set of cell information stored in a newly secured row in the PGC information table work area 21. In doing so, the title recording control unit 22 newly generates original PGC information. On completing the above generation, the title recording control unit 22 associates this original PGC information with the assigned PGC number and, in the PGC information table work area 21, generates a title search indicator that has the type information showing that this PGC information is original PGC information, and a title recording history showing the date and time at which the recording of this PGC information was completed. It should be noted here that if the title reproduction control unit 23 can detect when there is a large change in the content of the scenes, the user-defined PGC information generator 25 can automatically obtain the PTS for the points in the scenes at which these scene changes occur and automatically set these PTS in cell information sets. The generation of a time map table or VOB information is not part of the essence of this embodiment, and will not be explained.
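The way the stored time information becomes original PGC information can be sketched as follows. The first-frame PTS, each marked PTS, and the final-frame end time partition the VOB into consecutive cell sections. The function and field names are assumptions made for illustration, not the apparatus's actual interface.

```python
# Sketch: build one cell information tuple per section from the time
# information stored by the title recording control unit 22.
def build_cells(first_pts, marked_pts, last_end_pts, av_file_id, vob_id):
    """Return one (av_file_id, vob_id, start, end) tuple per section."""
    boundaries = [first_pts] + sorted(marked_pts) + [last_end_pts]
    return [(av_file_id, vob_id, boundaries[k], boundaries[k + 1])
            for k in range(len(boundaries) - 1)]

# A recording marked three times yields four cells, as in Figure 76.
cells = build_cells(first_pts=0, marked_pts=[120, 300, 450],
                    last_end_pts=600, av_file_id=1, vob_id=1)
```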
(4-2-4) Title Reproduction Control Unit 23
The title reproduction control unit 23 performs full or partial reproduction for any of the titles recorded in the current directory indicated by the recording-editing-reproduction control unit 12. This is described in more detail below. When, as shown in Figure 77A, one of the directories is selected as the current directory and the user gives an indication for the reproduction of one of the titles stored in this directory, the title reproduction control unit 23 displays the screen shown in Figure 77A, reads the original PGC information table and the user-defined PGC information table in the RTRW administration file in that directory, and has the user select the full reproduction or partial reproduction of one of the original PGCs or user-defined PGCs in the current directory. Figure 77B shows the PGCs and cells displayed as the list of potential operation targets. The PGC information sets and cell information representing these PGCs and cells are the same as those shown in the example in Figure 76. The original PGCs appearing on this interactive screen are shown in a simple graph with time on the horizontal axis, with each original PGC displayed together with the date and time at which it was recorded. In Figure 77B, the menu at the bottom right of the screen shows whether full or partial reproduction is to be performed for the video title in the current directory. By pressing the "1" or "2" key on the remote control 71, the user can select the full reproduction or the partial reproduction of the video title. If the user selects full reproduction, the title reproduction control unit 23 has the user select one of the PGCs as the operation target, while if the user selects partial reproduction, the title reproduction control unit 23 has the user select one of the cells as the operation target. When the full reproduction of a PGC has been selected, the title reproduction control unit 23 extracts the cells of the PGC selected as the operation target and, by referring to a time map table such as that shown in Figure 71, reproduces the sections indicated by the cells one by one. On completing the reproduction of the sections, the title reproduction control unit 23 has the interactive screen shown in Figure 77B displayed, and awaits the next user selection.
Figure 78A is a flowchart showing the processing when cell information sets are partially reproduced. First, in step S271, the title reproduction control unit 23 reads the C_V_S_PTM and C_V_E_PTM of the cell information to be reproduced from the original PGC information or the user-defined PGC information. Then, in step S272, the title reproduction control unit 23 specifies the address of the VOBU (START) that includes the image data assigned C_V_S_PTM. In step S273, the title reproduction control unit 23 specifies the address of the VOBU (END) that includes the image data assigned C_V_E_PTM, and in step S274, the title reproduction control unit 23 reads the section from the VOBU (START) to the VOBU (END) of the present VOB. In step S275, the title reproduction control unit 23 instructs the MPEG decoder 4 to decode the read VOBUs. In step S276, the title reproduction control unit 23 transfers the cell presentation start time (C_V_S_PTM) and the cell presentation end time (C_V_E_PTM) to the decoder control unit 4k of the MPEG decoder 4 as the valid reproduction section information, together with a request for decoding processing. The reason the title reproduction control unit 23 transfers the valid reproduction section information to the MPEG decoder 4 is that the decoder control unit 4k in the MPEG decoder 4 would otherwise decode image data that is not inside the section indicated by the cell. In more detail, the unit for the decoding processing of the MPEG decoder 4 is a VOBU, so that the MPEG decoder 4 will decode the entire section from the VOBU (START) to the VOBU (END), and in doing so will have decoded image data outside the section indicated by the cell being reproduced. A cell indicates a section in video field units, so a method is necessary to prohibit the decoding and reproduction of the image data outside the section. To prohibit the reproduction of this image data, the title reproduction control unit 23 transfers the valid reproduction section information to the MPEG decoder 4. Figure 78B shows how only the section between the cell presentation start time (C_V_S_PTM) and the cell presentation end time (C_V_E_PTM), of the area between the VOBU (START) and the VOBU (END), is reproduced. Upon receiving this valid reproduction section information, the MPEG decoder 4 can suppress the display output of an appropriate number of video fields from the start of the VOBU (START) up to the C_V_S_PTM, and the display output of an appropriate number of video fields from the C_V_E_PTM to the end of the VOBU (END). For the hardware construction shown in Figure 17, the disk access unit 3 reads the VOBU sequence and transfers it to the MPEG decoder 4 via the logical connection (1). The MPEG decoder 4 decodes this VOBU sequence and prohibits the display output of the part that precedes the C_V_S_PTM and the part that follows the C_V_E_PTM. As a result, only the section indicated by the cell information is reproduced. Since a set of original PGC information or user-defined PGC information includes a plurality of cell information sets, the procedure shown in Figure 78A is repeated for each cell information set included in a set of PGC information.
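The decoder-side behaviour described above can be sketched as a simple filter. This is a deliberate simplification for illustration: video fields are represented as bare presentation times, whereas a real decoder suppresses the display output of decoded fields; the function name is hypothetical.

```python
# Sketch: whole VOBUs are decoded, but only fields whose presentation
# times fall inside the valid reproduction section are shown.
def filter_for_display(decoded_field_ptms, c_v_s_ptm, c_v_e_ptm):
    """Suppress fields before C_V_S_PTM and from C_V_E_PTM onward."""
    return [ptm for ptm in decoded_field_ptms
            if c_v_s_ptm <= ptm < c_v_e_ptm]

# VOBU(START)..VOBU(END) decode to fields at these times; only the
# middle ones reach the display.
decoded = [0, 30, 60, 90, 120, 150, 180]
shown = filter_for_display(decoded, c_v_s_ptm=60, c_v_e_ptm=150)
```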
(4-2-5) Work Area 24 of the RTRW Administration File.
The RTRW administration file work area 24 is a work area for arranging the original PGC information table composed of the plurality of original PGC information sets generated in the PGC information table work area 21, the user-defined PGC information table composed of a plurality of user-defined PGC information sets, the title search indicators, and the VOB information sets, according to the logical format shown in Figure 70. The common file system unit 10 writes the data arranged in the RTRW administration file work area 24 into the RTRW directory as a non-AV file, and in doing so stores an RTRW administration file in the RTRW directory.
(4-2-6) User-Defined PGC Information Generator 25.
The user-defined PGC information generator 25 generates user-defined PGC information based on a set of PGC information recorded in the RTRW administration file of the current directory. Two types of cell information can be present in the user-defined PGC information (called user-defined cell information sets): a first type indicating an area within a section indicated by the cell information in an existing set of PGC information, and a second type indicating the same section as a set of cell information in an existing set of PGC information. The user-defined PGC information generator 25 generates these two types of cell information using different methods. To generate the first type of user-defined cell information, which indicates an area within a section indicated by existing cell information, the user-defined PGC information generator 25 has the title reproduction control unit 23 perform partial reproduction of the section indicated by the existing cell information. During partial reproduction of this section, the user-defined PGC information generator 25 watches for marking operations performed by the user, and generates cell information sets with the times of the marking operations as the start point and the end point. In this way, the user-defined PGC information generator 25 generates user-defined PGC information composed of this first type of cell information. Figures 79A and 79B show how the user uses the TV monitor 72 and the remote control 71 when generating the user-defined PGC information. Figure 80B shows the data input and transfer between the components shown in Figure 75 when a marking operation is performed. As shown in Figure 79A, the user views the video images displayed on the TV monitor 72 and presses the mark key on the remote control 71 at the beginning of a desired scene. After this, the desired scene ends, as shown in Figure 79B, and the video images change to content in which the user has no interest. The user therefore presses the mark key again. This marking operation is reported to the user-defined PGC information generator 25 via the route shown as (1), (2), (3) in Figure 80B. The user-defined PGC information generator 25 then obtains the PTS of the points at which the user pressed the mark key from the MPEG decoder 4, as shown by (4) in Figure 80B, and stores the PTS as the time information. The user-defined PGC information generator 25 then generates a set of cell information by joining the appropriate AV file identifier and VOB identifier to a pair of stored PTS that form the start point and the end point of a section, and stores this cell information in a newly secured row area of the PGC information table work area 21, as shown by (5) in Figure 80B. When generating user-defined cell information of the second type, which indicates the same section as an existing set of cell information, the user-defined PGC information generator 25 simply copies the existing cell information into a different row area in the PGC information table work area 21. In more detail, the user-defined PGC information generator 25 secures a row area for a row in the RTRW administration file work area 24, and assigns a new user-defined PGC information identifier to this row area. Once the cell information to be used in the present user-defined PGC information has been indicated, out of the cell information sets in the PGC information already stored in the PGC information table work area 21, using a combination of a row number and a column number, the user-defined PGC information generator 25 reads the cell information and copies it into the newly secured row area in the PGC information table work area 21.
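The two generation methods just described can be sketched side by side. This is an illustrative sketch: cell information is modeled as a plain tuple, the table as a column-indexed dictionary, and both function names are assumptions.

```python
# Sketch of the two ways user-defined cell information is produced:
# a new section bounded by a pair of marking-operation PTS, or a copy of
# an existing cell information set addressed by (row, column).
def cell_from_marks(av_file_id, vob_id, mark_start_pts, mark_end_pts):
    """First type: an area within an existing section, bounded by marks."""
    return (av_file_id, vob_id, mark_start_pts, mark_end_pts)

def cell_by_copy(table, row, column):
    """Second type: the same section as existing cell information."""
    return tuple(table[column][row])     # copy, leaving the source intact

existing = {0: [(1, 1, 0, 300), (1, 1, 300, 600)]}   # original PGC column
new_cells = [cell_from_marks(1, 1, 120, 250),
             cell_by_copy(existing, 1, 0)]
```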
(4-2-7) Multi-Stage Editing Control Unit 26
The multi-stage editing control unit 26 controls the title reproduction control unit 23, the user-defined PGC information generator 25, and the seamless linking unit 20 to perform a multi-stage editing process that includes: 1. virtual editions, achieved by defining user-defined PGC information; 2. previews that allow the user to see the video images that would be obtained by a real edition, based on the results of a virtual edition; 3. seamless linking, as described in the first and second embodiments; and 4. real editions, performed by linking AV files as described in the third embodiment.
(4-2-7-1) Procedure for Multi-Stage Editing by the Multi-Stage Editing Control Unit 26.
The following is a description of the specific procedure for multi-stage editing performed by the multi-stage editing control unit 26. When the user selects a virtual edit using the remote control 71 in response to the interactive screen shown in Figure 77A, the multi-stage editing control unit 26 accesses the RTRW directory, has the common file system unit 10 read the RTRW administration file from the RTRW directory, and has the RTRW administration file stored in work area 24 of the RTRW administration file. Then, from the RTRW administration file stored in work area 24 of the RTRW administration file, the multi-stage editing control unit 26 transfers the original PGC information table, the user-defined PGC information table, and the title search indicators to work area 21 of the PGC information table, and transfers the time map table to the work area of the time map table. Based on the transferred original PGC information table, the multi-stage editing control unit 26 displays the interactive screen shown in Figure 85 and waits for the next user indication. Figure 85 shows an example of the interactive screen displayed by the TV monitor 72 to have the user select the sections for the cells of a user-defined PGC in a virtual edit. This interactive screen displays the original PGCs and the user-defined PGCs as simple graphs, where the horizontal axis represents time. The date and time of recording of each original PGC and user-defined PGC are also displayed. This interactive screen displays the plurality of cells as a horizontal row of rectangles. The user can select any of these rectangles using the cursor keys on the remote control 71. These original PGCs and cells are the same as those shown in Figure 76, and the following describes the updating of the original PGC information table, the user-defined PGC information table, and the title search indicators with Figure 76 as the initial state. Figure 81 is a flowchart showing the processing of the multi-stage editing control unit 26 when defining a user-defined PGC. In this flowchart, the variable j indicates one of the plurality of original PGCs that are arranged vertically on the interactive screen, and the variable k indicates one of the plurality of cells that are arranged horizontally on the interactive screen. The variable m is the PGC number to be assigned to the user-defined PGC information set being newly defined in the RTRW administration file, and the variable n is the cell number to be assigned to the cell information set being newly defined in the RTRW administration file. In step S201, the multi-stage editing control unit 26 substitutes into the variable m the value given by adding one to the last number of the original PGC information in the RTRW administration file, and substitutes "1" into the variable n. In step S202, the multi-stage editing control unit 26 adds a space for the m-th user-defined PGC information to the user-defined PGC information table, and in step S203 the multi-stage editing control unit 26 waits for the user to perform a key operation. Once the user has performed a key operation, in step S204 the multi-stage editing control unit 26 sets the mark for the pressed key, out of the marks corresponding to the keys on the remote control 71, to "1", and in step S205 judges whether the Enter_Mark, which shows whether the enter key has been pressed, is "1". In step S206, the multi-stage editing control unit 26 judges whether the End_Mark, which shows whether the end key has been pressed, is "1". When both of these marks are "0", the multi-stage editing control unit 26 uses the Right_Mark, Left_Mark, Down_Mark, and Up_Mark, which respectively show whether the right, left, down, or up keys have been pressed, to perform the following calculations before substituting the calculation results into the variables k and j.
k = k + 1 * (Right_Mark) - 1 * (Left_Mark)
j = j + 1 * (Down_Mark) - 1 * (Up_Mark)
When the right key has been pressed, the Right_Mark is set to "1", so the variable k is incremented by "1". Conversely, when the left key has been pressed, the Left_Mark is set to "1", so the variable k is decremented by "1". In the same way, when the down key has been pressed, the Down_Mark is set to "1", so the variable j is incremented by "1", and when the up key has been pressed, the Up_Mark is set to "1", so the variable j is decremented by "1". After updating the values of the variables k and j in this manner, the multi-stage editing control unit 26 has the cell representation in row j and column k displayed in the focus state in step S208, clears all the marks assigned to the keys on the remote control 71 to zero in step S209, and returns to step S203, where it waits once more for a key operation. By repeating the procedure of steps S203 to S209 described above, the focus state can be moved up/down and left/right between the cells according to the key operations made using the remote control 71. If the user presses the enter key with any of the cells in the focus state during the above processing, the multi-stage editing control unit 26 proceeds to step S251 in Figure 82. In step S251 of Figure 82, the multi-stage editing control unit 26 prompts the user for an indication as to whether the cell information in row j and column k should be used as it is, or whether only a section within the range indicated by this cell information is to be used. When the cell information is to be used as it is, the multi-stage editing control unit 26 copies the cell representation in row j and column k to the space given as row m and column n in step S252, and defines Original_PGC#j.CELL#k as User_Defined_PGC#m.CELL#n in step S253. After this is defined, in step S254 the multi-stage editing control unit 26 increments the variable n and proceeds to step S209 in Figure 81. When only a section within the range indicated by the cell information in row j and column k is to be used, the multi-stage editing control unit 26 proceeds to step S255 to have the title reproduction control unit 23 begin partial reproduction for the cell information in row j and column k.
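The focus-movement calculation of steps S203 to S209 can be sketched as follows. The Mark flag names follow the text; the dictionary representation of the flags is an assumption made for this sketch.

```python
# Sketch of the focus-movement calculation: one key press sets one
# flag, and the column k and row j are updated from the flags.

def update_focus(k, j, marks):
    """Return the new column k and row j after one key press."""
    k = k + 1 * marks.get("Right_Mark", 0) - 1 * marks.get("Left_Mark", 0)
    j = j + 1 * marks.get("Down_Mark", 0) - 1 * marks.get("Up_Mark", 0)
    return k, j

# Pressing the down key once moves the focus from row 1 to row 2:
k, j = update_focus(1, 1, {"Down_Mark": 1})   # k = 1, j = 2
```

After each update the marks are cleared to zero, mirroring step S209, so that exactly one flag is set per key operation.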
In step S255, the multi-stage editing control unit 26 determines the circumstances of reproduction for the cell information in row j and column k. This determination is made because, when the section indicated by this cell information has already been partially reproduced, there is no need to reproduce the section once more from the beginning; in this case it is preferable for the reproduction of the section indicated by the cell information in row j and column k to begin at the position where the previous reproduction finished (step S266), this point being called the reproduction termination point t. On the other hand, when the cell information in row j and column k has not yet been reproduced, the section indicated by the cell information in row j and column k is reproduced from the start in step S255, with the processing then entering the loop formed by steps S256 and S257. Step S256 waits for the cell reproduction to finish, while step S257 waits for the user to press the mark key. When the "Yes" judgment is given in step S257, the processing proceeds to step S258, where the time information for the pressing of the mark key is obtained, and then to step S259. In step S259, the multi-stage editing control unit 26 judges whether two sets of time information have been obtained. If not, the processing returns to step S256; if so, the processing proceeds to step S260, where the two obtained sets of time information are set as the start point and the end point. One of the sets of time information obtained here marks the start of the video scene that was marked by the user during its display on the TV monitor 72, while the other set of time information marks the end of this video scene. These sets of time information are interpreted as the marking of a section in the original PGC that is especially needed by the user as material for a video edit. Accordingly, the user-defined PGC information should be generated from this section, so cell information is generated in work area 21 of the PGC information table. The processing then proceeds to step S261. In step S261, the user-defined PGC information generator 25 obtains the VOB_ID and the AV file ID in Original_PGC#j.CELL#k. In step S262, the user-defined PGC information generator 25 generates User_Defined_PGC#m.CELL#n using the obtained start point and end point, the VOB_ID, and the AV file ID. In step S263, the end point information is stored as the reproduction termination point t, and in step S254 the variable n is incremented before the processing returns to step S209. As a result of the above processing, new user-defined cell information is generated from the cell information in row j and column k. After this, another cell is set in the focus state and another set of user-defined cell information is generated from this cell, so that a set of user-defined PGC information is gradually defined one cell at a time. It should be noted here that if the reproduction based on the cell information in row j and column k in the loop process shown as steps S256 to S257 ends without a marking operation having been made, the processing returns to step S254. When it is determined that the end key has been pressed, the "Yes" judgment is given in step S206 in Figure 80B and the processing proceeds to step S213. In step S213, a menu is displayed to have the user indicate whether to define a next user-defined PGC. When the user wishes to define a new user-defined PGC and gives an indication of this, the variable m is incremented, the variable n is initialized, and the processing proceeds to steps S209 and S203.
(4-2-7-2) Specific Example of Defining User-Defined PGC Information
The following is a description of the operation when defining user-defined PGC information from a plurality of original PGC information sets that are displayed in the interactive screen image of Figure 85. Figures 86A and 86B show the relationship between the user operations made with the remote control 71 and the display processing that accompanies these operations. Figures 87A through 90 also illustrate examples of these operations, and are referred to in the following explanation. As shown in Figure 85, once cell #1, which is in row 1 and column 1, has been set to the focus state, the user presses the enter key, as shown in Figure 86B. As a result, the "Yes" judgment is given in step S205 and the processing proceeds to the flowchart in Figure 82. In steps S251 to S266 of the flowchart in Figure 82, the first cell information CELL#1A in the user-defined PGC #1 is generated based on Original_PGC#1.CELL#1 shown in Figure 86A. Once this generation is completed, the variable n is incremented in step S254, and the processing returns to step S203 via step S209 with the value of the variable n at "2". In this example, the user presses the down key once, as shown in Figure 87B, and the right key twice, as shown in Figures 87C and 87D. In step S204, the marks corresponding to the keys that have been pressed are set to "1".
As a result of the first press of the down key:
k = 1 (= 1 + 1 * 0 - 1 * 0)
j = 2 (= 1 + 1 * 1 - 1 * 0)
As a result of the first press of the right key:
k = 2 (= 1 + 1 * 1 - 1 * 0)
j = 2 (= 2 + 1 * 0 - 1 * 0)
As a result of the second press of the right key:
k = 3 (= 2 + 1 * 1 - 1 * 0)
j = 2 (= 2 + 1 * 0 - 1 * 0)
As shown in Figure 87A, cell #7, which is located in row 2 and column 3, is set in the focus state. Once the cell in row 2 and column 3 has been set to the focus state, the user presses the enter key, as shown in Figure 88B, so that the "Yes" judgment is made in step S205 and the processing proceeds to the flowchart in Figure 82. The cell information #7A, which is the second set of cell information in User_Defined_PGC#1, is then generated based on Original_PGC#2.CELL#7 located in row 2 and column 3 of the original PGC information table (see Figure 88A).
After the second set of cell information has been generated, the above processing is repeated. The user presses the enter key as shown in Figure 89B, so that cell information #11A and cell information #3A are respectively generated as the third and fourth sets of cell information in User_Defined_PGC#1. The processing returns to step S203 and, in the present example, the user then presses the end key. As a result, the End_Mark corresponding to the end key is set to "1", and the processing proceeds to step S213. Since the end key has been pressed, the multi-stage editing control unit 26 considers the definition of the user-defined PGC information #1 to be complete. In step S213, the user is asked to indicate whether he or she wishes to define another set of user-defined PGC information (the user-defined PGC information #2) following this user-defined PGC information #1. If the user wishes to do so, the variable m is incremented, the variable n is initialized, and the processing proceeds to step S209. By repeating the above processing, the user-defined PGC information #2 and the user-defined PGC information #3 are defined. As shown in Figure 91, the user-defined PGC information #2 is composed of cell #2, cell #4, cell #10B, and cell #5B, and the user-defined PGC information #3 is composed of cell #3C, cell #6C, cell #8C, and cell #9C. Figure 91 shows the contents of the user-defined PGC information table, the original PGC information table, and the title search indicators at the end of the virtual editing process. If the user presses the end key at this point, the interactive display shown in Figure 90 is displayed in step S215 in Figure 81, and the multi-stage editing control unit 26 waits for the user to select a set of user-defined PGC information using the up and down keys. Here, the user can select a preview by pressing the play key, and can select an edit by pressing the real edit key, with the user-defined PGC information table not yet recorded. If the user gives the indication for an operation that records a user-defined PGC, the user-defined PGC information table, which includes the new user-defined PGC generated in work area 21 of the PGC information table, is transferred to work area 24 of the RTRW administration file, where it is written into the part of the RTRW administration file held in work area 24 of the RTRW administration file corresponding to the user-defined PGC information table. At the same time, file system commands are issued so that a newly generated title search indicator for the newly defined PGC information is added to the title search indicators already present in the RTRW administration file transferred to work area 24 of the RTRW administration file. Figure 83 is a flowchart showing the processing during a preview or a real edit. The following is a description of the processing when a preview VOB link operation is performed, with reference to the flow diagram in Figure 83. Figures 92A-92B and 93A-93C show the relationship between operations made using the remote control 71 and the display processing that accompanies these operations. In step S220 of the flowchart of Figure 83, the first number in the user-defined PGC information table is substituted into the variable j, and in step S221 a key operation is awaited. When the user performs a key operation, in step S222 the mark corresponding to the key pressed by the user is set to "1". In step S223, it is judged whether the Play_Mark, which shows whether the play key has been pressed, is "1", and in step S224 it is judged whether the RealEdit_Mark, which shows whether the real edit key has been pressed, is "1". When both of these marks are "0", the processing proceeds to step S225, where the following calculation is performed using the Up_Mark and Down_Mark values that respectively show whether the up and down keys have been pressed. The results of this calculation are substituted into the variable j.
j = j + 1 * (Down_Mark) - 1 * (Up_Mark)
When the user has pressed the up key, the Up_Mark will be set to "1", meaning that the variable j is decremented. Conversely, when the user has pressed the down key, the Down_Mark will be set to "1", meaning that the variable j is incremented.
Once the variable j has been updated in this manner, in step S226 the image in the display corresponding to the PGC information placed in row j is set in the focus state. In step S227, all the marks corresponding to the keys of the remote control 71 are cleared to zero and the processing returns to step S221, where another key operation is awaited. This processing in steps S221 to S227 is repeated, with the focus state moving to a different set of PGC information according to the user operations of the up and down keys on the remote control 71. If the user presses the play key during the repetition of the above processing, with one of the sets of PGC information in the focus state, the Play_Mark is set to "1", the "Yes" judgment is given in step S223, and the processing proceeds to step S228. In step S228, the multi-stage editing control unit 26 instructs the title reproduction control unit 23 to reproduce the VOBs according to the PGC, out of the user-defined PGCs, that has been indicated by the user. When the PGC indicated by the user is a user-defined PGC, the cells included in the user-defined PGC will indicate sections out of the plurality of sections in one or more VOBs in a user-defined order. Since this reproduction will not satisfy the conditions necessary for seamless reproduction that are described in the first and second embodiments, the image display and data transfer will stop at each cell boundary during playback before advancing to the next cell. In other words, because the conditions necessary for seamless reproduction of the cells are not met, the video or audio presentation will be interrupted. However, the object of this operation is only to give the user a preview of the result of linking a plurality of scenes, so this object is achieved in spite of the interruptions.
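The preview reproduction described above simply walks the cells of the selected user-defined PGC in order. A minimal sketch, assuming a hypothetical play_section function supplied by the playback side:

```python
# Minimal sketch of preview reproduction (step S228).  play_section is
# a hypothetical callback; playback pauses at each cell boundary
# because seamless conditions are not guaranteed between cells.

def preview(user_defined_pgc, play_section):
    played = []
    for cell in user_defined_pgc:
        # Each cell indicates a section of a VOB by time information.
        play_section(cell["vob_id"], cell["start"], cell["end"])
        played.append(cell["vob_id"])
    return played
```
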
(4-2-7-3) Processing for a Preview of a Multi-Stage Edit and for a Real Edit
The operation for linking the VOBs in a real edit is described below. Figures 94A to 94C show the relationship between the user operations of the remote control 71 and the display processing that accompanies these key operations. The user presses the up key, as shown in Figure 94B, to have cell #1A set in the focus state, and this is reflected in the display screen shown on the TV monitor 72 as in Figure 94A. If the user then presses the real edit key, as shown in Figure 94C, the "Yes" judgment is made in step S224 in Figure 83, and the processing of step S8 to step S16 in the flow diagram of Figure 43 described in the third embodiment is performed. After this processing of the third embodiment finishes, the processing proceeds to step S237 in Figure 84. After the variable n is set to "1" in step S237, a search is made in step S238 for the Original_PGC#j.CELL#k that was used when User_Defined_PGC#m.CELL#n was generated, and in step S239 it is judged whether this Original_PGC#j exists. If so, this Original_PGC#j is deleted in step S240, and then a search is performed in step S241 for the User_Defined_PGC#q sets that were generated from this Original_PGC#j. In step S242, it is judged whether there is at least one User_Defined_PGC#q, and if so, all the User_Defined_PGC#q sets are deleted in step S243. In step S244 it is judged whether the value of the variable n corresponds to the last number of the cell information, and if not, the processing proceeds to step S245, where the variable n is incremented to indicate the next set of cell information in the PGC information, before the processing returns to step S238. The loop process in steps S238 to S245 is repeated until the variable n reaches the last number of the cell information in the PGC information.
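The clean-up loop of steps S238 to S245 can be sketched as follows. This is a hedged sketch: the dict representation of the PGC tables is an assumption, and the function name is hypothetical.

```python
# Sketch of the clean-up after a real edit: every original PGC whose
# cells were consumed by the real edit is deleted, together with every
# user-defined PGC that was generated from it.

def purge_after_real_edit(used_pgc_numbers, originals, user_defined):
    """used_pgc_numbers: original PGC numbers referenced by the edited
    PGC's cells; originals: {pgc_number: info}; user_defined:
    {pgc_number: source_original_pgc_number}."""
    for orig_id in used_pgc_numbers:            # one pass per cell (variable n)
        originals.pop(orig_id, None)            # delete Original_PGC#j if present
        for q in [q for q, src in user_defined.items() if src == orig_id]:
            del user_defined[q]                 # delete derived User_Defined_PGC#q
    return originals, user_defined
```
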
The sections indicated by the user-defined PGC information #1 cover all of VOB #1, VOB #2, and VOB #3, so all of these VOBs undergo the real edit. The original PGC information sets that were used to generate the cell information included in the user-defined PGC information #1 indicate the VOBs that undergo the real edit, so all of these information sets are deleted from the original PGC information. The user-defined PGC sets that were generated from these original PGC information sets also indicate the VOBs that are subjected to the real edit, so all of these user-defined PGC information sets are also deleted. The "Yes" judgment is made in step S244, so the processing proceeds to step S246, and, out of the free PGC numbers obtained by deleting the original PGC information sets, the lowest number is obtained as the PGC number #e. Then, in step S247, the cell information is updated using the AV file ID assigned to the AV file and the VOB_ID after the append command, and in step S248 the PGC number of the User_Defined_PGC#q is updated to the PGC number #e. Meanwhile, in the title search indicators, the type information is updated to the original type. Figure 95 shows an example of the PGC information table and the title search indicators after the deletion of the original PGC information sets and the user-defined PGC information that achieves a real edit. Since VOB #1, VOB #2, and VOB #3, indicated by the sections in the user-defined PGC information #1, are subjected to the real edit, the original PGC information #1, the original PGC information #2, the original PGC information #3, the user-defined PGC information #2, and the user-defined PGC information #3 have all been deleted. Conversely, what was previously the user-defined PGC information #1 has been redefined as the original PGC information #1.
Once the PGC information has been updated in work area 21 of the PGC information table as described above, the new original PGC information is transferred to work area 24 of the RTRW administration file, where it is used to overwrite the RTRW administration file currently stored in work area 24 of the RTRW administration file. At the same time, the title search indicator for this newly generated original PGC information is transferred to work area 24 of the RTRW administration file, where it is used to overwrite the title search indicators already present in the RTRW administration file. Once the user-defined PGC information table and the title search indicators have been written, file system commands are issued so that the RTRW administration file stored in work area 24 of the RTRW administration file is written to the RTRW directory.
With this embodiment, the sections that are to be used as materials for a real edit are indicated by the user-defined cell information, and these can be freely arranged to provisionally decide the reproduction path. When the user wishes to adjust a reproduction path of the editing materials, this can be achieved without having to temporarily produce a VOB, so the editing of the VOB materials can be done in a short time using a simple method. This also means that there is no need to use up the storage capacity of the DVD-RAM to hold a temporarily produced VOB. Since the provisional determination of a reproduction path can be achieved by defining only a set of user-defined PGC information, the user can produce many variations of the reproduction path in a short time. The user-defined cell information sets are indicated using the time information for the sections in the VOBs, so that the indicated VOBs can be maintained in the state in which they were originally recorded. The user can generate a plurality of user-defined PGC information sets for different reproduction paths and then preview these paths to find the most suitable of these reproduction paths. The user can then indicate a real edit for his or her preferred reproduction path, and in this way have the VOBs processed according to the information selected by the user. This means that the user can perform an advanced editing process that directly rewrites the VOBs that are already stored on an optical disc. While the original VOBs will be effectively erased from the disc, the user is able to verify the result of this before giving the real edit indication, so this poses no particular problem for the present invention. Once a real edit has been made, the title type in the title search indicator of the user-defined PGC information used for the real edit will be set to the "original PGC" type information, so this can be used as the basis for subsequent video editing operations. As described above, a single video data editing apparatus using only an optical disc can perform advanced video editing, whereby a user can select from a plurality of freely chosen potential arrangements of the source material. As a result, by using the present video data editing apparatus, a large number of video enthusiasts will be able to perform advanced editing operations that were previously considered out of reach of conventional home video equipment. It should be noted that the time information can be taken from the mark points in the cell information and managed, together with information such as the addresses taken from the time map table, in the form of a table. By doing so, this information can be presented to the user as potential selections on a screen that shows the pre-editing status. Reduced images (known as "thumbnails") can also be generated for each mark point and stored as separate files, with indicator information also being recorded for each thumbnail. When the cell information is displayed in the pre-editing stage, these thumbnails can be displayed to show the potential selections that can be made by the user. The processing of the components such as the title reproduction control unit 23 (see Figure 78) and the processing of the multi-stage editing control unit 26 (Figures 81 to 84) that were described in this fourth embodiment using the flowcharts can be achieved by a machine language program. This machine language program can be distributed and sold recorded on a recording medium. Examples of this recording medium are an IC card, an optical disc, or a flexible disk. The machine language program recorded on the recording medium can then be installed in a normal personal computer. By executing the installed machine language programs, the normal personal computer can achieve the functions of the video data editing apparatus of this fourth embodiment. As a final note regarding the relationship between the VOBs and the original PGC information, it is preferred that one original PGC information set be provided for each VOB. Although the present invention has been fully described by way of example with reference to the accompanying drawings, it should be noted that various changes and modifications will be apparent to those skilled in the art. Therefore, unless such changes and modifications depart from the scope of the present invention, they should be considered as being included herein.
Industrial Applicability
The video data editing apparatus, the optical disc, and the computer-readable recording medium storing an editing program of the present invention enable the editing of video images stored on an optical disc to be performed easily and in a short time. This makes them highly suitable for home video equipment, and creates a new market for home video editing apparatuses.
It is noted that in relation to this date, the best method known by the applicant to carry out the present invention, is the conventional one for the manufacture of the objects to which it refers.
Having described the invention as above, the content of the following is claimed as property:
Claims (15)
1. A video data editing apparatus for an optical disc, the optical disc recording at least one video data file divided into a plurality of segments, each segment being recorded in a consecutive area within a zone on the optical disc, the video data editing apparatus characterized in that it comprises: a detection means for detecting a first segment, of the plurality of segments, wherein a length of the consecutive area is below a predetermined length; and a linking means for linking the first detected segment with at least part of a second segment and, so that a total continuous length of the first segment and a linked part of the second segment is at least equal to the predetermined length, moving at least one of the first segment and the linked part of the second segment to a different area of the optical disc, the second segment including video data that is reproduced either immediately before or immediately after reproduction of the video data in the first segment, the different area being completely located within one zone of the optical disc.
2. The video data editing apparatus according to claim 1, characterized in that the linking means includes: a first measurement unit for measuring a continuous length of an empty area on the optical disc on at least one side of a recording area of the first segment detected by the detection means; a second measurement unit for measuring a continuous length of an empty area on the optical disc on at least one side of a recording area of the second segment; a first judgment unit for judging whether a continuous length of any empty area measured by the first measurement unit is greater than a data size of the second segment; a first movement unit for moving, when a judgment of the first judgment unit is affirmative, the second segment to the empty area judged as being larger than the data size of the second segment, so that the first segment and the second segment are recorded on the disc in the order of reproduction; a second judgment unit for judging, when the judgment of the first judgment unit is negative, whether a continuous length of any empty area measured by the second measurement unit is greater than a data size of the first segment; and a second movement unit for moving, when a judgment of the second judgment unit is affirmative, the first segment to the empty area judged to be larger than the data size of the first segment, so that the first segment and the second segment are recorded on the disc in the order of reproduction.
3. The video data editing apparatus according to claim 2, characterized in that the linking means further includes: a search unit for searching, when the judgments of both the first judgment unit and the second judgment unit are negative, the optical disc for an empty area whose continuous length is greater than a length L, where the length L is a total length of the first segment and the second segment; and a third movement unit for moving, when the search unit has found an empty area with a continuous length greater than the length L, the first segment and the second segment to the empty area found by the search unit.
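The decision order recited in claims 2 and 3 can be illustrated by the following non-normative sketch; the measurement values, parameter names, and return strings are assumptions made for clarity and are not part of the claims.

```python
# Non-normative sketch of the relocation decision: try to move the
# second segment next to the first, then the first next to the second,
# then both into a larger empty area of length greater than L.

def choose_relocation(first_len, second_len,
                      empty_after_first, empty_near_second, largest_empty):
    """Decide which segment(s) to move so the two segments can be read
    contiguously in playback order.  All lengths share one unit."""
    if empty_after_first > second_len:          # first judgment unit
        return "move second"
    if empty_near_second > first_len:           # second judgment unit
        return "move first"
    L = first_len + second_len
    if largest_empty > L:                       # search unit (claim 3)
        return "move both"
    return "no area found"
```
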
4. The video data editing apparatus according to claim 3, characterized in that it further comprises: a third judgment unit for judging, when the search unit has found an empty area with a continuous length greater than the length L, whether the length L is below a maximum length S, the maximum length S being at least twice the predetermined length, wherein the third movement unit moves the first segment and the second segment to the empty area only when the length L is below the maximum length S, the linking means further including: a fourth movement unit for moving, when the length L is not below the maximum length S, the entire first segment and only the linked part of the second segment to the empty area found by the search unit.
5. The video data editing apparatus according to claim 2, characterized in that it further comprises: a storage means for storing re-encoded data obtained by re-encoding a section of the video data read by the video data editing apparatus during an editing operation; a fourth judgment unit for judging, when a judgment of the first judgment unit is affirmative, whether the first segment is a remaining part of a segment that was originally recorded on the optical disc but has had a data section read by the video data editing apparatus during the editing operation; and a first recording unit for recording, when a judgment of the fourth judgment unit is affirmative, the re-encoded data stored by the storage means in the empty area, the first movement unit moving the second segment to a position on the optical disc immediately following a recording position of the re-encoded data.
6. The video data editing apparatus according to claim 5, characterized in that it further comprises: a second recording unit for recording, when the judgment of the second judgment unit is affirmative and the first segment is a remaining part, the re-encoded data stored by the storage means immediately after a recording position of the first segment after the first segment has been moved by the second movement unit.
7. A video data editing apparatus for an optical disc, the optical disc recording at least one video data file divided into a plurality of segments, each segment being recorded in a consecutive area within a zone on the optical disc, the video data editing apparatus characterized in that it comprises: a storage means for storing recording data, which is video data to be reproduced immediately after the video data in one of the segments and immediately before the video data in a different segment; a receiving means for receiving an instruction to record the recording data; a first measuring means for measuring, when an instruction to record the recording data has been received, a continuous length of a following region, the following region being immediately after a first occupied area for the segment whose video data is to be reproduced immediately before the recording data, the following region being in the same zone as the first occupied area; a second measuring means for measuring, when an instruction to record the recording data has been received, a continuous length of a preceding region, the preceding region being immediately before a second occupied area for the segment whose video data is to be reproduced immediately after the recording data, the preceding region being in the same zone as the second occupied area; and a recording means for recording the recording data on the optical disc, based on the continuous lengths measured by the first measuring means and the second measuring means.
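The two measuring means of claim 7 simply walk outward from each segment's occupied area and count contiguous free sectors. A minimal sketch under the same illustrative allocation-map assumption as above (True = occupied, False = empty; indices and names are hypothetical):

```python
# Illustrative first/second measuring means from claim 7: count the
# empty run immediately after one occupied area, and the empty run
# immediately before another.

def measure_following(alloc_map, occ_end):
    """Length of the empty run starting just after index occ_end."""
    n, i = 0, occ_end + 1
    while i < len(alloc_map) and not alloc_map[i]:
        n += 1
        i += 1
    return n

def measure_preceding(alloc_map, occ_start):
    """Length of the empty run ending just before index occ_start."""
    n, i = 0, occ_start - 1
    while i >= 0 and not alloc_map[i]:
        n += 1
        i -= 1
    return n
```

In the apparatus these counts would additionally be clipped at the zone boundary, since each region must lie in the same zone as its occupied area.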
8. The video data editing apparatus according to claim 7, characterized in that the recording means includes: a first judgment unit for judging whether the continuous length of the following region measured by the first measuring means and the continuous length of the preceding region measured by the second measuring means exceed a data size of the recording data; and a first recording unit for recording the recording data in a region, of the preceding region and the following region, whose continuous length is judged by the first judgment unit to exceed the data size of the recording data.
9. The video data editing apparatus according to claim 8, characterized in that the recording means includes: a second judgment unit for judging, when the first judgment unit finds that the continuous lengths of both the following region and the preceding region are below the data size of the recording data, whether a combined length L of the following region and the preceding region exceeds the data size of the recording data; and a second recording unit for dividing the recording data, when the second judgment unit finds that the combined length L exceeds the data size of the recording data, to obtain divided parts and for recording the respective divided parts in the following region and in the preceding region.
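Claims 8 and 9 together form a three-tier placement policy: use one region if it fits, split across both if their sum fits, otherwise fall through to the search of claim 10. A hedged sketch of that policy (region labels and return format are illustrative assumptions):

```python
# Illustrative placement policy combining claims 8 and 9: given the
# measured continuous lengths of the following and preceding regions,
# decide where the recording data of size data_size is written.

def place_recording_data(following_len, preceding_len, data_size):
    """Return a list of (region, amount) writes, or None if neither
    tier applies and the claim-10 search must run."""
    if following_len >= data_size:
        return [("following", data_size)]        # claim-8 path
    if preceding_len >= data_size:
        return [("preceding", data_size)]        # claim-8 path
    if following_len + preceding_len >= data_size:
        # claim-9 path: divide the data across both regions
        return [("following", following_len),
                ("preceding", data_size - following_len)]
    return None
```

Splitting across the two regions keeps both halves adjacent to the segments they are played between, which limits the seek distance during playback.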
10. The video data editing apparatus according to claim 9, characterized in that the recording means includes: a search unit for searching, when the combined length L is below the data size of the recording data, for an empty area on the optical disc whose continuous length is not greater than a maximum length; a third judgment unit for judging whether a combined length L2 of the recording data and one of the segment whose video data is to be reproduced immediately before the recording data and the segment whose video data is to be reproduced immediately after the recording data is below a predetermined maximum value; a first movement unit for moving a segment for which the third judgment unit gives an affirmative judgment to the empty area found by the search unit; and a third recording unit for recording the recording data in the empty area to which the first movement unit has moved the segment.
11. The video data editing apparatus according to claim 9, characterized in that the recording means includes: a second movement unit for calculating, when the combined length L is below the data size of the recording data and the third judgment unit has given a negative judgment for both the segment whose video data is to be reproduced immediately before the recording data and the segment whose video data is to be reproduced immediately after the recording data, a data size equal to a difference between a predetermined size and the data size of the recording data, for cutting off a data part with the calculated data size from at least one of the segments whose respective video data is reproduced immediately before and immediately after the recording data, and for moving the cut-off data part to the empty area; and a fourth recording unit for recording the recording data in the empty area.
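The calculation in claim 11 fixes how much neighbouring segment data must be cut off: the difference between a predetermined size and the recording data's size. How that amount is apportioned between the two neighbouring segments is not spelled out in the claim, so the sketch below adopts one plausible split (take from the preceding segment first) purely as an assumption:

```python
# Hedged sketch of the claim-11 calculation: determine how much data
# to cut off from the segments before/after the recording data and
# move to the empty area. The before-first apportioning rule is an
# illustrative assumption, not taken from the patent.

def plan_cut(predetermined_size, recording_size, before_len, after_len):
    """Return (cut_from_before, cut_from_after) in abstract units."""
    need = predetermined_size - recording_size
    if need <= 0:
        return (0, 0)                      # nothing needs to move
    take_before = min(need, before_len)
    take_after = min(need - take_before, after_len)
    return (take_before, take_after)
```

Once the cut-off parts have been moved into the empty area, the fourth recording unit writes the recording data into the space freed next to its neighbours.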
12. A video data editing apparatus for an optical disc, the optical disc recording at least one video data file divided into a plurality of segments, each segment being recorded in a consecutive area within a zone on the optical disc, the video data editing apparatus characterized in that it comprises: a detection means for detecting a first segment, of the plurality of segments, whose consecutive area on the optical disc is preceded by an empty area; a backup data generating means, connected to a hard disk drive, for reading the detected first segment and writing it to the hard disk drive to generate backup data; and a recording means for recording the backup data for the first segment stored on the hard disk drive in the empty area preceding the first segment.
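The flow of claim 12 is detect, stage, then record: every segment preceded by an empty area is copied to the hard disk drive, and that copy is then written into the preceding empty area. A minimal sketch in which the segment records, the flag name, and the return format are all illustrative assumptions:

```python
# Illustrative claim-12 flow: stage a backup copy of each segment that
# is preceded by an empty area, then record that copy into the gap.
# Dicts with 'data' and 'preceded_by_empty' stand in for real segment
# descriptors; actual I/O to the drives is omitted.

def backup_segments(segments):
    """Return (staged_copies, recorded_copies) mimicking the two phases."""
    staged, recorded = [], []
    for seg in segments:
        if seg["preceded_by_empty"]:
            copy = bytes(seg["data"])   # phase 1: stage on the hard disk drive
            staged.append(copy)
            recorded.append(copy)       # phase 2: record into the preceding gap
    return staged, recorded
```

Staging through the hard disk drive means the optical disc is never read and written at the same position in one pass, which a single-pickup drive could not do.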
13. A recording medium storing an editing program that is read by a computer, the editing program editing data on an optical disc having a plurality of zones, with at least one file storing video data that is divided into a plurality of segments, each stored within one of the zones; the editing program characterized in that it comprises: a detection step for detecting a first segment, of the plurality of segments, whose consecutive area has a length below a predetermined length; and a link step for linking the detected first segment with at least part of a second segment, and for making a total continuous length of the first segment and a linked part of the second segment at least equal to the predetermined length, by changing a position on the optical disc of at least one of the first segment and the linked part of the second segment, the second segment including video data that is reproduced one of immediately before and immediately after the reproduction of the video data in the first segment.
14. The recording medium according to claim 13, characterized in that the link step includes: a first measurement sub-step for measuring a continuous length of an empty area on the optical disc on at least one side of a recording area of the first segment detected by the detection step; a second measurement sub-step for measuring a continuous length of an empty area on the optical disc on at least one side of a recording area of the second segment; a first judgment sub-step for judging whether a continuous length of any empty area measured by the first measurement sub-step is larger than a data size of the second segment; a first movement sub-step for moving, when a judgment of the first judgment sub-step is affirmative, the second segment to the empty area judged to be larger than the data size of the second segment, so that the first segment and the second segment are recorded on the disc in the reproduction order; a second judgment sub-step for judging, when the judgment of the first judgment sub-step is negative, whether a continuous length of any empty area measured by the second measurement sub-step is greater than a data size of the first segment; and a second movement sub-step for moving, when a judgment of the second judgment sub-step is affirmative, the first segment to the empty area judged to be greater than the data size of the first segment, so that the first segment and the second segment are recorded on the disc in the reproduction order.
15. The recording medium according to claim 13, characterized in that the computer stores re-encoded data obtained by re-encoding a section of video data read during an editing operation, and the link step includes: a fourth judgment sub-step for judging, when a judgment of the first judgment sub-step is affirmative, whether the first segment is a remaining part of a segment that was originally recorded on the optical disc but has had a data section read during the editing operation; and a first recording sub-step for recording, when a judgment of the fourth judgment sub-step is affirmative, the re-encoded data stored by the computer in the empty area, the first movement sub-step moving the second segment to a position on the optical disc that follows immediately after a recording position of the re-encoded data.
Applications Claiming Priority (2)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| JP9/251990 | 1997-09-17 | ||
| JP10/169616 | 1998-06-17 |
Publications (1)
| Publication Number | Publication Date |
|---|---|
| MXPA99004453A true MXPA99004453A (en) | 2000-01-01 |
Similar Documents
| Publication | Publication Date | Title |
|---|---|---|
| US6148140A (en) | Video data editing apparatus, optical disc for use as a recording medium of a video data editing apparatus, and computer readable recording medium storing an editing program | |
| EP1020862B1 (en) | Optical disc, computer-readable recording medium storing an editing program, reproduction apparatus for the optical disc, and computer-readable recording medium storing an reproduction program | |
| EP0903743B1 (en) | Video data editing apparatus and computer-readable recording medium storing an editing program | |
| US6487364B2 (en) | Optical disc, video data editing apparatus, computer-readable recording medium storing an editing program, reproduction apparatus for the optical disc, and computer-readable recording medium storing a reproduction program | |
| JP3050311B2 (en) | Optical disk, recording device and reproducing device | |
| JP3410695B2 (en) | Playback apparatus, playback method, and computer-readable recording medium | |
| JP2000078519A (en) | Video data editing device and computer-readable recording medium recording editing program | |
| JPH11155131A (en) | Video data editing apparatus, optical disk used by video data editing apparatus as editing medium, computer-readable recording medium storing editing program | |
| MXPA99004453A (en) | Video data editing apparatus and computer-readable recording medium storing an editing program | |
| MXPA99004448A (en) | Video data editing apparatus, optical disc for use as a recording medium of a video data editing apparatus, and computer-readable recording medium storing an editing program | |
| MXPA99004447A (en) | Optical disc, video data editing apparatus, computer-readable recording medium storing an editing program, reproduction apparatus for the optical disc, and computer-readable recording medium storing a reproduction program | |
| JP2002093125A (en) | Optical disk, video data editing device, computer readable recording medium recording editing program, optical disk reproducing device, computer readable recording medium recording reproduction program |