CN1845573A - Simultaneous interpretation video conference system and method for supporting high capacity mixed sound - Google Patents
- Publication number
- CN1845573A CN1845573A CN200610040060.1A CN200610040060A CN1845573A CN 1845573 A CN1845573 A CN 1845573A CN 200610040060 A CN200610040060 A CN 200610040060A CN 1845573 A CN1845573 A CN 1845573A
- Authority
- CN
- China
- Prior art keywords
- mixing
- simultaneous interpretation
- high capacity
- audio
- sound
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Landscapes
- Telephonic Communication Services (AREA)
- Two-Way Televisions, Distribution Of Moving Picture Or The Like (AREA)
Abstract
The invention discloses a simultaneous-interpretation video conferencing system and method that support large-capacity audio mixing, proposing a silence-detection method based on Mel-scale cepstral features and a support vector machine, a large-capacity mixing method, and a simultaneous-interpretation method. The system achieves a higher silence-detection rate, mixes more audio channels than other mixing methods, and supports synchronized multilingual mixing within a single conference room. The silence-detection method uses Mel-scale cepstral coefficients as speech features and a binary support vector machine as the classifier to distinguish silence from normal speech; the mixing method weights each channel according to the short-time energy of its speech; synchronized multilingual mixing is realized by defining a new audio packet header format.
Description
Technical Field
The invention relates to an Internet-based simultaneous-interpretation video conferencing system; specifically, it solves the communication problems of large-capacity audio mixing and simultaneous interpretation within a single conference room.
Background
With the rapid growth of domestic foreign-affairs work, foreign trade, and related industries, a network voice communication platform that supports large-capacity mixing and multilingual communication has promising application prospects.
The common mixing architectures today are centralized and distributed. In the centralized architecture, each conference terminal sends its audio data to a central mixer, which performs the mixing and feeds the result back to all terminals. In the distributed architecture, each conference terminal receives audio data from every other participant and mixes independently at its own site. Clearly, the distributed approach duplicates the mixing computation, generates heavy network traffic, easily causes congestion, and is expensive. Centralized processing reduces client-side computation, keeps network traffic low, and is simple to implement; smaller multimedia conferencing systems currently all adopt it. As the conference scale grows, however, its drawbacks become increasingly apparent. First, the mixing workload increases with the number of participating terminals, and the mixing delay inevitably increases with it. Second, voice quality degrades: the published mixing algorithms (linear superposition, average weight adjustment, strong-alignment weighting, weak-alignment weighting, and so on) suffer from reduced volume after mixing, summation overflow, and the introduction of random noise once the number of mixed channels reaches a certain size. To cap the number of mixed channels, systems therefore generally resort to floor-control switching, which is very inconvenient for users. Part of the invention addresses this series of problems: an efficient silence-detection method suppresses transmission of silent frames at the speaking end, and a more effective mixing method is used in the mixer, achieving at least 20 channels of real-time mixing in practice.
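The overflow and volume problems described above can be seen in a short sketch. This is illustrative only: the channel count and sample values are made up, not taken from the patent. Summing 16-bit PCM channels directly overflows the sample range, while averaging avoids overflow but attenuates every speaker as the channel count grows.

```python
import numpy as np

# Three hypothetical speech channels, 16-bit PCM samples near full scale.
channels = np.array([
    [20000, -15000, 30000],
    [18000,  25000, -5000],
    [22000, -30000, 12000],
], dtype=np.int64)  # int64 so the raw sum can be inspected without wrapping

raw_sum = channels.sum(axis=0)            # linear superposition
overflowed = np.abs(raw_sum) > 32767      # samples outside the int16 range
averaged = raw_sum // len(channels)       # average adjustment: no overflow,
                                          # but each voice is attenuated

print(raw_sum.tolist())     # [60000, -20000, 37000]
print(overflowed.tolist())  # [True, False, True]
print(averaged.tolist())    # [20000, -6667, 12333]
```

This is why weighting schemes such as the short-time adaptive method proposed below are needed once many channels are mixed.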
A typical multimedia conferencing system processes audio per conference room, with one mixer per room. This model cannot meet the needs of international events such as conferences, business exchanges, and product launches, where information must be published in several languages simultaneously and the host must communicate with participants from different countries. Some video conferencing systems currently on the market must open a separate conference room for each language in order to mix and deliver multiple languages at once, which is clearly uneconomical and inconvenient to operate.
Summary of the Invention
To improve mixing efficiency and solve the simultaneous-interpretation problem, the invention provides a more efficient silence-detection method, a mixing method, and a simultaneous-interpretation method. These achieve a higher silence-detection rate, more mixed channels than other mixing methods, and synchronized multilingual mixing within a single conference room.
The purpose of the invention is achieved through the following technical solutions:
The system adopts a centralized processing architecture and defines two main components: the client terminal (Terminal) and the multipoint control unit (MCU). The client terminal comprises functional modules for video coding/decoding, audio coding/decoding, control, network transport, and office collaboration; the audio codec applies the silence-detection method proposed below to decide, before compression, whether a speech frame needs to be compressed at all. The MCU, generally installed on a server, contains a multipoint control module and a multipoint processing module; the multipoint processing module uses the short-time adaptive weight mixing method proposed below.
The method supporting large-capacity mixing is realized by the following steps:
1. The audio codec module in the client terminal uses the silence-detection method based on Mel-scale cepstral features and a support vector machine provided by the invention to reduce audio data transmission. Mel-scale cepstral coefficients serve as the speech features; exploiting the auditory masking effect of the human ear, they divide the speech spectrum into a series of critical bands forming a bank of triangular filters, i.e., the Mel filter sequence. The silence-detection process is:
1) Extract the Mel-scale cepstral coefficients (MFCC) of one frame of audio data, computed from the log energies of the triangular Mel filter bank, where o(l), c(l), and h(l) denote the lower, center, and upper frequencies of the l-th triangular filter.
2) Classify the frame's Mel-scale cepstral coefficients with a binary support vector machine, yielding one of two results: normal speech or silence. Other classifiers may of course be used; the invention places no restriction on this.
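The two steps above can be sketched as follows. This is a minimal illustration, not the patent's exact formulation: the frame size, filter count, and sampling rate are assumed, the filter bank and DCT follow the standard MFCC recipe, and the SVM stage is reduced to a stub decision function, since the patent's trained classifier is not available.

```python
import numpy as np

def mel(f):
    """Hz -> Mel scale."""
    return 2595.0 * np.log10(1.0 + f / 700.0)

def mel_inv(m):
    """Mel scale -> Hz."""
    return 700.0 * (10.0 ** (m / 2595.0) - 1.0)

def mfcc_frame(frame, sr=8000, n_filters=24, n_coeffs=12):
    """Compute 12 Mel-scale cepstral coefficients for one speech frame."""
    n_fft = len(frame)
    spectrum = np.abs(np.fft.rfft(frame * np.hamming(n_fft))) ** 2
    freqs = np.fft.rfftfreq(n_fft, d=1.0 / sr)
    # Triangular filters: o(l), c(l), h(l) are equally spaced on the Mel scale.
    edges = mel_inv(np.linspace(mel(0.0), mel(sr / 2.0), n_filters + 2))
    energies = np.empty(n_filters)
    for l in range(n_filters):
        o, c, h = edges[l], edges[l + 1], edges[l + 2]
        rising = np.clip((freqs - o) / (c - o), 0.0, None)
        falling = np.clip((h - freqs) / (h - c), 0.0, None)
        energies[l] = np.sum(spectrum * np.minimum(rising, falling))
    log_e = np.log(energies + 1e-10)
    # DCT-II of the log filter-bank energies yields the cepstral coefficients.
    l_idx = np.arange(n_filters)
    return np.array([np.sum(log_e * np.cos(np.pi * k * (l_idx + 0.5) / n_filters))
                     for k in range(1, n_coeffs + 1)])

def is_silence(frame, threshold=0.0):
    """Stand-in for the binary SVM: a trained classifier would score the
    MFCC vector; only the interface is shown here."""
    score = -np.linalg.norm(mfcc_frame(frame))  # placeholder decision value
    return score < threshold

speech_like = np.sin(2 * np.pi * 300 * np.arange(256) / 8000)
print(mfcc_frame(speech_like).shape)  # (12,)
```

Only frames classified as normal speech would then be compressed and sent, which is how the method reduces audio traffic.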
2. The short-time adaptive weight mixing method in the multipoint control unit
Define the mixing weight w[j]. First compute Avg[j], the average amplitude of each channel over k data frames, where data[j, i] denotes the i-th sample of channel j and l is the number of sound samples in one data frame. Then compute from Avg[j] the weight w[j] that channel j should carry, and finally mix the channels according to w[j].
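A sketch of this short-time adaptive weighting follows. The patent's exact weight formula is not reproduced in the text above, so the normalization below (weights proportional to each channel's recent average amplitude, summing to one) is an assumption; the frame length and channel data are likewise illustrative.

```python
import numpy as np

def adaptive_weight_mix(channels, frame_len, k=3):
    """Mix several speech channels with short-time adaptive weights.

    channels:  (n_channels, n_samples) float array of PCM data.
    frame_len: samples per data frame (the 'l' in the description).
    k:         number of recent frames used to estimate each channel's level.
    """
    window = channels[:, -k * frame_len:]        # last k data frames
    avg = np.mean(np.abs(window), axis=1)        # Avg[j]
    # Assumed rule: louder channels keep proportionally more of their
    # signal, and the weights sum to 1 so the mix cannot overflow.
    w = avg / (np.sum(avg) + 1e-12)              # w[j]
    return np.sum(w[:, None] * channels, axis=0)  # weighted mix

rng = np.random.default_rng(1)
chans = rng.normal(scale=[[5000.0], [500.0]], size=(2, 160 * 3))
mixed = adaptive_weight_mix(chans, frame_len=160)
print(mixed.shape)  # (480,)
```

Because the weights form a convex combination, the mixed sample magnitude never exceeds the loudest input sample, avoiding the overflow of plain summation.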
The simultaneous-interpretation method is implemented as follows: define a new audio packet header format that can indicate the language. When the MCU creates a conference room, it creates n language mixers for that room. A speaker declares the language being spoken and a listener declares the language to be received, or both are configured in advance. On receiving audio, the MCU determines which conference room and language the stream belongs to and feeds it into the corresponding mixer; it then transmits the mixed data to each receiver according to that receiver's request.
Brief Description of the Drawings
Figure 1 is a schematic diagram of the module structure of the invention;
Figure 2 is a system flow chart of the invention.
Detailed Description
1. Figure 1 shows the block diagram of the system modules. At the sending client terminal, video and audio signals captured from the input devices are compressed by the encoders, packed into a defined format, and sent over the network. In the multipoint control unit, the multipoint control module provides control functions for all conferences and the multipoint processing module provides data forwarding services. At the receiving client terminal, packets from the network are first unpacked; the compressed video and audio are decoded and sent to the output devices, and user data and control data are processed accordingly. The functions contained in the system are:
Video codec: performs redundancy-removing compression of the video stream; it can be implemented with MPEG-4, H.264, and the like.
Audio codec: performs silence detection and coding/decoding of the speech signal, optionally adding a buffering delay at the receiver to preserve speech continuity; G.723, G.729, and similar codecs may be used.
Control unit: provides end-to-end signaling to ensure normal communication between terminals. Four message types are defined (request, response, signaling, and indication), through which terminals negotiate communication capabilities, open and close logical channels, and send commands or indications, thereby controlling the communication.
Network transport layer: formats and sends video, audio, and control data while simultaneously receiving data from the network; it also handles logical framing, sequence numbering, error detection, and similar functions.
Office collaboration: implements functions such as an electronic whiteboard, text chat, and file transfer.
Figure 2 depicts the audio and video data flow in the system. Media attributes and sequence numbers can be carried via the RTP protocol, with TCP or UDP used for transmission.
2. Implementation of the large-capacity mixing method: for silence detection, L = 12 Mel-scale cepstral coefficients are used, the support vector machine's kernel is the radial basis function, and the SVM may be trained with the SMO method, though the invention places no restriction on this.
The short-time adaptive weight mixing method lends itself to a highly parallel computing structure. Since the average amplitude Avg[j] of each channel in formula (4) is computed independently, the channels can compute Avg[j] in parallel; the mixing step is likewise independent per channel and equally parallelizable. The program can further be optimized with the MMX, SSE, and SSE2 instruction sets. Practical tests show that the method mixes well, introduces no new mixing noise, and, under the principle of volume fairness, largely preserves the detail of each original channel.
3. In practice, the simultaneous-interpretation technique lets every client freely choose the listening language from several available languages. Speaking rights require permission settings: an ordinary client may speak only in the single default language, while clients with interpreter or advanced status may choose to speak in other languages. Each client compresses its local audio and uploads it to the MCU; the MCU decompresses and mixes the streams in separate per-language mixers according to each speaker's declared language, then recompresses and forwards to each client the language it has chosen to hear. For a client who speaks and listens in the same language, the MCU must first subtract that client's own voice from the mixed audio so the client does not hear itself.
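The per-language routing and own-voice removal just described can be sketched as follows. This is illustrative only: the room/language bookkeeping, the equal-weight mix, and all names are assumptions, not the patent's implementation.

```python
from collections import defaultdict

def route_and_mix(packets, listeners):
    """Route audio packets of one conference room into per-language mixers.

    packets:   list of (client_id, language, samples) tuples.
    listeners: dict mapping client_id -> language the client wants to hear.
    Returns a dict client_id -> mixed samples, with the client's own voice
    excluded when it speaks the same language it listens to.
    """
    mixers = defaultdict(list)               # one mixer per language
    for client, lang, samples in packets:
        mixers[lang].append((client, samples))
    out = {}
    for client, want in listeners.items():
        contribs = [(c, s) for c, s in mixers[want] if c != client]
        if not contribs:
            out[client] = None               # nothing to hear in that language
            continue
        n = len(contribs[0][1])
        # Equal-weight mix of the remaining channels (assumed rule).
        out[client] = [sum(s[i] for _, s in contribs) // len(contribs)
                       for i in range(n)]
    return out

pkts = [("a", "en", [100, 200]), ("b", "en", [300, 400]), ("c", "fr", [7, 8])]
mix = route_and_mix(pkts, {"a": "en", "b": "en", "c": "en"})
print(mix["a"])  # [300, 400]  (only b's English audio; a's own voice removed)
print(mix["c"])  # [200, 300]  (average of a's and b's English audio)
```

A production MCU would decompress before mixing and recompress per receiver, as the description states; the sketch omits the codec stages.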
To let the MCU and clients effectively represent and distinguish the language of each datagram sent or received, a new audio packet header format is defined, using a multi-bit field in the header to encode the language; three bits are generally enough for eight languages to be used simultaneously.
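A possible layout for such a header is sketched below. This is hypothetical: the patent states only that a multi-bit language field exists, so the field widths, ordering, and the room field are illustrative choices.

```python
import struct

LANG_BITS = 3  # 3 bits -> up to 8 concurrent languages

def pack_header(room_id, lang_id, seq):
    """Pack room (13 bits), language (3 bits), and sequence number (16 bits)
    into a 4-byte big-endian header."""
    assert 0 <= lang_id < (1 << LANG_BITS)
    assert 0 <= room_id < (1 << (16 - LANG_BITS))
    word = (room_id << LANG_BITS) | lang_id
    return struct.pack(">HH", word, seq)

def unpack_header(header):
    """Inverse of pack_header: returns (room_id, lang_id, seq)."""
    word, seq = struct.unpack(">HH", header)
    return word >> LANG_BITS, word & ((1 << LANG_BITS) - 1), seq

hdr = pack_header(room_id=5, lang_id=3, seq=4242)
print(unpack_header(hdr))  # (5, 3, 4242)
```

On receipt, the MCU would read the language field to select the matching mixer for the packet's conference room.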
Claims (3)
Priority Applications (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN200610040060.1A CN1845573A (en) | 2006-04-30 | 2006-04-30 | Simultaneous interpretation video conference system and method for supporting high capacity mixed sound |
Publications (1)
| Publication Number | Publication Date |
|---|---|
| CN1845573A true CN1845573A (en) | 2006-10-11 |
Family
ID=37064483
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| CN200610040060.1A Pending CN1845573A (en) | 2006-04-30 | 2006-04-30 | Simultaneous interpretation video conference system and method for supporting high capacity mixed sound |
Country Status (1)
| Country | Link |
|---|---|
| CN (1) | CN1845573A (en) |
- 2006-04-30: CN application CN200610040060.1A filed; patent CN1845573A/en, status Pending
Cited By (14)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US9031849B2 (en) | 2006-09-30 | 2015-05-12 | Huawei Technologies Co., Ltd. | System, method and multipoint control unit for providing multi-language conference |
| WO2008040258A1 (en) * | 2006-09-30 | 2008-04-10 | Huawei Technologies Co., Ltd. | System and method for realizing multi-language conference |
| US9311920B2 (en) * | 2013-06-06 | 2016-04-12 | Tencent Technology (Shenzhen) Company Limited | Voice processing method, apparatus, and system |
| CN103327014A (en) * | 2013-06-06 | 2013-09-25 | 腾讯科技(深圳)有限公司 | Voice processing method, device and system |
| WO2014194728A1 (en) * | 2013-06-06 | 2014-12-11 | Tencent Technology (Shenzhen) Company Limited | Voice processing method, apparatus, and system |
| US20150112668A1 (en) * | 2013-06-06 | 2015-04-23 | Tencent Technology (Shenzhen) Company Limited | Voice processing method, apparatus, and system |
| CN103327014B (en) * | 2013-06-06 | 2015-08-19 | 腾讯科技(深圳)有限公司 | A kind of method of speech processing, Apparatus and system |
| CN105304079A (en) * | 2015-09-14 | 2016-02-03 | 上海可言信息技术有限公司 | Multi-party call multi-mode speech synthesis method and system |
| CN105304079B (en) * | 2015-09-14 | 2019-05-07 | 上海可言信息技术有限公司 | A kind of multi-mode phoneme synthesizing method of multi-party call and system and server |
| CN106060707A (en) * | 2016-05-27 | 2016-10-26 | 北京小米移动软件有限公司 | Reverberation processing method and device |
| CN106060707B (en) * | 2016-05-27 | 2021-05-04 | 北京小米移动软件有限公司 | Reverberation processing method and device |
| CN107046523A (en) * | 2016-11-22 | 2017-08-15 | 深圳大学 | A simultaneous interpretation method and client based on a personal mobile terminal |
| CN113257256A (en) * | 2021-07-14 | 2021-08-13 | 广州朗国电子科技股份有限公司 | Voice processing method, conference all-in-one machine, system and storage medium |
| CN118865942A (en) * | 2024-07-16 | 2024-10-29 | 中国长江电力股份有限公司 | A low-latency real-time speech-to-text and text-to-speech transmission method |
Similar Documents
| Publication | Publication Date | Title |
|---|---|---|
| CN101502089B (en) | Method for carrying out an audio conference, audio conference device, and method for switching between encoders | |
| CN112104836A (en) | Audio mixing method, system, storage medium and equipment for audio server | |
| CN113140225A (en) | Voice signal processing method and device, electronic equipment and storage medium | |
| CN115171705B (en) | A method for compensating for voice packet loss, a method for voice communication and a device | |
| CN105304079A (en) | Multi-party call multi-mode speech synthesis method and system | |
| CN101414463B (en) | A kind of sound mixing coding method, device and system | |
| CN116013367A (en) | Audio quality analysis method and device, electronic equipment and storage medium | |
| US7945006B2 (en) | Data-driven method and apparatus for real-time mixing of multichannel signals in a media server | |
| CN1845573A (en) | Simultaneous interpretation video conference system and method for supporting high capacity mixed sound | |
| US9258429B2 (en) | Encoder adaption in teleconferencing system | |
| CN100420298C (en) | Digital voice-activated orientation method for camera shooting azimuth | |
| CN114363553A (en) | Dynamic code stream processing method and device in video conference | |
| CN113299299A (en) | Audio processing apparatus, method and computer-readable storage medium | |
| US20030088622A1 (en) | Efficient and robust adaptive algorithm for silence detection in real-time conferencing | |
| CN101502043B (en) | Method for implementing voice conference and voice conference system | |
| Sethi et al. | A new weighted audio mixing algorithm for a multipoint processor in a VoIP conferencing system | |
| CN116866321B (en) | Center-free multipath sound consistency selection method and system | |
| US20230154474A1 (en) | System and method for providing high quality audio communication over low bit rate connection | |
| US20230005469A1 (en) | Method and system for speech detection and speech enhancement | |
| Hardman et al. | 13 Internet/Mbone Audio | |
| Baskaran et al. | Audio mixer with automatic gain controller for software based multipoint control unit | |
| Katorin et al. | Improving the QoS multiservice networks: New methods, impact on the security of transmitted data | |
| US20230186900A1 (en) | Method and system for end-to-end automatic speech recognition on a digital platform | |
| Hardman et al. | Internet/Mbone Audio | |
| CN108364652A (en) | A kind of intelligent sound for artificial intelligence phone answers intersection control routine |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| C06 | Publication | ||
| PB01 | Publication | ||
| C57 | Notification of unclear or unknown address | ||
| DD01 | Delivery of document by public notice | Addressee: Xue Wei; Document name: Notification before expiration of term | |
| C10 | Entry into substantive examination | ||
| SE01 | Entry into force of request for substantive examination | ||
| C02 | Deemed withdrawal of patent application after publication (patent law 2001) | ||
| WD01 | Invention patent application deemed withdrawn after publication | Open date: 20061011 | |