
WO2002030050A1 - Distributed discovery system - Google Patents


Info

Publication number
WO2002030050A1
Authority
WO
WIPO (PCT)
Prior art keywords
network
discovery
distributed
network topology
performance monitor
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Ceased
Application number
PCT/CA2001/000666
Other languages
French (fr)
Inventor
Loren Christensen
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Linmor Technologies Inc
Original Assignee
Linmor Technologies Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority claimed from CA002322117A external-priority patent/CA2322117A1/en
Application filed by Linmor Technologies Inc filed Critical Linmor Technologies Inc
Priority to GB0306298A priority Critical patent/GB2383228B/en
Priority to AU2001258116A priority patent/AU2001258116A1/en
Publication of WO2002030050A1 publication Critical patent/WO2002030050A1/en
Anticipated expiration legal-status Critical
Ceased legal-status Critical Current


Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L41/00Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks
    • H04L41/04Network management architectures or arrangements
    • H04L41/042Network management architectures or arrangements comprising distributed management centres cooperatively managing the network
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L41/00Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks
    • H04L41/12Discovery or management of network topologies


Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Data Exchanges In Wide-Area Networks (AREA)
  • Computer And Data Communications (AREA)
  • Small-Scale Networks (AREA)

Abstract

A high performance distributed discovery system, leveraging the functionality of a high speed communications network, for the discovery of the network topology of a high speed data network. The system comprises a plurality of discovery engines on at least one, and preferably a plurality of, data collection node computers that poll and register managed network objects, with the resulting distributed record compilation forming a distributed network topology database that is selectively accessed by at least one performance monitoring server computer to provide for network management. A plurality of discovery engine instances are located on the data collection node computers on a ratio of one engine instance to one central processing unit so as to provide for the parallel processing of the distributed network topology database.

Description

DISTRIBUTED DISCOVERY SYSTEM
Field of the Invention
The present invention relates to the discovery of the network topology of devices comprising a high speed data network, and more particularly to a high performance distributed discovery system.
Background of the Invention
Today's high speed data networks contain an ever-growing number of devices. A network needs to be monitored for the existence, disappearance, reappearance and status of traditional network devices such as routers, hubs and bridges and, more recently, high speed switching devices such as ATM, Frame Relay, DSL, VoIP and Cable Modems.
In order to enable network monitoring, a process known as discovery is typically performed. Discovery is the process by which network management systems selectively poll a network to discover very large numbers of objects in a very short period of time, without introducing excessive network traffic. It is the function of a discovery system to discover devices on a network and the structure of that network. Discovery is primarily intended to get network management users quickly up to speed, track changes in the network, update network maps, and report on these changes.
Discovery typically further involves discovering the configuration of individual devices, their relationship, as well as discovering interconnection links or implied relationships.
In the past rapid discovery was not an issue, since the level of scalability of performance monitoring did not require the depth of discovery that is now required. Major advances in scalability have recently been achieved in performance monitoring, and as performance monitoring scales to manage larger and larger networks the scalability of discovery must advance accordingly in order to deal with the inevitable increase in the number of network objects and react quickly to changes in network topology.
At present network devices are typically polled over long distances from the network management system. This consumes valuable bandwidth and results in increased processing times and potential data loss. As well, customers often dislike inadvertent access around their firewalls, via the common connection to the network performance monitoring server computer. Therefore, what is needed is a method of object discovery that is proximal to the managed network.
For the foregoing reasons, there is a need for an economical method of network topology discovery that provides for high speed polling, high object capacity, scalability, and proximity to managed networks, while preserving security policies that are inherent in the network domain configuration.
Summary of the Invention
The present invention is directed to a high performance distributed discovery system that satisfies this need. The system, leveraging the functionality of a high speed communications network, comprises distributing records of discovered network devices using a plurality of discovery engine instances located on at least one data collection node computer whereby the resulting distributed record compilation comprises a distributed network topology database. The distributed network topology database is accessed using at least one performance monitor server computer to facilitate network management.
At least one discovery engine instance is located on the data collection node computers on a ratio of one engine instance to one central processing unit whereby the total number of engine instances is at least two so as to enable the parallel processing of the distributed network topology database.
In aspects of the invention a vendor specific discovery subroutine is launched upon detection by the system of a non-MIB II standard device so as to query the vendor's private MIB using a vendor specific algorithm.
Advances in overall scalability are achieved by dividing the workload of network topology discovery across several computing nodes. The discovery job is distributed across all the data collectors such that the only requirement for each data collector is to be able to reach, typically via TCP/IP and SNMP, the nodes and networks for which it is responsible. This reachability requirement already exists for telemetry, in any case, and has therefore already been provided for.
Other aspects and features of the present invention will become apparent to those ordinarily skilled in the art upon review of the following description of specific embodiments of the invention in conjunction with the accompanying figures.
Brief Description of the Drawings
These and other features, aspects, and advantages of the present invention will become better understood with regard to the following description, appended claims, and accompanying drawings, where: Figure 1 is a schematic overview of the high performance distributed discovery system.
Detailed Description of the Presently Preferred Embodiment
As shown in figure 1, the high performance distributed discovery system, leveraging the functionality of a high speed communications network 14, comprises at least one data collection (DC) node computer 12 and at least one performance monitor (PM) server computer 18 in network 14 contact with the DC node computers 12. The DC node computers 12 poll and register managed network 14 objects, with the resulting distributed record compilation forming a distributed network topology database 16 that is accessed by the PM server computers 18.
A plurality of discovery engine instances 20 are located on the DC node computers 12 on a ratio of one engine instance 20 to one central processing unit so as to provide for the parallel processing of the distributed network topology database 16.
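The one-engine-instance-per-CPU arrangement can be sketched as a worker pool whose size equals the CPU count of a DC node. This is an illustrative sketch, not the patented implementation: `discover_subnet` is a hypothetical stand-in for an SNMP polling engine, and the fabricated records merely show how per-subnet results would aggregate into the node's contribution to the topology database.

```python
import multiprocessing as mp

def discover_subnet(subnet):
    """Hypothetical discovery worker: in a real engine this would poll the
    subnet via SNMP; here it fabricates one record so the sketch runs."""
    return [{"subnet": subnet, "device": "router-" + subnet.split(".")[2]}]

def run_discovery(subnets):
    # One engine instance per CPU, as described: the pool size equals the
    # number of central processing units on this DC node computer.
    with mp.Pool(processes=mp.cpu_count()) as pool:
        batches = pool.map(discover_subnet, subnets)
    # The flattened records form this node's share of the distributed
    # network topology database.
    return [record for batch in batches for record in batch]

if __name__ == "__main__":
    records = run_discovery(["10.0.1.0/24", "10.0.2.0/24", "10.0.3.0/24"])
    print(len(records))  # prints 3
```

Because each subnet is polled independently, the workers need no shared state, which is what makes the parallelism straightforward.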
The discovery engine 20 is comprised of a base program and a scalable family of vendor-specific discovery subroutines. The base program is designed to query and register any IP device and subsequently obtain detailed device, state and topology information for any IP device that responds to an SNMP query, such as any device that is managed by an SNMP agent. The base program discovers detailed information for any device that supports the standard MIB-II, but not the vendor's private MIB. The discovery of detailed information from a vendor's private MIB is accomplished through what is known as vendor-specific discovery subroutines.
These discovery subroutines are lightweight independent applications that are launched whenever the main discovery program detects a particular vendor's hardware. The discovery subroutines contain vendor-specific algorithms designed to query the vendor's private MIB.
Launch points for each discovery subroutine are included in the main program. So, if during the normal operation of discovery a valid element value is encountered identifying a specific vendor's hardware, the appropriate discovery subroutine is launched.
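The launch-point mechanism described above can be illustrated as a dispatch table keyed by SNMP sysObjectID prefix: the base program records the device, then hands it to a vendor subroutine when the identifier falls under a registered enterprise arc. The registry, the decorator, and the record fields are assumptions for illustration; only the Cisco enterprise arc 1.3.6.1.4.1.9 is a real OID prefix.

```python
# Hypothetical launch table mapping sysObjectID enterprise prefixes to
# vendor-specific discovery subroutines.
VENDOR_SUBROUTINES = {}

def vendor_subroutine(oid_prefix):
    """Register a discovery subroutine for a vendor enterprise OID prefix."""
    def register(func):
        VENDOR_SUBROUTINES[oid_prefix] = func
        return func
    return register

@vendor_subroutine("1.3.6.1.4.1.9")  # Cisco enterprise arc (illustrative)
def discover_cisco(device):
    # A real subroutine would walk the vendor's private MIB here.
    return {"vendor": "cisco", "extra": "private-MIB data"}

def base_discover(device, sys_object_id):
    """Base program: register the device, then launch the vendor-specific
    subroutine if sysObjectID matches a known launch point."""
    record = {"device": device, "sysObjectID": sys_object_id}
    for prefix, subroutine in VENDOR_SUBROUTINES.items():
        if sys_object_id.startswith(prefix):
            record.update(subroutine(device))
            break
    return record
```

Devices whose sysObjectID matches no launch point still yield a base MIB-II record, mirroring the base-program behaviour the description sets out.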
The DC node computers 12 are responsible for telemetry to the managed elements and management of the topology database 16. The PM server computers 18 provide system control and a reporting interface. The proximal topology of the DC node computers 12 in relation to the managed network 14 provides for inherent scalability and a reduction in required bandwidth. As well, the ability to utilize excess memory and disk storage resources on the DC node computers 12 facilitates the discovery of larger networks. The aggregate resources of many DC node computers 12 are far greater than those available on any one PM server computer 18. Advances in overall scalability are achieved by dividing the workload of network topology discovery across several computing nodes. The discovery job is distributed across all the DC node computers 12 such that the only requirement for each DC node computer 12 is to be able to reach, typically via TCP/IP and SNMP, the nodes and networks for which it is responsible. This reachability requirement already exists for telemetry, in any case, and has therefore already been provided for.
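The reachability-based division of the discovery job can be sketched as a simple assignment pass: each subnet goes to a DC node that can reach it, and anything no node can reach is reported rather than silently dropped. The data shapes (a node-to-reachable-subnets map, a flat subnet list) are assumptions for illustration.

```python
def assign_subnets(dc_nodes, subnets):
    """Assign each subnet to the first DC node that can reach it.
    dc_nodes maps a node name to the set of subnets it can reach (the
    sole per-collector requirement stated above); subnets is the full
    discovery job. Returns (assignment, unassigned)."""
    assignment = {node: [] for node in dc_nodes}
    unassigned = []
    for subnet in subnets:
        for node, reachable in dc_nodes.items():
            if subnet in reachable:
                assignment[node].append(subnet)
                break
        else:
            # No collector can reach this subnet; flag it for the operator.
            unassigned.append(subnet)
    return assignment, unassigned
```

Since reachability is already required for telemetry, this partition adds no new connectivity demands, which is the point the paragraph above makes.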
All of the discovery and topology database storage takes place behind the client's firewall, requiring only a minimal amount of management traffic to be exerted on the network to generate reports. PM server computers 18 are utilized to access the distributed network topology database 16 for object management.
In embodiments of the invention unique algorithms selectively discover network devices based on "clues" picked up from existing information such as router tables and customer input.
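One way such clue-driven discovery could work, sketched here as an assumption rather than the patent's actual algorithm, is to mine an already-discovered router's route table for next-hop addresses that are not yet in the topology database and queue them as new targets.

```python
def seed_candidates(route_table, known):
    """Derive new discovery targets from 'clues' in a discovered router's
    route table. route_table is a list of (destination, next_hop) pairs,
    as might be read from the MIB-II ipRouteTable; known is the set of
    addresses already in the topology database (mutated in place)."""
    candidates = []
    for _dest, next_hop in route_table:
        # Skip addresses already discovered and the 0.0.0.0 placeholder
        # used for directly connected routes.
        if next_hop not in known and next_hop != "0.0.0.0":
            candidates.append(next_hop)
            known.add(next_hop)
    return candidates
```

Feeding each new candidate back through discovery lets the system walk outward from a few seed devices instead of sweeping every address.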
The vendor specific discovery subroutines extend the base discovery application to provide for inter-operability with a multiplicity of ATM and FR vendors' equipment.
All of the processing intensive data collection takes place as close to the customer's network and network devices as possible, thereby providing for faster discovery as well as distributed storage and processing. As well, the unwanted side-effect of the PM server computer 18 unwittingly becoming a router is removed, thereby enhancing security. Devices are reliably re-discovered, thereby enabling the tracking of changes to a network's topology as it evolves in real time or near real time.
The ability to limit what is discovered by criteria such as vendor and device type has been added, thereby eliminating the need to specify the address of each device when discovering the network.
The system will not re-discover existing devices unless explicitly requested to do so, which is significant when discovering a large network that is typically discovered in stages.
The system handles timeouts in a more reliable manner. This is important on wide area networks where timeouts are more common during discovery.
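A common way to make timeout handling more reliable on wide area links, offered here as a hedged sketch since the patent does not specify its mechanism, is to retry a failed poll with a doubled timeout before declaring the device unreachable.

```python
def poll_with_retries(poll, target, retries=3, base_timeout=1.0):
    """Retry a failed poll with a doubled timeout each attempt, since WAN
    links often drop the first query. `poll(target, timeout)` is an
    assumed callable that returns a record or raises TimeoutError."""
    timeout = base_timeout
    for _attempt in range(retries):
        try:
            return poll(target, timeout)
        except TimeoutError:
            timeout *= 2  # back off before the next attempt
    # All attempts timed out; the caller decides whether to mark the
    # device as disappeared or re-queue it.
    return None
```

Doubling the timeout rather than retrying at a fixed interval distinguishes a slow path from a dead device without stalling the whole discovery pass.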
Since all the discovery sub-tasks can be performed simultaneously, the overall time to characterize the customer's network is reduced. This enables discovery to deal with larger networks in a faster manner, and eliminates the PM server computer's 18 reachability requirement with respect to managed elements.
This invention allows Network Service Providers to automatically discover more of the existing devices in their networks, permitting customers to reconcile what is really out in their network with what their administrative records tell them is out there. It has been shown that such verification can potentially lead to great cost savings in operations, as well as vastly improved discovery times as speed will now be directly correlated with the number of DC node computers 12 deployed.
The system provides for the rapid automatic mapping of a customer's network for the purpose of object management, down to unprecedentedly fine levels of granularity.

Claims

What is claimed is:
1. A network topology distributed discovery system, leveraging the functionality of a high speed communications network, comprising the steps of: (i) distributing records of discovered network devices using a plurality of discovery engine instances located on at least one data collection node computer whereby the resulting distributed record compilation comprises a distributed network topology database; and
(ii) importing the distributed network topology database onto at least one performance monitor server computer so as to enable network management.
2. The system according to claim 1, wherein at least one discovery engine instance is located on the data collection node computers on a ratio of one engine instance to one central processing unit whereby the total number of engine instances is at least two so as to enable the parallel processing of the distributed network topology database.
3. The system according to claim 1, wherein a vendor specific discovery subroutine is launched upon detection by the system of a non-MIB II standard device so as to query the vendor's private MIB using a vendor specific algorithm.
4. The system according to claim 1, wherein at least one performance monitor client computer is connected to the network so as to communicate remotely with the performance monitor server computers.
5. A network topology distributed discovery system, leveraging the functionality of a high speed communications network, comprising:
(i) at least one data collection node computer connected to the network for discovering network devices using a plurality of discovery engine instances whereby a distributed network topology database is created; and (ii) at least one performance monitor server computer having imported the distributed network topology database whereby network management is enabled.
6. The system according to claim 5, wherein at least one discovery engine instance is located on the data collection node computers on a ratio of one engine instance to one central processing unit whereby the total number of engine instances for the system is at least two so as to enable the parallel processing of the network topology database.
7. The system according to claim 5, wherein a vendor specific discovery subroutine is launched upon detection by the system of a non-MIB II standard device so as to query the vendor's private MIB using a vendor specific algorithm.
8. The system according to claim 5, wherein at least one performance monitor client computer is connected to the network so as to communicate remotely with the performance monitor server computers.
9. A storage medium readable by an install server computer in a network topology distributed discovery system including the install server, leveraging the functionality of a high speed communications network, the storage medium encoding a computer process comprising:
(i) a processing portion for distributing records of discovered network devices using a plurality of discovery engine instances located on at least one data collection node computer whereby the resulting distributed record compilation comprises a distributed network topology database; and (ii) a processing portion for importing the distributed network topology database onto at least one performance monitor server computer so as to enable network management.
10. The system according to claim 9, wherein at least one discovery engine instance is located on the data collection node computers on a ratio of one engine instance to one central processing unit whereby the total number of engine instances is at least two so as to enable the parallel processing of the network topology database.
11. The system according to claim 9, wherein a vendor specific discovery subroutine is launched upon detection by the system of a non-MIB II standard device so as to query the vendor's private MIB using a vendor specific algorithm.
12. The system according to claim 9, wherein at least one performance monitor client computer is connected to the network so as to communicate remotely with the performance monitor server computers.
PCT/CA2001/000666 2000-10-03 2001-05-23 Distributed discovery system Ceased WO2002030050A1 (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
GB0306298A GB2383228B (en) 2000-10-03 2001-05-23 Distributed discovery system
AU2001258116A AU2001258116A1 (en) 2000-10-03 2001-05-23 Distributed discovery system

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
CA2,322,117 2000-10-03
CA002322117A CA2322117A1 (en) 2000-10-03 2000-10-03 High performance distributed discovery system
CA2,345,292 2001-04-26
CA002345292A CA2345292A1 (en) 2000-10-03 2001-04-26 High performance distributed discovery system

Publications (1)

Publication Number Publication Date
WO2002030050A1 (en) 2002-04-11

Family

ID=25682142

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CA2001/000666 Ceased WO2002030050A1 (en) 2000-10-03 2001-05-23 Distributed discovery system

Country Status (4)

Country Link
AU (1) AU2001258116A1 (en)
CA (1) CA2345292A1 (en)
GB (1) GB2383228B (en)
WO (1) WO2002030050A1 (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP2674876A1 (en) * 2012-06-14 2013-12-18 Alcatel Lucent Streaming analytics processing node and network topology aware streaming analytics system
EP3035595A1 (en) * 2014-12-17 2016-06-22 Alcatel Lucent Routable distributed database for managing a plurality of entities of a telecommunication network

Families Citing this family (1)

Publication number Priority date Publication date Assignee Title
US10892938B1 (en) * 2019-07-31 2021-01-12 Abb Power Grids Switzerland Ag Autonomous semantic data discovery for distributed networked systems

Citations (4)

Publication number Priority date Publication date Assignee Title
EP0772319A2 (en) * 1995-10-30 1997-05-07 Sun Microsystems, Inc. Method and system for sharing information between network managers
US5706440A (en) * 1995-08-23 1998-01-06 International Business Machines Corporation Method and system for determining hub topology of an ethernet LAN segment
US5812771A (en) * 1994-01-28 1998-09-22 Cabletron System, Inc. Distributed chassis agent for distributed network management
WO2000005657A1 (en) * 1998-07-21 2000-02-03 Conduct Ltd. Automatic network topology analysis

Family Cites Families (4)

Publication number Priority date Publication date Assignee Title
DE68928016T2 (en) * 1988-01-29 1997-12-11 Network Equipment Tech MONITOR FOR CONDITION AND TOPOLOGY OF A TELECOMMUNICATION NETWORK
US6085243A (en) * 1996-12-13 2000-07-04 3Com Corporation Distributed remote management (dRMON) for networks
EP0849910A3 (en) * 1996-12-18 1999-02-10 Nortel Networks Corporation Communications network monitoring
GB2374247B (en) * 1999-03-17 2004-06-30 Ericsson Telefon Ab L M Method and arrangement for performance analysis of data networks



Also Published As

Publication number Publication date
CA2345292A1 (en) 2002-04-03
AU2001258116A1 (en) 2002-04-15
GB0306298D0 (en) 2003-04-23
GB2383228A (en) 2003-06-18
GB2383228B (en) 2004-05-26

Similar Documents

Publication Publication Date Title
Meyer et al. Decentralizing control and intelligence in network management
US8001228B2 (en) System and method to dynamically extend a management information base using SNMP in an application server environment
EP1424808B1 (en) Hierarchical management system of the distributed network management platform
US6292838B1 (en) Technique for automatic remote media access control (MAC) layer address resolution
US20040093408A1 (en) IT asset tracking system
US20020083146A1 (en) Data model for automated server configuration
US6219705B1 (en) System and method of collecting and maintaining historical top communicator information on a communication device
US20070094527A1 (en) Determining power consumption in IT networks
US20020165934A1 (en) Displaying a subset of network nodes based on discovered attributes
US20020040393A1 (en) High performance distributed discovery system
Cassel et al. Network management architectures and protocols: Problems and approaches
US20040139194A1 (en) System and method of measuring and monitoring network services availablility
US20030037206A1 (en) Method and system for adaptive caching in a network management framework using skeleton caches
GB2406465A (en) Network fault monitoring
KR100716167B1 (en) Network management system and method
US20030009541A1 (en) Method and system for setting communication parameters on network apparatus using information recordable medium
CN109495501B (en) Network security dynamic asset management system
CN114338419B (en) IPv6 global networking edge node monitoring and early warning method and system
US7733800B2 (en) Method and mechanism for identifying an unmanaged switch in a network
US7373402B2 (en) Method, apparatus, and machine-readable medium for configuring thresholds on heterogeneous network elements
WO2002030050A1 (en) Distributed discovery system
Hong et al. Integration of the Directory Service in the Network Management Framework.
Rubinstein et al. Evaluating the performance of mobile agents in network management
US20020078190A1 (en) Method and apparatus in network management system for performance-based network protocol layer firewall
CN119583254A (en) Idle computing power scheduling system and method based on computing power gateway

Legal Events

Date Code Title Description
AK Designated states

Kind code of ref document: A1

Designated state(s): AE AG AL AM AT AU AZ BA BB BG BR BY BZ CA CH CN CO CR CU CZ DE DK DM DZ EC EE ES FI GB GD GE GH GM HR HU ID IL IN IS JP KE KG KP KR KZ LC LK LR LS LT LU LV MA MD MG MK MN MW MX MZ NO NZ PL PT RO RU SD SE SG SI SK SL TJ TM TR TT TZ UA UG US UZ VN YU ZA ZW

AL Designated countries for regional patents

Kind code of ref document: A1

Designated state(s): GH GM KE LS MW MZ SD SL SZ TZ UG ZW AM AZ BY KG KZ MD RU TJ TM AT BE CH CY DE DK ES FI FR GB GR IE IT LU MC NL PT SE TR BF BJ CF CG CI CM GA GN GW ML MR NE SN TD TG

ENP Entry into the national phase

Ref document number: 0306298

Country of ref document: GB

Kind code of ref document: A

Free format text: PCT FILING DATE = 20010523

Format of ref document f/p: F

121 Ep: the epo has been informed by wipo that ep was designated in this application
REG Reference to national code

Ref country code: DE

Ref legal event code: 8642

122 Ep: pct application non-entry in european phase
NENP Non-entry into the national phase

Ref country code: JP