
WO2009002031A2 - Belief propagation based fast systolic array system and message processing method using the same - Google Patents

Belief propagation based fast systolic array system and message processing method using the same

Info

Publication number
WO2009002031A2
WO2009002031A2 (PCT/KR2008/003277)
Authority
WO
WIPO (PCT)
Prior art keywords
messages
layer
nodes
current process
buffer
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Ceased
Application number
PCT/KR2008/003277
Other languages
French (fr)
Other versions
WO2009002031A9 (en)
Inventor
Hong Jeong
Sung Chan Park
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
POSTECH Academy Industry Foundation
Original Assignee
POSTECH Academy Industry Foundation
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by POSTECH Academy Industry Foundation filed Critical POSTECH Academy Industry Foundation
Publication of WO2009002031A2 publication Critical patent/WO2009002031A2/en
Publication of WO2009002031A9 publication Critical patent/WO2009002031A9/en
Anticipated expiration legal-status Critical
Ceased legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 15/00 Digital computers in general; Data processing equipment in general
    • G06F 15/16 Combinations of two or more digital computers each having at least an arithmetic unit, a program unit and a register, e.g. for a simultaneous processing of several programs
    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/29 Graphical models, e.g. Bayesian networks

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Computer Hardware Design (AREA)
  • Software Systems (AREA)
  • Image Processing (AREA)

Abstract

In a belief propagation based fast systolic array system, a Markov random field network is constructed as a dynamic Bayesian network in consideration of an iteration axis. Further, messages on the dynamic Bayesian network are updated while the Markov random field network is scanned in a specific axis direction.

Description

BELIEF PROPAGATION BASED FAST SYSTOLIC ARRAY SYSTEM AND MESSAGE PROCESSING METHOD USING THE SAME
Field of the Invention
The present invention relates to a technology for processing messages on an MRF (Markov Random Field) network having a regular array structure; and, more particularly, to a BP (Belief Propagation) based fast systolic array system and a message processing method using the same.
Background of the Invention
In general, BP is being widely used in communications systems, image processing, and the like. When a system is modeled as a probability network, a hidden state to be estimated is assigned to each node, and the relationship between nodes is set by a probability model. Then, MAP (Maximum A Posteriori) states can be computed in successive iterations by use of the BP technique.
However, when the BP technique is applied to low-level vision image processing such as motion estimation, stereo matching, color image segmentation, or the like, a node is allocated to each pixel; and a motion vector, a disparity, a segmentation label, or the like needs to be estimated for each node. Such a large number of nodes yields a low error rate, but requires a great deal of processing time.
Further, since the amount of messages to be stored increases in processing a large number of nodes, a large memory is needed.
A conventional BP technique is disclosed, for example, in P. F. Felzenszwalb and D. R. Huttenlocher, "Efficient Belief Propagation for Early Vision", in Proc. 2004 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, No. 1, pages 261-268, 2004.
As shown in Fig. 1, in the conventional BP technique, when nodes corresponding to sound or image pixels are regularly connected with each other (in case of sound, one axis represents time), a 2-dimensional (2D) MRF network is defined. When a 2D vector is represented by x = [x0 x1]^T using elements x0 and x1, a position of a node is represented by a 2D vector p = [p0 p1]^T. Then, the BP in an N1 x N0 MRF network is defined as follows.
According to a message computation method, the BP is broadly classified into a max-product algorithm that estimates the MAP state and a sum-product algorithm that computes a marginal probability. Herein, a description will be given for the max-product algorithm.
When a data cost Dp(dp) and an edge cost V(dp,dq) are allocated to a hidden state dp of each node and a pair of hidden states (dp, dq) of each edge on the MRF graph, respectively, an approximation solution for a state that minimizes the sum of all of the costs on the MRF network can be computed by using the BP, as in Equation 1.
[Equation 1]
d* = argmin_d E(d),
E(d) = Σ_{(p,q)∈Nb} V(dp, dq) + Σ_{p∈P} Dp(dp)
A message computation process is as in Equation 2, where m^t_pq(dq) denotes the message sent from a node p to a neighboring node q at iteration t.
[Equation 2]
m^t_pq(dq) = min_{dp} [ V(dp, dq) + Dp(dp) + Σ_{s∈Nb(p)\q} m^{t-1}_sp(dp) ]
After the final iteration, the hidden state of each node is decided from its belief as in Equation 3.
[Equation 3]
d*_q = argmin_{dq} [ Dq(dq) + Σ_{p∈Nb(q)} m_pq(dq) ]
When the number of states is S, and the size of the state cost is B bits, the size of the total message memory becomes 4N1N0SB bits, and the size of the data cost memory becomes N1N0SB bits. Accordingly, the total memory size becomes 5N1N0SB bits.
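For illustration, the following is a minimal NumPy sketch of the min-sum message update of Equation 2 and the state decision of Equation 3 for a single node. The array layout, function names, and the brute-force minimization over all S x S state pairs are assumptions made here for clarity, not the implementation described in this document.

```python
import numpy as np

def compute_message(D_p, V, incoming):
    """Min-sum message m_pq(d_q) of Equation 2.

    D_p      : (S,) data cost D_p(d_p) of the sending node p
    V        : (S, S) edge cost table V(d_p, d_q)
    incoming : list of (S,) messages m_sp(d_p) from the neighbours of p
               other than q, i.e. s in Nb(p)\q
    """
    h = D_p + np.sum(incoming, axis=0)        # h(d_p) = D_p(d_p) + sum of incoming messages
    return np.min(V + h[:, None], axis=0)     # minimize over d_p for every d_q

def decide_state(D_q, incoming):
    """MAP state of node q as in Equation 3: argmin of the belief."""
    return int(np.argmin(D_q + np.sum(incoming, axis=0)))

# toy usage: 16 states, truncated linear edge cost table, three incoming messages
S = 16
V = np.minimum(np.abs(np.arange(S)[:, None] - np.arange(S)[None, :]).astype(float), 4.0)
rng = np.random.default_rng(0)
m = compute_message(rng.random(S), V, [rng.random(S) for _ in range(3)])
```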
As described above, though the conventional BP technique shows a low error rate, it requires a large memory and a lot of computation time to estimate the hidden states on the MRF network. Further, if a plurality of parallel processors accesses the large memory to increase a processing speed, the bandwidth of the data bus is remarkably increased.
Summary of the Invention
The present invention provides a BP based fast systolic array system and a message processing method using the same, in which the message processing sequence scans an MRF network in one direction, instead of iterating over the entire network in the conventional manner, so that the memory size can be reduced and messages can be processed by using a compact distributed memory in a VLSI (Very Large Scale Integration) chip.
In accordance with an aspect of the present invention, there is provided a belief propagation based fast systolic array system, wherein a Markov random field network is constructed as a dynamic Bayesian network in consideration of an iteration axis, and messages on the dynamic Bayesian network are updated while the Markov random field network is scanned in a specific axis direction.
In accordance with another aspect of the present invention, there is provided a message processing method using a belief propagation based fast systolic array system, the method including: constructing a Markov random field network as a dynamic Bayesian network in consideration of an iteration axis; and scanning the Markov random field network in a specific axis direction to update messages on the dynamic Bayesian network.
In accordance with the present invention, unlike the conventional BP algorithm which requires a large memory and a large amount of computation when the number of nodes on the MRF network is large, data costs and edge costs are sequentially input and MAP states are computed in a single scan-like pass without repetitive accesses to all of the nodes on the MRF network. Accordingly, fast processing and an efficient parallel structure can be realized with a small memory. Further, an MRF network is constructed as a dynamic Bayesian network in consideration of an iteration axis, and messages on the dynamic Bayesian network are updated while scanning the MRF network in a specific axis direction. Thus, the memory size can be reduced, so that restrictions on parallel implementation within existing VLSI chips can be overcome and fast processing can be performed with a compact distributed memory in the VLSI chips.
Moreover, since the fast parallel processing can be performed using a small amount of memory resources, the system of the present invention can be manufactured as a compact parallel VLSI chip having a small memory, such as an FPGA (Field-Programmable Gate Array) or an ASIC (Application-Specific Integrated Circuit). Therefore, a complex image processing system can be manufactured as an inexpensive and small device which performs fast real-time processing.
The present invention can be applied to BP applications having a small number of iterations, e.g., the fast converging segmentation method disclosed in Noam Shental, Assaf Zomet, Tomer Hertz, and Yair Weiss, "Pairwise Clustering and Graphical Models", "Advances in Neural Information Processing Systems 16", MIT Press, 2004. If the present invention is applied to a typical cut-based BP or a GBP (Generalized Belief Propagation) algorithm, in which the entire image can be sufficiently represented by 10 iterations, the memory size can be reduced about 44 times (= 480/11) when the image is 640 x 480 pixels and H0 = 1.
In particular, the present invention can be widely used as an inexpensive and low-power compact system in various signal processing fields, such as sound processing or image processing, which can be modeled on a 2D MRF network.
Brief Description of the Drawings
The above features of the present invention will become apparent from the following description of embodiments, given in conjunction with the accompanying drawings, in which:
Fig. 1 illustrates an MRF network having a regular array structure and a conventional BP update rule;
Fig. 2 illustrates a layer structure in which a layer corresponds to an iteration of a message at each node shown in Fig. 1;
Figs. 3A to 3C respectively illustrate an explanatory view of a layer-transformed FBP (Fast Belief Propagation) structure of the structure shown in Fig. 2 and a message update sequence;
Fig. 4 illustrates an explanatory view of the message update sequence shown in Figs. 3A to 3C viewed from a different angle ;
Fig. 5 illustrates a block diagram of a BP based fast systolic array system in accordance with the present invention;
Fig. 6 illustrates a block diagram of the internal configuration of the FBP module shown in Fig. 5;
Fig. 7 illustrates a flow chart of a parallel computation process for message update within a group in accordance with the present invention;
Fig. 8 illustrates an explanatory view of the FBP sequence shown in Fig. 7;
Fig. 9 illustrates an exemplary view of an FBP parallel matching system in accordance with the present invention;
Fig. 10 illustrates an explanatory view of the internal structure and process of the processor shown in Fig. 9;
Fig. 11 illustrates a flow chart of a sequential computation process for message update within a group in accordance with the present invention;
Figs. 12A and 12B illustrate the message update processes in the structure shown in Fig. 2 and in the layer-transformed structure shown in Figs. 3A to 3C, respectively; and
Fig. 13 illustrates a data cost reading process in the layer-transformed structure in accordance with the present invention.
Detailed Description of the Invention
Embodiments of the present invention will be described in detail with reference to the accompanying drawings, which form a part hereof.
Fig. 2 illustrates a layer structure in which a layer corresponds to an iteration of a message at each node shown in Fig. 1. Figs. 3A to 3C respectively illustrate an explanatory view of a layer-transformed FBP (Fast Belief Propagation) structure of the structure shown in Fig. 2 and a message update sequence.
In the present invention, the MRF network structure shown in Fig. 1 is constructed as a dynamic Bayesian network in consideration of an iteration axis. That is, as shown in Fig. 2, the MRF network structure shown in Fig. 1 is constructed as a dynamic Bayesian network in which a layer is stacked each time the messages are repeatedly computed at each node, the number of iterations 't' corresponding to a layer index 'l'. When p(l) represents the coordinate of a node at an l-th iteration layer, a layer transformation equation of the dynamic Bayesian network that tilts the position of a node for each iteration in a scanning axis direction b = [1 0]^T is represented as Equation 4.
[Equation 4]
p(l) = p - l·b, that is, p0(l) = p0 - l, p1(l) = p1
Figs. 3A to 3C show the result of vertically rearranging the nodes according to p0(l) of Equation 4. The relationship between a node p(l) and a node p(l-1) at a previous iteration layer which correspond to the same node on the MRF network is as in Equation 5.
[Equation 5]
p(l) = p(l-1) - [1 0]^T
As shown in Equation 5, the node p(l) and the node p(l-1) differ from each other by an offset -[1 0]^T on the layer structure.
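The layer transformation amounts to shifting the p0 coordinate of a node by its layer index. A minimal sketch, assuming a simple tuple representation of coordinates:

```python
def layer_transform(p, l, b=(1, 0)):
    """Equation 4: p(l) = p - l*b with scanning direction b = [1 0]^T."""
    return (p[0] - l * b[0], p[1] - l * b[1])

# Equation 5: the same MRF node at consecutive layers is offset by -[1 0]^T
p = (7, 3)
assert layer_transform(p, 2) == (layer_transform(p, 1)[0] - 1, layer_transform(p, 1)[1])
```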
In the dynamic Bayesian network structure, nodes are grouped, and nodes at the same layer in a group are then parallel-processed to obtain messages thereof. To be specific, messages of nodes at the previous layer in a group are read from a local buffer of the group, and messages of nodes within the adjacent group are read from a layer buffer in which messages of nodes in the previously processed group are stored.
That is, as shown in Figs. 3A to 3C, messages of nodes at each layer in a previously processed group (hereinafter, referred to as "previous group") 300 are stored in the layer buffer. Here, messages stored in the layer buffer are updated as the previous group 300 is right-shifted, i.e., as the previous group 300 moves in a positive direction of the p0 axis. Meanwhile, messages of nodes in a current process group 310 are computed in parallel and stored in the local buffer. The messages stored in the local buffer are used to process nodes at the next (upper) layer in the current process group. Further, the messages stored in the local buffer are to be stored in the layer buffer for use in processing messages of nodes in the next group.
As shown in Fig. 3A, if the previous group 300 and the current process group 310 include nodes having node indexes 0 and 1 (i.e., p0 = 0 and 1) and nodes having node indexes 2, 3, and 4 (i.e., p0 = 2, 3, and 4), respectively, and if the first layer of the current process group 310 has been processed, messages of the nodes having node indexes 0 and 1 are stored in the layer buffer and messages of the nodes having node indexes 2, 3, and 4 and a layer index 0 are stored in the local buffer. Subsequently, the messages of the nodes at the second layer (current layer) in the current process group 310 are computed by using, according to the conventional BP algorithm, the messages of the nodes having the node indexes 2, 3, and 4 at the previous layer (the first layer) stored in the local buffer and the messages stored in the layer buffer. After that, the local buffer is updated by the computed messages of the nodes having the node indexes 2, 3, and 4 and a layer index 1, and then used in processing the top layer (the third layer). Later, the messages stored in the local buffer are stored in the layer buffer for use in processing the next group.
After the messages of the nodes at the top layer, i.e., the third layer, in the current process group 310 have been processed as shown in Fig. 3B, the messages of nodes having node indexes 3 and 4 and layer indexes 0, 1, and 2, i.e., nodes in a region 310a adjacent to a next group 320, are stored in the layer buffer, as shown in Fig. 3C.
Subsequently, messages of nodes at each layer in the next group 320 are computed using the layer buffer and the local buffer reset before the start of processing the next group 320. As shown in Fig. 4, multiple nodes at a layer in a group are processed in parallel by executing parallel processors simultaneously, and, after the completion of processing the nodes at the layer, nodes at another layer (upper layer) in the group are processed. The processors for processing each layer within a group are executed in parallel, and the group is sequentially scanned. The messages for each layer in a region adjacent to the next group are stored in the layer buffer (indicated by a bold line). The messages stored in the layer buffer are used to process the next group. In this way, MRF hidden states in an L-th iteration layer within the group are obtained.
With a new network scanning sequence of the present invention, processing is performed using the small layer buffer and local buffer while obtaining the same result as that of the conventional BP algorithm.
Fig. 5 illustrates a block diagram of a parallel matching system, which is an FBP scanning system, in accordance with the present invention. This system includes: a data cost module 500 that receives an input image or sound signal and computes data costs on the MRF network; and an FBP module 510 that performs an FBP scanning sequence using the data costs computed by the data cost module 500 and outputs the messages and MRF hidden states fast and in parallel.
The data cost module 500 receives different input signals according to the applications, and computes the data costs on the basis of the received signals. For example, in case of stereo vision, left and right images are input, and differences between absolute pixel values corresponding to each other for each disparity level are computed as the data costs. In case of motion estimation, frame images at different timings are input, and differences between absolute pixel values corresponding to each other for each motion vector state are computed as the data costs. Further, in case of sound recognition, differences between different phonemes are computed as the data costs.
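As an illustration of the stereo-vision case, the following sketch computes per-disparity absolute-difference data costs; the border handling and array layout are assumptions for illustration, not part of the described system.

```python
import numpy as np

def stereo_data_cost(left, right, num_disp):
    """Data cost D_p(d) = |I_left(x) - I_right(x shifted by d)| for each disparity d.

    left, right : (N1, N0) grayscale images as float arrays
    num_disp    : number of disparity states S
    Returns an (N1, N0, S) array of data costs.
    """
    N1, N0 = left.shape
    D = np.zeros((N1, N0, num_disp), dtype=left.dtype)
    for d in range(num_disp):
        shifted = np.empty_like(right)
        shifted[:, d:] = right[:, :N0 - d]    # right image shifted by d pixels
        shifted[:, :d] = right[:, :1]         # replicate the border for out-of-range pixels
        D[:, :, d] = np.abs(left - shifted)
    return D
```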
As shown in Fig. 6, the FBP module 510 that performs the FBP scanning sequence in the above-described manner includes: a layer buffer 600 that stores messages and data costs of nodes within a region in a previous group, the region adjacent to a current process group; a local buffer 610 that stores messages and data costs of nodes at a previous layer within the current process group, wherein the previous layer is a layer stacked directly below a currently processed layer (hereinafter, referred to as "current layer"); a group initialization unit 620 that initializes a group before the start of message update for nodes within the group; a message update unit 630 that updates messages for each layer within a group; and a state decision unit 640 that receives the messages from the message update unit 630 and estimates hidden states at a final layer. The message update unit 630 receives the data costs and the messages from the layer buffer 600 and the local buffer 610 to perform message update for the nodes at the current layer within the current process group on the basis of the received data costs and messages. The message update unit 630 includes: a message computation unit 632 that receives the data costs and the messages of the previous layer within the current process group and the previous group from the local buffer 610 and the layer buffer 600, computes messages of the current layer within the current process group, and stores the computed messages in the local buffer 610; and a buffer update unit 634 that stores in the layer buffer 600 the messages of the previous layer stored in the local buffer 610.
The data costs and the messages of the previous layer received by the message computation unit 632 are messages of the previous layer in the current process group and in the previous group in the layer-transformed structure. During the message computation, the message computation unit 632 accesses the local buffer 610 to read the messages or the data costs of the previous layer in the current process group and accesses the layer buffer 600 to read the messages or the data costs of the previous group.
When the message update of the current process group is completed, the buffer update unit 634 stores, in the layer buffer 600, the messages and the data costs of the nodes within a region, in the current process group, adjacent to a next group, thereby updating the layer buffer 600.
The state decision unit 640 decides the hidden states of the final layer by using the messages and the data costs stored in the layer buffer 600 and the local buffer 610. The FBP scanning sequence performed by the FBP module 510, in which H0 x H1 processors perform parallel processing on the N1 x N0 (N1 = H1) MRF network, using the data costs computed by the data cost module 500 will be described with reference to Figs. 7 and 8. Referring to Figs. 7 and 8, the entire MRF network is constructed as a dynamic Bayesian network, and the dynamic Bayesian network is divided into a plurality of groups, that is, the number of groups G0 is determined (step S700). Next, message update is performed on messages of nodes within each group. This process is as follows.
First, a group index g0 is set to zero (step S702) and a layer index l is set to one (step S704).
Then, the group initialization unit 620 initializes a first group by reading the data costs from the data cost module 500 and setting the messages to a preset value, e.g., "0" (step S706).
Next, the message computation unit 632 updates the messages of the nodes at a first layer within the first group (step S708). Then, the message computation unit 632 determines whether or not the layer index l is smaller than the number of layers L, i.e., whether or not the message update has been completed for all layers within the first group (step S710).
If it is determined in the step S710 that the layer index l is smaller than the number of layers L, the layer index l is increased by one (step S712), and the message update is performed on the nodes at a second layer. That is, the steps S708 and S710 are repeatedly performed until the layer index l reaches the number of layers L.
If it is determined in the step S710 that the message update is completed for all layers within the first group, i.e., the layer index l is equal to the number of layers L, the state decision unit 640 executes a state estimation function to estimate the MAP hidden states at the final layer (step S714).
Next, the state decision unit 640 determines whether or not one or more groups for which MAP hidden state estimation needs to be performed remain, i.e., whether or not the group index g0 is smaller than the number of groups G0 (step S716).
If it is determined in the step S716 that one or more groups for which MAP hidden state estimation needs to be performed remain, i.e., the group index g0 is smaller than the number of groups G0, the group index g0 is increased by one (step S718), and the message update is performed on the nodes within a second group and the MAP hidden state estimation is performed for the second group. That is, the steps S704 to S716 are repeatedly performed until the group index g0 reaches the number of groups G0.
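The control flow of steps S700 to S718 can be summarized as follows; the callable arguments stand in for the group initialization unit 620, the message update unit 630, and the state decision unit 640, and their signatures are assumptions for illustration only.

```python
def fbp_scan(num_groups, num_layers, init_group, update_layer, decide_states):
    """Outer FBP scanning loop corresponding to steps S700 to S718.

    For each group g0 scanned along the p0 axis, the messages of every layer
    l = 1..L are updated in turn, and the MAP hidden states of the final
    layer are decided before moving on to the next group.
    """
    layer_buffer = {}                  # messages/data costs handed over to the next group
    all_states = []
    for g0 in range(num_groups):                                  # steps S702, S716, S718
        local_buffer = init_group(g0)                             # step S706: read costs, zero messages
        for l in range(1, num_layers + 1):                        # steps S704, S710, S712
            update_layer(g0, l, local_buffer, layer_buffer)       # step S708
        all_states.append(decide_states(local_buffer, layer_buffer))  # step S714
    return all_states
```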
As described above, in accordance with the present invention, unlike the conventional BP algorithm, the dynamic Bayesian network is divided into a plurality of groups, and iterations are performed within a group.
In accordance with the present invention, the edge cost Vhs(dh, ds), the data cost Dh(dh), and the neighboring messages mhs(ds) are needed in order to compute the MAP state dp(L) and the message mhs(ds). The edge cost Vhs(dh, ds) is read according to the above-described FBP scanning sequence. Alternatively, when the edge cost Vhs(dh, ds) is a constant parameter function that is not affected by the position of the node, the edge cost may be computed in advance using a given parameter. In this case, the edge cost Vhs(dh, ds) does not need to be read according to the FBP scanning sequence. Therefore, in case of, e.g., motion estimation or stereo matching where the edge cost Vhs(dh, ds) is computed by a truncated linear function
Vhs(dh, ds) = min(αv·|dh - ds|, Kv)
using parameters αv and Kv, additional memories are not needed.
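A minimal sketch of such a truncated linear edge cost table, with αv as the slope and Kv as the truncation value (the vectorized tabular form is an assumption):

```python
import numpy as np

def truncated_linear_cost(num_states, alpha_v, K_v):
    """Edge cost V(dh, ds) = min(alpha_v * |dh - ds|, K_v) as an S x S table."""
    d = np.arange(num_states)
    return np.minimum(alpha_v * np.abs(d[:, None] - d[None, :]), K_v)
```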
The message computation and group initialization performed by the group initialization unit 620, the message computation unit 632, and the buffer update unit 634 in the FBP module 510 will be described below.
The group initialization unit 620 reads the data costs within the group from the data cost module 500, and initializes the messages. At this time, the data cost at the node p is computed by p = h + g0·H0·[1 0]^T using a group index g0 and a group local index h.
In the message update and state decision performed by the message computation unit 632, it should be noted that a node on the MRF network corresponds to nodes having an offset -[1 0]^T between different iteration layers in the layer-transformed structure, as shown in Equation 5. Accordingly, Nb(h)\s is changed to Nb(h - [1 0]^T)\(s - [1 0]^T) by the layer transformation. Access of the message computation unit 632 to the layer buffer 600 and the local buffer 610 will be described below.
In accordance with the present invention, a local index u0 of a node is in a range -2 ≤ u0 < H0. If a node is within a group, i.e., 0 ≤ u0 < H0, the data cost and the message of the node at the previous layer are read from the local buffer 610. If the node is out of the group, i.e., -2 ≤ u0 < 0, the data cost and the message of the node at the previous layer are read from the layer buffer 600.
The data cost Dh(d) may be read from the previous layer, i.e., from the node h - [1 0]^T (see Equation 4). That is, similarly to the message, if 0 ≤ h0-1 < H0, the data cost is read from the local buffer 610. Meanwhile, if h0-1 < 0, the data cost is read from the layer buffer 600.
The messages computed by the message computation unit 632 are stored in the local buffer 610. The update of the layer buffer 600 performed by the buffer update unit 634 will be described below.
As described above, messages computed in the current process group and required for processing the next group are read from the layer buffer 600. That is, messages satisfying H0-2 ≤ u0 < H0 are read from the layer buffer 600 using the index satisfying -2 ≤ u0 < 0 during the next group processing.
Accordingly, the messages computed during the current process group processing need to be stored in the layer buffer 600 for the next group processing.
Similarly, the data costs satisfying H0-1 ≤ h0 < H0 are stored in the layer buffer 600 using an index satisfying -1 ≤ h0 < 0.
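The buffer routing described above reduces to a range test on the local index u0 and a re-indexing step when a group is finished; the dictionary-based buffers below are an illustrative assumption, not the VLSI buffer organization:

```python
def read_previous_layer(u, local_buffer, layer_buffer, H0):
    """Fetch the previous-layer message/data cost of the node with local index u = (u0, u1).

    0 <= u0 < H0  : node in the current process group      -> local buffer
    -2 <= u0 < 0  : node in the previously processed group -> layer buffer
    """
    u0, u1 = u
    if 0 <= u0 < H0:
        return local_buffer[(u0, u1)]
    return layer_buffer[(u0, u1)]

def update_layer_buffer(local_buffer, layer_buffer, H0):
    """After a group is finished, re-index the entries with H0-2 <= u0 < H0 so that
    the next group can read them with indices -2 <= u0 < 0."""
    for (u0, u1), value in local_buffer.items():
        if H0 - 2 <= u0 < H0:
            layer_buffer[(u0 - H0, u1)] = value
```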
State estimation outputs the MAP state dp(L+1) using the messages muh(dh) at the L-th layer. Here, dp(L+1) is the MAP state of the node (p0-(L+1), p1) obtained by performing L iterations.
The group initialization, message computation, and message update can be represented by the following functions.
1. Group initialization
for each parallel processor h ∈ [(0,0), (H0-1, H1-1)]:
    Dh(d) = Dp(d), where p = h + g0·H0·[1 0]^T
    mhs(d) = 0 for every neighbor s
end
2. Message update
a. Message computation in local buffer
for each parallel processor h ∈ [(0,0), (H0-1, H1-1)]:
    compute mhs(ds) as in Equation 2, reading the previous-layer messages and data costs from the local buffer (0 ≤ u0 < H0) or the layer buffer (-2 ≤ u0 < 0)
end
b. Layer buffer update for next group processing
a) message part
for each layer buffer index h ∈ [(-1,0), (-2, H1-1)]:
    copy the corresponding messages from the local buffer into the layer buffer
end
b) data cost part
for each layer buffer index h ∈ [(-1,0), (-1, H1-1)]:
    copy the corresponding data costs from the local buffer into the layer buffer
end
3. State estimation
dp(L+1) = argmin_d [ Dh(d) + Σ_{u∈Nb(h)} muh(d) ]
An exemplary embodiment of the present invention will be described with reference to Figs. 9 to 11.
As shown in Fig. 9, the FBP module 510 includes a plurality of local buffers, a plurality of layer buffers, and a plurality of systolic array processors PE which access the buffers. Here, the FBP scanning sequence is performed by the FBP module 510 having a parallel VLSI architecture, as shown in Fig. 10. The systolic array processors PE read the messages from the local buffers and the layer buffers of neighboring processors to perform parallel computation. As also shown in Fig. 6, each of the systolic array processors PE includes the message update unit 630 and the state decision unit 640.
Meanwhile, as shown in Fig. 11, a personal computer (PC) may sequentially read necessary messages from the layer and local buffers one by one. That is, the messages for the nodes from h0 = 0 and h1 = 0 to h0 = H0 and h1 = H1 are sequentially computed by reading the layer and local buffers one by one, and then the layer buffer is updated.
The computation of the memory size in accordance with the present invention will be described with reference to Figs. 12A, 12B, and 13.
The messages of nodes satisfying -2 ≤ u0 < 0 are read from the layer buffer. As shown in Fig. 12B, when u0 = -1, messages toward three neighboring nodes in three directions are required for N1 nodes in total, and when u0 = -2, a message in one direction is required. Accordingly, the number of messages that are stored for each layer is 4N1 in total. When the number of states is S and the size of the state cost is B bits, the size of the layer buffer for all of the messages is 4N1LSB bits. Since the local buffer only stores messages in all directions of the current layer, the size thereof is 4N1H0SB bits. Accordingly, the message memory size is 4N1(H0+L)SB bits.
As for the data cost size, only a case where h0 = -1 needs to be considered, as shown in Fig. 13. Accordingly, the layer buffer is N1LSB bits, and the local buffer is N1H0SB bits.
Therefore, the size of the data cost memory is N1(H0+L)SB bits.
As a result, the total memory size according to the present invention is 5N1(H0+L)SB bits.
Since the size of the conventional BP memory is 5N1N0SB bits, it can be seen that the size of the FBP memory is reduced by a factor of N0/(L+H0) when the condition L+H0 < N0 is satisfied.
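A short sketch of this memory comparison, using the 640 x 480, 10-iteration, H0 = 1 example from the summary (the state count S and cost width B are arbitrary illustrative values):

```python
def bp_memory_bits(N1, N0, S, B):
    """Conventional BP: four message arrays plus one data cost array."""
    return 5 * N1 * N0 * S * B

def fbp_memory_bits(N1, H0, L, S, B):
    """FBP: layer and local buffers for messages and data costs."""
    return 5 * N1 * (H0 + L) * S * B

# 640 x 480 image, L = 10 iterations, H0 = 1: reduction of N0 / (L + H0) = 480 / 11, about 44x
N1, N0, L, H0, S, B = 640, 480, 10, 1, 16, 8
print(bp_memory_bits(N1, N0, S, B) / fbp_memory_bits(N1, H0, L, S, B))   # ~43.6
```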
While the invention has been shown and described with respect to the embodiments, it will be understood by those skilled in the art that various changes and modifications may be made without departing from the scope of the invention as defined in the following claims.

Claims

What is claimed is:
1. A belief propagation based fast systolic array system, wherein a Markov random field network is constructed as a dynamic Bayesian network in consideration of an iteration axis, and messages on the dynamic Bayesian network are updated while the Markov random field network is scanned in a specific axis direction.
2. The belief propagation based fast systolic array system of claim 1, comprising: a data cost module that computes a data cost of each node based on external input signals; and a fast belief propagation (FBP) module that estimates and outputs a message and a hidden state of each node using the data cost computed by the data cost module.
3. The belief propagation based fast systolic array system of claim 2, wherein the data cost module receives the external input signals and computes the data cost of each node corresponding to the hidden state thereof .
4. The belief propagation based fast systolic array system of claim 2 , wherein the FBP module performs layer transformation on the dynamic Bayesian network to tilt positions of nodes in a scanning axis direction, classifies the nodes on the transformed network into a plurality of groups, and sequentially processes the groups in the specific axis direction on the Markov random field network.
5. The belief propagation based fast systolic array system of claim 4, wherein the FBP module includes: a group initialization unit that initializes a current process group among the groups; a message update unit that updates messages of nodes within the current process group on a layer basis; and a state decision unit that receives the messages from the message update unit and estimates hidden states of the nodes at a final layer.
6. The belief propagation based fast systolic array system of claim 5, wherein the group initialization unit reads the data costs of the nodes within the current process group from the data cost module and initializes the messages of the nodes within the current process group to a specific value.
7. The belief propagation based fast systolic array system of claim 5, wherein the FBP module includes: a layer buffer that stores therein messages and data costs of nodes within a previously processed group adjacent to the current process group; and a local buffer that stores therein messages and data costs of nodes at a previous layer within the current process group, wherein the message update unit receives the messages and the data costs from the layer buffer and the local buffer, and based thereon, updates messages of nodes at a current layer within the current process group.
8. The belief propagation based fast systolic array system of claim 7, wherein the message update unit includes: a message computation unit that receives the data costs and the messages for the previous layer from the local buffer and the layer buffer, computes messages for the current layer within the current process group, and stores the computed messages in the local buffer; and a buffer update unit that stores in the layer buffer the messages for the previous layer stored in the local buffer.
9. The belief propagation based fast systolic array system of claim 8, wherein the data costs and the messages for the previous layer received by the message computation unit include messages processed at the previous layer in the current process group and in the previous group .
10. The belief propagation based fast systolic array system of claim 8, wherein, during the message computation, the message computation unit accesses the local buffer if the messages or the data costs for the previous layer have been processed in the current process group, and accesses the layer buffer if the messages or the data costs have not been processed in the current process group.
11. The belief propagation based fast systolic array system of claim 8, wherein, when message update of the current process group is completed, the buffer update unit stores messages and data costs of nodes within a region in the current process group, the region adjacent to a next group.
12. The belief propagation based fast systolic array system of claim 8, wherein the state decision unit decides the hidden states of the nodes at the final layer within the current process group using the messages and the data costs stored in the layer buffer and the local buffer.
13. The belief propagation based fast systolic array system of claim 4, wherein the FBP module includes a local buffer that stores therein messages computed within a current process group among the groups, a layer buffer that stores therein messages for each layer within the current process group, and in-group systolic array processors that access the local buffer and the layer buffer.
14. The belief propagation based fast systolic array system of claim 4, wherein the FBP module includes a local buffer that stores therein messages computed within a current process group among the groups, a layer buffer that stores therein messages for each layer within the current process group, and a processor that sequentially accesses the layer buffer and the local buffer.
15. The belief propagation based fast systolic array system of claim 13, wherein the messages stored in the local buffer are messages for a previous layer and necessary for message computation for a current layer, and the messages stored in the layer buffer are necessary for message computation of a next group.
16. The belief propagation based fast systolic array system of claim 14, wherein the messages stored in the local buffer are messages for a previous layer and necessary for message computation for a current layer, and the messages stored in the layer buffer are necessary for message computation of a next group .
17. A message processing method using a belief propagation based fast systolic array system, the method comprising: constructing a Markov random field network as a dynamic Bayesian network in consideration of an iteration axis; and scanning the Markov random field network in a specific axis direction to update messages on the dynamic Bayesian network.
18. The message processing method of claim 17, further comprising: computing a data cost of each node using external input signals; and estimating a hidden state of each node on the dynamic Bayesian network using the data costs.
19. The message processing method of claim 18, wherein the data cost is a value corresponding to the hidden state.
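As an illustration only (not recited in the claims): in a stereo matching application each node's hidden state may be a disparity d, and a typical data cost for the node at pixel position x is the absolute intensity difference |I_left(x) - I_right(x - d)| between the two input images, so that the data cost computed from the external input signals is a value that depends on the chosen hidden state.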
20. The message processing method of claim 17, wherein scanning the Markov random field network includes: performing layer transformation on the dynamic Bayesian network to tilt positions of nodes in a scanning axis direction; classifying the nodes on the transformed network into a plurality of groups; and sequentially processing the groups in the specific axis direction on the Markov random field network.
21. The message processing method of claim 20, wherein, in performing layer transformation, the dynamic Bayesian network is transformed by vertically rearranging nodes therein.
22. The message processing method of claim 21, wherein the nodes in the dynamic Bayesian network are vertically rearranged according to Equation 1:
[Equation 1]
p(l) = p - lb,
wherein b equals [1 0]^T, and p and l represent a node and a layer, respectively.
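As a worked example of Equation 1 with illustrative values: taking b = [1 0]^T, a node at p = [3 5]^T considered at layer l = 2 is relocated to p(2) = [3 5]^T - 2[1 0]^T = [1 5]^T. Each additional layer therefore shifts a node by one position along the scanning axis, which yields the tilted node arrangement that is grouped and scanned in claim 20.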
23. The message processing method of claim 20, wherein sequentially processing the groups includes: initializing nodes within a current process group among the groups ; updating messages of nodes within the current process group on a layer basis; and estimating maximum a posteriori states for the current process group at a final layer.
24. The message processing method of claim 23, wherein initializing nodes includes: obtaining a data cost of each node within the current process group; and setting a message of each node within the current process group to a specific value.
25. The message processing method of claim 23, wherein, in updating messages, the messages of the nodes within the current process group are updated by using a layer buffer for storing therein messages and data costs of nodes within a previous group adjacent to the current process group and a local buffer for storing therein messages and data costs of nodes at a previous layer within the current process group.
26. The message processing method of claim 25, wherein, when messages of nodes at a layer within the current process group are computed, the newly computed messages for the layer are stored in the local buffer.
27. The message processing method of claim 25, wherein the messages for the previous layer stored in the local buffer are stored in the layer buffer.
28. The message processing method of claim 25, further comprising: updating, between the completion of message update for the nodes within the current process group and the start of message update for a next group, the layer buffer using the messages and the data costs of nodes within a region in the current process group, the region adjacent to the next group.
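Read together, claims 23 to 28 describe a per-group pipeline: initialize the group, update its messages layer by layer against the layer buffer and the local buffer, decide the maximum a posteriori states at the final layer, and refresh the layer buffer for the next group. A minimal control-flow skeleton in C, with placeholder stage bodies and assumed names and counts, might look as follows:

#include <stdio.h>

#define NUM_GROUPS 3   /* assumed number of groups */
#define LAYERS     3   /* assumed number of layers */

/* Placeholder stages; a real implementation would operate on the
   local buffer and the layer buffer as recited in claims 24 to 28. */
static void init_group(int g)            { printf("init group %d\n", g); }
static void update_messages(int g, int l){ printf("  group %d, layer %d\n", g, l); }
static void decide_map_states(int g)     { printf("  MAP states of group %d\n", g); }
static void update_layer_buffer(int g)   { printf("  hand over to group %d\n", g + 1); }

int main(void)
{
    for (int g = 0; g < NUM_GROUPS; ++g) {
        init_group(g);                      /* claim 24 */
        for (int l = 0; l < LAYERS; ++l)
            update_messages(g, l);          /* claims 25 to 27 */
        decide_map_states(g);               /* claim 23 */
        update_layer_buffer(g);             /* claim 28 */
    }
    return 0;
}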
PCT/KR2008/003277 2007-06-28 2008-06-12 Belief propagation based fast systolic array system and message processing method using the same Ceased WO2009002031A2 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
KR1020070064351A KR100920229B1 (en) 2007-06-28 2007-06-28 BP-based high-speed systolic array system and message processing method using the same
KR10-2007-0064351 2007-06-28

Publications (2)

Publication Number Publication Date
WO2009002031A2 WO2009002031A2 (en)
WO2009002031A9 WO2009002031A9 (en) 2009-02-26

Family

ID=40186141

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/KR2008/003277 Ceased WO2009002031A2 (en) 2007-06-28 2008-06-12 Belief propagation based fast systolic array system and message processing method using the same

Country Status (2)

Country Link
KR (1) KR100920229B1 (en)
WO (1) WO2009002031A2 (en)


Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20110011463A (en) 2009-07-28 2011-02-08 엘지이노텍 주식회사 Light unit and display device having same

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR100232096B1 (en) * 1996-09-13 1999-12-01 박래홍 Structure of VLSI for Discrete Wavelet Transform
KR100374784B1 (en) 2000-07-19 2003-03-04 학교법인 포항공과대학교 A system for maching stereo image in real time

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104156964B (en) * 2014-08-14 2017-03-08 陈荣元 Remote sensing image region segmentation method integrating MRF and Bayesian network
CN106780120A (en) * 2016-12-06 2017-05-31 广东电网有限责任公司电网规划研究中心 Power transmission and transformation project cost index processing method and device
CN108269273A (en) * 2018-02-12 2018-07-10 福州大学 Belief propagation method for epipolar line matching during panoramic longitudinal roaming
CN108269273B (en) * 2018-02-12 2021-07-27 福州大学 A Belief Propagation Method for Epipolar Matching in Panoramic Longitudinal Roaming

Also Published As

Publication number Publication date
WO2009002031A9 (en) 2009-02-26
KR100920229B1 (en) 2009-10-05
KR20090000347A (en) 2009-01-07


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application (Ref document number: 08766239; Country of ref document: EP; Kind code of ref document: A2)
DPE1 Request for preliminary examination filed after expiration of 19th month from priority date (pct application filed from 20040101)
NENP Non-entry into the national phase (Ref country code: DE)
122 Ep: pct application non-entry in european phase (Ref document number: 08766239; Country of ref document: EP; Kind code of ref document: A2)