Borisyuk et al., 2009 - Google Patents
A neural model of selective attention and object segmentation in the visual scene: An approach based on partial synchronization and star-like architecture of connections (Borisyuk et al., 2009)
- Document ID
- 4150390021533533670
- Author
- Borisyuk R
- Kazanovich Y
- Chik D
- Tikhanoff V
- Cangelosi A
- Publication year
- 2009
- Publication venue
- Neural Networks
Snippet
A brain-inspired computational system is presented that allows sequential selection and processing of objects from a visual scene. The system is comprised of three modules. The selective attention module is designed as a network of spiking neurons of the Hodgkin …
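The title and snippet describe the mechanism only at a high level: a star-like network in which attention corresponds to partial synchronization between a central element and one group of peripheral units. As a minimal sketch of that idea, the code below uses simple Kuramoto-style phase oscillators instead of the Hodgkin-Huxley-type spiking neurons the paper's attention module actually employs; all names, parameter values, and the two-object setup are illustrative assumptions, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two candidate "objects", each a group of peripheral oscillators (POs) sharing a
# natural frequency; one central oscillator (CO) plays the role of the attention
# focus. Star-like architecture: POs couple only to the CO, never to each other.
n_per_object = 20
omega_po = np.concatenate([np.full(n_per_object, 2.0),    # object A
                           np.full(n_per_object, 4.5)])   # object B
omega_po += 0.05 * rng.standard_normal(omega_po.size)     # small frequency jitter
omega_co = 2.0          # CO natural frequency starts near object A
w_po_to_co = 1.5        # feedback coupling PO -> CO (through the PO mean field)
w_co_to_po = 1.0        # forward coupling CO -> PO

theta_po = 2.0 * np.pi * rng.random(omega_po.size)
theta_co = 0.0
dt, steps = 0.01, 5000
lock_acc = np.zeros(omega_po.size, dtype=complex)

for _ in range(steps):
    diff = theta_po - theta_co
    # CO is pulled by the mean field of all POs; each PO is pulled by the CO alone.
    dtheta_co = omega_co + w_po_to_co * np.mean(np.sin(diff))
    dtheta_po = omega_po - w_co_to_po * np.sin(diff)
    theta_co += dt * dtheta_co
    theta_po += dt * dtheta_po
    lock_acc += np.exp(1j * diff)

# Time-averaged phase locking of each PO with the CO: values near 1 mean the PO is
# synchronized with the CO ("attended"); drifting POs stay well below 1.
locking = np.abs(lock_acc) / steps
print("mean locking with CO, object A:", round(float(locking[:n_per_object].mean()), 2))
print("mean locking with CO, object B:", round(float(locking[n_per_object:].mean()), 2))
```

In this toy setting the CO entrains only the group whose frequency lies within its locking range, so one object is "attended" (high locking index) while the other drifts, which is the partial-synchronization idea named in the title.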
Classifications
- G—PHYSICS
  - G06—COMPUTING; CALCULATING; COUNTING
    - G06K—RECOGNITION OF DATA; PRESENTATION OF DATA; RECORD CARRIERS; HANDLING RECORD CARRIERS
      - G06K9/00—Methods or arrangements for reading or recognising printed or written characters or for recognising patterns, e.g. fingerprints
        - G06K9/36—Image preprocessing, i.e. processing the image information without deciding about the identity of the image
          - G06K9/46—Extraction of features or characteristics of the image
- G—PHYSICS
  - G06—COMPUTING; CALCULATING; COUNTING
    - G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
      - G06T2207/00—Indexing scheme for image analysis or image enhancement
        - G06T2207/10—Image acquisition modality
          - G06T2207/10024—Color image
- G—PHYSICS
  - G06—COMPUTING; CALCULATING; COUNTING
    - G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
      - G06T2207/00—Indexing scheme for image analysis or image enhancement
        - G06T2207/20—Special algorithmic details
          - G06T2207/20112—Image segmentation details
- G—PHYSICS
  - G06—COMPUTING; CALCULATING; COUNTING
    - G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
      - G06T2207/00—Indexing scheme for image analysis or image enhancement
        - G06T2207/10—Image acquisition modality
          - G06T2207/10016—Video; Image sequence
- G—PHYSICS
  - G06—COMPUTING; CALCULATING; COUNTING
    - G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
      - G06T7/00—Image analysis
        - G06T7/20—Analysis of motion
- G—PHYSICS
  - G06—COMPUTING; CALCULATING; COUNTING
    - G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
      - G06T2207/00—Indexing scheme for image analysis or image enhancement
        - G06T2207/30—Subject of image; Context of image processing
- G—PHYSICS
  - G06—COMPUTING; CALCULATING; COUNTING
    - G06K—RECOGNITION OF DATA; PRESENTATION OF DATA; RECORD CARRIERS; HANDLING RECORD CARRIERS
      - G06K9/00—Methods or arrangements for reading or recognising printed or written characters or for recognising patterns, e.g. fingerprints
        - G06K9/62—Methods or arrangements for recognition using electronic means
          - G06K9/6217—Design or setup of recognition systems and techniques; Extraction of features in feature space; Clustering techniques; Blind source separation
- G—PHYSICS
  - G06—COMPUTING; CALCULATING; COUNTING
    - G06K—RECOGNITION OF DATA; PRESENTATION OF DATA; RECORD CARRIERS; HANDLING RECORD CARRIERS
      - G06K9/00—Methods or arrangements for reading or recognising printed or written characters or for recognising patterns, e.g. fingerprints
        - G06K9/00624—Recognising scenes, i.e. recognition of a whole field of perception; recognising scene-specific objects
- G—PHYSICS
  - G06—COMPUTING; CALCULATING; COUNTING
    - G06N—COMPUTER SYSTEMS BASED ON SPECIFIC COMPUTATIONAL MODELS
      - G06N3/00—Computer systems based on biological models
        - G06N3/02—Computer systems based on biological models using neural network models
- G—PHYSICS
  - G06—COMPUTING; CALCULATING; COUNTING
    - G06F—ELECTRICAL DIGITAL DATA PROCESSING
      - G06F17/00—Digital computing or data processing equipment or methods, specially adapted for specific functions
        - G06F17/30—Information retrieval; Database structures therefor; File system structures therefor
          - G06F17/30781—Information retrieval; Database structures therefor; File system structures therefor of video data
            - G06F17/30784—Information retrieval; Database structures therefor; File system structures therefor of video data using features automatically derived from the video content, e.g. descriptors, fingerprints, signatures, genre
              - G06F17/30799—Information retrieval; Database structures therefor; File system structures therefor of video data using features automatically derived from the video content, e.g. descriptors, fingerprints, signatures, genre using low-level visual features of the video content
Similar Documents
| Publication | Title |
|---|---|
| Boccignone et al. | Modelling gaze shift as a constrained random walk |
| Vikram et al. | A saliency map based on sampling an image into random rectangular regions of interest |
| Webster | Evolving concepts of sensory adaptation |
| Itti et al. | Comparison of feature combination strategies for saliency-based visual attention systems |
| Geisler et al. | Edge co-occurrence in natural images predicts contour grouping performance |
| Sun et al. | Object-based visual attention for computer vision |
| Yu et al. | An object-based visual attention model for robotic applications |
| Theobald et al. | Dynamics of optomotor responses in Drosophila to perturbations in optic flow |
| Frintrop | Computational visual attention |
| Al Mudawi et al. | Machine learning based on body points estimation for sports event recognition |
| Ogmen et al. | The geometry of visual perception: Retinotopic and nonretinotopic representations in the human visual system |
| VanRullen et al. | Feed-forward contour integration in primary visual cortex based on asynchronous spike propagation |
| Borisyuk et al. | A neural model of selective attention and object segmentation in the visual scene: An approach based on partial synchronization and star-like architecture of connections |
| Hu et al. | A recurrent neural model for proto-object based contour integration and figure-ground segregation |
| Ouerhani | Visual attention: from bio-inspired modeling to real-time implementation |
| Xu et al. | Mimicking visual searching with integrated top down cues and low-level features |
| Molin et al. | How is motion integrated into a proto-object based visual saliency model? |
| Ozimek et al. | A space-variant visual pathway model for data efficient deep learning |
| Ge et al. | The application and design of neural computation in visual perception |
| Ban et al. | A face detection using biologically motivated bottom-up saliency map model and top-down perception model |
| Shen et al. | Modeling Drosophila vision neural pathways to detect weak moving targets from cluttered backgrounds |
| Kounte et al. | Bottom Up Approach for Modelling Visual Attention Using Saliency Map in Machine vision: A Computational Cognitive NeuroScience Approach |
| Díaz-Pernas et al. | Learning and surface boundary feedbacks for colour natural scene perception |
| White et al. | HDR luminance normalization via contextual facilitation in highly recurrent neuromorphic spiking networks |
| Kounte et al. | Top-Down Approach for Modelling Visual Attention using Scene Context Features in Machine Vision |