
CN117058680B - A scanning and reading method, system and scanning and reading pen based on big data

A scanning and reading method, system and scanning and reading pen based on big data

Info

Publication number
CN117058680B
CN117058680B (granted publication of application CN202310859896.8A; published as CN117058680A)
Authority
CN
China
Prior art keywords
module
scanning
image
main control
information
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202310859896.8A
Other languages
Chinese (zh)
Other versions
CN117058680A (en)
Inventor
刘福星
周业明
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangzhou Simware Telecommunication Technology Co ltd
Original Assignee
Guangzhou Simware Telecommunication Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangzhou Simware Telecommunication Technology Co ltd filed Critical Guangzhou Simware Telecommunication Technology Co ltd
Priority to CN202310859896.8A
Publication of CN117058680A
Application granted
Publication of CN117058680B
Legal status: Active

Links

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N1/00Scanning, transmission or reproduction of documents or the like, e.g. facsimile transmission; Details thereof
    • H04N1/04Scanning arrangements, i.e. arrangements for the displacement of active reading or reproducing elements relative to the original or reproducing medium, or vice versa
    • H04N1/10Scanning arrangements, i.e. arrangements for the displacement of active reading or reproducing elements relative to the original or reproducing medium, or vice versa using flat picture-bearing surfaces
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V30/00Character recognition; Recognising digital ink; Document-oriented image-based pattern recognition
    • G06V30/10Character recognition
    • G06V30/14Image acquisition
    • G06V30/142Image acquisition using hand-held instruments; Constructional details of the instruments
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V30/00Character recognition; Recognising digital ink; Document-oriented image-based pattern recognition
    • G06V30/10Character recognition
    • G06V30/16Image preprocessing
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V30/00Character recognition; Recognising digital ink; Document-oriented image-based pattern recognition
    • G06V30/10Character recognition
    • G06V30/24Character recognition characterised by the processing or recognition method
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V30/00Character recognition; Recognising digital ink; Document-oriented image-based pattern recognition
    • G06V30/10Character recognition
    • G06V30/26Techniques for post-processing, e.g. correcting the recognition result
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L67/00Network arrangements or protocols for supporting network services or applications
    • H04L67/01Protocols
    • H04L67/10Protocols in which an application is distributed across nodes in the network

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Signal Processing (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Character Discrimination (AREA)
  • Facsimiles In General (AREA)

Abstract

The invention relates to the technical field of scanning and reading pens, and in particular discloses a scanning and reading method and system based on big data, together with a scanning and reading pen. The system comprises a scanning module for scanning image information; a main control module for outputting control instructions; an image preprocessing module for preprocessing the image information scanned by the scanning module; a dual-mode communication module for carrying out wireless data communication and sending the preprocessed image information; a cloud server; and a sounding module. By working online through the dual-mode communication module, the complex operations are carried out on the cloud server, which lowers the hardware configuration of the scanning pen terminal and reduces the terminal cost. Communication over LTE CAT1 or Wi-Fi allows the user to use the pen anytime and anywhere, removing the limitation of the place of use, and the scanning pen can also learn autonomously, adapt to different fonts and reduce inaccurate recognition.

Description

Big data-based scanning method, system and scanning pen
Technical Field
The invention relates to the technical field of scanning and reading pens, in particular to a scanning and reading method and system based on big data and a scanning and reading pen.
Background
Big data technologies refer to technologies and tools for processing, analyzing and managing large-scale data sets. With the popularization of the Internet and the Internet of Things, the volume of data keeps growing, and so does the demand for processing and analyzing it; big data technology arose to meet this demand. Existing scanning pens are based mainly on image acquisition and recognition, and this approach has a number of defects in use, for example:
First, most scanning pen products adopt a solution that combines offline and online operation, which places relatively high demands on hardware such as the processor and memory, so the product is expensive and has no competitive advantage in the market. Second, a scanning pen that relies on Wi-Fi network communication is limited in places without Wi-Fi, such as outdoors. In addition, image recognition is aimed mainly at printed publications, and stains on the publication and differences between typefaces affect recognition.
To solve the above problems, a scanning and reading method, system and scanning and reading pen based on big data are proposed.
Disclosure of Invention
The invention aims to provide a scanning and reading method, system and scanning and reading pen based on big data that lower the configuration requirements on the application processor, support LTE CAT1 and Wi-Fi wireless communication, and remove the limitation of the place of use, which is beneficial to the popularization of the product and improves the user experience.
In order to achieve the above purpose, the invention provides the following technical scheme. A scanning and reading system based on big data comprises:
the scanning module is used for scanning the image information;
the main control module is used for outputting control instructions;
The image preprocessing module is used for preprocessing the image information scanned by the scanning module;
the dual-mode communication module is used for carrying out wireless data communication and sending the preprocessed image information;
the cloud server is connected with the dual-mode communication module and is used for providing big data services, identifying the preprocessed image and feeding back sounding data;
and the sounding module is used for producing sound according to the sounding data fed back by the cloud server.
As a preferred embodiment of the present invention, the system further comprises:
the local recognition module, which is used for recognizing the scanned image information when offline and comparing it with the local information to retrieve the sound information;
the storage module is used for storing local data, and the local data comprises dictionary data packets.
As a preferred embodiment of the present invention, the image preprocessing module includes a cropping module for cropping an image and a format conversion module for performing format conversion.
As a preferred embodiment of the present invention, the dual-mode communication module includes an LTE CAT1 wireless communication module and a Wi-Fi module.
As a preferred embodiment of the present invention, the cloud server includes:
the image processing module is used for carrying out gray level processing, brightness contrast adjustment, image clipping and image under color removal on the image;
the OCR module is used for identifying the processed image and outputting an identification result;
the data packet generation module is used for translation, word segmentation and TTS processing of the recognized text;
And the database is used for storing word stock data and providing references for the OCR module.
The big-data-based scanning and reading pen comprises a main control board and, connected to it, a Wi-Fi module, a scanning head, a loudspeaker, a microphone, an LCD display screen, a battery and a TF card. The main control board comprises an LTE CAT1 wireless communication unit for data communication between the scanning pen and the background server; the TF card stores the resource files required for point reading; the LTE CAT1 wireless communication unit and the Wi-Fi module communicate with the cloud server; and the LTE CAT1 wireless communication unit comprises a USIM unit and an antenna.
As a preferred embodiment of the present invention, the scanning pen further comprises a light supplementing lamp connected with the main control board and an infrared light supplementing lamp connected with the main control board.
As a preferred embodiment of the present invention, the scanning pen further includes an OTA remote software upgrade module and a data-line online software upgrade module.
A scanning and reading method based on big data, comprising the following steps:
Step S1, when the scanning pen scans characters, a scanning base point is determined from the first identifiable point of the data, and a first scanned image and a first base-point moving image are generated;
Step S2, a directional operation of the user on the target object is detected through the first base-point moving image, the directional operation being used for determining the click-to-read intention to be recognized;
Step S3, image recognition: the scanned image is recognized and the corresponding voice information is retrieved from the local database; if the image cannot be recognized, it is uploaded to the cloud, where the information is recognized by means of the cloud database; if it still cannot be recognized, the image information is recorded, retained and uploaded to the cloud server;
Step S4, the text is scanned again and steps S1-S3 are repeated until the text information is confirmed as recognized; a link is then established between the image information previously uploaded to the cloud server and the voice information corresponding to the text information, the image features are analyzed, a new identification packet is generated, and the new identification packet is transmitted to the scanning pen for a data upgrade.
In the method, after the scanning pen finishes scanning and photographing, it only performs a series of image-data cutting and format-conversion steps; image recognition, feature analysis and generation of the new data packet are completed online by the cloud server.
Compared with the prior art, the invention has the beneficial effects that:
By working online through the dual-mode communication module, the complex operations are carried out on the cloud server, which lowers the hardware configuration of the scanning pen terminal and reduces the terminal cost. Communication over LTE CAT1 or Wi-Fi allows the user to use the pen anytime and anywhere, removing the limitation of the place of use, and the scanning pen can also learn autonomously, adapt to different fonts and reduce inaccurate recognition.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings needed in the embodiments or in the description of the prior art are briefly introduced below; it is obvious that the drawings in the following description show only some embodiments of the present invention.
FIG. 1 is a first block diagram of a system according to the present invention;
FIG. 2 is a second block diagram of the system of the present invention;
FIG. 3 is a block diagram of an image preprocessing module of the system of the present invention;
FIG. 4 is a diagram of a cloud server architecture of the system of the present invention;
FIG. 5 is a block diagram of the scanning and reading pen according to the present invention;
fig. 6 is a flow chart of the method of the present invention.
Detailed Description
In order to make the technical problems to be solved, the technical solutions and the beneficial effects clearer, the invention is further described in detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are for purposes of illustration only and are not intended to limit the scope of the invention.
Example 1
Referring to FIG. 1, the technical scheme adopted by the present invention to achieve the above purpose is described in detail below.
The invention provides a big-data-based scanning and reading system, which comprises:
A scanning module 100 for scanning image information;
The main control module 200 is used for outputting control instructions;
The image preprocessing module 300 is used for preprocessing the image information scanned by the scanning module;
the dual-mode communication module 400 is used for performing wireless data communication and sending the preprocessed image information;
the cloud server 500 is connected with the dual-mode communication module and is used for providing big data services, identifying the preprocessed image and feeding back sounding data;
and the sounding module 600 is used for producing sound according to the sounding data fed back by the cloud server.
When the device is used, it is first started through the control of the main control module and connects to the cloud server through the dual-mode communication module. The scanning module then scans the image information and feeds it back to the image preprocessing module for image-cutting and format-conversion preprocessing; to make the image information easy to identify, it is numbered and named. The image information is then uploaded to the cloud server, which processes it further and recognizes the text information; the result is fed back to the main control module, and the main control module controls the sounding module to produce sound according to this information.
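As an illustration of this online flow, the following minimal device-side sketch (in Python) shows scanning, preprocessing, numbering and naming, upload and playback; the helper names, the cloud endpoint URL and the response fields are assumptions made for illustration only and do not reflect the actual firmware interface.

    # Sketch of the online scan-and-read flow (assumed helper names and
    # endpoint; not the actual firmware API).
    import io
    import uuid
    import requests
    from PIL import Image

    CLOUD_URL = "https://example-cloud-server/recognize"  # placeholder endpoint

    def preprocess(raw: Image.Image) -> bytes:
        """Crop and format-convert the scanned frame before upload."""
        cropped = raw.crop(raw.getbbox() or (0, 0, raw.width, raw.height))
        buf = io.BytesIO()
        cropped.convert("L").save(buf, format="JPEG", quality=85)  # format conversion
        return buf.getvalue()

    def scan_and_speak(raw_frame: Image.Image, play_audio) -> None:
        image_id = uuid.uuid4().hex  # number/name the image so it can be tracked
        resp = requests.post(
            CLOUD_URL,
            files={"image": (f"{image_id}.jpg", preprocess(raw_frame), "image/jpeg")},
            timeout=10,
        )
        resp.raise_for_status()
        result = resp.json()  # assumed response: {"text": ..., "tts_audio": ...}
        play_audio(result["tts_audio"])  # main control drives the sounding module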
Further, referring to FIG. 2, the system further includes:
The local recognition module 700 is used for recognizing scanned image information when offline and comparing the scanned image information with local information to retrieve sound information;
A storage module 800 for storing local data, the local data comprising dictionary data packets;
Specifically, the device is started through the control of the main control module and connects to the cloud server through the dual-mode communication module. The scanning module scans the image information and feeds it back to the image preprocessing module for image-cutting and format-conversion preprocessing, and the image information is numbered and named so that it can be identified easily. The characters are then recognized by the local recognition module in combination with the information stored in the storage module, and the main control module controls the sounding module to produce sound according to this information. The provision of the local recognition module allows the user to choose fully offline use.
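A minimal sketch of this offline lookup path is given below, assuming the dictionary data packet in the storage module is a simple mapping from recognized text to locally stored sound files; the file path and field layout are illustrative assumptions.

    # Sketch of the offline fallback: look up locally recognized text in the
    # dictionary data packet held by the storage module (file layout and field
    # names are assumptions).
    import json
    from pathlib import Path

    DICT_PACKET = Path("/tf_card/dictionary_packet.json")  # assumed local data file

    def load_local_dictionary() -> dict:
        with DICT_PACKET.open(encoding="utf-8") as fh:
            return json.load(fh)  # e.g. {"apple": "audio/apple.mp3", ...}

    def lookup_offline(recognized_text: str) -> str | None:
        """Return the path of the matching sound file, or None if not stored locally."""
        entries = load_local_dictionary()
        return entries.get(recognized_text.strip().lower())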
Further, referring to fig. 3, the image preprocessing module 300 includes a cropping module 310 for cropping an image and a format conversion module 320 for converting a format.
Further, the dual-mode communication module 400 includes an LTE CAT1 wireless communication module and a Wi-Fi module. With this dual-mode communication the device goes online over LTE CAT1 or Wi-Fi, so the user can use it anytime and anywhere and the limitation of the place of use is removed.
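One possible link-selection policy for the dual-mode communication module is sketched below: prefer Wi-Fi when it is associated and fall back to LTE CAT1 otherwise. The Link objects, method names and fallback behaviour are illustrative assumptions, not a vendor API.

    # Sketch of a possible dual-mode link-selection policy (assumed, not the
    # actual modem interface).
    from dataclasses import dataclass

    @dataclass
    class Link:
        name: str
        connected: bool

    def choose_link(wifi: Link, lte_cat1: Link) -> Link | None:
        """Prefer Wi-Fi when available, otherwise fall back to LTE CAT1."""
        if wifi.connected:
            return wifi
        if lte_cat1.connected:
            return lte_cat1
        return None  # fully offline: fall back to the local recognition module

    # Example: no Wi-Fi outdoors, LTE CAT1 keeps the pen online.
    link = choose_link(Link("wifi", False), Link("lte_cat1", True))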
Further, referring to fig. 4, the cloud server 600 includes:
The image processing module 610 is used for performing gray level processing, brightness contrast adjustment, image clipping and image under color removal processing on the image;
an OCR recognition module 620 for recognizing the processed image and outputting a recognition result;
A data packet generation module 630, configured to translate, word-segment, and TTS process (text-to-speech) the identified text;
A database module 640 for storing word stock data for providing references to the OCR module;
The complex image processing is completed by the image processing module in the cloud server, and OCR recognition is realized by analyzing the data in the cloud. Placing the complex operations on the cloud server lowers the hardware configuration of the scanning pen terminal and reduces the terminal cost.
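The cloud-side pipeline described above can be sketched roughly as follows; OpenCV stands in for the image processing module and pytesseract for the unnamed OCR engine, while word segmentation and TTS are placeholders, so this is only an assumed approximation of the server-side processing.

    # Rough sketch of the cloud-side pipeline: image processing, OCR, packet
    # generation (engines are stand-ins, not the ones used by the system).
    import cv2
    import numpy as np
    import pytesseract

    def process_image(img: np.ndarray) -> np.ndarray:
        gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)               # gray-level processing
        adjusted = cv2.convertScaleAbs(gray, alpha=1.3, beta=10)   # brightness/contrast adjustment
        _, clean = cv2.threshold(adjusted, 0, 255,
                                 cv2.THRESH_BINARY + cv2.THRESH_OTSU)  # strip the page under-color
        return clean

    def build_packet(img: np.ndarray, synthesize_speech) -> dict:
        text = pytesseract.image_to_string(process_image(img), lang="chi_sim+eng")
        return {
            "text": text,
            "tokens": text.split(),            # crude word-segmentation placeholder
            "audio": synthesize_speech(text),  # TTS processing (engine not specified)
        }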
Example 2
Referring to FIG. 5, the technical scheme adopted by the present invention to achieve the above purpose is described in detail below.
The big-data-based scanning and reading pen comprises a main control board 1 and, connected to it, a Wi-Fi module 2, a scanning head 3, a loudspeaker 4, a microphone 5, an LCD display screen 6, a battery 7 and a TF card 8. The main control board comprises an LTE CAT1 wireless communication unit 11 for data communication between the scanning pen and the background server; the TF card stores the resource files required for point reading; the LTE CAT1 wireless communication unit 11 and the Wi-Fi module 2 communicate with the cloud server; and the LTE CAT1 wireless communication unit 11 comprises a USIM unit and an antenna. The pen further comprises a light supplementing lamp connected with the main control board and an infrared light supplementing lamp connected with the main control board, as well as an OTA remote software upgrade module and a data-line online software upgrade module.
The main control board is provided with a single-core processor; the processor may be an ARM Cortex-R5 with a main frequency of 832 MHz. To optimize the data transmission effect, an antenna is also provided on the LTE CAT1 wireless communication unit of the scanning and reading pen.
Example 3
Referring to FIG. 6, the technical scheme adopted by the present invention to achieve the above purpose is described in detail below.
A scanning and reading method based on big data, comprising the following steps:
Step S1, when the scanning pen scans characters, a scanning base point is determined from the first identifiable point of the data, and a first scanned image and a first base-point moving image are generated;
Step S2, a directional operation of the user on the target object is detected through the first base-point moving image, the directional operation being used for determining the click-to-read intention to be recognized;
Step S3, image recognition: the scanned image is recognized and the corresponding voice information is retrieved from the local database; if the image cannot be recognized, it is uploaded to the cloud, where the information is recognized by means of the cloud database; if it still cannot be recognized, the image information is recorded, retained and uploaded to the cloud server;
Step S4, the text is scanned again and steps S1-S3 are repeated until the text information is confirmed as recognized; a link is then established between the image information previously uploaded to the cloud server and the voice information corresponding to the text information, the image features are analyzed, a new identification packet is generated, and the new identification packet is transmitted to the scanning pen for a data upgrade.
In this method, the target area can be identified from the base-point moving image: the area along which the base point moves in an approximately straight line is taken as the target area. This makes it easy to determine the target area and the scanning intention, and facilitates the later image stitching and image-area selection.
Furthermore, in this method, after the scanning and reading pen finishes scanning and photographing, it only needs to perform a series of image-data cutting and format-conversion steps; image recognition, feature analysis and generation of the new data packet are completed online by the cloud server.
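The fallback and self-learning loop of steps S1-S4 can be summarized in the following sketch; every helper function is a placeholder for a component the description leaves unspecified.

    # Sketch of the recognition fallback and self-learning loop of steps S1-S4
    # (all helpers are placeholders, not actual system components).
    def recognize_and_learn(image, local_db, cloud):
        text = local_db.recognize(image)        # step S3: try the local database first
        if text is None:
            text = cloud.recognize(image)       # fall back to the cloud database
        if text is None:
            cloud.store_unrecognized(image)     # record and retain the image for later linking
            return None                         # step S4: the user scans the text again
        return text

    def learn_from_rescan(cloud, pending_image, confirmed_text, voice_info):
        """After a successful rescan, link the stored image to the confirmed text,
        analyze its features and push a new identification packet to the pen."""
        features = cloud.analyze_features(pending_image)
        packet = cloud.build_identification_packet(confirmed_text, voice_info, features)
        cloud.push_update_to_pen(packet)        # data upgrade on the scanning pen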
In summary, the invention works online through the dual-mode communication module and carries out the complex operations on the cloud server, thereby lowering the hardware configuration of the scanning pen terminal and reducing the terminal cost. Communication over LTE CAT1 or Wi-Fi allows the user to use the pen anytime and anywhere and removes the limitation of the place of use; the scanning pen can also learn autonomously, adapt to different fonts and reduce inaccurate recognition.
The processor fetches instructions from the memory, decodes them and performs the corresponding operations as the instructions require, generating a series of control commands so that every part of the computer works automatically, continuously and in coordination as an organic whole; in this way programs and data are input, operations are carried out and results are output. The arithmetic or logical operations produced in this process are completed by the arithmetic unit. The memory includes a read-only memory (ROM) used for storing computer programs, and a protection device is arranged outside the memory.
For example, a computer program may be split into one or more modules, one or more modules stored in memory and executed by a processor to perform the present invention. One or more of the modules may be a series of computer program instruction segments capable of performing specific functions for describing the execution of the computer program in the terminal device.
It will be appreciated by those skilled in the art that the foregoing description of the service device is merely an example and is not meant to be limiting, and may include more or fewer components than the foregoing description, or may combine certain components, or different components, such as may include input-output devices, network access devices, buses, etc.
The processor may be a central processing unit (CPU), another general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like. The general-purpose processor may be a microprocessor, or the processor may be any conventional processor or the like; it is the control center of the terminal device described above and connects the various parts of the entire user terminal using various interfaces and lines.
The memory may be used for storing computer programs and/or modules, and the processor implements the various functions of the terminal device by running or executing the computer programs and/or modules stored in the memory and invoking the data stored in the memory. The memory may mainly include a program storage area and a data storage area: the program storage area may store an operating system and the application programs required for at least one function (such as an information collection template display function or a product information distribution function), and the data storage area may store data created according to the use of the system (such as product information collection templates corresponding to different product types, or product information to be distributed by different product providers). In addition, the memory may include a high-speed random access memory and may also include a non-volatile memory, such as a hard disk, memory, a plug-in hard disk, a smart media card (SMC), a Secure Digital (SD) card, a flash card, at least one magnetic disk storage device, a flash memory device, or another volatile solid-state storage device.
The modules/units integrated in the terminal device may, if implemented in the form of software functional units and sold or used as separate products, be stored in a computer-readable storage medium. Based on this understanding, the present invention may implement all or part of the modules/units in the system of the above-described embodiments, or may do so by instructing the relevant hardware through a computer program; the computer program may be stored in a computer-readable storage medium and, when executed by a processor, can implement the functions of the respective system embodiments described above. The computer program comprises computer program code, which may be in the form of source code, object code, an executable file or some intermediate form. The computer-readable medium may include any entity or device capable of carrying the computer program code, a recording medium, a USB flash disk, a removable hard disk, a magnetic disk, an optical disk, a computer memory, a read-only memory (ROM), a random access memory (RAM), an electrical carrier signal, a telecommunications signal, a software distribution medium, and the like.
It should be noted that, in this document, the terms "comprises", "comprising", or any other variation thereof, are intended to cover a non-exclusive inclusion, so that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such a process, method, article, or apparatus. Without further limitation, an element preceded by "comprising a ..." does not exclude the presence of other identical elements in the process, method, article, or apparatus that comprises that element.
The foregoing description of the preferred embodiments of the present invention is not intended to limit the invention thereto; any modifications, equivalent substitutions, improvements and the like made within the principles and spirit of the present invention are intended to be included within the scope of the present invention.
The foregoing descriptions are only preferred embodiments of the present invention and are not intended to limit its scope; any equivalent structures or equivalent process transformations based on this description, whether applied directly or indirectly in other related technical fields, are likewise included within the scope of protection of the present invention.

Claims (9)

1. A big data based swipe system, the system comprising:
the scanning module is used for scanning the image information;
the main control module is used for outputting control instructions;
The image preprocessing module is used for preprocessing the image information scanned by the scanning module;
the dual-mode communication module is used for carrying out wireless data communication and sending the preprocessed image information;
the cloud server is connected with the dual-mode communication module and is used for providing big data services, identifying the preprocessed image and feeding back sounding data;
the sounding module is used for producing sound according to the sounding data fed back by the cloud server;
the system further carries out a scanning and reading method comprising the following steps:
Step S1, when the scanning pen scans characters, a scanning base point is determined from the first identifiable point of the data, and a first scanned image and a first base-point moving image are generated;
Step S2, a directional operation of the user on the target object is detected through the first base-point moving image, the directional operation being used for determining the click-to-read intention to be recognized;
Step S3, image recognition: the scanned image is recognized and the corresponding voice information is retrieved from the local database; if the image cannot be recognized, it is uploaded to the cloud, where the information is recognized by means of the cloud database; if it still cannot be recognized, the image information is recorded, retained and uploaded to the cloud server;
Step S4, the text is scanned again and steps S1-S3 are repeated until the text information is confirmed as recognized; a link is then established between the image information previously uploaded to the cloud server and the voice information corresponding to the text information, the image features are analyzed, a new identification packet is generated, and the new identification packet is transmitted to the scanning pen for a data upgrade.
2. The big data based swipe system of claim 1, further comprising:
the local recognition module is used for recognizing the scanned image information when offline and comparing it with the local information to retrieve the sound information;
the storage module is used for storing local data, and the local data comprises dictionary data packets.
3. The big data based read system of claim 1, wherein the image preprocessing module includes a cropping module for cropping an image and a format conversion module for converting a format.
4. The big data based swipe system of claim 1, wherein the dual-mode communication module comprises an LTE CAT1 wireless communication module and a Wi-Fi module.
5. The big data based swipe system of claim 1, wherein the cloud server comprises:
The image processing module is used for performing image stitching, gray level processing, brightness contrast adjustment, image clipping and image under-color removal on the image;
the OCR module is used for identifying the processed image and outputting an identification result;
The data packet generation module is used for translating, word segmentation and TTS processing of the identified text;
And the database is used for storing word stock data and providing references for the OCR module.
6. The big data based scanning system of claim 1, wherein, after the scanning pen finishes scanning and photographing, it only performs a series of image-data cutting and format-conversion steps, while image recognition, feature analysis and generation of the new data packet are completed online by the cloud server.
7. A big-data-based scanning and reading pen for use with the big-data-based scanning and reading system according to any one of claims 1-6, comprising a main control board, a Wi-Fi module connected with the main control board, a scanning head connected with the main control board, a loudspeaker connected with the main control board, a microphone connected with the main control board, an LCD display screen connected with the main control board, a battery connected with the main control board and a TF card connected with the main control board, wherein the main control board comprises an LTE CAT1 wireless communication unit for data communication between the scanning pen and a background server, the TF card is used for storing the resource files required for point reading, the LTE CAT1 wireless communication unit and the Wi-Fi module are used for communicating with the cloud server, and the LTE CAT1 wireless communication unit comprises a USIM unit and an antenna.
8. The big data based read pen of claim 7, further comprising a light supplement lamp connected to the main control board and an infrared light supplement lamp connected to the main control board.
9. The big data based read pen of claim 8, further comprising an OTA remote software upgrade module and a data line online software upgrade module.
CN202310859896.8A 2023-07-13 2023-07-13 A scanning and reading method, system and scanning and reading pen based on big data Active CN117058680B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310859896.8A CN117058680B (en) 2023-07-13 2023-07-13 A scanning and reading method, system and scanning and reading pen based on big data

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310859896.8A CN117058680B (en) 2023-07-13 2023-07-13 A scanning and reading method, system and scanning and reading pen based on big data

Publications (2)

Publication Number Publication Date
CN117058680A CN117058680A (en) 2023-11-14
CN117058680B true CN117058680B (en) 2025-08-08

Family

ID=88652577

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310859896.8A Active CN117058680B (en) 2023-07-13 2023-07-13 A scanning and reading method, system and scanning and reading pen based on big data

Country Status (1)

Country Link
CN (1) CN117058680B (en)

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9767501B1 (en) * 2013-11-07 2017-09-19 Amazon Technologies, Inc. Voice-assisted scanning
CN116342855A (en) * 2021-12-15 2023-06-27 广州市森锐科技股份有限公司 An intelligent scanning pen system

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN206021627U (en) * 2016-07-29 2017-03-15 北京志光伯元科技有限公司 A kind of talking pen and point-of-reading system
CN113096655B (en) * 2021-03-29 2024-01-26 读书郎教育科技有限公司 System and method for marking key points by scanning pen according to voice

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9767501B1 (en) * 2013-11-07 2017-09-19 Amazon Technologies, Inc. Voice-assisted scanning
CN116342855A (en) * 2021-12-15 2023-06-27 广州市森锐科技股份有限公司 An intelligent scanning pen system

Also Published As

Publication number Publication date
CN117058680A (en) 2023-11-14


Legal Events

PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant