Detailed Description
Embodiments of the present disclosure are described in detail below, examples of which are illustrated in the accompanying drawings, wherein like or similar reference numerals refer to like or similar elements or elements having like or similar functions throughout. The embodiments described below by referring to the drawings are exemplary, are intended only to explain the present disclosure, and are not to be construed as limiting the present disclosure. On the contrary, the embodiments of the disclosure include all alternatives, modifications, and equivalents as may be included within the spirit and scope of the appended claims.
Fig. 1 is a schematic structural diagram of an autopilot system according to one embodiment of the present disclosure.
It should be noted that the embodiments of the present disclosure support using the camera assembly 103 to acquire the initial cockpit image; the acquisition is performed after the relevant authorization has been obtained, and the acquisition process complies with relevant laws and regulations and does not violate public order and good morals.
As shown in fig. 1, the autopilot system 10 includes an intelligent driving domain controller ADD101, an intelligent cabin domain controller DCD102, and a camera assembly 103 electrically connected to the ADD101, wherein,
The camera assembly 103 is used for acquiring an initial cockpit image and providing the initial cockpit image to the intelligent driving domain controller ADD101 and the intelligent cabin domain controller DCD102, respectively;
The intelligent driving domain controller ADD101 is used for receiving the initial cockpit image from the camera assembly 103 and outputting an image processing result;
The intelligent cabin domain controller DCD102 is configured to output an image recognition result, where the image processing result and the image recognition result are used together for automatic driving control.
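To make this division of labor concrete, the following is a minimal illustrative sketch of the dataflow just described; all class and method names (CameraAssembly, autopilot_cycle, and so on) are hypothetical and not part of the disclosed system.

```python
# Illustrative sketch (not from the source): one cycle of the autopilot
# system 10, with hypothetical class and method names.

from dataclasses import dataclass


@dataclass
class CockpitImage:
    """Placeholder for an initial cockpit image frame."""
    data: bytes


class IntelligentDrivingDomainControllerADD:
    def process(self, image: CockpitImage) -> str:
        # In a real system this would run DMS-style safety detection.
        return "image_processing_result"


class IntelligentCabinDomainControllerDCD:
    def recognize(self, image: CockpitImage) -> str:
        # In a real system this would run e.g. face recognition.
        return "image_recognition_result"


class CameraAssembly:
    def acquire(self) -> CockpitImage:
        return CockpitImage(data=b"\x00" * 16)  # stand-in frame


def autopilot_cycle(camera, add, dcd):
    image = camera.acquire()
    # The same initial cockpit image is provided to both domain controllers.
    processing_result = add.process(image)
    recognition_result = dcd.recognize(image)
    # Both results are used together for automatic driving control.
    return processing_result, recognition_result


print(autopilot_cycle(CameraAssembly(),
                      IntelligentDrivingDomainControllerADD(),
                      IntelligentCabinDomainControllerDCD()))
```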
The intelligent driving domain controller ADD101 is a domain controller for controlling a vehicle to drive automatically. The intelligent driving domain controller ADD101 can be configured with functions such as multi-sensor fusion, positioning, path planning, decision control, wireless communication, and high-speed communication, and completes functions such as image recognition and data processing by externally connecting multiple cameras, millimeter-wave radars, lidars, and the like.
The intelligent cabin domain controller DCD102 is a domain controller responsible for the functions of the electronic systems of the vehicle cabin. The intelligent cabin domain controller DCD102 can integrate functions such as the vehicle-mounted information system (instrument cluster) and the vehicle-mounted entertainment system, and can also integrate functions such as a driver monitoring system, a surround-view system, a driving recorder, and an air-conditioner controller.
The camera assembly 103 is a camera assembly installed in the vehicle cabin; a single camera may serve as the camera assembly 103, or multiple groups of cameras disposed in the cabin may together serve as the camera assembly 103, which is not limited here.
The initial cockpit image may be, but is not limited to, picture data or video data of the cockpit captured by the camera assembly 103.
For example, data in the Joint Photographic Experts Group (JPEG) format captured by the camera assembly 103 may be used as the initial cockpit image, or data in the Audio Video Interleaved (AVI) format recorded by the camera assembly 103 may be used as the initial cockpit image; of course, image data in any other possible format may also be used.
In the embodiment of the disclosure, the camera assembly 103 may periodically capture image data as the initial cockpit image, may continuously record a video file of the cabin and use the video file as the initial cockpit image, or may adapt to conditions in the cockpit, for example being triggered to collect the initial cockpit image when a change in the cockpit is detected, which is not limited here.
In the embodiment of the disclosure, when the autopilot system is started, the camera assembly 103 may be immediately powered on so that it starts up and collects the initial cockpit image, or the camera assembly 103 may be kept continuously powered (for example, by configuring a separate power supply for the camera assembly 103), so that when the autopilot system is started, the camera assembly 103 is directly controlled to collect the initial cockpit image, which is not limited here.
In some embodiments, an acquisition instruction may be sent to the camera assembly 103 after the automatic driving system is started, and the camera assembly 103 acquires the initial cockpit image upon receiving the acquisition instruction.
In other embodiments, the camera assembly 103 may collect the initial cockpit image without an explicit instruction, for example by collecting continuously, which is not limited here.
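The two acquisition modes above (instruction-triggered capture after system start versus continuous capture) could be sketched as follows; the power-on and capture interfaces are invented for illustration.

```python
# Hypothetical sketch of the two acquisition modes; not the disclosed API.

class CameraAssembly:
    def __init__(self):
        self.powered = False

    def power_on(self):
        self.powered = True

    def capture_frame(self) -> bytes:
        assert self.powered, "camera must be powered on before capture"
        return b"frame"


def on_autopilot_start(camera: CameraAssembly, continuous: bool = False) -> list:
    camera.power_on()
    if continuous:
        # Continuous mode: keep collecting frames (bounded here for the demo).
        return [camera.capture_frame() for _ in range(3)]
    # Instruction mode: one acquisition instruction yields one initial image.
    return [camera.capture_frame()]


print(on_autopilot_start(CameraAssembly()))                   # [b'frame']
print(on_autopilot_start(CameraAssembly(), continuous=True))  # three frames
```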
In the embodiment of the present disclosure, after the camera assembly 103 acquires the initial cockpit image, the acquired initial cockpit image is provided to the intelligent driving domain controller ADD101 and the intelligent cabin domain controller DCD102, respectively.
Wherein, the intelligent driving domain controller ADD101 is configured to receive the initial cockpit image from the camera assembly 103 and output an image processing result.
The image processing result may specifically be, for example, an autopilot safety detection result, such as a line-of-sight detection result, an in-place detection result, a fatigue detection result, a distraction detection result, or a dangerous-behavior detection result, which is not limited here.
In some embodiments, a deep neural network (Deep Neural Networks, DNN) may be provided in the intelligent driving domain controller ADD101, so that the initial cockpit image received from the camera assembly 103 undergoes feature extraction by the deep neural network, and the image processing result is determined from the extracted features and output.
In other embodiments, an image processing module may be used to receive the initial cockpit image from the camera assembly 103 and output the image processing result: the initial cockpit image is input, analyzed, and recognized by the image processing module, which then outputs the image processing result. Of course, any other possible implementation may also be used, for example a Long Short-Term Memory (LSTM) network, an artificial intelligence model based on big data, digital image processing, or other technologies.
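As a rough illustration of the two-stage pipeline described above (feature extraction followed by determination of the image processing result), consider the following framework-free sketch; the feature names and thresholds are invented and stand in for a real DNN.

```python
# Toy stand-in for the DNN pipeline: extract features, then decide.

def extract_features(cockpit_image: bytes) -> dict:
    # Stand-in for a DNN/LSTM feature extractor.
    return {"eye_closure": 0.8, "gaze_on_road": 0.3}


def determine_processing_result(features: dict) -> dict:
    # Stand-in decision stage mapping features to safety detections.
    return {
        "fatigue_detected": features["eye_closure"] > 0.7,
        "distraction_detected": features["gaze_on_road"] < 0.5,
    }


result = determine_processing_result(extract_features(b"frame"))
print(result)  # {'fatigue_detected': True, 'distraction_detected': True}
```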
The intelligent cabin domain controller DCD102 is configured to output an image recognition result.
The image recognition result may specifically be, for example, a driver identification result such as a face identification (Face ID) result, an iris identification result, or a fingerprint identification result, which is not limited here.
In the embodiment of the disclosure, taking face recognition as a specific example, a face recognition algorithm based on visible-light images may be configured in the intelligent cabin domain controller DCD102, which receives the initial cockpit image and outputs the image recognition result according to the face recognition algorithm; alternatively, a face recognition system based on the infrared spectrum may be configured to output the image recognition result, or an artificial intelligence model based on image recognition may be used to output the image recognition result, which is not limited here.
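A hedged sketch of how such a DCD-side face recognition step might look is given below: a face embedding computed from the cockpit image is compared against enrolled driver templates. The embedding function, similarity measure, and threshold are placeholders, not the disclosed algorithm.

```python
# Illustrative driver identification by embedding comparison (all values toy).

import math


def face_embedding(image: bytes) -> list[float]:
    # Placeholder; a real system would use a face-recognition model.
    return [len(image) % 7 / 7.0, 0.5, 0.25]


def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)


def recognize_driver(image: bytes, enrolled: dict, threshold: float = 0.95) -> str:
    probe = face_embedding(image)
    best_id, best_sim = "unknown", threshold
    for driver_id, template in enrolled.items():
        sim = cosine_similarity(probe, template)
        if sim >= best_sim:
            best_id, best_sim = driver_id, sim
    return best_id


enrolled_drivers = {"driver_a": [0.4, 0.5, 0.25]}
print(recognize_driver(b"frame_data", enrolled_drivers))  # driver_a
```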
In the embodiment of the disclosure, the image processing result and the image recognition result are commonly used for automatic driving control.
The automatic driving control controls the running of the vehicle and the operation of related devices in the vehicle (such as the air conditioner, the voice microphone, and the display screen) according to the image processing result and the image recognition result.
For example, the safety of the driver's driving behavior may be determined from the image processing result, for example by detecting the driver's fatigue state or whether dangerous behavior is present, and, depending on that assessment, a device in the vehicle may be controlled to remind the driver or the vehicle may be stopped. Alternatively, whether the driver is a designated driver may be determined from the image recognition result, and the state of the devices in the cockpit may be configured according to a preset record of the designated driver's operating habits, for example controlling the seat position or adjusting the air conditioner, which is not limited here.
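Combining both results for automatic driving control, as in the examples above, might look like the following sketch; the action names and driver profiles are hypothetical.

```python
# Toy control-decision stage fusing both results (names are invented).

def automatic_driving_control(processing_result: dict, recognition_result: str,
                              driver_profiles: dict) -> list[str]:
    actions = []
    if processing_result.get("fatigue_detected"):
        actions.append("play_fatigue_reminder")
    if processing_result.get("dangerous_behavior_detected"):
        actions.append("stop_vehicle_safely")
    profile = driver_profiles.get(recognition_result)
    if profile:
        # Personalize the cockpit for a recognized, designated driver.
        actions.append(f"set_seat_position:{profile['seat']}")
        actions.append(f"set_ac_temperature:{profile['ac_temp']}")
    return actions


profiles = {"driver_a": {"seat": 3, "ac_temp": 22}}
print(automatic_driving_control({"fatigue_detected": True}, "driver_a", profiles))
```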
In this embodiment, since the initial cockpit image is provided to the intelligent driving domain controller ADD and the intelligent cabin domain controller DCD respectively, the intelligent driving domain controller ADD receives the initial cockpit image from the camera assembly and outputs the image processing result, and the intelligent cabin domain controller DCD outputs the image recognition result, the requirements of both automatic driving efficiency and driving behavior safety can be taken into account, effectively improving the efficiency of automatic driving control and enhancing driving safety.
Fig. 2 is a schematic structural diagram of an autopilot system according to another embodiment of the present disclosure.
As shown in fig. 2, the intelligent driving domain controller ADD101 includes a driver monitoring subsystem DMS1011, wherein,
The camera assembly 103 provides an initial cockpit image to the driver monitoring subsystem DMS1011;
the driver monitor subsystem DMS1011 processes the initial cockpit image to obtain an image processing result.
The driver monitoring subsystem DMS1011 is a functional system for realizing line-of-sight detection, in-place detection, fatigue detection, distraction detection, dangerous-behavior detection, and the like.
In the embodiment of the present disclosure, the driver monitoring subsystem DMS1011 may be configured in the intelligent driving domain controller ADD101, and the image processing result is obtained by processing the initial cockpit image with the driver monitoring subsystem DMS1011, which is not limited here.
Optionally, in some embodiments of the present disclosure, as shown in fig. 2, the intelligent driving domain controller ADD101 further includes a first deserializer 1012 and a serializer 1013, wherein,
The camera assembly 103 provides the initial cockpit image to a first deserializer 1012;
The first deserializer 1012 processes the initial cockpit image to obtain a first cockpit image and a second cockpit image and provides the first cockpit image to the driver monitoring subsystem DMS1011 and the second cockpit image to the serializer 1013;
the serializer 1013 provides the second cockpit image to the intelligent cockpit domain controller DCD102.
The serializer and the deserializer are interface circuits in high-speed data communication, and can be used for realizing high-speed data transmission.
In the embodiment of the disclosure, the first deserializer 1012 supports dividing one data frame into two data frames; that is, the initial cockpit image may be divided by the first deserializer 1012 into a first cockpit image and a second cockpit image, which may be the same or different, and this is not limited here.
Wherein the first cockpit image is a cockpit image for transmission to the driver monitoring subsystem DMS1011 and the second cockpit image is a cockpit image for provision to the serializer 1013.
In the embodiment of the disclosure, the first deserializer 1012 may copy the initial cockpit image into two copies used as the first cockpit image and the second cockpit image, or the initial cockpit image may be analyzed and classified to obtain the first cockpit image and the second cockpit image; that is, the first cockpit image and the second cockpit image may be the same or different, which is not limited here.
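The frame split performed by the first deserializer 1012 can be pictured with the toy sketch below, covering both the duplication case and the division case; the byte-level framing is invented for illustration.

```python
# Toy model of the first deserializer's frame split (framing is invented).

def split_frame(initial_frame: bytes, duplicate: bool = True) -> tuple[bytes, bytes]:
    if duplicate:
        # Same content goes to the DMS and to the serializer.
        return initial_frame, initial_frame
    # Alternatively, divide the frame (here: simple halves) so the two
    # downstream consumers receive different cockpit images.
    mid = len(initial_frame) // 2
    return initial_frame[:mid], initial_frame[mid:]


first_image, second_image = split_frame(b"\x01\x02\x03\x04")
assert first_image == second_image  # duplicated path

first_image, second_image = split_frame(b"\x01\x02\x03\x04", duplicate=False)
assert first_image != second_image  # divided path
```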
Optionally, in some embodiments of the present disclosure, as shown in fig. 2, the intelligent driving domain controller ADD101 further includes a micro control unit 1014 electrically connected to the first deserializer 1012 and the serializer 1013, respectively, wherein,
The micro control unit 1014 is used for sending a first transmission control instruction to the first deserializer 1012 and the serializer 1013, respectively;
The first deserializer 1012 is used for providing the first cockpit image to the driver monitoring subsystem DMS1011 in response to the first transmission control instruction;
The serializer 1013 is configured to provide the second cockpit image to the intelligent cabin domain controller DCD102 in response to the first transmission control instruction.
The micro control unit 1014 may specifically be, for example, a microcontroller unit (MicroController Unit, MCU); since the MCU has the characteristic of quick power-on and start-up, using the MCU can effectively improve the efficiency of automatic driving control, which is not limited here.
The first transmission control instruction is instruction information for controlling the operation of the first deserializer 1012 and the serializer 1013; it may be used to control the data transmission of the first deserializer 1012 and the serializer 1013, which is not limited here.
In the embodiment of the present disclosure, configuring the micro control unit 1014 ensures that the first deserializer 1012 and the serializer 1013 can be rapidly powered up under the control of the micro control unit 1014, effectively improving the starting speed of the first deserializer 1012 and the serializer 1013 and, in turn, the efficiency of automatic driving control.
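The fast power-up path could be sketched as follows, with the micro control unit issuing the first transmission control instruction to both SerDes devices at system start; the instruction encoding and class interfaces are assumptions.

```python
# Hypothetical sketch of the MCU fast power-up path (encodings invented).

class SerDesDevice:
    def __init__(self, name: str):
        self.name = name
        self.powered = False
        self.transmitting = False

    def apply_instruction(self, instruction: str):
        if instruction == "FIRST_TRANSMISSION_CONTROL":
            self.powered = True
            self.transmitting = True


class MicroControlUnit:
    def on_system_start(self, devices):
        # An MCU boots quickly, so this runs before the SoC is fully up.
        for device in devices:
            device.apply_instruction("FIRST_TRANSMISSION_CONTROL")


deserializer = SerDesDevice("first_deserializer_1012")
serializer = SerDesDevice("serializer_1013")
MicroControlUnit().on_system_start([deserializer, serializer])
print(deserializer.transmitting, serializer.transmitting)  # True True
```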
Optionally, in some embodiments of the present disclosure, as shown in fig. 2, the intelligent cabin domain controller DCD102 includes an image recognition component 1021, wherein,
The serializer 1013 provides the second cockpit image to the image recognition component 1021;
the image recognition component 1021 processes the second cockpit image to perform image recognition.
The image recognition component 1021 may specifically be, for example, a face recognition component; that is, the disclosure supports processing the second cockpit image with a face recognition component to recognize the facial features of the driver in the second cockpit image and use them as the image recognition result. Of course, the image recognition component 1021 may also specifically be, for example, an iris recognition component or a fingerprint recognition component, which is not limited here.
Optionally, in some embodiments of the present disclosure, as shown in fig. 2, the intelligent cabin domain controller DCD102 further comprises a second deserializer 1022, wherein,
The serializer 1013 provides the second cockpit image to the second deserializer 1022;
The second deserializer 1022 receives the second cockpit image and provides the second cockpit image to the image recognition component 1021.
The second deserializer 1022 is a deserializer arranged in the intelligent cabin domain controller DCD102.
In the embodiment of the present disclosure, the second cockpit image may undergo a data change when transmitted through the serializer 1013 (for example, the data transmitted in parallel undergoes a certain serialization process), so the second deserializer 1022 may be configured to receive and process the second cockpit image transmitted by the serializer 1013 and provide it to the image recognition component 1021.
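As a minimal stand-in for this serializer/deserializer pairing, the sketch below serializes the second cockpit image into a bitstream and restores it; a real SerDes link is a hardware interface circuit, so this toy encoding is purely illustrative.

```python
# Toy SerDes round trip: parallel bytes -> serial bits -> parallel bytes.

def serialize(frame: bytes) -> str:
    # Serializer 1013: parallel bytes become a serial bit string.
    return "".join(f"{byte:08b}" for byte in frame)


def deserialize(bitstream: str) -> bytes:
    # Second deserializer 1022: restore parallel bytes from the bit string.
    return bytes(int(bitstream[i:i + 8], 2) for i in range(0, len(bitstream), 8))


second_cockpit_image = b"\x10\x20\x30"
assert deserialize(serialize(second_cockpit_image)) == second_cockpit_image
```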
Optionally, in some embodiments of the present disclosure, as shown in fig. 2, the intelligent driving domain controller ADD101 further comprises a system on chip 1015 electrically connected to the first deserializer 1012 and the serializer 1013, respectively, wherein,
The system on chip 1015 is used for transmitting an image processing instruction to the driver monitoring subsystem DMS1011;
the driver monitoring subsystem DMS1011, in response to the image processing instruction, processes the first cockpit image to obtain the image processing result.
The system on chip 1015 may specifically be, for example, a system on chip (System On Chip, SOC) configured in the intelligent driving domain controller ADD101; by means of the system on chip 1015, various systems including the driver monitoring subsystem DMS1011 are integrated to realize the relevant functions of the intelligent driving domain controller ADD101.
The image processing instruction is instruction information for controlling the driver monitoring subsystem DMS1011 to process the first cockpit image.
In the embodiment of the present disclosure, the image processing instruction may be generated by the system on chip 1015, so that the driver monitoring subsystem DMS1011 can process the first cockpit image based on the image processing instruction to obtain the image processing result, which is not limited here.
Optionally, in some embodiments of the present disclosure, the system on chip 1015 is further configured to detect the time elapsed after the micro control unit 1014 sends the first transmission control instruction, send a second transmission control instruction to the first deserializer 1012 and the serializer 1013, respectively, when the elapsed time reaches a duration threshold, and perform transmission control on the first deserializer 1012 and the serializer 1013 based on the second transmission control instruction.
The duration threshold is a threshold for the time elapsed after the first transmission control instruction is sent; when the elapsed time exceeds this threshold, target control of the first deserializer 1012 and the serializer 1013 based on the second transmission control instruction is triggered.
The target control is a manner of controlling the first deserializer 1012 and the serializer 1013 and may include, for example, controlling data transmission or the power-up state of the first deserializer 1012 and the serializer 1013, which is not limited here.
That is, both the system on chip 1015 and the micro control unit 1014 support controlling the first deserializer 1012 and the serializer 1013: the micro control unit 1014 controls them based on the first transmission control instruction, and the system on chip 1015 controls them based on the second transmission control instruction. When the automatic driving system is started, the micro control unit 1014, which powers up quickly, first controls the first deserializer 1012 and the serializer 1013 based on the first transmission control instruction; then, when the elapsed time reaches the duration threshold, control of the first deserializer 1012 and the serializer 1013 is switched to the system on chip 1015, which controls them based on the second transmission control instruction.
In the embodiment of the present disclosure, a control selection switch may be provided; when the elapsed time reaches the duration threshold, the system on chip 1015 toggles the control selection switch and controls the first deserializer 1012 and the serializer 1013 based on the second transmission control instruction. In this way, the control logic can be simplified while quick start is still satisfied, effectively improving control efficiency.
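The handover from the micro control unit to the system on chip via the control selection switch might be modeled as below; the duration threshold value, instruction names, and switch interface are illustrative assumptions.

```python
# Hypothetical model of the MCU-to-SoC control handover (values invented).

class ControlSelectionSwitch:
    def __init__(self):
        self.owner = "mcu"  # the MCU controls the SerDes devices at start

    def select(self, owner: str):
        self.owner = owner


def control_loop(switch: ControlSelectionSwitch, elapsed_s: float,
                 duration_threshold_s: float = 5.0) -> str:
    if switch.owner == "mcu" and elapsed_s >= duration_threshold_s:
        # Elapsed time reached the duration threshold: hand over to the SoC.
        switch.select("soc")
        return "SECOND_TRANSMISSION_CONTROL"
    if switch.owner == "mcu":
        return "FIRST_TRANSMISSION_CONTROL"   # fast path right after start
    return "SECOND_TRANSMISSION_CONTROL"


switch = ControlSelectionSwitch()
print(control_loop(switch, elapsed_s=1.0))  # FIRST_TRANSMISSION_CONTROL
print(control_loop(switch, elapsed_s=6.0))  # SECOND_TRANSMISSION_CONTROL
print(switch.owner)                         # soc
```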
In this embodiment, since the initial cockpit image is provided to the intelligent driving domain controller ADD and the intelligent cabin domain controller DCD respectively, the intelligent driving domain controller ADD receives the initial cockpit image from the camera assembly and outputs the image processing result, and the intelligent cabin domain controller DCD outputs the image recognition result, the requirements of both automatic driving efficiency and driving behavior safety can be taken into account, effectively improving the efficiency of automatic driving control and enhancing driving safety.
To sum up, as shown in fig. 3, fig. 3 is a schematic diagram of an autopilot system architecture according to another embodiment of the present disclosure. The autopilot system 10 includes an intelligent driving domain controller ADD101, an intelligent cabin domain controller DCD102, and a camera assembly 103. The intelligent driving domain controller ADD101 includes a driver monitoring subsystem DMS1011, a first deserializer 1012, a serializer 1013, a micro control unit 1014, and a system on chip 1015, and the intelligent cabin domain controller DCD102 includes an image recognition component 1021 and a second deserializer 1022. When the automatic driving system 10 is started, an initial cockpit image is acquired by the camera assembly 103, while the first deserializer 1012 and the serializer 1013 are rapidly powered up under the control of the micro control unit 1014. The initial cockpit image is divided by the first deserializer 1012 into a first cockpit image and a second cockpit image; the first cockpit image is sent to the driver monitoring subsystem DMS1011 and processed by it to obtain the image processing result, while the second cockpit image is transmitted through the serializer 1013 and the second deserializer 1022 to the image recognition component 1021, which produces the image recognition result. The image processing result and the image recognition result are jointly used for automatic driving control. In some embodiments, a control selection switch may be set in the intelligent driving domain controller ADD101 together with a configured duration threshold; when the duration threshold is reached, control of the first deserializer 1012 and the serializer 1013 may be switched to the system on chip 1015 to simplify their control logic. In some embodiments of the present disclosure, the image processing result may also be provided to the intelligent cabin domain controller DCD102 and the image recognition result provided to the intelligent driving domain controller ADD101 (for example, via a physical transmission interface or wireless transmission), so as to implement multi-angle, multi-domain-controller-linked automatic driving control.
Fig. 4 is a flowchart of an automatic driving control method according to an embodiment of the disclosure.
The execution body of the automatic driving control method of this embodiment is an automatic driving control device, which may be implemented in software and/or hardware and may be configured in a vehicle.
As shown in fig. 4, the automatic driving control method includes:
s401, acquiring an initial cockpit image, and respectively providing the initial cockpit image to the intelligent driving domain controller ADD and the intelligent cabin domain controller DCD.
In the embodiment of the disclosure, when the automatic driving system is started, the camera assembly can be immediately powered on so that it starts up and collects the initial cockpit image, or the camera assembly can be kept continuously powered (for example, by configuring a separate power supply for the camera assembly), so that when the automatic driving system is started, the camera assembly is directly controlled to collect the initial cockpit image, which is not limited here.
In the embodiment of the disclosure, the initial cockpit image may be provided to the intelligent driving domain controller ADD and the intelligent cabin domain controller DCD in several ways: for example, the initial cockpit image may be copied into two parts, or it may be parsed and divided into two parts according to the parsing result, after which the two parts are provided to the intelligent driving domain controller ADD and the intelligent cabin domain controller DCD, which is not limited here.
S402, controlling the intelligent driving domain controller ADD to receive the initial cockpit image and output an image processing result.
In the embodiment of the disclosure, the driver monitoring subsystem DMS can be controlled to receive the initial cockpit image acquired by the camera assembly and analyze and process it to obtain the image processing result.
S403, controlling the intelligent cabin domain controller DCD to output an image recognition result, wherein the image processing result and the image recognition result are jointly used for automatic driving control.
In the embodiment of the disclosure, the intelligent cabin domain controller DCD may be controlled to receive the second cockpit image, and the image recognition component in the intelligent cabin domain controller DCD analyzes and recognizes the second cockpit image to obtain the image recognition result.
In the embodiment of the disclosure, automatic driving control can then be performed based on the image processing result and the image recognition result.
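Putting steps S401 to S403 together, a minimal sketch of the method flow follows; the component interfaces are hypothetical stubs, not the disclosed implementation.

```python
# End-to-end sketch of the control method of fig. 4 (steps S401-S403).

def automatic_driving_control_method(camera, add_controller, dcd_controller):
    # S401: acquire the initial cockpit image and provide it to both domains.
    initial_image = camera.acquire()
    # S402: the ADD receives the image and outputs the image processing result.
    processing_result = add_controller.process(initial_image)
    # S403: the DCD outputs the image recognition result; both results are
    # then used together for automatic driving control.
    recognition_result = dcd_controller.recognize(initial_image)
    return processing_result, recognition_result


class _StubCamera:
    def acquire(self):
        return b"frame"


class _StubADD:
    def process(self, image):
        return {"fatigue_detected": False}


class _StubDCD:
    def recognize(self, image):
        return "driver_a"


print(automatic_driving_control_method(_StubCamera(), _StubADD(), _StubDCD()))
```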
In this embodiment, the initial cockpit image is collected and provided to the intelligent driving domain controller ADD and the intelligent cabin domain controller DCD respectively; the intelligent driving domain controller ADD is then controlled to receive the initial cockpit image and output an image processing result, and the intelligent cabin domain controller DCD is controlled to output an image recognition result, the image processing result and the image recognition result being used together for automatic driving control.
Fig. 5 is a schematic structural diagram of an autopilot control apparatus according to an embodiment of the present disclosure.
As shown in fig. 5, the automatic driving control device 50 includes:
the acquisition module 501 is configured to acquire an initial cockpit image and provide the initial cockpit image to the intelligent driving domain controller ADD and the intelligent cabin domain controller DCD, respectively;
the first control module 502 is configured to control the intelligent driving domain controller ADD to receive the initial cockpit image and output an image processing result;
the second control module 503 is configured to control the intelligent cabin domain controller DCD to output an image recognition result, where the image processing result and the image recognition result are used together for automatic driving control.
Corresponding to the automatic driving control method provided in the embodiment of fig. 4, the present disclosure also provides an automatic driving control apparatus. Since the automatic driving control apparatus provided in the embodiment of the present disclosure corresponds to the automatic driving control method provided in the embodiment of fig. 4, the implementation of the method is also applicable to the apparatus and is not described in detail here.
In this embodiment, the initial cockpit image is collected and provided to the intelligent driving domain controller ADD and the intelligent cabin domain controller DCD respectively; the intelligent driving domain controller ADD is then controlled to receive the initial cockpit image and output an image processing result, and the intelligent cabin domain controller DCD is controlled to output an image recognition result, the image processing result and the image recognition result being used together for automatic driving control.
In order to implement the above embodiments, the disclosure further provides a vehicle comprising a processor and a memory for storing instructions executable by the processor, wherein the processor, when executing the instructions, implements the automatic driving control method provided in the foregoing embodiments of the disclosure.
In order to implement the above-described embodiments, the present disclosure also proposes a non-transitory computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements an automatic driving control method as proposed by the foregoing embodiments of the present disclosure.
To achieve the above embodiments, the present disclosure also provides a computer program product comprising a computer program which, when executed by a processor, implements the autopilot control method of the foregoing embodiments of the present disclosure.
Fig. 6 illustrates a block diagram of an exemplary electronic device suitable for use in implementing embodiments of the present disclosure. The electronic device 12 shown in fig. 6 is merely an example and should not be construed as limiting the functionality and scope of use of the disclosed embodiments.
As shown in fig. 6, the electronic device 12 is in the form of a general purpose computing device. The components of the electronic device 12 may include, but are not limited to, one or more processors or processing units 16, a system memory 28, and a bus 18 that connects the various system components, including the system memory 28 and the processing units 16. Bus 18 represents one or more of several types of bus structures, including a memory bus or memory controller, a peripheral bus, an accelerated graphics port, a processor, and a local bus using any of a variety of bus architectures. By way of example, and not limitation, such architectures include the industry standard architecture (Industry Standard Architecture; hereinafter: ISA) bus, the micro channel architecture (Micro Channel Architecture; hereinafter: MCA) bus, the enhanced ISA bus, the Video Electronics Standards Association (hereinafter: VESA) local bus, and the peripheral component interconnect (Peripheral Component Interconnect; hereinafter: PCI) bus.
Electronic device 12 typically includes a variety of computer system readable media. Such media can be any available media that is accessible by electronic device 12 and includes both volatile and nonvolatile media, removable and non-removable media.
Memory 28 may include computer system readable media in the form of volatile memory, such as random access memory (Random Access Memory; hereinafter: RAM) 30 and/or cache memory 32. The electronic device 12 may further include other removable/non-removable, volatile/nonvolatile computer system storage media. By way of example only, storage system 34 may be used to read from or write to non-removable, nonvolatile magnetic media (not shown in FIG. 6, commonly referred to as a "hard disk drive").
Although not shown in fig. 6, a disk drive for reading from and writing to a removable non-volatile magnetic disk (e.g., a "floppy disk"), and an optical disk drive for reading from or writing to a removable non-volatile optical disk (e.g., a compact disc read-only memory (Compact Disc Read Only Memory; hereinafter: CD-ROM), a digital versatile disc read-only memory (Digital Video Disc Read Only Memory; hereinafter: DVD-ROM), or other optical media) may be provided. In such cases, each drive may be coupled to bus 18 through one or more data medium interfaces. Memory 28 may include at least one program product having a set (e.g., at least one) of program modules configured to carry out the functions of the various embodiments of the disclosure.
A program/utility 40 having a set (at least one) of program modules 42 may be stored in, for example, memory 28, such program modules 42 including, but not limited to, an operating system, one or more application programs, other program modules, and program data, each or some combination of which may include an implementation of a network environment. Program modules 42 generally perform the functions and/or methods in the embodiments described in this disclosure.
The electronic device 12 may also communicate with one or more external devices 15 (e.g., keyboard, pointing device, display 24, etc.), one or more devices that enable a user to interact with the electronic device 12, and/or any devices (e.g., network card, modem, etc.) that enable the electronic device 12 to communicate with one or more other computing devices. Such communication may occur through an input/output (I/O) interface 22. Also, the electronic device 12 may communicate with one or more networks, such as a local area network (Local Area Network; hereinafter: LAN), a wide area network (Wide Area Network; hereinafter: WAN), and/or a public network such as the Internet, through the network adapter 20. As shown, the network adapter 20 communicates with other modules of the electronic device 12 over the bus 18. It should be appreciated that although not shown, other hardware and/or software modules may be used in connection with electronic device 12, including, but not limited to, microcode, device drivers, redundant processing units, external disk drive arrays, RAID systems, tape drives, data backup storage systems, and the like.
The processing unit 16 executes various functional applications and data processing by running programs stored in the system memory 28, for example, implementing the automatic driving control method mentioned in the foregoing embodiment.
Other embodiments of the disclosure will be apparent to those skilled in the art from consideration of the specification and practice of the disclosure disclosed herein. This disclosure is intended to cover any variations, uses, or adaptations of the disclosure following the general principles thereof and including such departures from the present disclosure as come within known or customary practice in the art to which the disclosure pertains.
It is to be understood that the present disclosure is not limited to the precise arrangements and instrumentalities shown in the drawings, and that various modifications and changes may be effected without departing from the scope thereof.
It should be noted that in the description of the present disclosure, the terms "first," "second," and the like are used for descriptive purposes only and are not to be construed as indicating or implying relative importance. Furthermore, in the description of the present disclosure, unless otherwise indicated, the meaning of "a plurality" is two or more.
Any process or method description in a flowchart or otherwise described herein may be understood as representing a module, segment, or portion of code that includes one or more executable instructions for implementing specific logical functions or steps of the process, and the scope of the preferred embodiments of the present disclosure includes further implementations in which functions may be executed out of the order shown or discussed, including substantially concurrently or in reverse order depending on the functionality involved, as would be understood by those reasonably skilled in the art of the embodiments of the present disclosure.
It should be understood that portions of the present disclosure may be implemented in hardware, software, firmware, or a combination thereof. In the above-described embodiments, the various steps or methods may be implemented in software or firmware stored in a memory and executed by a suitable instruction execution system. If implemented in hardware, as in another embodiment, they may be implemented using any one or a combination of the following techniques known in the art: discrete logic circuits with logic gates for implementing logic functions on data signals, application-specific integrated circuits with appropriate combinational logic gates, programmable gate arrays (PGAs), field-programmable gate arrays (FPGAs), and the like.
Those of ordinary skill in the art will appreciate that all or a portion of the steps carried out in the method of the above-described embodiments may be implemented by a program to instruct related hardware, where the program may be stored in a computer readable storage medium, and where the program, when executed, includes one or a combination of the steps of the method embodiments.
Furthermore, each functional unit in the embodiments of the present disclosure may be integrated in one processing module, or each unit may exist alone physically, or two or more units may be integrated in one module. The integrated modules may be implemented in hardware or in software functional modules. The integrated modules may also be stored in a computer readable storage medium if implemented in the form of software functional modules and sold or used as a stand-alone product.
The above-mentioned storage medium may be a read-only memory, a magnetic disk or an optical disk, or the like.
In the description of the present specification, a description referring to terms "one embodiment," "some embodiments," "examples," "specific examples," or "some examples," etc., means that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the present disclosure. In this specification, schematic representations of the above terms do not necessarily refer to the same embodiments or examples. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples.
Although embodiments of the present disclosure have been shown and described above, it will be understood that the above embodiments are illustrative and not to be construed as limiting the present disclosure, and that variations, modifications, alternatives, and variations may be made to the above embodiments by one of ordinary skill in the art within the scope of the present disclosure.