WO2023018309A1 - A method and an apparatus for generating localized vibrations - Google Patents
- Publication number
- WO2023018309A1 (PCT/KR2022/012144)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- identified
- contents
- vibration frequency
- electronic device
- audio
- Prior art date
- Legal status
- Ceased
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04M—TELEPHONIC COMMUNICATION
- H04M1/00—Substation equipment, e.g. for use by subscribers
- H04M1/72—Mobile telephones; Cordless telephones, i.e. devices for establishing wireless links to base stations without route selection
- H04M1/724—User interfaces specially adapted for cordless or mobile telephones
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F1/00—Details not covered by groups G06F3/00 - G06F13/00 and G06F21/00
- G06F1/16—Constructional details or arrangements
- G06F1/1613—Constructional details or arrangements for portable computers
- G06F1/1633—Constructional details or arrangements of portable computers not specific to the type of enclosures covered by groups G06F1/1615 - G06F1/1626
- G06F1/1656—Details related to functional adaptations of the enclosure, e.g. to provide protection against EMI, shock, water, or to host detachable peripherals like a mouse or removable expansions units like PCMCIA cards, or to provide access to internal components for maintenance or to removable storage supports like CDs or DVDs, or to mechanically mount accessories
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/016—Input arrangements with force or tactile feedback as computer generated output to the user
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/017—Gesture based interaction, e.g. based on a set of recognized hand gestures
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/048—Interaction techniques based on graphical user interfaces [GUI]
- G06F3/0487—Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser
- G06F3/0488—Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser using a touch-screen or digitiser, e.g. input of commands through traced gestures
- G06F3/04886—Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser using a touch-screen or digitiser, e.g. input of commands through traced gestures by partitioning the display area of the touch-screen or the surface of the digitising tablet into independently controllable areas, e.g. virtual keyboards or menus
Definitions
- the disclosure relates to a method and an apparatus for generating localized vibrations.
- the disclosure relates to a method and an apparatus for generating a localized haptic feedback effect in an electronic device.
- a vibration motor is used to generate vibrations to notify users in response to an event.
- the vibration generated by these electronic devices is of very high intensity. Vibrations are used for tactile feedback in touch-based electronic devices.
- when touch-based smartphones were introduced, users were not familiar with tapping on a smooth display instead of pushing physical buttons. For this purpose, small vibration feedback was added during tapping. Further technological uses of vibration began to be applied relatively recently.
- the disclosure provides the method and the apparatus for generating the localized haptic feedback effect in the electronic device.
- the method includes identifying, by a processor, one or more contents on a display screen of the electronic device, dividing the display screen into a plurality of grids, determining one or more haptic event locations associated with the identified one or more contents on the plurality of grids, classifying the one or more haptic events with respect to a level of haptic feedback to be generated at the one or more haptic event locations, determining an optimized vibration frequency based on the classified level of haptic feedback to be generated at the one or more haptic event locations for the identified one or more contents based on at least one of a plurality of environmental parameters and a capability of generation of a sound source by one or more audio sources of the electronic device, and generating, based on the determined optimized vibration frequency, a calibrated optimized vibration frequency by calibration of the optimized vibration frequency and an amplitude for the generation of the localized haptic feedback effect on the determined one or more haptic event locations by using the one or more audio sources.
- FIGS. 1A, 1B, 1C, and 1D illustrate examples of a vibration effect
- FIG. 2 illustrates a block diagram of a device for generating a localized haptic feedback effect, in accordance with an embodiment of the disclosure
- FIG. 3 illustrates a flowchart for generating a localized haptic feedback effect, in accordance with an embodiment of the disclosure
- FIG. 4 illustrates a detailed architecture of the device including operational processes, in accordance with an embodiment of the disclosure
- FIGS. 5A, 5B, 5C, and 5D illustrate examples of a content identification process, in accordance with an embodiment of the disclosure
- FIG. 6A, 6B, 6C and 6D illustrate examples of detecting environmental parameters, in accordance with an embodiment of the disclosure
- FIGS. 7A and 7B illustrate an example of a triangulation technique for calculating audio source coordinates, in accordance with an embodiment of the disclosure
- FIGS. 8A and 8B illustrate an example of a feature extraction process, in accordance with an embodiment of the disclosure
- FIG. 9A illustrates a process of operations performed by the Frequency Mapping Unit, in accordance with an embodiment of the disclosure
- FIG. 9B illustrates a process of operations performed by the Size based Scaling Unit, in accordance with an embodiment of the disclosure
- FIG. 10A illustrates an example process of operations performed by the Amplitude Calculation Unit, in accordance with an embodiment of the disclosure
- FIG. 10B illustrates an example frequency identification by the Frequency Calculation Unit, in accordance with an embodiment of the disclosure
- FIG. 11 illustrates an example process of operations performed by Coordinate & Frequency Alignment Unit, in accordance with an embodiment of the disclosure
- FIGS. 12A and 12B illustrate an example process of operations performed by the Coordinate & Frequency Alignment Unit, in accordance with an embodiment of the disclosure
- FIGS. 13A and 13B illustrate an example of amplitude calibration process performed by the Calibration Engine 406, in accordance with an embodiment of the disclosure
- FIGS. 14A and 14B illustrate an example process of generating sound wavelets by Production Unit, in accordance with an embodiment of the disclosure
- FIG. 15 illustrates another example of generating sound wavelets by Production Unit, in accordance with an embodiment of the disclosure
- FIGS. 16A, 16B, 16C, 16D, 16E, and 16F illustrate an example process of obtaining a unique vibration frequency, in accordance with an embodiment of the disclosure
- FIGS. 17A, 17B, and 17C illustrate examples of the generating a localized haptic feedback effect on the determined one or more haptic event locations, in accordance with an embodiment of the disclosure
- FIG. 18 illustrates a first use case of providing a real-time localized haptic feedback effect to a user, in accordance with an embodiment of the disclosure
- FIG. 19 illustrates a second use case of providing a real-time localized haptic feedback effect to the user, in accordance with an embodiment of the disclosure
- FIG. 20 illustrates a third use case of providing a real-time localized haptic feedback effect to the user, in accordance with an embodiment of the disclosure.
- FIG. 21 illustrates a block diagram of an electronic device that executes the processes of FIG. 2 and FIG. 4, in accordance with an embodiment of the disclosure.
- any terms used herein such as but not limited to “includes,” “comprises,” “has,” “consists,” and grammatical variants thereof do NOT specify an exact limitation or restriction and certainly do NOT exclude the possible addition of one or more features or elements, unless otherwise stated, and furthermore must NOT be taken to exclude the possible removal of one or more of the listed features and elements, unless otherwise stated with the limiting language “MUST comprise” or “NEEDS TO include.”
- the vibrations used for tactile feedback are being produced using various motors including a DC motor that is effective in creating a buzzing and motional sensation in a device.
- the vibrations produced by the DC motor are usually centralized on a touch-based display of electronic devices and have a damping effect from the place where the DC motor is positioned in the touch-based electronic devices.
- when the DC motor produces the vibration, the whole electronic device gets a vibration effect, and the vibration effect is not specific to a particular location on the electronic device.
- DC motors add some extra cost and consume power of the electronic device to produce the desired effect.
- the entire phone vibrates when the DC motor produces the vibration, which clearly shows an absence of localized vibration.
- the vibrations produced by DC motors during an incoming call and video playback are centralized.
- the electronic device vibrates at multiple points due to an effect of the vibrations produced by the DC motors.
- the disclosure provides the method and the apparatus for generating the localized haptic feedback effect in the electronic device
- FIG. 2 illustrates a block diagram of a device for generating a localized haptic feedback effect, in accordance with an embodiment of the disclosure.
- FIG. 2 illustrates an electronic device 2000 configured to generate the localized haptic feedback effect.
- the electronic device 2000 includes an Application Framework 2101, a Hardware Layer 2300, and a processor 2200 which further includes an Identification Engine 2202, an Approximation Engine 2204, a Calibration Engine 2206, and a Generation Engine 2208.
- the aforementioned components of the electronic device are coupled with each other.
- Each of the Identification Engine 2202, the Approximation Engine 2204, the Calibration Engine 2206, and the Generation Engine 2208 is communicatively coupled to the Hardware Layer 2300 and the Application Framework 2101.
- the Identification Engine 2202, the Approximation Engine 2204, the Calibration Engine 2206, and the Generation Engine 2208 are implemented as the processor 2200.
- the Hardware Layer 2300 of the electronic device 2000 includes a display 2310, an audio unit 2320, and a sensor 2330.
- the display 2310 includes a touch screen panel 2311 and a graphics engine 2312 coupled with the touch screen panel 2311.
- the display 2310 displays image or video content. Examples of the display 2310 may include, but are not limited to, a television screen, a smartphone screen, a smart television screen, and a tablet screen.
- the display 2310 may be a Light Emitting Diode (LED), Liquid Crystal Display (LCD), Organic Light Emitting Diode (OLED), Active-Matrix Organic Light Emitting Diode (AMOLED), or Super Active-Matrix Organic Light Emitting Diode (SAMOLED) screen.
- the display 2310 may have varied resolutions. It will be understood by a person of ordinary skill in the art that the disclosure is not limited to any type or any resolution of the display 2310.
- the audio unit 2320 includes a speaker unit 2321 including one or more speakers, and at least one microphone 2322.
- the sensor 2330 may include one or more sensors.
- the sensor 2330 includes, but is not limited to, a grip sensor 2331, an accelerometer 2332, a gyroscope sensor 2333, and a humidity sensor 2339.
- the sensor 2330 may include sensors different from those described above.
- the Identification Engine 2202 identifies one or more contents displayed on the display 2310. After identifying the one or more contents displayed on the display 2310, the Identification Engine 2202 divides the display screen into a plurality of grids and determines occurrences and corresponding locations of one or more haptic events on the plurality of grids. The locations of one or more haptic events are associated with the identified one or more contents.
- the one or more haptic event locations can also be referred to as "one or more vibration locations" without deviating from the scope of the disclosure.
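- As a minimal sketch of the grid-division step described above, the snippet below divides a display into a grid and maps a content coordinate to its grid cell; the grid dimensions, screen resolution, and helper names are assumptions made only for illustration and are not taken from the disclosure.

```python
# Illustrative sketch (not from the patent text): dividing a display screen into
# a grid and mapping a content position to a haptic event location.
# Grid size, screen resolution, and the helper names are assumptions.

def divide_into_grids(width_px, height_px, rows=8, cols=4):
    """Return the pixel size of one grid cell for a rows x cols grid."""
    return width_px / cols, height_px / rows

def haptic_event_location(x_px, y_px, width_px, height_px, rows=8, cols=4):
    """Map a content coordinate (in pixels) to its grid cell (col, row)."""
    cell_w, cell_h = divide_into_grids(width_px, height_px, rows, cols)
    return int(x_px // cell_w), int(y_px // cell_h)

# Example: a detected object centred at (540, 1200) on a 1080x2400 screen.
print(haptic_event_location(540, 1200, 1080, 2400))  # -> (2, 4)
```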
- the Approximation Engine 2204 classifies one or more haptic events associated with the identified content.
- the Approximation Engine 2204 classifies the one or more haptic events with respect to a level of haptic feedback to be generated at the one or more haptic event locations.
- the one or more haptic events can also be referred to as "one or more vibration events” without deviating from the scope of the disclosure
- the level of haptic feedback can also be referred to as "a frequency level of the one or more vibration events” without deviating from the scope of the disclosure.
- the Calibration Engine 2206 determines an optimized vibration frequency according to the classified level of haptic feedback to be generated at the one or more haptic event locations for the identified one or more contents.
- the Calibration Engine 2206 determines the optimized vibration frequency based on at least one of a plurality of environmental parameters and a capability of generating sound by at least one of the speaker unit 2321, and the at least one microphone 2322 of the audio unit 2320.
- the Generation Engine 2208 generates a calibrated optimized vibration frequency based on the determined optimized vibration frequency by calibration of the optimized vibration frequency and amplitude for the generation of the localized haptic feedback effect on the determined one or more haptic event locations.
- the Generation Engine 2208 generates the calibrated optimized vibration frequency by using at least one of the one or more speakers of the speaker unit 2321 and/or the at least one microphone 2322 included in the audio unit 2320.
- FIG. 3 illustrates a flowchart for generating a localized haptic feedback effect, in accordance with an embodiment of the disclosure. The operations of FIG. 3 may be performed by the processor 2200 of the electronic device 2000.
- the processor 2200 may identify one or more contents displayed on a display screen of the display 2310 in operation 302.
- the processor 2200 may divide the display screen into a plurality of grids.
- the processor 2200 may determine one or more haptic event locations on the plurality of grids.
- the one or more haptic event locations are associated with the identified one or more contents.
- the Identification Engine 2202 of the processor 2200 may perform each of the operations 302, 304, and 306 of the method 300.
- the method 300 comprises classifying one or more haptic events associated with the identified content with respect to a level of haptic feedback to be generated at the one or more haptic event locations.
- the Approximation Engine 2204 of the processor 2200 performs the operation 308.
- the method 300 comprises determining an optimized vibration frequency according to the classified level of haptic feedback to be generated at the one or more haptic event locations for the identified one or more contents, based on at least one of a plurality of environmental parameters and a capability of generating sound by at least one of the speaker unit 2321, and the at least one microphone 2322 of the audio unit 2320 included in the electronic device 2000.
- the Calibration Engine 2206 of the processor 2200 may perform the operation 310.
- the method 300 comprises generating a calibrated optimized vibration frequency based on the determined optimized vibration frequency by calibration of the optimized vibration frequency and amplitude for the generation of the localized haptic feedback effect on the determined one or more haptic event locations, using at least one of the speaker unit 2321, and the at least one microphone 2322 of the audio unit 2320 included in the electronic device 2000.
- FIG. 4 illustrates a detailed architecture of the device including operational processes, in accordance with an embodiment of the disclosure.
- the electronic device 400 includes an Identification Engine 402, an Approximation Engine 404, a Calibration Engine 406, and a Generation Engine 408.
- Each of the Identification Engine 402, the Approximation Engine 404, the Calibration Engine 406, and the Generation Engine 408 corresponds to the Identification Engine 2202, the Approximation Engine 2204, the Calibration Engine 2206, and the Generation Engine 2208 of the electronic device 2000, respectively.
- Each of the Identification Engine 402, the Approximation Engine 404, the Calibration Engine 406, and the Generation Engine 408 performs operations similar to operations performed by the processor 2200 or the Identification Engine 2202, the Approximation Engine 2204, the Calibration Engine 2206, and the Generation Engine 2208 as described above, respectively.
- the Identification Engine 402, the Approximation Engine 404, the Calibration Engine 406, and the Generation Engine 408 may be implemented as the processor 2200.
- the electronic device 400 corresponds to the electronic device 2000 of FIG. 2.
- For a detailed description of the operations performed by the Identification Engine 402, the Approximation Engine 404, the Calibration Engine 406, and the Generation Engine 408, a detailed explanation will be made with reference to FIGS. 4 through 17C.
- the Identification Engine 402 of the electronic device 400 includes a Mode Selection Unit 402A, a Fast Region-Based Convolutional Neural Networks (R-CNN) based Object detection unit 402B, and an Environment Identification unit 402C.
- the Mode Selection Unit 402A of the Identification Engine 402 determines whether a current mode of the electronic device 400 is a multimedia mode based on application data of the electronic device 400 and the one or more contents displayed on the display screen, and, in case it is determined that the current mode is the multimedia mode, checks a requirement of identification of the one or more contents. In an embodiment, the Mode Selection Unit 402A of the Identification Engine 402 identifies whether the one or more contents displayed on the display screen is at least one of multimedia content, an image, and a user interface (UI) element.
- the R-CNN based Object detection unit 402B of the Identification Engine 402 identifies one or more objects based on a result of the identification of the one or more contents displayed on the display screen. In order to identify the one or more objects, the R-CNN based Object detection unit 402B first divides an input frame of the multimedia mode that is displayed on the display screen into the plurality of grids. Second, the R-CNN based Object detection unit 402B determines the one or more haptic event locations on the divided plurality of grids. The one or more haptic event locations are associated with the identified one or more contents.
- the R-CNN based Object detection unit 402B determines a content position of the identified one or more contents based on determined one or more haptic event locations.
- the one or more haptic event locations may correspond to a location of objects in a multimedia scene, a location of an object in content displayed on the display screen, a location on the display screen on which a tap operation is performed by a user, or a location of a focused view object identified on the display screen.
- the one or more haptic event locations are not limited to the aforementioned examples.
- the one or more haptic event locations can be a location on the display screen other than the above-described examples.
- the R-CNN based Object detection unit 402B obtains corresponding coordinates of the identified one or more objects based on the content position of the identified one or more contents.
- the one or more haptic event locations correspond to the obtained coordinates and can be defined as the coordinates of the haptic event locations.
- FIGS. 5A, 5B, 5C, and 5D illustrate examples of a content identification process, namely the determination of the current mode by the Mode Selection Unit 402A and the identification of the one or more contents by the R-CNN based Object detection unit 402B of FIG. 4, in accordance with an embodiment of the disclosure.
- a first terminal device 500, a second terminal device 514, and a third terminal device 516 display a first multimedia scene 502, a second multimedia scene 518, and a chatting window with a keyboard 520, respectively.
- a focused view display screen 522 is also shown in FIG. 5D.
- the Mode Selection Unit 402A of the Identification Engine 402 determines that the current mode of the first terminal device 500 is the multimedia mode based on the display of the multimedia scene on the display screen. Then, the R-CNN based Object detection unit 402B identifies a set of objects 506 in the multimedia scene displayed on the display screen. Further, the R-CNN based Object detection unit 402B divides the multimedia scene into grids 504 and determines the one or more haptic event locations on the divided grids 504. Furthermore, the R-CNN based Object detection unit 402B determines a respective position of the identified set of objects 506 on the divided grids 504 as locations of haptic events.
- the respective position of the identified set of objects 506 determined by the R-CNN based Object detection unit 402B can be represented in the form of object coordinates.
- Table 1 illustrates an example representation of the object coordinates determined by the R-CNN based Object detection unit 402B.
- the regions of interest (ROI) correspond to the set of objects 506 (i.e., three vehicles (Jeeps) as shown in FIG. 5A).
- the right side of table 1 indicates bounding box coordinates corresponding to the coordinates of the set of objects 506.
- the aforementioned Table 1 is merely an example; it can include different data based on the multimedia mode of the electronic device 400 and can be arranged in any other format based on system requirements.
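- As a purely hypothetical illustration of the kind of data Table 1 describes (the actual values of Table 1 are not reproduced here), detection results may be held as ROI-to-bounding-box pairs, with the centre of each box taken as a haptic event location; the names and coordinate values below are invented for illustration only.

```python
# Hypothetical illustration of ROI-to-bounding-box data; not the contents of Table 1.
detections = [
    {"roi": "Jeep 1", "bbox": (120, 340, 260, 430)},   # (x_min, y_min, x_max, y_max)
    {"roi": "Jeep 2", "bbox": (300, 355, 420, 440)},
    {"roi": "Jeep 3", "bbox": (460, 360, 575, 445)},
]

def bbox_centre(bbox):
    """Return the centre of a bounding box, usable as a haptic event location."""
    x_min, y_min, x_max, y_max = bbox
    return (x_min + x_max) / 2, (y_min + y_max) / 2

for d in detections:
    print(d["roi"], bbox_centre(d["bbox"]))
```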
- the R-CNN based Object detection unit 402B may identify an object 508 of the second Multimedia scene 518 displayed on the second terminal device 514 in FIG. 5B, a tap input location 510 corresponding to a tap operation in the chatting window with Keyboard 520 displayed on the third terminal device 516 in FIG. 5C, and a focused view location 512 in the focused view display screen 522 in FIG. 5D.
- the Mode Selection Unit 402A of the Identification Engine 402 may also determine touch coordinates of a user tap input on the display screen and transfers the touch coordinates to the Approximation Engine 404.
- the Environment Identification unit 402C of the Identification Engine 402 detects a plurality of environmental parameters of the electronic device 400.
- the Environment Identification unit 402C receives, from the one or more sensors, sensed environmental data detected by the one or more sensors.
- the Environment Identification unit 402C may receive sensed data from at least one of the grip sensor 2331, the accelerometer 2332, the gyroscope sensor 2333, or the humidity sensor 2339.
- the Environment Identification unit 402C detects the plurality of environmental parameters that correspond to at least one of a state of the electronic device 400, a surface on which the electronic device 400 is placed, and an orientation of the electronic device 400.
- the Environment Identification unit 402C receives the sensor data from the sensor 2330 and calculates surface values associated with the surface on which the electronic device 400 is placed based on the received sensor data.
- An example of such sensor data is shown below in Table 2.
- Table 2 illustrates the sensor data for detection of the plurality of environmental surfaces on which the electronic device 400 is placed.
- the aforementioned Table 2 is merely an example; it can include sensor data different from the sensor data of Table 2 and can be arranged in any other format based on the requirements of the electronic device 400. Further, the Environment Identification unit 402C transfers the detected plurality of environmental parameters to the Calibration Engine 406.
- FIGS. 6A, 6B, 6C, and 6D illustrate examples of detecting environmental parameters, in accordance with an embodiment of the disclosure.
- FIG. 6A illustrates an example of the environmental parameters detected by Environment Identification unit 402C of FIG. 4, in accordance with an embodiment of the disclosure. Depicted are a location of a phone in water in FIG. 6A, a phone on a hard surface in FIG. 6B, and a phone in a hand of a user in FIG. 6C.
- the Environment Identification unit 402C receives the sensor data including information about at least one of the locations of the phone in the water, on the hard surface, or in the hand of the user.
- the Environment Identification unit 402C also identifies a current mode of the electronic device 400.
- the Environment Identification unit 402C identifies one of a vibration mode, a ringing mode, or a silent mode of the electronic device 400 using the sensor data. Further, subsequent to the reception of the sensor data and the identification of the current mode, the Environment Identification unit 402C calculates the environmental parameters based on the sensor data and the current mode of the electronic device 400.
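- The following sketch shows one way such environmental parameters could be derived from the sensed data and the current mode; the thresholds, field names, and surface labels are assumptions made only for illustration and are not taken from the disclosure.

```python
# Minimal sketch (assumed logic, not from the patent) of combining sensed data into
# environmental parameters such as the surface the device rests on.

def classify_surface(accel_variance, grip_detected, humidity_pct):
    """Return a coarse surface/state label from a few sensor-derived values."""
    if humidity_pct > 95.0:
        return "in_water"
    if grip_detected:
        return "in_hand"
    # Very low acceleration variance suggests a rigid, stationary support.
    return "hard_surface" if accel_variance < 0.02 else "soft_surface"

def environmental_parameters(sensor_data, current_mode):
    return {
        "surface": classify_surface(sensor_data["accel_variance"],
                                    sensor_data["grip"],
                                    sensor_data["humidity"]),
        "orientation": sensor_data["orientation"],   # e.g. from the gyroscope
        "mode": current_mode,                        # vibration / ringing / silent
    }

print(environmental_parameters(
    {"accel_variance": 0.01, "grip": False, "humidity": 40.0, "orientation": "face_up"},
    "silent"))
```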
- the Approximation Engine 404 of the electronic device 400 includes an Audio Processing Engine 404A, a speaker position determination unit 404B, a scene understanding unit 404D, a Frequency Mapping Unit 404E, and a Size based Scaling Unit 404F.
- the Audio Processing Engine 404A identifies one or more audio sources associated with the identified one or more contents and determines an audio source position of the identified one or more audio sources based on audio sources available in the electronic device 400. In order to determine the audio source position of the identified one or more audio sources, firstly the Audio Processing Engine 404A extracts built-in audio sources information including speaker information and microphone information associated with the speaker unit 2321 and the at least one microphone 2322.
- the built-in audio sources information may include information associated with application programming interfaces (APIs) and hardware IDs of the electronic device 400.
- the speaker information includes coordinates of the one or more speakers included in the speaker unit 2321 and features of the one or more speakers. As an example, the features of the one or more speakers may correspond to a surround sound feature.
- the Audio Processing Engine 404A determines whether a position or coordinates of at least one of the one or more speakers or the at least one microphone 2322 is present in the extracted built-in audio sources information.
- if the position or coordinates are present in the extracted built-in audio sources information, the Audio Processing Engine 404A transfers the positions or the coordinates of at least one of the one or more speakers or the at least one microphone 2322 to the Calibration Engine 406. If a result of the determination at block 404C is No, then the speaker position determination unit 404B calculates the corresponding coordinates by a triangulation technique. The corresponding coordinates correspond to the audio source position of the identified one or more audio sources.
- FIGS. 7A and 7B illustrate an example of a triangulation technique for calculating audio source coordinates, in accordance with an embodiment of the disclosure.
- FIG. 7A illustrates an example of information associated with the one or more audio sources available in the electronic device 400.
- the syntax TYPE_BUILTIN_EARPIECE indicates an earpiece source (Sa) and the syntax TYPE_BUILTIN_SPEAKER indicates a speaker source (Sb).
- respective syntax TYPE_BUILTIN_MIC in FIG. 7A indicates a first microphone source (a) and a second microphone source (b).
- the speaker position determination unit 404B calculates corresponding coordinates of the speaker source, the earpiece source, the first microphone source, and the second microphone source in a 2-dimensional (2D) coordinate system by a triangulation technique.
- the speaker position determination unit 404B may also calculate the corresponding coordinates in a 3-dimensional (3D) coordinate system.
- the speaker position determination unit 404B calculates a specific coordinate of each of the speaker source, the earpiece source, the first microphone source, and the second microphone source on the X-Y axis of the 2D coordinate system axis using the triangulation technique described in FIG. 7B.
- two speaker coordinates (-s, 0) and (S, 0) are on the X-axis of the 2D coordinate system. Also disclosed is a coordinate (0, m) of a reference microphone on the Y-axis of the 2D coordinate system.
- the speaker position determination unit 404B may use the below equations (1) and (2) to calculate a specific coordinate (x, y) of a target microphone in the 2D coordinate system.
- R1 and R2 correspond to the radial distances from the speaker and the earpiece, respectively, to the target microphone.
- the speaker position determination unit 404B may also store the calculated coordinates in a database.
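- Equations (1) and (2) are not reproduced above; the sketch below assumes they take the standard two-circle trilateration form implied by the source coordinates (-s, 0) and (S, 0) and the radial distances R1 and R2, which is an assumption rather than the exact formulation in the disclosure.

```python
import math

# Assumed textbook trilateration: intersect the circles centred on the two known
# audio sources to estimate the target microphone coordinate (x, y).

def microphone_coordinates(s, S, R1, R2):
    """Intersect (x+s)^2 + y^2 = R1^2 and (x-S)^2 + y^2 = R2^2."""
    x = (R1**2 - R2**2 + S**2 - s**2) / (2 * (s + S))
    y_sq = R1**2 - (x + s)**2
    if y_sq < 0:
        raise ValueError("distances are inconsistent with the source positions")
    return x, math.sqrt(y_sq)   # the sign of y is fixed by the device geometry

# Example with arbitrary distances (units are assumed, e.g. centimetres):
print(microphone_coordinates(s=3.0, S=3.0, R1=10.0, R2=11.0))
```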
- An example of the calculated coordinates is described below in Table 3.
- Table 3 illustrates an example of a sample table including coordinate information of the audio sources. In Table 3, Speaker 1 may correspond to the earpiece and Speaker 2 to the speaker, respectively.
- the speaker position determination unit 404B transfers the calculated coordinates of the one or more audio sources to the Calibration Engine 406.
- the Scene Understanding Unit 404D may acquire audio information and video information associated with the multimedia scene and extract a plurality of features of the identified one or more contents to classify the one or more haptic events associated with the identified one or more contents into a plurality of action classes.
- the plurality of features of the identified one or more contents is extracted from the acquired audio information and video information associated with the multimedia scene.
- the plurality of features of the identified one or more contents includes at least one of a size of the identified one or more objects and an audio amplitude of the identified one or more objects.
- the Scene Understanding Unit 404D extracts the plurality of features of the identified one or more contents by use of a Convolutional Neural Network (CNN). An example of such extraction will be described with reference to FIGS. 8A and 8B.
- FIGS. 8A and 8B illustrate an example of a feature extraction process, in accordance with an embodiment of the disclosure.
- FIG. 8A discloses the multimedia scene including three objects (i.e., Jeep 1, Jeep 2, Jeep 3).
- the Scene Understanding Unit 404D analyzes each of the video frames included in the acquired video information and also analyzes each of the audio frames included in the acquired audio information. Thereafter, the Scene Understanding Unit 404D determines feature vectors for actions in the video frames using the CNN network.
- each of a plurality of video contents (video 1, video 2, ..., video m) is sampled into a plurality of frames. Each of the sampled frames is analyzed to identify a presence of objects using the CNN network, and thereafter feature vectors for actions are recognized and averaged per action category.
- K in FIG. 8B corresponds to a number of the action categories.
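- A rough sketch of this per-frame feature extraction and per-action averaging is shown below, with a placeholder in place of a real CNN backbone; the feature dimensionality, sampling, and action labels are assumptions made only for illustration.

```python
import numpy as np

# Sketch of the feature-extraction flow: each sampled frame is turned into a
# feature vector and vectors are averaged per recognised action category (K classes).

FEATURE_DIM = 128

def cnn_features(frame):
    """Placeholder for a CNN backbone mapping a frame to a feature vector."""
    return np.asarray(frame, dtype=np.float32).reshape(-1)[:FEATURE_DIM]

def average_features_per_action(frames, action_labels, num_actions):
    """Average frame features over each action category (0..num_actions-1)."""
    sums = np.zeros((num_actions, FEATURE_DIM))
    counts = np.zeros(num_actions)
    for frame, action in zip(frames, action_labels):
        sums[action] += cnn_features(frame)
        counts[action] += 1
    counts[counts == 0] = 1          # avoid division by zero for unused classes
    return sums / counts[:, None]

# Example: 4 sampled frames, 2 assumed action categories ("moving", "blast").
frames = [np.random.rand(16, 16) for _ in range(4)]
print(average_features_per_action(frames, [0, 0, 1, 1], num_actions=2).shape)
```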
- the Frequency Mapping Unit 404E estimates a scene intensity with respect to each of the identified one or more contents based on at least one of the audio amplitude of the identified one or more objects, the size of the identified one or more objects, and a mapping of the identified one or more contents over the plurality of action classes of the one or more contents.
- the Frequency Mapping Unit 404E estimates intensity of vibrations associated with the objects in multimedia scenes in the video information based on at least one of an intensity of audio associated with the identified one or more objects and mapping of frequency of the objects in the multimedia scenes with their action classes.
- the scene intensity corresponds to the level of haptic feedback to be generated at the one or more haptic event locations. Accordingly, the Frequency Mapping Unit 404E classifies the one or more haptic events with respect to the level of haptic feedback based on the estimated scene intensity.
- the one or more haptic events is associated with the identified set of objects 506, and the plurality of action classes is associated with a class of action corresponding to each of the identified set of objects 506.
- the operations performed by the Frequency Mapping Unit 404E will now be described with the help of an example with reference to FIG. 8A.
- the multimedia scene including three objects (i.e., Jeep 1, Jeep 2, Jeep 3).
- the Scene Understanding Unit 404D determines feature vectors for actions in the video frames using the CNN network. For example, jeeps involved in a blast in FIG. 8A will have a higher intensity than jeeps moving normally in the multimedia scene. Accordingly, the Frequency Mapping Unit 404E maps the frequencies of the objects in the multimedia scenes with their action classes based on an amplitude of the audio related to the objects in the multimedia scenes and further estimates the intensity of the vibrations of the objects in the multimedia scenes.
- the Frequency Mapping Unit 404E sets default vibration intensity values.
- the default vibration intensity values are intensity values that are included in the settings of the electronic device 400. For example, any button or checkbox in a display content that needs to be highlighted should have device default vibration intensity values.
- the Frequency Mapping Unit 404E categorizes the estimated scene intensity with respect to each of the identified one or more contents as one of a high, a medium, a low, or a default scene intensity value.
- Table 4 illustrates a categorization example of the estimated scene intensity.
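- As the contents of Table 4 are not reproduced here, the sketch below illustrates one assumed way of bucketing the estimated scene intensity into the high, medium, low, and default categories described above; the numeric thresholds are illustrative only.

```python
# Assumed categorization rule (thresholds invented for illustration).

def categorize_intensity(audio_amplitude_db, has_action_class):
    """Bucket an estimated scene intensity into high / medium / low / default."""
    if not has_action_class:
        return "default"      # fall back to the device's default vibration intensity
    if audio_amplitude_db >= 70:
        return "high"
    if audio_amplitude_db >= 40:
        return "medium"
    return "low"

for amp, has_cls in [(78, True), (28, True), (0, False)]:
    print(amp, categorize_intensity(amp, has_cls))
```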
- FIG. 9A illustrates a process of operations performed by the Frequency Mapping Unit, in accordance with an embodiment of the disclosure.
- the process starts with operation 900, in which the Frequency Mapping Unit 404E checks for the presence of at least one object in the multimedia scene, the location of the object based on the coordinates corresponding to the at least one object, and any audio information associated with the at least one object, using data from the Mode Selection Unit 402A and the R-CNN based Object detection unit 402B.
- the Frequency Mapping Unit 404E checks for a surface on which the electronic device 400 is placed and locations of the one or more audio sources available in the electronic device 400 using the sensor data and the coordinates of the audio sources stored in a database 908.
- the Frequency Mapping Unit 404E determines whether vibration is needed for the at least one object recognized in operation 900. If the result of the determination in operation 904 is No, then the operation of the Frequency Mapping Unit 404E is stopped. If the result of the determination in operation 904 is Yes (vibration needed), the Frequency Mapping Unit 404E maps the frequency of the at least one object recognized in operation 900 with a variable frequency associated with an action class of the at least one object.
- the Frequency Mapping Unit 404E stores each of the mapped frequencies values, estimated scene intensity values, and their coordinates in the database.
- An example of the estimated scene intensity values is shown in Table 5 with reference to FIG. 8A.
- the Size based Scaling Unit 404F scales the intensity estimated by the Frequency Mapping Unit 404E.
- the Size based Scaling Unit 404F scales the estimated intensity based on the size of the identified one or more objects.
- the Size based Scaling Unit 404F classifies a size of each of the one or more objects identified by the Identification Engine 402. For example, if an object identified by the Identification Engine 402 has a large size, then the Size based Scaling Unit 404F classifies the object as having a greater intensity and scales an audio frequency of the object according to the size of the object identified by the Identification Engine 402.
- FIG. 9B illustrates a process of operations performed by the Size based Scaling Unit 404F of FIG. 4, in accordance with an embodiment of the disclosure.
- the process starts with operation 910, in which the Size based Scaling Unit 404F checks for the presence of at least one object in the multimedia scene, the location of the at least one object based on the coordinates corresponding to the at least one object, and any audio information associated with the at least one object, using data from the Mode Selection Unit 402A and the R-CNN based Object detection unit 402B.
- the Size based Scaling Unit 404F matches the audio frequency of the at least one object recognized in operation 910 with a size of the at least one object recognized in operation 910.
- the Size based Scaling Unit 404F determines whether frequency scaling of the audio associated with the at least one object recognized in operation 910 is required, based on a result of the matching in operation 912. If the result of the determination in operation 914 is No, then the operation of the Size based Scaling Unit 404F is stopped. If the result of the determination in operation 914 is Yes, then the Size based Scaling Unit 404F scales the audio frequency of the at least one object recognized in operation 910 based on the size of the at least one object on the display screen.
- the Size based Scaling Unit 404F generates a list of the scaled audio frequency values of the one or more objects by performing the operations 910 through 916 and stores the scaled audio frequency values in the database in operation 908.
- An example of the scaled audio frequency values with reference to FIG. 8A is shown below in Table 6. As shown in Table 6, the frequency column includes the scaled audio frequency values of the one or more objects (jeeps).
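- A minimal sketch of the size-based scaling step (operations 910 through 916) is given below; the proportional scaling rule and the bounding-box input are assumptions, not the rule used in the disclosure.

```python
# Assumed scaling rule: scale an object's audio frequency in proportion to the
# fraction of the screen the object covers.

def scale_frequency_by_size(base_freq_hz, bbox, screen_area_px):
    """Scale the audio frequency of an object by its on-screen coverage."""
    x_min, y_min, x_max, y_max = bbox
    coverage = ((x_max - x_min) * (y_max - y_min)) / screen_area_px
    # Larger objects get a proportionally stronger scaled frequency.
    return base_freq_hz * (1.0 + coverage)

# Example with an invented bounding box on a 1080x2400 screen.
print(scale_frequency_by_size(300_000, (100, 100, 500, 400), 1080 * 2400))
```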
- the Calibration Engine 406 of the electronic device 400 includes an Amplitude Calculation Unit 406A, a Frequency Calculation Unit 406B, a Coordinate & Frequency Alignment Unit 406C, a Source Frequency Selection Unit 406E, a Noise Estimation Unit 406F, an Amplitude Rectification unit 406G, and a Sound Source Amplitude Selection Unit 406H.
- the Amplitude Calculation Unit 406A calculates an audio amplitude of each of the identified one or more objects by using a Fast Fourier Transform (FFT).
- the audio amplitude of the identified one or more objects is calculated based on the audio information corresponding to the identified one or more contents.
- the Amplitude Calculation Unit 406A calculates the amplitude of a sound wave related to the identified one or more objects.
- the amplitude for a smartphone can vary between 0 and 90 dB (decibels). This is only an example; the amplitude may vary over other ranges as well.
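- The sketch below shows one conventional way of obtaining such an amplitude in dB from an audio frame with an FFT; the reference level, normalization, and frame length are assumptions and need not match the disclosure's calculation.

```python
import numpy as np

# Minimal sketch of computing a per-frame audio amplitude in dB with an FFT.

def amplitude_db(samples, reference=1.0):
    """Return the peak spectral magnitude of the frame, expressed in dB."""
    spectrum = np.abs(np.fft.rfft(samples))
    peak = spectrum.max() / len(samples)          # normalise by frame length
    return 20 * np.log10(max(peak, 1e-12) / reference)

# Example: a 1 kHz tone sampled at 16 kHz gives about -6 dB under this convention.
t = np.arange(16000) / 16000.0
print(round(amplitude_db(np.sin(2 * np.pi * 1000 * t)), 1))
```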
- the audio amplitudes calculated by the Amplitude Calculation Unit 406A can be stored in the database in operation 908. An example of the stored data is shown below in Table 7 with reference to FIG. 8A.
- the fourth column indicates the calculated audio amplitude values of the one or more objects (Jeep 1, Jeep 2, and Jeep 3 of FIG. 8A).
- the audio amplitude values of the one or more objects may vary based on a type of object and the action class associated with the object in the multimedia scene.
- the Amplitude Calculation Unit 406A sets default amplitude values for the objects for which no action class, scene, or audio information is present in the displayed multimedia scene or any other display content to be displayed.
- the default amplitude values correspond to the amplitude values included in the settings of the electronic device 400.
- the Amplitude Calculation Unit 406A determines a sound intensity range of each of the identified one or more objects based on the calculated audio amplitude and maps the sound intensity range of each of the identified one or more objects with a predefined intensity frequency range. Now functions and operations of the Amplitude Calculation Unit 406A will be explained with reference to FIG. 10A of the drawings.
- FIG. 10A illustrates an example process of operations performed by the Amplitude Calculation Unit, in accordance with an embodiment of the disclosure.
- the process starts with operation 1000, in which the Amplitude Calculation Unit 406A checks a level of the estimated intensity of the vibration of the respective one or more objects.
- the Amplitude Calculation Unit 406A identifies the coordinates of the one or more objects using the output of the Identification Engine 402.
- the Amplitude Calculation Unit 406A determines the sound intensity range of each of the identified one or more objects based on the level of the estimated intensity of the vibration of the respective one or more objects. In operation 1006, after the determination of the sound intensity range, the Amplitude Calculation Unit 406A maps the sound intensity range of each of the identified one or more objects with a predefined intensity frequency range stored in an intensity frequency database (DB) in operation 1008.
- the intensity frequency DB includes information including the intensity frequency range corresponding to a plurality of levels of the estimated intensity of the vibration. As an example, the information included in the intensity frequency DB is shown in Table 8 below.
- Table 8:
- Low intensity range: frequency range of 0 to 200 kHz
- Low to Medium intensity range: frequency range of 200 to 300 kHz
- Medium to High intensity range: frequency range of 300 to 500 kHz
- the left column of the Table 8 indicates the sound intensity range of each of the identified one or more objects and the right column of the Table 8 indicates the predefined intensity frequency range corresponding to the respective sound intensity range.
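- The sketch below assumes a simple linear placement of the vibration frequency inside the Table 8 range selected by the intensity category; this interpolation rule is an assumption, and the disclosure's own mapping may yield different values (for example, it maps the 78 dB jeep to 420 kHz).

```python
# Table 8 ranges, with an assumed linear placement inside the selected range.

INTENSITY_FREQ_RANGES_KHZ = {
    "low":            (0, 200),
    "low_to_medium":  (200, 300),
    "medium_to_high": (300, 500),
}

def vibration_frequency_khz(intensity_range, amplitude_db, max_db=90.0):
    """Pick a frequency inside the mapped range according to relative amplitude."""
    low, high = INTENSITY_FREQ_RANGES_KHZ[intensity_range]
    return low + (high - low) * min(amplitude_db / max_db, 1.0)

# Illustrative output only; not the value used in the disclosure's example.
print(round(vibration_frequency_khz("medium_to_high", 78), 0))   # -> 473.0
```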
- the Frequency Calculation Unit 406B determines a vibration frequency of each of the one or more objects based on the audio amplitude of each of the one or more objects and the estimated scene intensity associated with the one or more objects. As an example, the Frequency Calculation Unit 406B determines the vibration frequency for each of the one or more objects in the multimedia scene based on the mapping of the sound intensity range of each of the one or more objects over the corresponding predefined intensity frequency range stored in the intensity frequency DB.
- the Frequency Calculation Unit 406B generates a list of vibration frequencies of the one or more objects based on the determined vibration frequency for each of the one or more objects.
- An example list of the vibration frequencies of the one or more objects is shown below as an example Table 9 with reference to objects shown in FIG. 10B.
- FIG. 10B illustrates an example frequency identification by the Frequency Calculation Unit, in accordance with an embodiment of the disclosure.
- the amplitudes of the respective objects (jeep 1, jeep 2, and jeep 3) are 78 dB, 70 dB, and 28 dB, respectively.
- the Frequency Calculation Unit 406B determines the vibration frequencies of the jeep 1, the jeep 2, and the jeep 3 based on the corresponding audio amplitudes of the jeep 1, the jeep 2, and the jeep 3 in the multimedia scene and the mapping of the sound intensity range of each of the amplitudes of the jeep 1, the jeep 2, and the jeep 3 over the corresponding predefined intensity frequency range stored in the intensity frequency DB 1008. As a result, desired vibration frequencies fa, fb, and fc are determined for the jeep 1, the jeep 2, and the jeep 3, respectively.
- the desired vibration frequency fa for the jeep 1 is determined as 420 kHz
- the desired vibration frequency fb for the jeep 2 is determined as 366 kHz
- the desired vibration frequency fc for the jeep 3 is determined as 200 kHz.
- the determination of the vibration frequency of each of the one or more objects is not limited to the aforementioned example.
- the vibration frequency will be determined according to a change in the multimedia scenes and a display of the one or more contents on the display screen.
- the Coordinate & Frequency Alignment Unit 406C determines a capability of generating the determined vibration frequency of each of the identified objects from the one or more audio sources available in the electronic device 400 based on at least one of the determined audio source position, identified object position, and the detected plurality of environmental parameters, thereby determining the capability of generation of sound source by the one or more audio sources available in the electronic device 400.
- the process of determining the capability of generating the determined vibration frequency will be described with reference to FIG. 11.
- FIG. 11 illustrates an example process of operations performed by Coordinate & Frequency Alignment Unit, in accordance with an embodiment of the disclosure.
- the process of determining the capability of generating the determined vibration frequency of each of the identified objects starts with operation 1100, in which the Coordinate & Frequency Alignment Unit 406C calculates constructive and destructive interference parameters using the coordinates of the one or more objects and their frequencies in the multimedia scene.
- the Coordinate & Frequency Alignment Unit 406C calculates a number of audio sources (i.e., speakers or mic) and their coordinates needed for generating the determined vibration frequency using coordinates of the one or more objects identified by the Identification Engine 402.
- the Coordinate & Frequency Alignment Unit 406C determines whether the audio sources available in the electronic device 400 can generate the determined vibration frequency. If a result of the determination in operation 1104 is Yes, then the process flow proceeds to operation 1106. In operation 1106, the Coordinate & Frequency Alignment Unit 406C associates the determined vibration frequency with desired audio sources for the generation of the determined vibration frequency.
- the Coordinate & Frequency alignment unit 406C determines an optimized vibration frequency for each of the identified one or more objects based on the audio amplitude for each of the identified one or more objects and the estimated scene intensity. Now, an example of the process of determining the optimized vibration frequency will be explained with reference to FIGS. 12A and 12B.
- FIGS. 12A and 12B illustrate an example process of operations performed by the Coordinate & Frequency Alignment Unit, in accordance with an embodiment of the disclosure.
- FIG. 12A discloses two vibration frequencies f1 and f2 and a resultant frequency fa.
- the vibration frequency f1 corresponds to a frequency desired to be generated by an audio source s1 and the vibration frequency f2 corresponds to a frequency desired to be generated by an audio source s2.
- the resultant frequency fa corresponds to a vibration frequency desired to be produced by the audio sources s1 and s2.
- the Coordinate & Frequency Alignment Unit 406C may determine that there are two audio sources available to generate the resultant frequency fa.
- the Coordinate & Frequency Alignment Unit 406C calculates the vibration frequencies (f1 and f2) of the two available audio sources (s1 and s2). In operation 1204, the Coordinate & Frequency Alignment Unit 406C calibrates the vibration frequencies of the one or more objects such that the audio sources available in the electronic device 400 can generate them and the average of the calibrated frequencies lies around the desired frequency.
- the vibration frequencies of the one or more objects are calibrated using coordinates of the audio sources available in the electronic device 400 and a regressive learning mechanism generated by a learning and feedback engine to be described later. The calibration of the vibration frequencies of the one or more objects will now be explained with an example.
- the Coordinate & Frequency Alignment Unit 406C determines whether the audio sources available in the system can generate the calibrated frequency. If the result of the determination in operation 1206 is Yes, then the process of determining the optimized vibration frequency comes to an end. However, if the result of the determination in operation 1206 is No, then the Coordinate & Frequency Alignment Unit 406C recalibrates the determined vibration frequency such that the audio sources available in the electronic device 400 can generate the desired frequency for vibration at the coordinates of each of the one or more objects identified by the Identification Engine 402.
- the Coordinate & Frequency Alignment Unit 406C determines a list of the optimized vibration frequency for the identified one or more objects.
- An example of the list of the optimized vibration frequency with reference to FIGS. 10B and 12A is shown in Table 10 below.
- the Source Frequency Selection Unit 406E selects one optimized vibration frequency from the list of the optimized vibration frequency using the regressive learning mechanism. After the selection of the optimized vibration frequency, the Source Frequency Selection Unit 406E determines a required vibration frequency required by at least one audio source of the one or more audio sources so as to generate the selected optimized vibration frequency at the coordinates of the one or more objects identified by the Identification Engine 402.
- the Source Frequency Selection Unit 406E uses a wave mechanism so as to generate the selected optimized vibration frequency at the coordinates of the one or more objects.
- a resultant sound wave x = 2X·cos(π·fB·t)·cos(2π·favg·t) has a frequency of favg, where favg is equal to the average frequency of the one or more audio sources available in the electronic device 400 and fB is the difference between the source frequencies (the beat frequency).
- the vibration frequency f1 of the audio source s1 will be 400kHz and the vibration frequency f2 of the audio source s2 will be 348kHz.
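- The sketch below applies the wave relation above to recover a pair of source frequencies whose average equals a desired carrier frequency; the 52 kHz offset is taken from the 400 kHz / 348 kHz example (which implies a carrier of 374 kHz) and is otherwise an arbitrary choice.

```python
# Sketch of the wave mechanism: two sources at f1 and f2 superpose into a carrier
# at f_avg = (f1 + f2) / 2, so a target carrier can be split symmetrically.

def source_frequencies(target_avg_khz, offset_khz=52.0):
    """Return (f1, f2) whose average equals the target carrier frequency."""
    f1 = target_avg_khz + offset_khz / 2
    f2 = target_avg_khz - offset_khz / 2
    return f1, f2

print(source_frequencies(374.0))   # -> (400.0, 348.0), matching the example above
```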
- the Noise estimation Unit 406F calculates the amplitude value for the determined required vibration frequency based on a comparison of amplitude values of audio content with vibration and amplitude values of actual audio content.
- An example of amplitude values calculated for the determined required vibration is shown below in Table 11.
- the Noise estimation Unit 406F further determines extra noise in the audio content with the determined required vibration frequency and compares the determined extra noise with a predefined noise threshold value.
- the Amplitude Rectification unit 406G calibrates an amplitude of the required vibration frequency if the determined extra noise is greater than the predefined noise threshold value as a result of the comparison.
- the amplitude of the required vibration frequency is calibrated with respect to a required frequency to minimize the extra noise due to the determined required vibration frequency.
- the Amplitude Rectification unit 406G generates a list of calibrated amplitude values respective to the one or more objects identified by the Identification Engine 402. An example of the calibrated amplitude values is shown below in Table 12 with reference to FIGS. 10B and 12A.
- FIGS. 13A and 13B illustrate an example of amplitude calibration process performed by the Calibration Engine, in accordance with an embodiment of the disclosure.
- FIG. 13B discloses a plurality of amplitude values A1, A2, A3, ... , An corresponding to which some extra noise is present with reference to the determined required vibration frequency.
- the Amplitude Rectification unit 406G calibrates each of the amplitude values A1, A2, A3, ... , An such that the extra noise at A1, A2, A3, ... , An is removed.
- the amplitude values A1, A2, A3, ... , An are calibrated without any change in the determined required vibration frequency.
- the Amplitude Rectification unit 406G sets a final optimal amplitude value for the determined required vibration frequency having the minimized extra noise.
- the Sound Source Amplitude Selection Unit 406H selects calibrated amplitude values corresponding to the available audio sources from the list of the calibrated amplitude values generated by the Amplitude Rectification unit 406G. As an example with reference to Table 12 and FIG. 10B, the Sound Source Amplitude Selection Unit 406H selects a calibrated amplitude value (15 dB) corresponding to the determined required vibration frequency for the jeep 1 and selects a calibrated amplitude value (10 dB) corresponding to the determined required vibration frequency for the jeep 3.
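- The sketch below illustrates the rectification loop of FIGS. 13A and 13B as an assumed step-wise amplitude reduction that continues until the estimated extra noise falls below the threshold, leaving the frequency unchanged; the noise model, step size, and values are invented for illustration.

```python
# Assumed amplitude-rectification loop: lower the amplitude until the extra noise
# it causes is acceptable, without changing the required vibration frequency.

def rectify_amplitude(amplitude_db, estimate_extra_noise, noise_threshold_db,
                      step_db=1.0, min_amplitude_db=0.0):
    """Reduce the amplitude step by step until extra noise is below threshold."""
    while (estimate_extra_noise(amplitude_db) > noise_threshold_db
           and amplitude_db > min_amplitude_db):
        amplitude_db -= step_db
    return amplitude_db

# Toy noise model: extra noise grows with amplitude above 12 dB.
noise_model = lambda a: max(0.0, a - 12.0)
print(rectify_amplitude(20.0, noise_model, noise_threshold_db=3.0))   # -> 15.0
```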
- the Generation Engine 408 of the electronic device 400 includes a Source Selection & initialization unit 408A, a Frequency and Amplitude Assignment Unit 408B, a Production Unit 408C, and a Feedback & Learning Engine 408D.
- the Source Selection & initialization unit 408A selects at least one audio source associated with the at least one content from the identified one or more contents.
- the Source Selection & initialization unit 408A selects the at least one audio source from the audio sources available in the electronic device 400 for generation of the calibrated optimized vibration frequency.
- the Frequency and Amplitude Assignment Unit 408B assigns the required frequency and the calibrated amplitude to the selected at least one audio source associated with the at least one content.
- the required frequency can also be referred to as "the optimized vibration frequency” without deviating from the scope of the disclosure.
- the Production Unit 408C generates first sound wavelets from the selected at least one audio source associated with the at least one content to generate the calibrated optimized vibration frequency at the content position of the identified one or more contents for the generation of the localized haptic feedback effect by using the one or more audio sources.
- the first sound wavelets from the selected at least one audio source are generated by the Production Unit 408C using a constructive interference technique.
- An example of the constructive interference technique will be explained with reference to FIGS. 14A and 14B.
- FIGS. 14A and 14B illustrate an example process of generating sound wavelets by Production Unit, in accordance with an embodiment of the disclosure.
- a plurality of sound wavelets is produced from the audio source s1 and another plurality of sound wavelets is produced from the audio source s2 in order to form a point of intersection to generate the localized haptic feedback effect at the requisite vibration point 1400.
- the similar process of generating the sound wavelets can be performed for the generation of the localized haptic feedback effect at the coordinates of the one or more objects.
- For example, as shown in FIG. 14B, a plurality of sound wavelets having wavelengths (λ1, λ2, and λ3) is generated respectively using three audio sources s1, s2, and s3 for the generation of the localized haptic feedback effect at the coordinates (x1, y1), (x2, y2), and (x3, y3) of the balloons displayed on the display screen corresponding to the locations 1, 2, and 3.
- a sound wave produced from the audio source s1 should interfere constructively with the sound wavelets produced by the audio source s2. Therefore, for constructive interference at the requisite vibration point 1400, a single wavelet whose wavelength reaches the requisite vibration point 1400 is required from the audio source s1, together with multiple wavelets of smaller wavelengths from the audio source s2.
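- As a hedged illustration of the constructive interference technique, the following Python sketch assigns a phase to a second audio source so that its wave arrives in phase with the wave from the first source at a requisite vibration point; the source layout, frequency, and unit amplitudes are assumed values, and free-field propagation without attenuation is assumed.

```python
import math

SPEED_OF_SOUND = 343.0  # m/s in air (assumed propagation medium)

def phase_for_constructive_interference(p_target, p_s1, p_s2, freq_hz):
    """Phase (radians) to apply at s2 so that s1 (phase 0) and s2 add in phase at p_target."""
    wavelength = SPEED_OF_SOUND / freq_hz
    path_difference = math.dist(p_target, p_s1) - math.dist(p_target, p_s2)
    return (2.0 * math.pi * path_difference / wavelength) % (2.0 * math.pi)

def combined_amplitude(p, p_s1, p_s2, freq_hz, phi2):
    """Peak of the superposed unit-amplitude waves at point p (no attenuation assumed)."""
    k = 2.0 * math.pi * freq_hz / SPEED_OF_SOUND
    re = math.cos(k * math.dist(p, p_s1)) + math.cos(k * math.dist(p, p_s2) + phi2)
    im = math.sin(k * math.dist(p, p_s1)) + math.sin(k * math.dist(p, p_s2) + phi2)
    return math.hypot(re, im)

# Assumed layout in metres: two sources and one requisite vibration point on the panel.
s1, s2, target = (0.0, 0.0), (0.08, 0.0), (0.03, 0.12)
phi2 = phase_for_constructive_interference(target, s1, s2, freq_hz=250.0)
print(round(combined_amplitude(target, s1, s2, 250.0, phi2), 3))  # ~2.0, fully constructive
```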
- the Production Unit 408C may also generate second sound wavelets from the selected at least one audio source associated with the at least one content to generate the optimized vibration frequency at the content position of the identified one or more contents to cancel multiple coherence points of vibration due to the generation of the first sound wavelets for the generation of the localized haptic feedback effect.
- the second sound wavelets from the selected at least one audio source are generated by the Production Unit 408C using a destructive interference technique.
- FIG. 15 illustrates another example of generating sound wavelets by Production Unit, in accordance with an embodiment of the disclosure.
- in FIG. 15, there are two sound wavelets 1500 and 1502.
- there are only two coherence points required (required surface intersection points 1506) for the generation of the localized haptic feedback effect.
- other coherence points (extra vibration points 1504) are also present that can generate a localized haptic feedback effect which is not necessary or required.
- to eliminate or cancel the effect of the extra vibration points 1504, the Production Unit 408C generates additional sound wavelets, different from the sound wavelets 1500 and 1502, that cancel the unnecessary extra vibration points 1504 such that the localized haptic feedback effect is generated only at the required surface intersection points 1506.
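- As a companion sketch under the same free-field assumptions as above, the following Python example computes the amplitude and emission phase of an additional wavelet that arrives in anti-phase with the existing field at an unwanted extra vibration point, nulling it there; in practice the required points and the cancelled points would have to be solved jointly, which this sketch does not attempt.

```python
import math

SPEED_OF_SOUND = 343.0  # m/s (assumed)

def field(point, sources, freq_hz):
    """Superpose waves from (position, amplitude, emission phase) sources at a point."""
    k = 2.0 * math.pi * freq_hz / SPEED_OF_SOUND
    re = sum(a * math.cos(k * math.dist(point, p) + ph) for p, a, ph in sources)
    im = sum(a * math.sin(k * math.dist(point, p) + ph) for p, a, ph in sources)
    return math.hypot(re, im)

def cancelling_wavelet(extra_point, sources, canceller_pos, freq_hz):
    """Amplitude and emission phase of an extra wavelet that arrives in anti-phase with
    the existing field at the unwanted extra vibration point, nulling it there."""
    k = 2.0 * math.pi * freq_hz / SPEED_OF_SOUND
    re = sum(a * math.cos(k * math.dist(extra_point, p) + ph) for p, a, ph in sources)
    im = sum(a * math.sin(k * math.dist(extra_point, p) + ph) for p, a, ph in sources)
    amp = math.hypot(re, im)                         # match the existing magnitude
    arrival_phase = math.atan2(im, re) + math.pi     # arrive with opposite phase
    return amp, (arrival_phase - k * math.dist(extra_point, canceller_pos)) % (2.0 * math.pi)

freq = 250.0
pair = [((0.0, 0.0), 1.0, 0.0), ((0.08, 0.0), 1.0, 0.0)]   # assumed first sound wavelet sources
extra = (0.04, 0.05)                                       # an unwanted coherence point
amp, phase = cancelling_wavelet(extra, pair, (0.02, 0.10), freq)
print(round(field(extra, pair + [((0.02, 0.10), amp, phase)], freq), 6))  # ~0.0 at that point
```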
- the Approximation Engine 404 extracts audio content from the one or more contents identified by the Identification Engine 402. After the extraction of the audio content, the Generation Engine 408 generates a vibration pattern based on the required frequency and the calibrated amplitude. After the generation of the vibration pattern, the Generation Engine 408 calculates a point of coherence from the generated vibration pattern and further removes the calculated point of coherence from the generated vibration pattern.
- the Generation Engine 408 obtains a unique vibration frequency wave based on the removal of the calculated point of coherence and merges the obtained vibration frequency with a current sound wave. Finally, the Generation Engine 408 generates the determined optimized vibration frequency at the content position of the identified one or more contents by merging of the obtained unique vibration frequency wave with the audio content from the identified one or more contents.
- FIGS. 16A, 16B, 16C, 16D, 16E, and 16F illustrate an example process of obtaining a unique vibration frequency, in accordance with an embodiment of the disclosure.
- FIG. 16A discloses an example of a normal sound wave for the audio content and
- FIG. 16B discloses an example of a low-frequency sound wave for the generation of the vibration pattern.
- FIG. 16C discloses a digital representation of the normal sound wave for the audio content and
- FIG. 16D discloses a digital representation of the low-frequency sound wave.
- the Approximation Engine 404 extracts the normal sound wave for the audio content. Further using the low-frequency sound wave, the Generation Engine 408 generates the vibration pattern based on the required frequency, the calibrated amplitude, and scene analysis of the one or more contents displayed on the display screen.
- FIG. 16E discloses points of coherence calculated by the Generation Engine 408 using merged digital waveform of the normal sound wave and the low-frequency sound wave. After calculating the points of coherence, the Generation Engine 408 updates the merged digital waveform to remove the calculated point of coherence from the merged digital waveform as shown in FIG. 16F. Accordingly, as an outcome of the updated digital waveform, the Generation Engine 408 obtains the unique vibration frequency wave.
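- The merge-and-remove-coherence step of FIGS. 16E and 16F can be approximated by the following Python sketch, which treats coherence points as samples where the audio waveform and the low-frequency vibration pattern reinforce each other beyond a threshold; this interpretation, the threshold, and the placeholder signals are assumptions, not the literal algorithm of the disclosure.

```python
import numpy as np

def merge_and_remove_coherence(audio, vibration, coherence_threshold=1.5):
    """Merge the normal sound wave with the low-frequency vibration pattern, then strip
    the vibration contribution at samples where the two reinforce each other strongly."""
    merged = audio + vibration
    coherent = (np.sign(audio) == np.sign(vibration)) & (np.abs(merged) > coherence_threshold)
    unique_wave = merged.copy()
    unique_wave[coherent] = audio[coherent]        # remove the coherence points
    return unique_wave

t = np.linspace(0.0, 1.0, 48_000, endpoint=False)
audio = np.sin(2 * np.pi * 440.0 * t)              # placeholder "normal sound wave"
vibration = 0.8 * np.sin(2 * np.pi * 40.0 * t)     # placeholder low-frequency pattern
out = merge_and_remove_coherence(audio, vibration)
print(out.shape, round(float(np.max(np.abs(out))), 3))
```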
- in the absence of the audio content, the localized haptic feedback effect on the determined one or more haptic event locations will be generated by the Generation Engine 408 using outputs of the Identification Engine 402, the Approximation Engine 404, and the Calibration Engine 406 by the process explained above.
- the Feedback & Learning Engine 408D monitors, over a predetermined period of time, the generation of the localized haptic feedback effect on the determined one or more haptic event locations.
- the Feedback & Learning Engine 408D may also perform an intelligent learning and feedback process to understand any optimization in the amplitude values to reduce the extra noise or frequency correction to accurately locate one or more haptic event locations.
- the Feedback & Learning Engine 408D may also send feedback to the Calibration Engine 406 based on the intelligent learning and feedback process to calibrate frequency and amplitude values.
- FIGS. 17A, 17B, and 17C illustrate examples of generating a localized haptic feedback effect on the determined one or more haptic event locations, in accordance with an embodiment of the disclosure.
- FIG. 17A discloses a first example of the generation of the localized haptic feedback effect.
- three localized haptic feedback effects (Localized vibration 1700, Localized vibration 1702, and Localized vibration 1704) are generated with varied intensity in accordance with the embodiments of the disclosure.
- Each of the Localized Vibrations 1700, 1702, and 1704 has an intensity in accordance with the action classes of corresponding objects in the displayed multimedia scene.
- Further, as shown in FIG. 17B, a second example of the generation of the localized haptic feedback effect is disclosed.
- the localized vibrations are produced using the audio content associated with the one or more contents using the process explained above with reference to FIGS. 16A through 16F.
- FIG. 17C discloses a third example of the generation of the localized haptic feedback effect.
- the localized vibration 1706 is generated using audio source 1708.
- the localized vibrations can be produced at multiple points of the one or more contents displayed on the display screen without any DC motor.
- the electronic device 400 of the disclosure results in the generation of the localized haptic feedback effect due to which vibration occurs only at a specific event location.
- the localized haptic feedback effect generated by the electronic device 400 can also help a clumsy or incompetent person to grasp event locations regarding an event.
- the localized haptic feedback effect generated by the electronic device 400 may also enhance the user experience by providing a real-time localized haptic feedback effect to the user.
- the method 300 and the electronic device 2000 or 400 can enhance the user experience.
- the localized vibration effect can provide a vibration effect comparable to Dolby Sound for users and hence can provide a pleasing user experience.
- Another exemplary advantage of the electronic device 2000 or 400 of the disclosure is cost reduction due to the removal of the DC motor. Since the DC motor is removed, the power consumption of the system can also be optimized.
- the localized haptic feedback effect generated by the method 300 and the electronic device 2000 or 400 provides intelligent accessibility of the one or more contents displayed on the display screen.
- FIG. 18 illustrates a first use case of providing a real-time localized haptic feedback effect to a user, in accordance with an embodiment of the disclosure.
- the device in FIG. 18 has two sound sources S1 and S2 as the available audio sources, with frequencies (f1 and f2) and wavelengths (λ1 and λ2), respectively.
- Sound source S1 emits a wave of wavelength λ1
- sound source S2 emits a wave of wavelength λ2 with respective phases such that they interfere at the desired location in the respective regions of interest.
- the resultant sound waves x1 and x2 are calculated for the audio sources S1 and S2 each time there is a change in the content displayed on the display screen.
- the respective calculated resultant sound waves x1 and x2 are used for generating vibrations at specific coordinates of the display screen with reference to the objects (man 1800, trolley 1802, and Swipe arrows 1804 (depicted as black-colored arrows)).
- the calibrated frequency value required for the vibration is 470 kHz
- such real-time vibrations may notify the user regarding an input operation by generating the vibrations at the specific coordinates of the display screen while playing the game and may enhance the experience of the user while playing the game.
- FIG. 19 illustrates a second use case of providing a real-time localized haptic feedback effect to the user, in accordance with an embodiment of the disclosure.
- FIG. 19 discloses an example use case for generating real-time localized haptic feedback effect while watching video content.
- when a car blast scene is displayed on the display screen, the Generation Engine 408 generates a real-time localized haptic feedback effect (Localized Vibration 1900) at the location of the car in the car blast scene with a high vibration intensity such that the user can experience the blast along with the reproduction of the video content. Accordingly, such a real-time localized haptic feedback effect may enhance the user experience while watching the video content.
- FIG. 20 illustrates a third use case of providing a real-time localized haptic feedback effect to the user, in accordance with an embodiment of the disclosure.
- FIG. 20 discloses an example use case for generating real-time localized haptic feedback effect during online payment.
- a localized vibration 2000 is generated at the location of the objects (copy icon 2002 and delete icon 2004) using sound wavelets of sound sources S1 and S2. Accordingly, at the time of an online payment, the user can be notified with a real-time vibration effect indicating which icon the user should refer to in order to perform the further operations for the online payment, and hence the user experience is enhanced by such a vibration effect.
- the electronic device 400 of the disclosure can also generate multidimensional vibrations during online book reading or online document reading to specify or highlight a specific location of the content. Accordingly, the user experience during online reading can be enhanced.
- the disclosure can also be applied to display devices used in vehicles.
- the disclosure is not limited to the use case examples described above with regard to display-based electronic devices. It can also be applied to other technological fields, for example, health care systems, audio devices, and any other electronic devices including audio sources.
- FIG. 21 illustrates a block diagram of an electronic device that executes the processes of FIG. 2 and FIG. 4, in accordance with an embodiment of the disclosure.
- a Central Processing Unit (CPU) 2102, a ROM (Read Only Memory) 2104, and a RAM (Random Access Memory) 2106 are connected by a Bus 2122.
- the CPU 2102 may be implemented as a processor.
- the input unit 2110 includes a keyboard, a mouse, a microphone, and the like.
- the output unit 2112 includes a display, a speaker, and the like.
- the storage unit 2114 includes a nonvolatile memory and the like.
- the communication unit 2116 includes a network interface or the like.
- the drive 2118 drives a removable medium 2120 such as a magnetic disk, an optical disk, a magneto-optical disk, a semiconductor memory, or the like.
- the CPU 2102, the ROM 2104, and the RAM 2106 are communicatively coupled with the input unit 2110, the output unit 2112, the storage unit 2114, the communication unit 2116, and the drive 2118 via the input/output interface 2108.
- the electronic device 2100 may also include one or more processors to execute the series of processes described hereinabove with reference to the electronic device 2000 and the electronic device 400.
- the disclosure refers to a method for generating a localized haptic feedback effect in an electronic device.
- the method comprises identifying, by an identification engine, one or more contents on a display screen of the electronic device and dividing, by the identification engine, the display screen into a plurality of grids.
- the method further comprises determining, by the identification engine, one or more haptic event locations, associated with the identified one or more contents, on the plurality of grids, and classifying, by an approximation engine, one or more haptic events associated with the identified content with respect to a level of haptic feedback to be generated at the one or more haptic event locations.
- the method comprises determining, by a calibration engine, an optimized vibration frequency according to the classified level of haptic feedback to be generated at the one or more haptic event locations for the identified one or more contents based on at least one of a plurality of environmental parameters and a capability of generation of sound by one or more audio sources available in the electronic device. Additionally, after the determination of the optimized vibration frequency, the method comprises generating, by a generation engine, a calibrated optimized vibration frequency based on the determined optimized vibration frequency by calibration of the optimized vibration frequency and amplitude for the generation of the localized haptic feedback effect on the determined one or more haptic event locations by using the one or more audio sources available in the electronic device.
- the disclosure refers to an electronic device for generating a localized haptic feedback effect in an electronic device.
- the electronic device includes an identification engine that identifies one or more contents on a display screen of the electronic device, divides the display screen into a plurality of grids, and determines one or more haptic event locations associated with the identified one or more contents on the plurality of grids.
- the electronic device further includes an approximation engine that classifies one or more haptic events associated with the identified content with respect to a level of haptic feedback to be generated at the one or more haptic event locations.
- the electronic device further includes a calibration engine that determines an optimized vibration frequency according to the classified level of haptic feedback to be generated at the one or more haptic event locations for the identified one or more contents based on at least one of a plurality of environmental parameters and a capability of generation of sound by one or more audio sources available in the electronic device.
- the electronic device further includes a generation engine that generates a calibrated optimized vibration frequency based on the determined optimized vibration frequency by calibration of the optimized vibration frequency and amplitude for the generation of the localized haptic feedback effect on the determined at least one haptic event location by using the one or more audio sources available in the electronic device.
- Some example embodiments disclosed herein may be implemented using processing circuitry.
- some example embodiments disclosed herein may be implemented using at least one software program running on at least one hardware device and performing network management functions to control the elements.
Abstract
Provided is a method and an electronic device for generating a localized haptic feedback effect in an electronic device. The electronic device includes a display for displaying one or more contents and a processor that identifies one or more contents on the display of the electronic device, divides the screen of the display into a plurality of grids, and determines one or more haptic event locations associated with the identified one or more contents on the plurality of grids. The electronic device further classifies one or more haptic events associated with the identified content with respect to a level of haptic feedback to be generated at the one or more haptic event locations, determines an optimized vibration frequency according to the classified level of haptic feedback to be generated at the one or more haptic event locations for the identified one or more contents based on at least one of a plurality of environmental parameters and a capability of generation of sound by one or more audio sources available in the electronic device, and generates a calibrated optimized vibration frequency based on the determined optimized vibration frequency by calibration of the optimized vibration frequency and amplitude for the generation of the localized haptic feedback effect on the determined one or more haptic event locations by using the one or more audio sources available in the electronic device.
Description
The disclosure relates to a method and an apparatus for generating localized vibrations. In particular, the disclosure relates to a method and an apparatus for generating a localized haptic feedback effect in an electronic device.
Initially, modern smart electronic devices or terminal devices are equipped with a vibration motor to generate vibrations to notify users in response to an event. However, the vibration generated by these electronic devices is of very high intensity. Vibrations are used for tactile feedback in touch-based electronic devices. Before the advent of touch-based smartphones, users were not familiar with tapping on a smooth display instead of pushing physical buttons. For this purpose, small vibration feedback was added during the tapping. Further technological uses of vibrations have been applied only relatively recently.
During playback of videos and games that include sound, a DC motor is not capable of generating a localized vibration in response to a sound or game event. Vibrations at multiple points cannot be produced using the DC motor. Also, a clumsy or incompetent person needs to rely on haptic feedback, which is cumbersome and frustrating due to multiple touches and inputs.
Therefore, there is a need for a method and an apparatus that can provide a localized haptic feedback effect to a user. Accordingly, the disclosure provides the method and the apparatus for generating the localized haptic feedback effect in the electronic device.
Provided is a method for generating a localized haptic feedback effect in an electronic device. The method includes identifying, by a processor, one or more contents on a display screen of the electronic device, dividing the display screen into a plurality of grids, determining one or more haptic event locations associated with the identified one or more contents on the plurality of grids, classifying the one or more haptic events with respect to a level of haptic feedback to be generated at the one or more haptic event locations, determining an optimized vibration frequency based on the classified level of haptic feedback to be generated at the one or more haptic event locations for the identified one or more contents based on at least one of a plurality of environmental parameters and a capability of generation of sound source by one or more audio sources of the electronic device, and generating, based on the determined optimized vibration frequency, a calibrated optimized vibration frequency by calibration of the optimized vibration frequency and an amplitude for the generation of the localized haptic feedback effect on the determined one or more haptic event locations by using the one or more audio sources of the electronic device.
These and other features, aspects, and advantages of the present invention will become better understood when the following detailed description is read with reference to the accompanying drawings in which like characters represent like parts throughout the drawings, wherein:
FIGS. 1A, 1B, 1C, and 1D illustrate examples of a vibration effect;
FIG. 2 illustrates a block diagram of a device for generating a localized haptic feedback effect, in accordance with an embodiment of the disclosure;
FIG. 3 illustrates a flowchart for generating a localized haptic feedback effect, in accordance with an embodiment of the disclosure;
FIG. 4 illustrates a detailed architecture of the device including operational processes, in accordance with an embodiment of the disclosure;
FIGS. 5A, 5B, 5C, and 5D illustrate examples of a content identification process, in accordance with an embodiment of the disclosure;
FIGS. 6A, 6B, 6C, and 6D illustrate examples of detecting environmental parameters, in accordance with an embodiment of the disclosure;
FIGS. 7A and 7B illustrate an example of a triangulation technique for calculating audio source coordinates, in accordance with an embodiment of the disclosure;
FIGS. 8A and 8B illustrate an example of a feature extraction process, in accordance with an embodiment of the disclosure;
FIG. 9A illustrates a process of operations performed by the Frequency Mapping Unit, in accordance with an embodiment of the disclosure;
FIG. 9B illustrates a process of operations performed by the Size based Scaling Unit, in accordance with an embodiment of the disclosure;
FIG. 10A illustrates an example process of operations performed by the Amplitude Calculation Unit, in accordance with an embodiment of the disclosure;
FIG. 10B illustrates an example frequency identification by the Frequency Calculation Unit, in accordance with an embodiment of the disclosure;
FIG. 11 illustrates an example process of operations performed by Coordinate & Frequency Alignment Unit, in accordance with an embodiment of the disclosure;
FIGS. 12A and 12B illustrate an example process of operations performed by the Coordinate & Frequency Alignment Unit, in accordance with an embodiment of the disclosure;
FIGS. 13A and 13B illustrate an example of amplitude calibration process performed by the Calibration Engine 406, in accordance with an embodiment of the disclosure;
FIGS. 14A and 14B illustrate an example process of generating sound wavelets by Production Unit, in accordance with an embodiment of the disclosure;
FIG. 15 illustrates another example of generating sound wavelets by Production Unit, in accordance with an embodiment of the disclosure;
FIGS. 16A, 16B, 16C, 16D, 16E, and 16F illustrate an example process of obtaining a unique vibration frequency, in accordance with an embodiment of the disclosure;
FIGS. 17A, 17B, and 17C illustrate examples of generating a localized haptic feedback effect on the determined one or more haptic event locations, in accordance with an embodiment of the disclosure;
FIG. 18 illustrates a first use case of providing a real-time localized haptic feedback effect to a user, in accordance with an embodiment of the disclosure;
FIG. 19 illustrates a second use case of providing a real-time localized haptic feedback effect to the user, in accordance with an embodiment of the disclosure;
FIG. 20 illustrates a third use case of providing a real-time localized haptic feedback effect to the user, in accordance with an embodiment of the disclosure; and
FIG. 21 illustrates a block diagram of an electronic device that executes the processes of FIG. 2 and FIG. 4, in accordance with an embodiment of the disclosure.
Further, skilled artisans will appreciate that elements in the drawings are illustrated for simplicity and may not necessarily have been drawn to scale. For example, the flow charts illustrate the method in terms of the most prominent steps involved to help improve understanding of aspects of the present disclosure. Furthermore, in terms of the construction of the device, one or more components of the device may have been represented in the drawings by conventional symbols, and the drawings may show only those specific details that are pertinent to understanding the embodiments of the present disclosure so as not to obscure the drawings with details that will be readily apparent to those of ordinary skill in the art having the benefit of the description herein.
It should be understood at the outset that although illustrative implementations of the embodiments of the present disclosure are illustrated below, the present disclosure may be implemented using any number of techniques, whether currently known or in existence. The present disclosure should in no way be limited to the illustrative implementations, drawings, and techniques illustrated below, including the exemplary design and implementation illustrated and described herein, but may be modified within the scope of the appended claims along with their full scope of equivalents.
The term "some" as used herein is defined as "none, or one, or more than one, or all." Accordingly, the terms "none," "one," "more than one," "more than one, but not all" or "all" would all fall under the definition of "some." The term "some embodiments" may refer to no embodiments or to one embodiment or to several embodiments or to all embodiments. Accordingly, the term "some embodiments" is defined as meaning "no embodiment, or one embodiment, or more than one embodiment, or all embodiments."
The terminology and structure employed herein is for describing, teaching, and illuminating some embodiments and their specific features and elements and does not limit, restrict, or reduce the spirit and scope of the claims or their equivalents.
More specifically, any terms used herein such as but not limited to "includes," "comprises," "has," "consists," and grammatical variants thereof do NOT specify an exact limitation or restriction and certainly do NOT exclude the possible addition of one or more features or elements, unless otherwise stated, and furthermore must NOT be taken to exclude the possible removal of one or more of the listed features and elements, unless otherwise stated with the limiting language "MUST comprise" or "NEEDS TO include."
Whether or not a certain feature or element was limited to being used only once, either way, it may still be referred to as "one or more features" or "one or more elements" or "at least one feature" or "at least one element." Furthermore, the use of the terms "one or more" or "at least one" feature or element do NOT preclude there being none of that feature or element, unless otherwise specified by limiting language such as "there NEEDS to be one or more . . . " or "one or more element is REQUIRED."
Unless otherwise defined, all terms, and especially any technical and/or scientific terms, used herein may be taken to have the same meaning as commonly understood by one having ordinary skill in the art.
Embodiments of the disclosure will be described below in detail with reference to the accompanying drawings.
The vibrations used for tactile feedback are being produced using various motors including a DC motor that is effective in creating a buzzing and motional sensation in a device. However, the vibrations produced by the DC motor are usually centralized on a touch-based display of electronic devices and have a damping effect from the place where the DC motor is positioned in the touch-based electronic devices. When the DC motor produces the vibration, the whole electronic device gets a vibration effect and the vibration effect is not specific to a particular location on the electronic device. Also, DC motors bear some extra cost and utilize the power of the electronic device to produce the desired effect.
As shown in FIG. 1A of the drawings, the entire phone vibrates when the DC motor produces the vibration, which clearly shows an absence of localized vibration. Further, as shown in FIGS. 1B and 1C, the vibrations produced by DC motors during an incoming call and video playback are centralized. Furthermore, as shown in FIG. 1D, the electronic device vibrates at multiple points due to an effect of the vibrations produced by the DC motors.
During the playback of videos and games that include sound, a DC motor is not capable of generating a localized vibration in response to a sound or game event. Vibrations at multiple points cannot be produced using one DC motor. Also, a clumsy or incompetent person needs to rely on haptic feedback, which is cumbersome and frustrating due to multiple touches and inputs.
Therefore, there is a need for a method and an apparatus that can provide a localized haptic feedback effect to a user. Accordingly, the disclosure provides the method and the apparatus for generating the localized haptic feedback effect in the electronic device.
FIG. 2 illustrates a block diagram of a device for generating a localized haptic feedback effect, in accordance with an embodiment of the disclosure. FIG. 2 illustrates an electronic device 2000 to generate the localized haptic feedback effect. The electronic device 2000 includes an Application Framework 2101, a Hardware Layer 2300, and a processor 2200 which further includes an Identification Engine 2202, an Approximation Engine 2204, a Calibration Engine 2206, and a Generation Engine 2208. The aforementioned components of the electronic device are coupled with each other. Each of the Identification Engine 2202, the Approximation Engine 2204, the Calibration Engine 2206, and the Generation Engine 2208 is communicatively coupled to the Hardware Layer 2300 and the Application Framework 2101. The Identification Engine 2202, the Approximation Engine 2204, the Calibration Engine 2206, and the Generation Engine 2208 are implemented as the processor 2200.
The Hardware Layer 2300 of the electronic device 2000 includes a display 2310, an audio unit 2320, and a sensor 2330. The display 2310 includes a touch screen panel 2311 and a graphics engine 2312 coupled with the touch screen panel 2311. The display 2310 displays an image or video content. Examples of the display 2310 may include but not limited to, a television screen, a smartphone screen, a smart television screen, and a tablet screen. The display 2310 may be Light Emitting Diode (LED), Liquid Crystal Display (LCD), Organic Light Emitting Diode (OLED), Active-Matrix Organic Light Emitting Diode (AMOLED), or Super Active-Matrix Organic Light Emitting Diode (SAMOLED) screen. The display 2310 may have varied resolutions. It will be understood to a person of ordinary skill in the art that the disclosure is not limited to any type or any resolution of the display 2310.
The audio unit 2320 includes a speaker unit 2321 including one or more speakers, and at least one microphone 2322. The sensor 2330 may include one or more sensors. As an example, the sensor 2330 includes, but is not limited to, a grip sensor 2331, an accelerometer 2332, a gyroscope sensor 2333, and a humidity sensor 2339. The sensor 2330 may include sensors different from those described above.
In an embodiment, the Identification Engine 2202 identifies one or more contents displayed on the display 2310. After identifying the one or more contents displayed on the display 2310, the Identification Engine 2202 divides the display screen into a plurality of grids and determines occurrences and corresponding locations of one or more haptic events on the plurality of grids. The locations of one or more haptic events are associated with the identified one or more contents. Here, the one or more haptic event locations can also be referred to as "one or more vibration locations" without deviating from the scope of the disclosure.
In an embodiment, the Approximation Engine 2204 classifies one or more haptic events associated with the identified content. The Approximation Engine 2204 classifies the one or more haptic events with respect to a level of haptic feedback to be generated at the one or more haptic event locations. Here, the one or more haptic events can also be referred to as "one or more vibration events" without deviating from the scope of the disclosure, and the level of haptic feedback can also be referred to as "a frequency level of the one or more vibration events" without deviating from the scope of the disclosure.
In an embodiment, the Calibration Engine 2206 determines an optimized vibration frequency according to the classified level of haptic feedback to be generated at the one or more haptic event locations for the identified one or more contents. The Calibration Engine 2206 determines the optimized vibration frequency based on at least one of a plurality of environmental parameters and a capability of generating sound by at least one of the speaker unit 2321, and the at least one microphone 2322 of the audio unit 2320.
In an embodiment, the Generation Engine 2208 generates a calibrated optimized vibration frequency based on the determined optimized vibration frequency by calibration of the optimized vibration frequency and amplitude for the generation of the localized haptic feedback effect on the determined one or more haptic event locations. The Generation Engine 2208 generates the calibrated optimized vibration frequency by using at least one of the one or more speakers of the speaker unit 2321 and/or the at least one microphone 2322 included in the audio unit 2320.
FIG. 3 illustrates a flowchart for generating a localized haptic feedback effect, in accordance with an embodiment of the disclosure. The operations of FIG. 3 may be performed by the processor 2200 of the electronic device 2000.
The processor 2200 may identify one or more contents displayed on a display screen of the display 2310 in operation 302.
In operation 304, subsequent to the identification of the one or more contents, the processor 2200 may divide the display screen into a plurality of grids.
In operation 306, subsequent to the division of the display screen into the plurality of grids, the processor 2200 may determine one or more haptic events locations on the plurality of grids. The one or more haptic event locations are associated with the identified one or more contents.
The Identification Engine 2202 of the processor 2200 may perform each of the operations 302, 304, and 306 of the method 300.
In operation 308, subsequent to the determination of the one or more haptic events locations, the method 300 comprises classifying one or more haptic events associated with the identified content with respect to a level of haptic feedback to be generated at the one or more haptic event locations. The Approximation Engine 2204 of the processor 2200 performs the operation 308.
In operation 310, subsequent to the classification of the one or more haptic events, the method 300 comprises determining an optimized vibration frequency according to the classified level of haptic feedback to be generated at the one or more haptic event locations for the identified one or more contents, based on at least one of a plurality of environmental parameters and a capability of generating sound by at least one of the speaker unit 2321, and the at least one microphone 2322 of the audio unit 2320 included in the electronic device 2000. The Calibration Engine 2206 of the processor 2200 may perform the operation 310.
In operation 312, subsequent to the determination of the optimized vibration frequency, the method 300 comprises generating a calibrated optimized vibration frequency based on the determined optimized vibration frequency by calibration of the optimized vibration frequency and amplitude for the generation of the localized haptic feedback effect on the determined one or more haptic event locations, using at least one of the speaker unit 2321, and the at least one microphone 2322 of the audio unit 2320 included in the electronic device 2000.
FIG. 4 illustrates a detailed architecture of the device including operational processes, in accordance with an embodiment of the disclosure. The electronic device 400 includes an Identification Engine 402, an Approximation Engine 404, a Calibration Engine 406, and a Generation Engine 408. The Identification Engine 402, the Approximation Engine 404, the Calibration Engine 406, and the Generation Engine 408 correspond to the Identification Engine 2202, the Approximation Engine 2204, the Calibration Engine 2206, and the Generation Engine 2208 of the electronic device 2000, respectively. Each of the Identification Engine 402, the Approximation Engine 404, the Calibration Engine 406, and the Generation Engine 408 performs operations similar to the operations performed by the processor 2200 or the Identification Engine 2202, the Approximation Engine 2204, the Calibration Engine 2206, and the Generation Engine 2208 as described above, respectively. The Identification Engine 402, the Approximation Engine 404, the Calibration Engine 406, and the Generation Engine 408 may be implemented as the processor 2200. Further, the electronic device 400 corresponds to the electronic device 2000 of FIG. 2. The operations performed by the Identification Engine 402, the Approximation Engine 404, the Calibration Engine 406, and the Generation Engine 408 will be explained in detail with reference to FIGS. 4 through 17C.
The Identification Engine 402 of the electronic device 400 includes a Mode Selection Unit 402A, a Fast Region-Based Convolutional Neural Networks (R-CNN) based Object detection unit 402B, and an Environment Identification unit 402C.
The Mode Selection Unit 402A of the Identification Engine 402 determines whether a current mode of the electronic device 400 is a multimedia mode and checks a requirement of identification of the one or more contents in case it is determined that the current mode is the multimedia mode based on application data of the electronic device 400 and the one or more contents displayed on the display screen. In an embodiment, the Mode Selection Unit 402A of the Identification Engine 402 identifies whether the one or more contents displayed on the display screen is at least one of multimedia content, image, and a user interface (UI) element.
Further, the R-CNN based Object detection unit 402B of the Identification Engine 402 identifies one or more objects based on a result of the identification of the one or more contents displayed on the display screen. In order to identify the one or more objects, firstly the R-CNN based Object detection unit 402B divides an input frame of the multimedia mode that is displayed on the display screen into the plurality of grids. Secondly, the R-CNN based Object detection unit 402B determines the one or more haptic event locations on the divided plurality of grids. The one or more haptic event locations are associated with the identified one or more contents. Thirdly, the R-CNN based Object detection unit 402B determines a content position of the identified one or more contents based on the determined one or more haptic event locations. The one or more haptic event locations may correspond to a location of objects in a multimedia scene, a location of an object in a content displayed on the display screen, a location on the display screen on which a tap operation is performed by a user, or a location of a focused view object identified on the display screen. The one or more haptic event locations are not limited to the aforementioned examples and can be a location on the display screen other than the above-described examples. Lastly, the R-CNN based Object detection unit 402B obtains corresponding coordinates of the identified one or more objects based on the content position of the identified one or more contents. The one or more haptic event locations correspond to the obtained coordinates and can be defined as the coordinates for the haptic event locations. Now, an example of the R-CNN based Object detection unit 402B will be explained with reference to FIGS. 5A through 5D.
FIGS. 5A, 5B, 5C, and 5D illustrate examples of a content identification process, in accordance with an embodiment of the disclosure. In particular, FIGS. 5A through 5D illustrate examples of the determination of the current mode by the Mode Selection Unit 402A and the identification of the one or more contents by the R-CNN based Object detection unit 402B of FIG. 4. As shown in FIGS. 5A through 5C, a first terminal device 500, a second terminal device 514, and a third terminal device 516 display a first multimedia scene 502, a second multimedia scene 518, and a chatting window with Keyboard 520, respectively. In addition, a focused view display screen 522 is shown in FIG. 5D.
Accordingly, the Mode Selection Unit 402A of the Identification Engine 402 determines that the current mode of the first terminal device 500 is the multimedia mode based on the display of the multimedia scene on the display screen. Then, the R-CNN based Object detection unit 402B identifies a set of objects 506 in the multimedia scene displayed on the display screen. Further, the R-CNN based Object detection unit 402B divides the multimedia scene into grids 504 and determines the one or more haptic event locations on the divided grids 504. Furthermore, the R-CNN based Object detection unit 402B determines a respective position of the identified set of objects 506 on the divided grids 504 as locations of haptic events. As an example, the respective position of the identified set of objects 506 determined by the R-CNN based Object detection unit 402B can be represented in the form of object coordinates. Table 1 illustrates an example representation of the object coordinates determined by the R-CNN based Object detection unit 402B.
| ROI | Bounding Box Coordinates |
| Jeep 1 (Vehicle) | 10, 10 - 20, 20 |
| Jeep 2 (Vehicle) | 25, 15 - 35, 25 |
| Jeep 3 (Vehicle) | 40, 10 - 55, 25 |
As shown on the left side of Table 1, the regions of interest (ROI) correspond to the set of objects 506 (i.e., the three vehicles (Jeeps) shown in FIG. 5A). Further, the right side of Table 1 indicates bounding box coordinates corresponding to the coordinates of the set of objects 506. The aforementioned Table 1 is merely an example and is not limited to the above-described example. It can include different data based on the multimedia mode of the electronic device 400 and can be arranged in any other format based on system requirements.
Similarly, by implementing the above-described examples, the R-CNN based Object detection unit 402B may identify an object 508 of the second Multimedia scene 518 displayed on the second terminal device 514 in FIG. 5B, a tap input location 510 corresponding to a tap operation in the chatting window with Keyboard 520 displayed on the third terminal device 516 in FIG. 5C, and a focused view location 512 in the focused view display screen 522 in FIG. 5D.
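For illustration, the mapping from detected bounding boxes to grid cells (the haptic event locations) might be sketched in Python as follows; the grid dimensions and the treatment of the Table 1 coordinates as positions on a small logical screen are assumptions made only for this example.

```python
from typing import Dict, List, Tuple

BoundingBox = Tuple[int, int, int, int]   # (x1, y1, x2, y2)

def haptic_event_cells(screen_w, screen_h, rows, cols,
                       objects: Dict[str, BoundingBox]) -> Dict[str, List[Tuple[int, int]]]:
    """Divide the display into a rows x cols grid and return, per identified object,
    the grid cells overlapped by its bounding box (the haptic event locations)."""
    cell_w, cell_h = screen_w / cols, screen_h / rows
    cells = {}
    for name, (x1, y1, x2, y2) in objects.items():
        c1, c2 = int(x1 // cell_w), int((x2 - 1) // cell_w)
        r1, r2 = int(y1 // cell_h), int((y2 - 1) // cell_h)
        cells[name] = [(r, c) for r in range(r1, r2 + 1) for c in range(c1, c2 + 1)]
    return cells

# Treating the Table 1 bounding boxes as coordinates on a 60 x 40 logical screen (assumed units).
rois = {"Jeep 1": (10, 10, 20, 20), "Jeep 2": (25, 15, 35, 25), "Jeep 3": (40, 10, 55, 25)}
print(haptic_event_cells(60, 40, rows=8, cols=12, objects=rois))
```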
The Mode Selection Unit 402A of the Identification Engine 402 may also determine touch coordinates of a user tap input on the display screen and transfers the touch coordinates to the Approximation Engine 404.
The Environment Identification unit 402C of the Identification Engine 402 detects a plurality of environmental parameters of the electronic device 400. In order to detect the plurality of environmental parameters, firstly the Environment Identification unit 402C receives, from the one or more sensors, sensed data - environmental data - detected by the one or more sensors. As an example, the Environment Identification unit 402C may receive sensed data from at least one of the grip sensor 2331, the accelerometer 2332, the gyroscope sensor 2333, or the humidity sensor 2339. Secondly, the Environment Identification unit 402C detects the plurality of environmental parameters that correspond to at least one of a state of the electronic device 400, a surface on which the electronic device 400 is placed, and an orientation of the electronic device 400. As an example, the Environment Identification unit 402C receives the sensor data from the sensor 2330 and calculates surface values associated with the surface on which the electronic device 400 is placed based on the received sensor data. An example of such sensor data is shown below in Table 2. Table 2 illustrates the sensor data for detection of the plurality of environmental surfaces on which the electronic device 400 is placed.
| Mode | Accelerometer | Humidity Sensor | Ambient Sensor | Grip Sensor | Surface |
| Vibration | Sudden Change | Normal | No/Yes | False | Hard |
| Vibration | No/Minor Change | Normal | No/Yes | False | Soft |
| Normal | No/Minor Change | Abnormal | No/Yes | False | Water |
| Any | No/Minor Change | Any | Any | True | Hand |
The aforementioned Table 2 is merely an example and not limited to the above-described example. It can include sensor data different from the sensor data of table 2 and can be arranged in any other format based on the requirement by the electronic device 400. Further, the Environment Identification unit 402C transfers the detected plurality of environmental parameters to the Calibration Engine 406.
The plurality of environmental parameters detected by the Environment Identification unit 402C can be described with reference to FIGS. 6A through 6D. FIGS. 6A, 6B, 6C, and 6D illustrate examples of detecting environmental parameters, in accordance with an embodiment of the disclosure. Depicted are a phone in water in FIG. 6A, a phone on a hard surface in FIG. 6B, and a phone in a hand of a user in FIG. 6C. As an example, the Environment Identification unit 402C receives the sensor data including information about at least one of the locations of the phone in the water, on the hard surface, or in the hand of the user. The Environment Identification unit 402C also identifies a current mode of the electronic device 400. As an example, the Environment Identification unit 402C identifies one of a vibration mode, a ringing mode, or a silent mode of the electronic device 400 using the sensor data. Further, subsequent to the reception of the sensor data and the identification of the current mode, the Environment Identification unit 402C calculates the environmental parameters based on the sensor data and the current mode of the electronic device 400.
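A rule-based sketch in the spirit of Table 2 is shown below; the field names, rule order, and return labels are assumptions chosen for illustration rather than the rules actually used by the Environment Identification unit 402C.

```python
from dataclasses import dataclass

@dataclass
class SensorSnapshot:
    mode: str             # e.g. "vibration", "normal", "silent"
    accel_change: str     # "sudden", "minor", or "none"
    humidity: str         # "normal" or "abnormal"
    grip_detected: bool   # True when the grip sensor reports a hand hold

def classify_surface(s: SensorSnapshot) -> str:
    """Rough surface classification in the spirit of Table 2; the rules are illustrative."""
    if s.grip_detected:
        return "hand"
    if s.mode == "normal" and s.humidity == "abnormal":
        return "water"
    if s.mode == "vibration" and s.accel_change == "sudden":
        return "hard"
    if s.mode == "vibration":
        return "soft"
    return "unknown"

print(classify_surface(SensorSnapshot("vibration", "sudden", "normal", False)))  # hard
```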
Referring back to FIG. 4, the Approximation Engine 404 of the electronic device 400 includes an Audio Processing Engine 404A, a speaker position determination unit 404B, a scene understanding unit 404D, a Frequency Mapping Unit 404E, and a Size based Scaling Unit 404F.
The Audio Processing Engine 404A identifies one or more audio sources associated with the identified one or more contents and determines an audio source position of the identified one or more audio sources based on audio sources available in the electronic device 400. In order to determine the audio source position of the identified one or more audio sources, firstly the Audio Processing Engine 404A extracts built-in audio sources information including speaker information and microphone information associated with the speaker unit 2321 and the at least one microphone 2322. The built-in audio sources information may include information associated with application programming interfaces (APIs) and hardware IDs of the electronic device 400. The speaker information includes coordinates of the one or more speakers included in the speaker unit 2321 and features of the one or more speakers. As an example, the features of the one or more speakers may correspond to a surround sound feature. At the block 404C of FIG. 4, the Audio Processing Engine 404A determines whether a position or coordinates of at least one of the one or more speakers or the at least one microphone 2322 is present in the extracted built-in audio sources information.
If a result of the determination at the block 404C is yes, then the Audio Processing Engine 404A transfers the positions or the coordinates of at least one of the one or more speakers or the at least one microphone 2322 to the Calibration Engine 406. If a result of the determination at the block 404C is No, then the speaker position determination unit 404B calculates the corresponding coordinates by a triangulation technique. The corresponding coordinates correspond to the audio source position of the identified one or more audio sources.
Now, the calculation of the corresponding coordinates by the triangulation technique will be explained with reference to FIGS. 7A and 7B. FIGS. 7A and 7B illustrate an example of a triangulation technique for calculating audio source coordinates, in accordance with an embodiment of the disclosure. FIG. 7A illustrates an example of information associated with the one or more audio sources available in the electronic device 400. The syntax TYPE_BUILTIN_EARPIECE indicates an earpiece source (Sa) and the syntax TYPE_BUILTIN_SPEAKER indicates a speaker source (Sb). Further, respective syntax TYPE_BUILTIN_MIC in FIG. 7A indicates a first microphone source (a) and a second microphone source (b). As an example, the speaker position determination unit 404B calculates corresponding coordinates of the speaker source, the earpiece source, the first microphone source, and the second microphone source in a 2-dimensional (2D) coordinate system by a triangulation technique. The speaker position determination unit 404B may also calculate the corresponding coordinates in a 3-dimensional (3D) coordinate system. The speaker position determination unit 404B calculates a specific coordinate of each of the speaker source, the earpiece source, the first microphone source, and the second microphone source on the X-Y axis of the 2D coordinate system axis using the triangulation technique described in FIG. 7B.
For example, as disclosed in FIG. 7B, two speaker coordinates (-s, 0) and (S, 0) are on the X-axis of the 2D coordinate system. Also disclosed is a coordinate (0, m) of a reference microphone on the Y-axis of the 2D coordinate system. The speaker position determination unit 404B may use the below equations (1) and (2) to calculate a specific coordinate (x, y) of a target microphone in the 2D coordinate system.
where R1 and R2 correspond to the radial distances from the speaker and the earpiece, respectively, to the target microphone.
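Equations (1) and (2) themselves are not reproduced here. As a generic illustration of the underlying geometry (speakers at (-s, 0) and (S, 0), distances R1 and R2 to the target microphone), a two-circle trilateration sketch in Python could look like the following; it is offered only as an assumption-laden example, not as the disclosure's equations.

```python
import math

def trilaterate(R1, R2, s, S):
    """Given radial distances R1 and R2 from speakers placed at (-s, 0) and (S, 0),
    return the (x, y) coordinate of the target microphone (positive-y solution)."""
    x = (R1 ** 2 - R2 ** 2 - s ** 2 + S ** 2) / (2.0 * (s + S))
    y_squared = R1 ** 2 - (x + s) ** 2
    if y_squared < 0:
        raise ValueError("distances are inconsistent with this speaker geometry")
    return x, math.sqrt(y_squared)

# Round-trip check with a made-up microphone at (0.3, 1.1) and speakers at (-1, 0) and (2, 0).
mic = (0.3, 1.1)
R1, R2 = math.dist(mic, (-1.0, 0.0)), math.dist(mic, (2.0, 0.0))
print(trilaterate(R1, R2, s=1.0, S=2.0))   # approximately (0.3, 1.1)
```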
Further, the speaker position determination unit 404B may also store the calculated coordinates in a database. An example of the calculated coordinates is described below in Table 3. Table 3 illustrates an example table including coordinate information of the audio sources. In Table 3, Speaker 1 may correspond to the earpiece and Speaker 2 to the speaker, respectively.
|  | Location |
| Speaker 1 | 2, 2 |
| Speaker 2 | 4, 2 |
| Mic | 4, 0 |
Further, the speaker position determination unit 404B transfers the calculated coordinates of the one or more audio sources to the Calibration Engine 406.
The Scene Understanding Unit 404D may acquire audio information and video information associated with the multimedia scene and extracts a plurality of features of the identified one or more contents to classify the one or more haptic events associated with the identified one or more contents into a plurality of action classes. The plurality of features of the identified one or more contents is extracted from the acquired audio information and video information associated with the multimedia scene. The plurality of features of the identified one or more contents includes at least one of a size of the identified one or more objects and an audio amplitude of the identified one or more objects. The Scene Understanding Unit 404D extracts the plurality of features of the identified one or more contents by use of a Convolutional Neural Network (CNN). An example of such extraction will be described with reference to FIGS. 8A and 8B.
FIGS. 8A and 8B illustrate an example of a feature extraction process, in accordance with an embodiment of the disclosure. FIG. 8A discloses the multimedia scene including three objects (i.e., Jeep 1, Jeep 2, Jeep 3). The Scene Understanding Unit 404D analyzes each of the video frames included in the acquired video information and also analyzes each of the audio frames included in the acquired audio information. Thereafter, the Scene Understanding Unit 404D determines feature vectors for actions in the video frames using the CNN network. As an example, as shown in FIG. 8B, each of a plurality of video contents (video 1, video 2, ..., video m) is sampled into a plurality of frames. Each of the sampled frames is analyzed to identify a presence of objects using the CNN network, and thereafter feature vectors for actions are recognized and averaged per action category. (K) in FIG. 8B corresponds to the number of the action categories.
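A schematic Python sketch of this per-frame feature extraction and per-action-class averaging is given below; a fixed random projection stands in for the CNN, so the sketch only shows the data flow, and the frame sizes, feature dimension, and action labels are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
FEATURE_DIM, FRAME_PIXELS = 16, 8 * 8
PROJECTION = rng.standard_normal((FEATURE_DIM, FRAME_PIXELS))   # stands in for CNN weights

def placeholder_cnn_features(frame):
    """Stand-in for the CNN feature extractor: a fixed random projection of the flattened frame."""
    return PROJECTION @ frame.reshape(-1).astype(np.float64)

def average_features_per_action(frames, labels):
    """Average per-frame feature vectors by action category (K categories), as in FIG. 8B."""
    sums, counts = {}, {}
    for frame, label in zip(frames, labels):
        features = placeholder_cnn_features(frame)
        sums[label] = sums.get(label, np.zeros(FEATURE_DIM)) + features
        counts[label] = counts.get(label, 0) + 1
    return {label: sums[label] / counts[label] for label in sums}

# Tiny synthetic example: four 8x8 frames labelled with two assumed action classes.
frames = [rng.random((8, 8)) for _ in range(4)]
labels = ["explosion", "explosion", "crash", "crash"]
print({label: vec.shape for label, vec in average_features_per_action(frames, labels).items()})
```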
The Frequency Mapping Unit 404E estimates a scene intensity with respect to each of the identified one or more contents based on at least one of the audio amplitude of the identified one or more objects, the size of the identified one or more objects, and a mapping of the identified one or more contents over the plurality of action classes of the one or more contents. As an example, the Frequency Mapping Unit 404E estimates the intensity of vibrations associated with the objects in multimedia scenes in the video information based on at least one of an intensity of audio associated with the identified one or more objects and a mapping of the frequency of the objects in the multimedia scenes with their action classes.
The scene intensity corresponds to the level of haptic feedback to be generated at the one or more haptic event locations. Accordingly, the Frequency Mapping Unit 404E classifies the one or more haptic events with respect to the level of haptic feedback based on the estimated scene intensity. The one or more haptic events is associated with the identified set of objects 506, and the plurality of action classes is associated with a class of action corresponding to each of the identified set of objects 506. The operations performed by the Frequency Mapping Unit 404E will now be described with the help of an example with reference to FIG. 8A.
As shown in FIG. 8A, the multimedia scene includes three objects (i.e., Jeep 1, Jeep 2, Jeep 3). Also, as described above, the Scene Understanding Unit 404D determines feature vectors for actions in the video frames using the CNN network. Therefore, as an example, the jeeps with a blast in FIG. 8A will have a higher intensity than the jeeps moving normally in the multimedia scene. Accordingly, the Frequency Mapping Unit 404E maps the frequencies of the objects in the multimedia scenes with their action classes based on an amplitude of the audio related to the objects in the multimedia scenes and further estimates the intensity of the vibrations of the objects in the multimedia scenes. Further, if no action classes, scene information, or audio information is present in the displayed multimedia scene or any other display content to be displayed, the Frequency Mapping Unit 404E sets default vibration intensity values. The default vibration intensity values are intensity values that are included in the settings of the electronic device 400. For example, any button or checkbox in a display content that needs to be highlighted should have the device default vibration intensity values.
Also, the Frequency Mapping Unit 404E categorizes the estimated scene intensity with respect to each of the identified one or more contents as one of a high, a medium, a low, or a default scene intensity value. As an example, Table 4 illustrates a categorization example of the estimated scene intensity.
| Object | Coordinates | Scene Action | Scene Intensity |
|---|---|---|---|
| Jeep 1 | 10, 10 - 20, 20 | Explosion | High |
| Dolphin | 25, 15 - 35, 25 | Diving | High |
| Jeep 3 | 40, 10 - 55, 25 | Crash | Low |
| Button View | 15, 35 - 17, 37 | NA | N/A |
As shown in Table 4, Jeep 1 and Jeep 3 of FIG. 8A have action classes of explosion and crash, respectively. Therefore, the estimated scene intensity for Jeep 1 is categorized as high and the estimated scene intensity for Jeep 3 is categorized as low. Similarly, in a case where multimedia including a dolphin with an action class of diving is present, the estimated scene intensity can be categorized as high. Further, in a case where there is an absence of an action class or audio information for content displayed on the display screen, the estimated scene intensity will be set to the default vibration intensity values in the settings of the electronic device 400. The aforementioned Table 4 is merely an example and is not limited to the above-described examples. It can include different data based on the multimedia mode of the electronic device 400 and can be arranged in any other format based on the system requirements.
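For illustration only, a minimal sketch of the categorization reflected in Table 4 could look as follows; the action-class sets and the 70 dB / 30 dB amplitude cut-offs are assumptions, since the disclosure does not fix concrete thresholds.

```python
# Minimal sketch of the scene-intensity categorization of Table 4 (assumed thresholds).
HIGH_INTENSITY_ACTIONS = {"explosion", "diving"}
LOW_INTENSITY_ACTIONS = {"crash"}

def categorize_scene_intensity(action_class, audio_amplitude_db, device_default="default"):
    if action_class is None and audio_amplitude_db is None:
        return device_default                           # e.g. a button with no scene or audio info
    if action_class in HIGH_INTENSITY_ACTIONS or (audio_amplitude_db or 0) >= 70:
        return "high"
    if action_class in LOW_INTENSITY_ACTIONS or (audio_amplitude_db or 0) < 30:
        return "low"
    return "medium"

print(categorize_scene_intensity("explosion", 78))      # high    (Jeep 1)
print(categorize_scene_intensity("crash", 28))          # low     (Jeep 3)
print(categorize_scene_intensity(None, None))           # default (Button View)
```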
FIG. 9A illustrates a process of operations performed by the Frequency Mapping Unit, in accordance with an embodiment of the disclosure. The process starts with operation 900, in which the Frequency Mapping Unit 404E checks for the presence of at least one object in the multimedia scene, the location of the object based on the coordinates corresponding to the at least one object, and any audio information associated with the at least one object, using data from the Mode Selection Unit 402A and the R-CNN based Object Detection Unit 402B.
In operation 902, the Frequency Mapping Unit 404E checks for a surface on which the electronic device 400 is placed and locations of the one or more audio sources available in the electronic device 400 using the sensor data and the coordinates of the audio sources stored in a database 908.
In operation 904, the Frequency Mapping Unit 404E determines whether vibration is needed for the at least one object recognized in operation 900. If the result of the determination in operation 904 is No, the operation of the Frequency Mapping Unit 404E is stopped. If the result of the determination in operation 904 is Yes (vibration needed), the Frequency Mapping Unit 404E maps the frequency of the at least one object recognized in operation 900 with a variable frequency associated with an action class of the at least one object.
Finally, the Frequency Mapping Unit 404E stores each of the mapped frequency values, the estimated scene intensity values, and their coordinates in the database. An example of the estimated scene intensity values is shown in Table 5 with reference to FIG. 8A.
| ROI | Bounding Box Coordinates | Intensity |
|---|---|---|
| Jeep 1 (Vehicle) | 10, 10 - 20, 20 | High |
| Jeep 2 (Vehicle) | 25, 15 - 35, 25 | High |
| Jeep 3 (Vehicle) | 40, 10 - 55, 25 | Low |
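A hypothetical sketch of the per-object record that the Frequency Mapping Unit could persist in the database 908 is given below; the field names and the example frequency values are assumptions for illustration only.

```python
# Hypothetical record stored per object by the Frequency Mapping Unit (assumed fields).
from dataclasses import dataclass

@dataclass
class VibrationRecord:
    roi: str                     # e.g. "Jeep 1 (Vehicle)"
    bounding_box: tuple          # ((x1, y1), (x2, y2)) grid coordinates as in Table 5
    mapped_frequency_khz: float  # frequency mapped from the object's action class
    scene_intensity: str         # "high" / "medium" / "low" / "default"

database_908 = []                # stands in for the database 908
database_908.append(VibrationRecord("Jeep 1 (Vehicle)", ((10, 10), (20, 20)), 420.0, "high"))
database_908.append(VibrationRecord("Jeep 3 (Vehicle)", ((40, 10), (55, 25)), 300.0, "low"))
```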
The Size based Scaling Unit 404F scales the intensity estimated by the Frequency Mapping Unit 404E. The Size based Scaling Unit 404F scales the estimated intensity based on the size of the identified one or more objects. As an example, the Size based Scaling Unit 404F classifies a size of each of the one or more objects identified by the Identification Engine 402. For example, if an object identified by the Identification Engine 402 has a large size, then the Size based Scaling Unit 404F classifies the object as having a greater intensity and scales an audio frequency of the object according to the size of the object identified by the Identification Engine 402.
FIG. 9B illustrates a process of operations performed by the Size based Scaling Unit 404F of FIG. 4, in accordance with an embodiment of the disclosure. The process starts with operation 910, in which the Size based Scaling Unit 404F checks for the presence of at least one object in the multimedia scene, the location of the at least one object based on the coordinates corresponding to the at least one object, and any audio information associated with the at least one object, using data from the Mode Selection Unit 402A and the R-CNN based Object Detection Unit 402B.
In operation 912, the Size based Scaling Unit 404F matches the audio frequency of the at least one object recognized in operation 910 with a size of the at least one object recognized in operation 910.
In operation 914, the Size based Scaling Unit 404F determines whether frequency scaling of the audio associated with the at least one object recognized in operation 910 is required, based on a result of the matching process in operation 912. If the result of the determination in operation 914 is No, the operation of the Size based Scaling Unit 404F is stopped. If the result of the determination in operation 914 is Yes, the Size based Scaling Unit 404F scales the audio frequency of the at least one object recognized in operation 910 based on the size of the at least one object on the display screen.
Finally, the Size based Scaling Unit 404F generates a list of the scaled audio frequency values of the one or more objects by performing the operations 910 through 916 and stores the scaled audio frequency values in the database 908. An example of the scaled audio frequency values with reference to FIG. 8A is shown below in Table 6. As shown in Table 6, the Frequency column includes the scaled audio frequency values of the one or more objects (the jeeps).
| ROI | Frequency | Intensity |
|---|---|---|
| Jeep 1 (Vehicle) | 420 | High |
| Jeep 2 (Vehicle) | 420 | High |
| Jeep 3 (Vehicle) | 300 | Low |
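The size-based scaling step above can be sketched as follows; the scaling law (linear in the object's relative on-screen area, clamped to a band) is an assumption made for illustration, since the disclosure only states that the audio frequency is scaled according to object size.

```python
# Hypothetical size-based scaling: larger on-screen objects get a proportionally higher frequency.
def bounding_box_area(box):
    (x1, y1), (x2, y2) = box
    return abs(x2 - x1) * abs(y2 - y1)

def scale_frequency_by_size(base_freq_khz, box, screen_w, screen_h,
                            min_khz=200.0, max_khz=500.0):
    rel_area = bounding_box_area(box) / float(screen_w * screen_h)
    scaled = base_freq_khz * (1.0 + rel_area)        # bigger object -> stronger, higher frequency
    return max(min_khz, min(max_khz, scaled))        # clamp to an assumed usable band

# Jeep 1 of FIG. 8A on a hypothetical 100 x 50 grid
print(round(scale_frequency_by_size(420.0, ((10, 10), (20, 20)), 100, 50)))  # ~428
```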
Referring back to FIG. 4, the Calibration Engine 406 of the electronic device 400 includes an Amplitude Calculation Unit 406A, a Frequency Calculation Unit 406B, a Coordinate & Frequency Alignment Unit 406C, a Source Frequency Selection Unit 406E, a Noise Estimation Unit 406F, an Amplitude Rectification unit 406G, and a Sound Source Amplitude Selection Unit 406H.
The Amplitude Calculation Unit 406A calculates an audio amplitude of each of the identified one or more objects by using a Fast Fourier Transform (FFT). The audio amplitude of the identified one or more objects is calculated based on the audio information corresponding to the identified one or more contents. As an example, the Amplitude Calculation Unit 406A calculates the amplitude of a sound wave related to the identified one or more objects. For example, the amplitude for a smartphone can vary between 0 and 90 dB (decibels). This is only an example; the amplitude can vary over other ranges as well. The audio amplitudes calculated by the Amplitude Calculation Unit 406A can be stored in the database 908. An example of the stored data is shown below in Table 7 with reference to FIG. 8A.
| ROI | Coordinates | Scene Intensity | Audio Amplitude |
|---|---|---|---|
| Jeep 1 | 10, 10 - 20, 20 | High | High (78 dB) |
| Jeep 2 | 25, 15 - 35, 25 | High | High (70 dB) |
| Jeep 3 | 40, 10 - 55, 25 | Low | Low (28 dB) |
As shown in example Table 7, the fourth column indicates the calculated audio amplitude values of the one or more objects (Jeep 1, Jeep 2, and Jeep 3 of FIG. 8A). The audio amplitude values of the one or more objects may vary based on a type of object and the action class associated with the object in the multimedia scene.
The Amplitude Calculation Unit 406A sets default amplitude values for the objects for which no action class, scene, or audio information is present in the displayed multimedia scene or in any other display content to be displayed. The default amplitude values correspond to the amplitude values included in the settings of the electronic device 400.
Further, the Amplitude Calculation Unit 406A determines a sound intensity range of each of the identified one or more objects based on the calculated audio amplitude and maps the sound intensity range of each of the identified one or more objects with a predefined intensity frequency range. Now functions and operations of the Amplitude Calculation Unit 406A will be explained with reference to FIG. 10A of the drawings.
FIG. 10A illustrates an example process of operations performed by the Amplitude Calculation Unit, in accordance with an embodiment of the disclosure. The process starts with operation 1000, in which the Amplitude Calculation Unit 406A checks the level of the estimated intensity of the vibration of the respective one or more objects. In operation 1002, the Amplitude Calculation Unit 406A identifies the coordinates of the one or more objects using the output of the Identification Engine 402.
In operation 1004, the Amplitude Calculation Unit 406A determines the sound intensity range of each of the identified one or more objects based on the level of the estimated intensity of the vibration of the respective one or more objects. In operation 1006, after the determination of the sound intensity range, the Amplitude Calculation Unit 406A maps the sound intensity range of each of the identified one or more objects with a predefined intensity frequency range stored in an intensity frequency database (DB) 1008. The intensity frequency DB includes information indicating the intensity frequency range corresponding to a plurality of levels of the estimated intensity of the vibration. As an example, the information included in the intensity frequency DB is shown in Table 8 below.
| Intensity Range | Frequency Range |
|---|---|
| < Low | 0~200 kHz |
| Low to Medium | 200~300 kHz |
| Medium to High | 300~500 kHz |
The left column of Table 8 indicates the sound intensity range of each of the identified one or more objects, and the right column of Table 8 indicates the predefined intensity frequency range corresponding to the respective sound intensity range.
The Frequency Calculation Unit 406B determines a vibration frequency of each of the one or more objects based on the audio amplitude of each of the one or more objects and the estimated scene intensity associated with the one or more objects. As an example, the Frequency Calculation Unit 406B determines the vibration frequency for each of the one or more objects in the multimedia scene based on the mapping of the sound intensity range of each of the one or more objects over the corresponding predefined intensity frequency range stored in the intensity frequency DB.
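As a sketch of the two steps just described, the snippet below computes an audio amplitude in decibels with NumPy's FFT and maps it to a vibration frequency using the band edges of Table 8. The 90 dB full-scale reference comes from the smartphone example above; the peak normalization and the linear interpolation within a band are assumptions for illustration.

```python
# Sketch of the Amplitude Calculation (FFT amplitude in dB) and Frequency Calculation
# (Table 8 intensity band -> vibration frequency) steps, under assumed normalization.
import numpy as np

def audio_amplitude_db(samples, full_scale_db=90.0):
    spectrum = np.abs(np.fft.rfft(samples))
    peak = spectrum.max() / (len(samples) / 2.0)        # ~1.0 for a full-scale sine
    return full_scale_db + 20.0 * np.log10(max(peak, 1e-9))

INTENSITY_TO_FREQ_KHZ = {                               # band edges taken from Table 8
    "low":    (0.0, 200.0),
    "medium": (200.0, 300.0),
    "high":   (300.0, 500.0),
}

def vibration_frequency_khz(intensity, amplitude_db, full_scale_db=90.0):
    lo, hi = INTENSITY_TO_FREQ_KHZ[intensity]
    frac = min(max(amplitude_db / full_scale_db, 0.0), 1.0)
    return lo + frac * (hi - lo)                        # louder objects sit higher in the band

t = np.arange(2048) / 44100.0                           # a short 440 Hz test tone
print(vibration_frequency_khz("high", audio_amplitude_db(0.8 * np.sin(2 * np.pi * 440 * t))))
```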
The Frequency Calculation Unit 406B generates a list of vibration frequencies of the one or more objects based on the determined vibration frequency for each of the one or more objects. An example list of the vibration frequencies of the one or more objects is shown below in Table 9 with reference to the objects shown in FIG. 10B.
| ROI | Coordinates | Scene Intensity | Audio Amplitude | Vibration Frequency |
|---|---|---|---|---|
| Jeep 1 | 10,10 - 20,20 | High | High (78 dB) | 420 kHz |
| Jeep 2 | 25,15 - 35,25 | High | High (70 dB) | 366 kHz |
| Jeep 3 | 40,10 - 55,25 | Low | Low (28 dB) | 200 kHz |
FIG. 10B illustrates an example of frequency identification by the Frequency Calculation Unit, in accordance with an embodiment of the disclosure. Three objects (Jeep 1, Jeep 2, and Jeep 3) are shown in FIG. 10B. Further, according to Table 9, the audio amplitudes of the respective objects (Jeep 1, Jeep 2, and Jeep 3) are 78 dB, 70 dB, and 28 dB, respectively. Accordingly, the Frequency Calculation Unit 406B determines the vibration frequencies of Jeep 1, Jeep 2, and Jeep 3 based on the corresponding audio amplitudes of Jeep 1, Jeep 2, and Jeep 3 in the multimedia scene and the mapping of the sound intensity range of each of Jeep 1, Jeep 2, and Jeep 3 over the corresponding predefined intensity frequency range stored in the intensity frequency DB 1008. Therefore, as a result, desired vibration frequencies fa, fb, and fc are determined for Jeep 1, Jeep 2, and Jeep 3, respectively. According to Table 9, the desired vibration frequency fa for Jeep 1 is determined as 420 kHz, the desired vibration frequency fb for Jeep 2 is determined as 366 kHz, and the desired vibration frequency fc for Jeep 3 is determined as 200 kHz. The determination of the vibration frequency of each of the one or more objects is not limited to the aforementioned example. The vibration frequency will be determined according to a change in the multimedia scenes and the display of the one or more contents on the display screen.
The Coordinate & Frequency Alignment Unit 406C determines a capability of generating the determined vibration frequency of each of the identified objects from the one or more audio sources available in the electronic device 400 based on at least one of the determined audio source position, identified object position, and the detected plurality of environmental parameters, thereby determining the capability of generation of sound source by the one or more audio sources available in the electronic device 400. The process of determining the capability of generating the determined vibration frequency will be described with reference to FIG. 11.
FIG. 11 illustrates an example process of operations performed by the Coordinate & Frequency Alignment Unit, in accordance with an embodiment of the disclosure. The process of determining the capability of generating the determined vibration frequency of each of the identified objects starts with operation 1100, in which the Coordinate & Frequency Alignment Unit 406C calculates constructive and destructive interference parameters using the coordinates of the one or more objects and their frequencies in the multimedia scene.
In operation 1102, the Coordinate & Frequency Alignment Unit 406C calculates the number of audio sources (i.e., speakers or microphones) and their coordinates needed for generating the determined vibration frequency, using the coordinates of the one or more objects identified by the Identification Engine 402.
In operation 1104, the Coordinate & Frequency Alignment Unit 406C determines whether the audio sources available in the electronic device 400 can generate the determined vibration frequency. If the result of the determination in operation 1104 is Yes, the process flow proceeds to operation 1106. In operation 1106, the Coordinate & Frequency Alignment Unit 406C associates the determined vibration frequency with the desired audio sources for the generation of the determined vibration frequency.
If the result of the determination in operation 1104 is No, the Coordinate & Frequency Alignment Unit 406C determines an optimized vibration frequency for each of the identified one or more objects based on the audio amplitude for each of the identified one or more objects and the estimated scene intensity. Now, an example of the process of determining the optimized vibration frequency will be explained with reference to FIGS. 12A and 12B.
FIGS. 12A and 12B illustrate an example process of operations performed by the Coordinate & Frequency Alignment Unit, in accordance with an embodiment of the disclosure. FIG. 12A discloses two vibration frequencies f1 and f2 and a resultant frequency fa. The vibration frequency f1 corresponds to a frequency desired to be generated by an audio source s1 and the vibration frequency f2 corresponds to a frequency desired to be generated by an audio source s2. Thus, in operation 1200 of FIG. 12B, the resultant frequency fa corresponds to a vibration frequency desired to be produced by the audio sources s1 and s2. The Coordinate & Frequency Alignment Unit 406C may determine that there are two audio sources available to generate the resultant frequency fa.
In operation 1202, the Coordinate & Frequency Alignment Unit 406C calculates the vibration frequencies (f1 and f2) of the two available audio sources (s1 and s2). In operation 1204, the Coordinate & Frequency Alignment Unit 406C calibrates the vibration frequencies of the one or more objects such that the audio sources available in the electronic device 400 can generate them and the average of the calibrated frequencies lies around the desired frequency. The vibration frequencies of the one or more objects are calibrated using the coordinates of the audio sources available in the electronic device 400 and a regressive learning mechanism generated by a learning and feedback engine to be described later. The calibration of the vibration frequencies of the one or more objects will now be explained with an example.
For example, it is assumed that the calibrated frequency is fa' to generate the desired resultant frequency fa. If the coordinates of the audio source S1 are (0, 10) and the coordinates of the audio source S2 are (100, 30), then the location (L) of Jeep 1 (as shown in FIGS. 8A and 12A) is given by the bounding box (10, 10) - (20, 20) = L. Now, the Coordinate & Frequency Alignment Unit 406C determines a high-intensity vibration range to be generated at the location L based on the coordinates of the audio sources available in the electronic device 400 and the output data generated by the learning and feedback engine (to be described later) using the regressive learning mechanism. As an example, the Coordinate & Frequency Alignment Unit 406C determines that the high-intensity vibration range to be generated at the location L is (327 kHz, 352 kHz, 374 kHz, 426 kHz, 458 kHz, 479 kHz, and 490 kHz). Therefore, the nearby frequencies to be generated at the location L of Jeep 1 should be calibrated to the high-intensity vibration range. Accordingly, the calibrated frequency fa' for Jeep 1 at the location L can be given as fa' = Calibration(fa, S1, S2, L). As an example, if fa = 420 kHz, then the nearest calibrated frequency near 420 kHz will be 374 kHz. Here, the nearest calibrated frequency corresponds to the determined optimized frequency for vibration at the location L of Jeep 1.
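A hedged sketch of the Calibration(fa, S1, S2, L) step is shown below. The can_generate predicate is a placeholder for the source-capability check of operation 1206; with every candidate feasible, a pure nearest-neighbour rule would return 426 kHz, so the 374 kHz result in the example above implies that the closer candidates are ruled out for the sources S1 and S2.

```python
# Hypothetical calibration step: snap the desired frequency fa to the nearest feasible
# candidate in the high-intensity vibration range determined for location L.
HIGH_INTENSITY_RANGE_KHZ = [327.0, 352.0, 374.0, 426.0, 458.0, 479.0, 490.0]  # from the example

def calibrate(fa_khz, candidates=HIGH_INTENSITY_RANGE_KHZ, can_generate=lambda f: True):
    feasible = [f for f in candidates if can_generate(f)]
    return min(feasible, key=lambda f: abs(f - fa_khz))

# Assumed 400 kHz feasibility limit, used only to reproduce the 374 kHz example value.
print(calibrate(420.0, can_generate=lambda f: f <= 400.0))   # 374.0
```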
In operation 1206, the Coordinate & Frequency Alignment Unit 406C determines whether the audio sources available in the system can generate the calibrated frequency. If the result of the determination in operation 1206 is Yes, the process of determining the optimized vibration frequency comes to an end. However, if the result of the determination in operation 1206 is No, the Coordinate & Frequency Alignment Unit 406C recalibrates the determined vibration frequency such that the audio sources available in the electronic device 400 can generate the desired frequency for vibration at the coordinates of each of the one or more objects identified by the Identification Engine 402.
As an outcome of the process of determination of the optimized vibration frequency, the Coordinate & Frequency Alignment Unit 406C determines a list of the optimized vibration frequencies for the identified one or more objects. An example of the list of the optimized vibration frequencies with reference to FIGS. 10B and 12A is shown in Table 10 below.
| ROI | Coordinates | Scene Intensity | Audio Amplitude | Optimized Vibration Frequency |
|---|---|---|---|---|
| Jeep 1 | 10,10 - 20,20 | High | High | 374 kHz |
| Jeep 3 | 40,10 - 55,25 | Low | Low | 239 kHz |
The Source Frequency Selection Unit 406E selects one optimized vibration frequency from the list of the optimized vibration frequencies using the regressive learning mechanism. After the selection of the optimized vibration frequency, the Source Frequency Selection Unit 406E determines a required vibration frequency that is required by at least one audio source of the one or more audio sources so as to generate the selected optimized vibration frequency at the coordinates of the one or more objects identified by the Identification Engine 402.
In order to determine the required vibration frequency, the Source Frequency Selection Unit 406E uses a wave mechanism so as to generate the selected optimized vibration frequency at the coordinates of the one or more objects. As an example, a resultant sound wave x = 2X cos(πfBt) cos(2πfavgt) has a frequency of favg, where favg is equal to the average frequency of the one or more audio sources available in the electronic device 400. For example, favg for the audio sources s1 and s2 will be (f1 + f2)/2. Hence, for example, to generate a vibration at the location L of Jeep 1, the vibration frequency f1 of the audio source s1 will be 400 kHz and the vibration frequency f2 of the audio source s2 will be 348 kHz.
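A small sketch of this wave mechanism follows, under the assumption that the required frequency is realized as the average of two source frequencies separated by a chosen beat spread; the 52 kHz spread is picked only so that the Jeep 1 example (400 kHz and 348 kHz averaging to 374 kHz) is reproduced.

```python
# Two sources superpose into x(t) = 2X cos(pi*fB*t) cos(2*pi*favg*t) with favg = (f1 + f2) / 2.
def source_frequencies(f_target_khz, beat_khz=52.0):
    f1 = f_target_khz + beat_khz / 2.0    # frequency assigned to source s1
    f2 = f_target_khz - beat_khz / 2.0    # frequency assigned to source s2
    return f1, f2

f1, f2 = source_frequencies(374.0)        # -> (400.0, 348.0), matching the Jeep 1 example
assert (f1 + f2) / 2.0 == 374.0
```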
The Noise Estimation Unit 406F calculates the amplitude value for the determined required vibration frequency based on a comparison of the amplitude values of the audio content with vibration and the amplitude values of the actual audio content. An example of the amplitude values calculated for the determined required vibration is shown below in Table 11.
| ROI | Coordinates | Intensity | Audio Amplitude | Frequency | Vibration Amplitude |
|---|---|---|---|---|---|
| Jeep 1 | 10,10 - 20,20 | High | High | 374 kHz | 30 dB |
| Jeep 3 | 40,10 - 55,25 | Low | Low | 239 kHz | 20 dB |
The Noise Estimation Unit 406F further determines extra noise in the audio content with the determined required vibration frequency and compares the determined extra noise with a predefined noise threshold value.
After the comparison of the determined noise with the predefined noise threshold value, the Amplitude Rectification unit 406G calibrates an amplitude of the required vibration frequency if the determined extra noise is greater than the predefined noise threshold value as a result of the comparison. The amplitude of the required vibration frequency is calibrated with respect to a required frequency to minimize the extra noise due to the determined required vibration frequency. As an outcome of the calibration of the amplitude of the required vibration frequency, the Amplitude Rectification unit 406G generates a list of calibrated amplitude values respective to the one or more objects identified by the Identification Engine 402. An example of the calibrated amplitude values is shown below in Table 12 with reference to FIGS. 10B and 12A.
| ROI | Coordinates | Intensity | Audio Amplitude | Frequency | Calibrated Amplitude |
|---|---|---|---|---|---|
| Jeep 1 | 10,10 - 20,20 | High | High | 374 kHz | 15 dB |
| Jeep 3 | 40,10 - 55,25 | Low | Low | 239 kHz | 10 dB |
The calibration of the amplitude of the required vibration frequency will be explained with an example with reference to FIGS. 13A and 13B of the drawings.
FIGS. 13A and 13B illustrate an example of the amplitude calibration process performed by the Calibration Engine, in accordance with an embodiment of the disclosure. FIG. 13A shows a requisite vibration point 1300 for which the Noise Estimation Unit 406F calculates the amplitude value for the determined required vibration frequency and also determines the extra noise in the audio content. FIG. 13B shows a plurality of amplitude values A1, A2, A3, ..., An corresponding to which some extra noise is present with reference to the determined required vibration frequency. The amplitude of a resultant sound wave from two sources varies as a function of time and can be given as A = 2X cos(πfBt), where fB = |f1 - f2|. FIG. 13B also shows that extra noise at A1, extra noise at A2, extra noise at A3, ..., and extra noise at An is present corresponding to each amplitude value with the determined required vibration frequency. To minimize the extra noise due to the determined required vibration frequency, the Amplitude Rectification unit 406G calibrates each of the amplitude values A1, A2, A3, ..., An such that the extra noise at A1, A2, A3, ..., An is removed. The amplitude values A1, A2, A3, ..., An are calibrated without any change in the determined required vibration frequency. The Amplitude Rectification unit 406G sets a final optimal amplitude value for the determined required vibration frequency having the minimized extra noise.
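The envelope relation used above can be sketched numerically as follows; treating the per-source amplitude X as the quantity being calibrated is an assumption for illustration, and the time axis is in milliseconds so that the kHz frequencies keep the cosine argument dimensionless.

```python
# Envelope of the resultant wave from two sources: A(t) = 2*X*cos(pi*fB*t), fB = |f1 - f2|.
# Rescaling X changes the envelope (and hence the residual noise) without touching fB or favg.
import numpy as np

def envelope(X, f1_khz, f2_khz, t_ms):
    fB = abs(f1_khz - f2_khz)                     # beat frequency in kHz
    return 2.0 * X * np.cos(np.pi * fB * t_ms)    # kHz * ms keeps the argument dimensionless

t = np.linspace(0.0, 0.05, 500)                   # 0 to 0.05 ms
print(np.max(np.abs(envelope(1.0, 400.0, 348.0, t))))   # peak envelope is 2X = 2.0
```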
The Sound Source Amplitude Selection Unit 406H selects the calibrated amplitude values corresponding to the available audio sources from the list of the calibrated amplitude values generated by the Amplitude Rectification unit 406G. As an example, with reference to Table 12 and FIG. 10B, the Sound Source Amplitude Selection Unit 406H selects a calibrated amplitude value (15 dB) corresponding to the determined required vibration frequency for Jeep 1 and selects a calibrated amplitude value (10 dB) corresponding to the determined required vibration frequency for Jeep 3.
Referring back to FIG. 4, the Generation Engine 408 of the electronic device 400 includes a Source Selection & initialization unit 408A, a Frequency and Amplitude Assignment Unit 408B, a Production Unit 408C, and a Feedback & Learning Engine 408D.
The Source Selection & initialization unit 408A selects at least one audio source associated with the at least one content from the identified one or more contents. The Source Selection & initialization unit 408A selects the at least one audio source from the audio sources available in the electronic device 400 for generation of the calibrated optimized vibration frequency.
The Frequency and Amplitude Assignment Unit 408B assigns the required frequency and the calibrated amplitude to the selected at least one audio source associated with the at least one content. Here, the required frequency can also be referred to as "the optimized vibration frequency" without deviating from the scope of the disclosure.
The Production Unit 408C generates first sound wavelets from the selected at least one audio source associated with the at least one content to generate the calibrated optimized vibration frequency at the content position of the identified one or more contents for the generation of the localized haptic feedback effect by using the one or more audio sources. The first sound wavelets from the selected at least one audio source are generated by the Production Unit 408C using a constructive interference technique. An example of the constructive interference technique will be explained with reference to FIGS. 14A and 14B.
FIGS. 14A and 14B illustrate an example process of generating sound wavelets by the Production Unit, in accordance with an embodiment of the disclosure. In the example shown in FIG. 14A, there are two audio sources s1 and s2 having frequencies f1 and f2 and wavelengths λ1 and λ2, respectively. A plurality of sound wavelets is produced from the audio source s1 and another plurality of sound wavelets is produced from the audio source s2 in order to form a point of intersection to generate the localized haptic feedback effect at the requisite vibration point 1400. A similar process of generating the sound wavelets can be performed for the generation of the localized haptic feedback effect at the coordinates of the one or more objects. For example, as shown in FIG. 14B, a plurality of sound wavelets having wavelengths (λ1, λ2, and λ3) is generated respectively using three audio sources s1, s2, and s3 for the generation of the localized haptic feedback effect at the coordinates (x1,y1), (x2,y2), and (x3,y3) of the balloons displayed on the display screen corresponding to the locations 1, 2, and 3.
Considering a display screen with dimensions d x w and two audio sources as shown in FIG. 14A, a sound wave produced from the audio source s1 should interfere constructively with the sound wavelets produced by the audio source s2. Therefore, for constructive interference at the requisite vibration point 1400, a single wavelet with a wavelength reaching the requisite vibration point 1400 is required from the audio source s1, together with multiple wavelets of small wavelengths from the audio source s2.
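For illustration, a simple check of the constructive-interference condition at a target point is sketched below, under the simplifying assumption that both wavelets share a single wavelength; the disclosure itself works with two wavelengths λ1 and λ2 and with wavelet counts chosen per source.

```python
# Constructive interference at a point requires the path-length difference from the two
# sources to be an integer number of wavelengths (single-wavelength simplification).
import math

def is_constructive(src1, src2, point, wavelength, tol=1e-6):
    d1 = math.dist(src1, point)     # distance from source s1 to the target point
    d2 = math.dist(src2, point)     # distance from source s2 to the target point
    ratio = abs(d1 - d2) / wavelength
    return abs(ratio - round(ratio)) < tol

print(is_constructive((0.0, 10.0), (100.0, 30.0), (15.0, 15.0), wavelength=5.0))
```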
The Production Unit 408C may also generate second sound wavelets from the selected at least one audio source associated with the at least one content to generate the optimized vibration frequency at the content position of the identified one or more contents so as to cancel multiple coherence points of vibration caused by the generation of the first sound wavelets for the generation of the localized haptic feedback effect. The second sound wavelets from the selected at least one audio source are generated by the Production Unit 408C using a destructive interference technique.
FIG. 15 illustrates another example of generating sound wavelets by the Production Unit, in accordance with an embodiment of the disclosure. As shown in FIG. 15, there are two sound wavelets 1500 and 1502. According to FIG. 15, only two coherence points (the required surface intersection points 1506) are required for the generation of the localized haptic feedback effect. However, due to the presence of multiple coherence points between the sound wavelets 1500 and 1502, other coherence points (extra vibration points 1504) are also present that can generate the localized haptic feedback effect where it is not necessary or required. To eliminate or cancel the effect of the extra vibration points 1504, the Production Unit 408C generates additional sound wavelets, different from the sound wavelets 1500 and 1502, to cancel the unnecessary extra vibration points 1504 such that the localized haptic feedback effect is generated only at the required surface intersection points 1506.
According to an embodiment of the disclosure, to generate the determined optimized vibration frequency for the generation of the localized haptic feedback effect on the determined one or more haptic event locations, firstly, the Approximation Engine 404 extracts audio content from the one or more contents identified by the Identification Engine 402. After the extraction of the audio content, the Generation Engine 408 generates a vibration pattern based on the required frequency and the calibrated amplitude. After the generation of the vibration pattern, the Generation Engine 408 calculates a point of coherence from the generated vibration pattern and further removes the calculated point of coherence from the generated vibration pattern. As a result of the removal of the calculated point of coherence from the generated vibration pattern, the Generation Engine 408 obtains a unique vibration frequency wave based on the removal of the calculated point of coherence and merges the obtained vibration frequency wave with the current sound wave. Finally, the Generation Engine 408 generates the determined optimized vibration frequency at the content position of the identified one or more contents by merging the obtained unique vibration frequency wave with the audio content from the identified one or more contents. Now, an example process of obtaining the unique vibration frequency for the generation of the determined optimized vibration frequency will be described with reference to FIGS. 16A-16F.
FIGS. 16A, 16B, 16C, 16D, 16E, and 16F illustrate an example process of obtaining a unique vibration frequency, in accordance with an embodiment of the disclosure. FIG. 16A discloses an example of a normal sound wave for the audio content and FIG. 16B discloses an example of a low-frequency sound wave for the generation of the vibration pattern. FIG. 16C discloses a digital representation of the normal sound wave for the audio content and FIG. 16D discloses a digital representation of the low-frequency sound wave. The Approximation Engine 404 extracts the normal sound wave for the audio content. Further using the low-frequency sound wave, the Generation Engine 408 generates the vibration pattern based on the required frequency, the calibrated amplitude, and scene analysis of the one or more contents displayed on the display screen. FIG. 16E discloses points of coherence calculated by the Generation Engine 408 using merged digital waveform of the normal sound wave and the low-frequency sound wave. After calculating the points of coherence, the Generation Engine 408 updates the merged digital waveform to remove the calculated point of coherence from the merged digital waveform as shown in FIG. 16F. Accordingly, as an outcome of the updated digital waveform, the Generation Engine 408 obtains the unique vibration frequency wave.
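A rough sketch of the merge-and-remove step of FIGS. 16C through 16F is given below; representing the two digital waveforms as NumPy arrays and treating points of coherence as samples where they coincide within a tolerance are assumptions made for illustration.

```python
# Assumed interpretation: mix a low-frequency vibration pattern into the normal audio wave
# after suppressing the samples where the two digital waveforms coincide (points of coherence).
import numpy as np

def merge_with_vibration(audio_wave, vibration_pattern, coherence_tol=1e-3):
    coherent = np.isclose(audio_wave, vibration_pattern, atol=coherence_tol)  # points of coherence
    unique_vibration = np.where(coherent, 0.0, vibration_pattern)             # remove them
    return audio_wave + unique_vibration                                      # merged output wave

t = np.linspace(0.0, 1.0, 1000)
merged = merge_with_vibration(np.sin(2 * np.pi * 5 * t), 0.2 * np.sin(2 * np.pi * 50 * t))
```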
According to an embodiment of the disclosure, if the audio content is absent from the one or more contents displayed on the display screen, then the localized haptic feedback effect on the determined one or more haptic event locations is generated by the Generation Engine 408 using the outputs of the Identification Engine 402, the Approximation Engine 404, and the Calibration Engine 406, by the process explained above adapted to the absence of the audio content.
The Feedback & Learning Engine 408D monitors, over a predetermined period of time, the generation of the localized haptic feedback effect on the determined one or more haptic event locations. The Feedback & Learning Engine 408D may also perform an intelligent learning and feedback process to identify any optimization of the amplitude values to reduce the extra noise, or any frequency correction to accurately locate the one or more haptic event locations. The Feedback & Learning Engine 408D may also send feedback to the Calibration Engine 406 based on the intelligent learning and feedback process to calibrate the frequency and amplitude values.
FIGS. 17A, 17B, and 17C illustrate examples of generating a localized haptic feedback effect on the determined one or more haptic event locations, in accordance with an embodiment of the disclosure. FIG. 17A shows a first example of the generation of the localized haptic feedback effect. As can be seen from FIG. 17A, three localized haptic feedback effects (Localized Vibration 1700, Localized Vibration 1702, and Localized Vibration 1704) are generated with varied intensity in accordance with the embodiments of the disclosure. Each of the Localized Vibrations 1700, 1702, and 1704 has an intensity in accordance with the action class of the corresponding object in the displayed multimedia scene. Further, FIG. 17B shows a second example of the generation of the localized haptic feedback effect. Here, the localized vibrations are produced using the audio content associated with the one or more contents, using the process explained above with reference to FIGS. 16A through 16F. FIG. 17C shows a third example of the generation of the localized haptic feedback effect. Here, the localized vibration 1706 is generated using the audio source 1708.
Comparing the examples shown in FIGS. 17A through 17C with the solution disclosed in FIG. 1A, it is evident that, with the electronic device 2000 or the electronic device 400, the localized vibrations (the localized haptic feedback effect) can be produced at multiple points of the one or more contents displayed on the display screen without any DC motor. Also, as an example, in comparison with the existing solutions of FIG. 1, the electronic device 400 of the disclosure results in the generation of the localized haptic feedback effect such that vibration occurs only at a specific event location. The localized haptic feedback effect generated by the electronic device 400 can also help a user who has difficulty perceiving on-screen events to grasp the event locations regarding an event. The localized haptic feedback effect generated by the electronic device 400 may also enhance the user experience by providing a real-time localized haptic feedback effect to the user.
According to the embodiment of the disclosure, the method 200 and the electronic device 2000 or 400 can enhance the user experience. For example, the localized vibration effect can provide a Dolby Sound comparable vibration effect for users and hence can be a delightful user experience. Another exemplary advantage of the electronic device 2000 or 400 of the disclosure is cost reduction due to the removal of the DC motor. Since the DC motor is removed, the power consumption of the system can also be optimized. The localized haptic feedback effect generated by the method 200 and the electronic device 2000 or 400 provides intelligent accessibility of the one or more contents displayed on the display screen.
FIG. 18 illustrates a first use case of providing a real-time localized haptic feedback effect to a user, in accordance with an embodiment of the disclosure. The device in FIG. 18 has two sound sources S1 and S2 as the available audio sources, with frequencies f1 and f2 and wavelengths λ1 and λ2, respectively. The sound source S1 emits a wave of wavelength λ1 and the sound source S2 emits a wave of wavelength λ2 with respective phases such that they interfere at the desired location in the respective regions of interest. As the game displayed in FIG. 18 progresses, the resultant sound waves x1 and x2 are calculated for the audio sources S1 and S2 each time there is a change in the content displayed on the display screen. The calculated resultant sound waves x1 and x2 are used for generating vibrations at specific coordinates of the display screen with reference to the objects (man 1800, trolley 1802, and swipe arrows 1804, depicted as black arrows). As an example, if it is determined that the calibrated frequency value required for the vibration is 470 kHz, then the sound source S1 will produce a wave with f1 = 440 kHz and the sound source S2 will produce a wave with f2 = 500 kHz. Accordingly, such real-time vibrations may notify the user regarding an input operation by generating the vibrations at the specific coordinates of the display screen while playing the game and may enhance the experience of the user while playing the game.
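Under the same averaging assumption as in the earlier wave-mechanism sketch, the FIG. 18 numbers are consistent: the two source frequencies average to the required calibrated value.

```python
# 440 kHz and 500 kHz from sources S1 and S2 average to the required 470 kHz.
f1, f2 = 440.0, 500.0
assert (f1 + f2) / 2.0 == 470.0
```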
FIG. 19 illustrates a second use case of providing a real-time localized haptic feedback effect to the user, in accordance with an embodiment of the disclosure. FIG. 19 discloses an example use case for generating a real-time localized haptic feedback effect while watching video content. As an example, when a car blast scene is displayed on the display screen, the Generation Engine 408 generates a real-time localized haptic feedback effect (Localized Vibration 1900) at the location of the car in the car blast scene with a high vibration intensity such that the user can experience the blast along with the reproduction of the video content. Accordingly, such a real-time localized haptic feedback effect may enhance the user experience while watching the video content.
FIG. 20 illustrates a third use case of providing a real-time localized haptic feedback effect to the user, in accordance with an embodiment of the disclosure. FIG. 20 discloses an example use case for generating a real-time localized haptic feedback effect during an online payment. As can be seen from FIG. 20, a localized vibration 2000 is generated at the location of the objects (copy icon 2002 and delete icon 2004) using the sound wavelets of the sound sources S1 and S2. Accordingly, at the time of an online payment, the user can be notified with a real-time vibration effect indicating which icon the user should refer to in order to perform the further operations for the online payment, and hence the user experience will be enhanced by such a vibration effect.
Similarly, the electronic device 400 of the disclosure can also generate multidimensional vibrations during online book reading or online document reading to specify or highlight a specific location of the content. Accordingly, the user experience during online reading can be enhanced. The disclosure can also be applied to display devices used in vehicles.
The disclosure is not limited to the use case examples described above with regard to display-based electronic devices. It can also be applied to other technological fields, for example, health care systems, audio devices, and any other electronic devices including audio sources.
FIG. 21 illustrates a block diagram of an electronic device that executes the processes of FIG. 2 and FIG. 4, in accordance with an embodiment of the disclosure. In the electronic device 2100, a Central Processing Unit (CPU) 2102, a ROM (Read Only Memory) 2104, and a RAM (Random Access Memory) 2106 are connected by a Bus 2122. The CPU 2102 may be implemented as a processor.
The input unit 2110 includes a keyboard, a mouse, a microphone, and the like. The output unit 2112 includes a display, a speaker, and the like. The storage unit 2114 includes a nonvolatile memory and the like. The communication unit 2116 includes a network interface or the like. The drive 2118 drives a removable medium 2120 such as a magnetic disk, an optical disk, a magneto-optical disk, a semiconductor memory, or the like.
The CPU 2102, the ROM 2104, and the RAM 2106 are communicatively coupled with the input unit 2110, the output unit 2112, the storage unit 2114, the communication unit 2116, and the drive 2118 via the input/output interface 2108.
The electronic device 2100 may also include one or more processors to execute the series of processes described hereinabove with reference to the electronic device 2000 and the electronic device 400.
In an embodiment, the disclosure refers to a method for generating a localized haptic feedback effect in an electronic device. The method comprises identifying, by an identification engine, one or more contents on a display screen of the electronic device and dividing, by the identification engine, the display screen into a plurality of grids. The method further comprises determining, by the identification engine, one or more haptic event locations, associated with the identified one or more contents, on the plurality of grids, and classifying, by an approximation engine, one or more haptic events associated with the identified content with respect to a level of haptic feedback to be generated at the one or more haptic event locations. Furthermore, the method comprises determining, by a calibration engine, an optimized vibration frequency according to the classified level of haptic feedback to be generated at the one or more haptic event locations for the identified one or more contents based on at least one of a plurality of environmental parameters and a capability of generation of sound by one or more audio sources available in the electronic device. Additionally, after the determination of the optimized vibration frequency, the method comprises generating, by a generation engine, a calibrated optimized vibration frequency based on the determined optimized vibration frequency by calibration of the optimized vibration frequency and amplitude for the generation of the localized haptic feedback effect on the determined one or more haptic event locations by using the one or more audio sources available in the electronic device.
In an embodiment, the disclosure refers to an electronic device for generating a localized haptic feedback effect in an electronic device. The electronic device includes an identification engine that identifies one or more contents on a display screen of the electronic device, divides the display screen into a plurality of grids, and determines one or more haptic event locations associated with the identified one or more contents on the plurality of grids. The electronic device further includes an approximation engine that classifies one or more haptic events associated with the identified content with respect to a level of haptic feedback to be generated at the one or more haptic event locations. The electronic device further includes a calibration engine that determines an optimized vibration frequency according to the classified level of haptic feedback to be generated at the one or more haptic event locations for the identified one or more contents based on at least one of a plurality of environmental parameters and a capability of generation of sound by one or more audio sources available in the electronic device. The electronic device further includes a generation engine that generates a calibrated optimized vibration frequency based on the determined optimized vibration frequency by calibration of the optimized vibration frequency and amplitude for the generation of the localized haptic feedback effect on the determined at least one haptic event location by using the one or more audio sources available in the electronic device.
Some example embodiments disclosed herein may be implemented using processing circuitry. For example, some example embodiments disclosed herein may be implemented using at least one software program running on at least one hardware device and performing network management functions to control the elements.
While specific language has been used to describe the disclosure, any limitations arising on account of the same are not intended. As would be apparent to a person in the art, various working modifications may be made to the method in order to implement the inventive concept as taught herein.
The drawings and the foregoing description give examples of embodiments. Those skilled in the art will appreciate that one or more of the described elements may well be combined into a single functional element. Alternatively, certain elements may be split into multiple functional elements. Elements from one embodiment may be added to another embodiment. For example, the orders of processes described herein may be changed and are not limited to the manner described herein.
Moreover, the actions of any flow diagram need not be implemented in the order shown; nor do all of the acts necessarily need to be performed. Also, those acts that are not dependent on other acts may be performed in parallel with the other acts. The scope of embodiments is by no means limited by these specific examples. Numerous variations, whether explicitly given in the specification or not, such as differences in structure, dimension, and use of material, are possible. The scope of embodiments is at least as broad as given by the following claims.
Benefits, other advantages, and solutions to problems have been described above with regard to specific embodiments. However, the benefits, advantages, solutions to problems, and any component(s) that may cause any benefit, advantage, or solution to occur or become more pronounced are not to be construed as a critical, required, or essential feature or component of any or all the claims.
Claims (15)
- The method for generating a localized haptic feedback effect in an electronic device, the method comprising: identifying, by a processor, one or more contents on a display screen of the electronic device; dividing the display screen into a plurality of grids; determining one or more haptic event locations associated with the identified one or more contents on the plurality of grids; classifying the one or more haptic events with respect to a level of haptic feedback to be generated at the one or more haptic event locations; determining an optimized vibration frequency based on the classified level of haptic feedback to be generated at the one or more haptic event locations for the identified one or more contents based on at least one of a plurality of environmental parameters and a capability of generation of sound source by one or more audio sources of the electronic device; and generating, based on the determined optimized vibration frequency, a calibrated optimized vibration frequency by calibration of the optimized vibration frequency and an amplitude for the generation of the localized haptic feedback effect on the determined one or more haptic event locations by using the one or more audio sources of the electronic device.
- The method of claim 1, wherein the identifying of the one or more contents on the display screen of the electronic device comprises identifying whether the one or more contents on the display screen is at least one of a multimedia content, at least one image, and at least one user interface (UI) element; identifying one or more objects based on a result of the identification; and obtaining corresponding coordinates defined as the one or more haptic event locations of the identified one or more objects.
- The method of claim 1, further comprising: detecting a plurality of environmental parameters of the electronic device, wherein the detection of the plurality of environmental parameters comprises receiving environmental data from one or more sensors of the electronic device; and detecting, based on the received environmental data, the plurality of environmental parameters that corresponds to at least one of a state of the electronic device, a surface on which the electronic device is placed, and an orientation of the electronic device derivable from the environmental data.
- The method of claim 1, further comprising: identifying the one or more audio sources associated with the identified one or more contents; and determining an audio source position of the identified one or more audio sources based on the one or more audio sources of the electronic device, wherein the determination of the audio source position of the identified one or more audio sources comprises determining whether the audio source position of the identified one or more audio sources is available based on built-in audio sources information; and calculating corresponding coordinates defined as the audio source position of the identified one or more audio sources by using a triangulation technique based on the determination of an unavailability of the built-in audio sources information.
- The method of claim 2, further comprising: extracting a plurality of features of the identified one or more contents to classify the one or more haptic events associated with the identified one or more contents into a plurality of action classes; and estimating a scene intensity with respect to each of the identified one or more contents based on a mapping of the identified one or more contents and the plurality of action classes of the one or more contents, wherein the scene intensity corresponds to the level of haptic feedback to be generated at the one or more haptic event locations, and the one or more haptic events with respect to the level of haptic feedback are classified based on the estimated scene intensity, wherein the extracted plurality of features comprises at least one of a size of the identified one or more objects and an audio amplitude of the identified one or more objects, the scene intensity is estimated based on the audio amplitude of the identified one or more objects and the size of the identified one or more objects, and the estimated scene intensity with respect to each of the identified one or more contents is categorized as one of a high, a medium, a low, or a default scene intensity value.
- The method of claim 5, further comprising: determining a vibration frequency for each of the identified one or more contents based on an audio amplitude for each of the identified one or more contents and the estimated scene intensity, wherein the determination of the vibration frequency for each of the identified one or more contents comprising: calculating an audio amplitude of each of the identified one or more objects, by using a Fast Fourier Transform (FFT), from the identified one or more contents; determining a sound intensity range of each of the identified one or more objects based on the calculated audio amplitude; and mapping the sound intensity range of each of the identified one or more objects with a predefined intensity frequency range, wherein the vibration frequency for each of the identified one or more contents is determined based on the mapping.
- The method of claim 6, wherein the determining, by the calibration engine, the optimized vibration frequency comprising: determining a capability of generating the determined vibration frequency of each of the identified objects from the one or more audio sources of the electronic device based on at least one of positions of the one or more audio sources, identified content position, and a plurality of environmental parameters of the electronic device, thereby determining the capability of generation of sound source by the one or more audio sources of the electronic device; determining a list of the optimized vibration frequency based on a result of determination; and selecting one optimized vibration frequency from the list of the determined optimized vibration frequency by using a regressive learning mechanism.
- The method of claim 7, further comprising determining a required vibration frequency that is required by at least one audio source from the identified one or more audio sources to generate the selected optimized vibration frequency at the content position of the identified one or more contents, wherein the determination of the required vibration frequency that is required by at least one audio source from the identified one or more audio sources is based on a wave mechanism to generate the selected optimized vibration frequency at the content position of the identified one or more contents.
- The method of claim 8, further comprising determining a calibrated amplitude of the required vibration frequency with respect to a required frequency to minimize a noise due to the determined required vibration frequency, wherein the determination of the calibrated amplitude of the required vibration frequency with respect to the required frequency comprises determining the noise due to the determined required vibration frequency; comparing the determined noise with a predefined noise threshold value; and calibrating an amplitude of the required vibration frequency if the determined noise is more than the predefined noise threshold value as a result of the comparison, wherein the calibrated amplitude of the required vibration frequency corresponds to the determined calibrated amplitude.
- The method of claim 9, further comprising assigning the required frequency and the calibrated amplitude to the at least one audio source associated with the at least one content from the identified one or more contents, wherein the required frequency corresponds to the optimized vibration frequency, wherein the generating the calibrated optimized vibration frequency comprises generating a first sound wavelets from the at least one audio source associated with the at least one content to generate the calibrated optimized vibration frequency at the content position of the identified one or more contents for the generation of the localized haptic feedback effect by using the one or more audio sources available in the electronic device.
- The method of claim 10, further comprising: generating a second sound wavelets from the at least one audio source associated with the at least one content to generate the determined optimized vibration frequency at the content position of the identified one or more contents to cancel a multiple coherence point of vibration due to the generation of the first sound wavelets for the generation of the localized haptic feedback effect by using the one or more audio sources available in the electronic device.
- The method of claim 1, further comprising: extracting an audio content from the identified one or more contents; generating a vibration pattern based on the required frequency and the calibrated amplitude; calculating a point of coherence from the generated vibration pattern; removing the calculated point of coherence from the generated vibration pattern; obtaining a unique vibration frequency wave based on the removal of the calculated point of coherence and merge it with the present sound wave; and generating the determined optimized vibration frequency at the content position of the identified one or more contents by merging the obtained unique vibration frequency wave with the audio content from the identified one or more contents.
- An apparatus for generating a localized haptic feedback effect, the apparatus comprising: one or more audio sources; a display that displays one or more contents; and a processor configured to identify the one or more contents on a display screen of the electronic device; divide the display screen into a plurality of grids; and determine one or more haptic event locations, associated with the identified one or more contents, on the plurality of grids; classify one or more haptic events associated with the identified one or more contents with respect to a level of haptic feedback to be generated at the one or more haptic event locations; determine an optimized vibration frequency according to the classified level of haptic feedback to be generated at the one or more haptic event locations for the identified one or more contents based on at least one of a plurality of environmental parameters and a capability of generation of sound source by the one or more audio sources; and generate, based on the determined optimized vibration frequency, the calibrated optimized vibration frequency by calibration of the optimized vibration frequency and an amplitude for the generation of the localized haptic feedback effect on the determined one or more haptic event locations, by using the one or more audio sources.
- The apparatus of claim 13, wherein the processor is further configured to: identify whether the one or more contents on the display is at least one of a multimedia content, at least one image, and at least one user interface (UI) element; identify one or more objects based on a result of the identification; and obtain corresponding coordinates defined as the one or more haptic event locations of the identified one or more objects. (An illustrative sketch of this identification step follows the claims.)
- The apparatus of claim 13, further comprising: one or more sensors which sense environmental data, wherein the processor is further configured to: receive the environmental data from the one or more sensors; and detect, based on the received environmental data, the plurality of environmental parameters that correspond to at least one of a state of the apparatus, a surface on which the apparatus is placed, and an orientation of the apparatus derivable from the environmental data. (A hypothetical sketch of deriving these parameters follows the claims.)
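For claim 9 above, the amplitude calibration against a noise threshold can be pictured with the following minimal Python sketch. It is not taken from the application: the function `calibrate_amplitude`, the caller-supplied `estimate_noise` model, and the 10% attenuation step are all illustrative assumptions.

```python
def calibrate_amplitude(required_frequency_hz: float,
                        amplitude: float,
                        noise_threshold: float,
                        estimate_noise,
                        attenuation_step: float = 0.9) -> float:
    """Attenuate the drive amplitude until the estimated noise produced at
    the required vibration frequency drops below the predefined threshold.

    `estimate_noise(frequency_hz, amplitude)` is a caller-supplied model of
    the audible noise generated for a given frequency/amplitude pair.
    """
    calibrated = amplitude
    # Compare the estimated noise with the threshold; attenuate while exceeded.
    while estimate_noise(required_frequency_hz, calibrated) > noise_threshold:
        calibrated *= attenuation_step
        if calibrated < 1e-6:  # guard against an unreachable threshold
            break
    return calibrated

# Example with a toy quadratic noise model (purely illustrative):
calibrated = calibrate_amplitude(180.0, 1.0, 0.2,
                                 estimate_noise=lambda f, a: 0.5 * a * a)
```

Iterative attenuation is only one possible strategy; a closed-form gain adjustment would serve equally well under a known noise model.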
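Claims 10 and 11 recite first and second sound wavelets, the second cancelling an unwanted coherence point of vibration. The sketch below is a loose, hypothetical illustration of that idea under a simple delay-and-invert model; `gaussian_wavelet`, the 48 kHz sample rate, and the Gaussian burst shape are assumptions rather than details from the application.

```python
import numpy as np

SAMPLE_RATE = 48_000  # Hz (assumed)

def gaussian_wavelet(freq_hz: float, amplitude: float,
                     duration_s: float = 0.02) -> np.ndarray:
    """A short Gaussian-windowed sine burst used here as a 'sound wavelet'."""
    t = np.linspace(-duration_s / 2, duration_s / 2,
                    int(SAMPLE_RATE * duration_s), endpoint=False)
    envelope = np.exp(-(t / (duration_s / 6)) ** 2)
    return amplitude * envelope * np.sin(2 * np.pi * freq_hz * t)

def primary_and_cancelling_wavelets(freq_hz: float, amplitude: float,
                                    extra_path_delay_s: float):
    """Return (first_wavelet, second_wavelet).

    The second wavelet is inverted and delayed by the extra propagation time
    to the unwanted coherence point, so the two ideally cancel there while
    reinforcing at the intended haptic location.
    """
    first = gaussian_wavelet(freq_hz, amplitude)
    delay_samples = int(extra_path_delay_s * SAMPLE_RATE)
    second = np.concatenate([np.zeros(delay_samples), -first])
    return first, second
```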
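Claim 12 describes extracting audio, generating a vibration pattern, removing points of coherence, and merging the result back into the audio. The Python sketch below walks through those steps; treating a "point of coherence" as a sample near the constructive-interference peak is an interpretive assumption made purely for illustration.

```python
import numpy as np

def generate_vibration_pattern(freq_hz, amplitude, duration_s, sample_rate=48_000):
    """Build a sinusoidal vibration pattern at the required frequency and amplitude."""
    t = np.arange(int(duration_s * sample_rate)) / sample_rate
    return amplitude * np.sin(2 * np.pi * freq_hz * t)

def remove_coherence_points(pattern, threshold_ratio=0.95):
    """Attenuate samples whose magnitude approaches the constructive peak
    (treated here, by assumption, as the 'points of coherence')."""
    peak = np.max(np.abs(pattern))
    mask = np.abs(pattern) >= threshold_ratio * peak
    cleaned = pattern.copy()
    cleaned[mask] *= 0.5
    return cleaned

def merge_with_audio(vibration_wave, audio, vibration_gain=0.3):
    """Mix the unique vibration frequency wave into the extracted audio so a
    single audio source can render both the sound and the localized vibration."""
    n = min(len(vibration_wave), len(audio))
    return audio[:n] + vibration_gain * vibration_wave[:n]
```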
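The processing chain recited for the apparatus in claim 13 (identify contents, divide the screen into grids, locate and classify haptic events, pick an optimized frequency) can be outlined as follows. The grid resolution, the level-to-frequency table, and the soft-surface adjustment are hypothetical values, not figures from the application.

```python
from dataclasses import dataclass

@dataclass
class HapticEvent:
    x: int            # pixel coordinates of the haptic event on screen
    y: int
    level: str        # classified feedback level: "low" | "medium" | "high"

GRID_ROWS, GRID_COLS = 8, 4                                        # assumed grid
LEVEL_TO_FREQ_HZ = {"low": 120.0, "medium": 175.0, "high": 235.0}  # assumed table

def to_grid_cell(event: HapticEvent, screen_w: int, screen_h: int):
    """Map a haptic event location onto the display-screen grid."""
    col = min(event.x * GRID_COLS // screen_w, GRID_COLS - 1)
    row = min(event.y * GRID_ROWS // screen_h, GRID_ROWS - 1)
    return row, col

def optimized_frequency(event: HapticEvent, on_soft_surface: bool) -> float:
    """Pick a drive frequency from the classified level, lowered slightly when
    environmental parameters indicate a damping (soft) surface."""
    freq = LEVEL_TO_FREQ_HZ[event.level]
    return freq * 0.9 if on_soft_surface else freq
```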
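Claim 14's identification step (content type, objects, and their coordinates as haptic event locations) is illustrated by the fragment below, which assumes contents are described by plain dictionaries with a `type` and a bounding box; a real implementation would query the platform's view hierarchy or an image-analysis model instead.

```python
def haptic_event_locations(contents):
    """Return centre coordinates of objects found in multimedia, image, or
    UI-element contents; these serve as the haptic event locations."""
    locations = []
    for item in contents:
        if item.get("type") in ("multimedia", "image", "ui_element"):
            left, top, width, height = item["bounds"]
            locations.append((left + width // 2, top + height // 2))
    return locations

# Example usage with hypothetical content descriptors:
contents = [{"type": "ui_element", "bounds": (40, 900, 200, 80)},
            {"type": "image", "bounds": (0, 200, 1080, 600)}]
print(haptic_event_locations(contents))   # [(140, 940), (540, 500)]
```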
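One hypothetical way to derive the environmental parameters of claim 15 (device state, supporting surface, orientation) from raw accelerometer samples is sketched below; the variance threshold used as a soft-surface cue is an assumption chosen only for illustration.

```python
import math

def environmental_parameters(accel_samples):
    """Derive device orientation and a soft/hard surface guess from a short
    window of accelerometer samples, each an (x, y, z) tuple in m/s^2."""
    n = len(accel_samples)
    ax = sum(s[0] for s in accel_samples) / n
    ay = sum(s[1] for s in accel_samples) / n
    az = sum(s[2] for s in accel_samples) / n

    # Orientation: which axis carries most of gravity on average.
    orientation = "flat" if abs(az) > max(abs(ax), abs(ay)) else "upright"

    # Surface guess: a soft surface damps micro-vibrations, so the variance of
    # the acceleration magnitude tends to be lower (illustrative heuristic).
    mags = [math.sqrt(x * x + y * y + z * z) for x, y, z in accel_samples]
    mean_mag = sum(mags) / n
    variance = sum((m - mean_mag) ** 2 for m in mags) / n
    surface = "soft" if variance < 0.02 else "hard"

    return {"orientation": orientation, "surface": surface}
```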
Applications Claiming Priority (2)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| IN202111036812 | 2021-08-13 | ||
| IN202111036812 | 2021-08-13 |
Publications (1)
| Publication Number | Publication Date |
|---|---|
| WO2023018309A1 (en) | 2023-02-16 |
Family
ID=85200148
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| PCT/KR2022/012144 WO2023018309A1 (en) (Ceased) | A method and an apparatus for generating localized vibrations | 2021-08-13 | 2022-08-12 |
Country Status (1)
| Country | Link |
|---|---|
| WO (1) | WO2023018309A1 (en) |
Patent Citations (5)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| KR20110049416A (en) * | 2009-11-05 | 2011-05-12 | 주식회사 팬택 | Vibration feedback providing terminal and method |
| US20130038603A1 (en) * | 2011-08-09 | 2013-02-14 | Sungho Bae | Apparatus and method for generating sensory vibration |
| US20200128236A1 (en) * | 2013-03-15 | 2020-04-23 | Immersion Corporation | Method and apparatus for encoding and decoding haptic information in multi-media files |
| US20150348378A1 (en) * | 2014-05-30 | 2015-12-03 | Obana Kazutoshi | Information processing system, information processing apparatus, storage medium having stored therein information processing program, and information processing method |
| US20210186219A1 (en) * | 2018-09-11 | 2021-06-24 | Sony Corporation | Information processing device, information processing method, and recording medium |
Cited By (2)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| WO2025065559A1 (en) * | 2023-09-28 | 2025-04-03 | 瑞声开泰声学科技(上海)有限公司 | Stereoscopic vibration control method and apparatus, and device and storage medium |
| WO2025230596A1 (en) * | 2024-04-30 | 2025-11-06 | Microsoft Technology Licensing, Llc | Computing device with haptic trackpad |
Similar Documents
| Publication | Title | Publication Date |
|---|---|---|
| WO2011096694A2 (en) | Method and apparatus for providing user interface using acoustic signal, and device including user interface | |
| WO2023018309A1 (en) | A method and an apparatus for generating localized vibrations | |
| WO2010050693A2 (en) | Interface apparatus for generating control command by touch and motion, interface system including the interface apparatus, and interface method using the same | |
| WO2019059505A1 (en) | Method and apparatus for recognizing object | |
| WO2013191484A1 (en) | Remote control apparatus and control method thereof | |
| WO2021118225A1 (en) | Display device and operating method thereof | |
| WO2018088806A1 (en) | Image processing apparatus and image processing method | |
| WO2021133053A1 (en) | Electronic device and method for controlling same | |
| EP2979365A1 (en) | Mobile terminal and method of controlling the same | |
| WO2019124963A1 (en) | Speech recognition device and method | |
| WO2015125993A1 (en) | Mobile terminal and control method thereof | |
| WO2014038824A1 (en) | Method for changing object position and electronic device thereof | |
| WO2016182361A1 (en) | Gesture recognition method, computing device, and control device | |
| WO2016114432A1 (en) | Method for processing sound on basis of image information, and corresponding device | |
| WO2009116813A2 (en) | Electronic device and a pointer motion control method thereof | |
| WO2014157757A1 (en) | Mobile input device and method for inputting input by using same | |
| WO2015170832A1 (en) | Display device and video call performing method therefor | |
| WO2019135553A1 (en) | Electronic device, control method thereof, and computer readable recording medium | |
| WO2021080360A1 (en) | Electronic device and method for controlling display operation thereof | |
| WO2018030567A1 (en) | Hmd and control method therefor | |
| WO2016111588A1 (en) | Electronic device and method for representing web content for the electronic device | |
| WO2020149600A1 (en) | Electronic device and operation method thereof | |
| WO2021132743A1 (en) | Electronic device for displaying application-related content, and method for controlling same | |
| WO2016122153A1 (en) | Display apparatus and control method thereof | |
| WO2016129843A1 (en) | Display apparatus and display method |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| | 121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 22856302; Country of ref document: EP; Kind code of ref document: A1 |
| | NENP | Non-entry into the national phase | Ref country code: DE |
| | 122 | Ep: pct application non-entry in european phase | Ref document number: 22856302; Country of ref document: EP; Kind code of ref document: A1 |