
WO2024228700A1 - Speaker position detection - Google Patents


Info

Publication number
WO2024228700A1
WO2024228700A1 (PCT/US2023/020718)
Authority
WO
WIPO (PCT)
Prior art keywords
speaker
audio
signal
response
strain
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
PCT/US2023/020718
Other languages
English (en)
Inventor
Gordon DIX
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Google LLC
Original Assignee
Google LLC
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Google LLC filed Critical Google LLC
Priority to PCT/US2023/020718
Publication of WO2024228700A1
Legal status: Pending

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04R LOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
    • H04R3/00 Circuits for transducers, loudspeakers or microphones
    • H04R3/002 Damping circuit arrangements for transducers, e.g. motional feedback circuits
    • H04R3/04 Circuits for transducers, loudspeakers or microphones for correcting frequency response
    • H04R3/08 Circuits for transducers, loudspeakers or microphones for correcting frequency response of electromagnetic transducers
    • H04R29/00 Monitoring arrangements; Testing arrangements
    • H04R29/001 Monitoring arrangements; Testing arrangements for loudspeakers
    • H04R29/003 Monitoring arrangements; Testing arrangements for loudspeakers of the moving-coil type
    • H04R9/00 Transducers of moving-coil, moving-strip, or moving-wire type
    • H04R9/02 Details
    • H04R9/04 Construction, mounting, or centering of coil
    • H04R9/041 Centering
    • H04R9/043 Inner suspension or damper, e.g. spider
    • H04R9/06 Loudspeakers

Definitions

  • Audio systems include one or more speakers and a controller which processes an audio signal and generates a driving signal for the speakers. In response to the driving signal, the speakers generate audio waves corresponding to the driving signal. At least partly because of nonlinear mechanical aspects of the speakers, the audio waves do not precisely correspond with the driving signal, and the audio waves include some distortion.
  • Some audio systems adjust the driving signal based at least partly on a control signal generated based at least partly on current and voltage of the driving signal. Improved control signal generation systems and methods are needed in the art.
  • One inventive aspect is an audio system including a housing, a speaker within the housing including an audio generation element, and configured to generate a position state signal indicating a position of the audio generation element, a processing module configured to receive audio data information, to receive the position state signal, and to generate an audio signal based at least partly on the received audio data information and on the received position state signal, and a speaker driver communicatively coupled to the processing module and configured to receive the audio signal, and to generate a driving signal in response to the audio signal.
  • the speaker is communicatively coupled to the speaker driver to receive the driving signal and is configured to generate compression sound waves in response to the driving signal.
  • the speaker includes a strain gauge configured to generate the position state signal in response to a strain of the strain gauge, and the strain is generated in response to a mechanical position of one or more components of the speaker.
  • the speaker includes a pressure transducer configured to generate the position state signal in response to a pressure at the pressure transducer, and the pressure is generated in response to a mechanical position of one or more components of the speaker.
  • the speaker includes a surround
  • the strain gauge is attached to the surround
  • the strain is generated in response to a mechanical position of the surround.
  • the speaker includes a spider
  • the strain gauge is attached to the spider
  • the strain is generated in response to a mechanical position of the spider.
  • the speaker includes a plurality of strain gauges.
  • the strain gauges are positioned on the speaker such that the angular spacing between any pair of adjacent strain gauges is substantially identical.
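As a minimal illustration of the equal-angular-spacing arrangement described above, the following sketch computes where N gauges would be placed around the speaker. The function name and the use of degrees are illustrative choices, not from the patent.

```python
# Hypothetical helper: place n_gauges strain gauges around the speaker so
# that the angular spacing between adjacent gauges is identical.

def gauge_angles_deg(n_gauges):
    """Return angles in degrees for n gauges with identical angular spacing."""
    step = 360.0 / n_gauges  # identical spacing between adjacent gauges
    return [i * step for i in range(n_gauges)]
```

For example, four gauges would sit at 0, 90, 180, and 270 degrees.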
  • the processing module is configured to use the position state signal to generate the audio signal such that the generated audio signal at least partially compensates for a nonlinearity of the speaker.
  • the audio system includes a smart-home device.
  • Another inventive aspect is a method of compensating for speaker nonlinearity in an audio system, the audio system including a processing module, a speaker driver, and a speaker having an audio generation element, the method including with the speaker, sensing a position of the audio generation element, with the speaker, generating a position state signal indicating the sensed position, with the processing module, receiving audio data information, with the processing module, receiving the position state signal, with the processing module, generating an audio signal based at least partly on the received audio data information and on the received position state signal, with the speaker driver, generating a driving signal based at least partly on the received audio signal, with the speaker, receiving the driving signal from the speaker driver, and with the speaker, generating compression sound waves corresponding with the driving signal.
  • the speaker includes a strain gauge
  • the method further includes, with the strain gauge, generating the position state signal in response to a strain of the strain gauge, and the strain is generated in response to a mechanical position of one or more components of the speaker.
  • the speaker includes a pressure transducer
  • the method further includes, with the speaker, generating the position state signal in response to a pressure at the pressure transducer, and the pressure is generated in response to a mechanical position of one or more components of the speaker.
  • the method further includes, with the processing module, using the position state signal to generate the audio signal such that the generated audio signal at least partially compensates for a nonlinearity of the speaker.
  • the audio system includes a smart-home device.
  • a speaker including an audio generation element.
  • the speaker is configured to generate a position state signal indicating a position of the audio generation element, and the speaker is configured to receive a driving signal and to generate compression sound waves in response to the driving signal.
  • the speaker includes a strain gauge coupled to one or more mechanical components of the speaker, and is configured to generate the position state signal in response to a strain of the strain gauge, and the strain is generated in response to a position of the one or more components of the speaker.
  • the speaker includes a pressure transducer configured to generate the position state signal in response to a pressure at the pressure transducer, and the pressure is generated in response to a mechanical position of one or more components of the speaker.
  • the speaker includes a surround
  • the strain gauge is coupled to the surround
  • the strain is generated in response to a mechanical position of the surround.
  • the speaker includes a spider
  • the strain gauge is coupled to the spider
  • the strain is generated in response to a mechanical position of the spider.
  • the speaker includes a plurality of strain gauges.
  • FIG. 1 illustrates a block diagram of an embodiment of a smart home device.
  • FIG. 2 illustrates a smart home environment that includes various smart-home devices which can produce audio signals.
  • FIG. 3 illustrates a speaker at rest according to some embodiments.
  • FIG. 4 illustrates a speaker drum according to some embodiments.
  • FIG. 5 illustrates a speaker spider according to some embodiments.
  • FIG. 6 illustrates a speaker maximally extended according to some embodiments.
  • FIG. 7 illustrates a speaker minimally extended according to some embodiments.
  • FIG. 8 illustrates a speaker at rest according to some embodiments.
  • FIG. 9 illustrates a speaker maximally extended according to some embodiments.
  • FIG. 10 illustrates a speaker minimally extended according to some embodiments.
  • FIG. 11 illustrates a speaker at rest according to some embodiments.
  • FIG. 12 illustrates a speaker maximally extended according to some embodiments.
  • FIG. 13 illustrates a speaker minimally extended according to some embodiments.
  • FIG. 14 illustrates a method of using an audio system.
  • speakers, for example those included in an audio system of a smart home device, are driven by a processing module, which generates driving signals that are transmitted to the speakers.
  • the speakers receive the driving signals and mechanically respond to the driving signals by moving an audio generation element.
  • the movement of the audio generation element causes sound or audio compression waves to propagate from the audio generation element as an audio signal.
  • the audio compression waves do not precisely correspond with the driving signal, and the audio signal includes some amount of distortion. For example, near a maximum excursion distance from the rest position of the audio generation element, the change in the driving signal needed to move the audio generation element a particular distance farther from the rest position is greater than the change needed to move it the same distance when the audio generation element is near the rest position.
  • an audio system may mitigate the effect of the nonlinear mechanical aspects of the speaker by sensing that the driving signal causes the audio generation element to operate near a maximum excursion distance from the rest position, and by responding to the sensed over-excursion condition by taking mitigating action. For example, in response to the sensed over-excursion condition, the audio system may reduce a gain of the speaker, reduce a gain of a driving signal generation element, apply targeted equalization, use a sophisticated model in the time domain to limit such overexcursion conditions, and/or take one or more other mitigating actions.
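One of the mitigations listed above, reducing gain as the audio generation element approaches its maximum excursion, can be sketched as follows. This is an assumed illustration, not the patent's implementation; the function name, the 80% threshold, and the linear roll-off are all invented for the example.

```python
# Hypothetical sketch of excursion-based gain mitigation: when the sensed
# position nears the maximum safe excursion, the drive gain is scaled back.

def mitigation_gain(position_mm, x_max_mm, threshold=0.8):
    """Return a gain multiplier in [0, 1] for the driving signal.

    position_mm -- sensed excursion of the audio generation element
    x_max_mm    -- maximum safe excursion from the rest position
    threshold   -- fraction of x_max at which gain reduction begins (assumed)
    """
    frac = abs(position_mm) / x_max_mm
    if frac <= threshold:
        return 1.0   # normal operation: full gain
    if frac >= 1.0:
        return 0.0   # hard limit at maximum excursion
    # linear roll-off between the threshold and the maximum excursion
    return (1.0 - frac) / (1.0 - threshold)
```

In a real system the roll-off would likely be smoothed over time to avoid audible gain pumping; the linear ramp here only shows the shape of the idea.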
  • the position of the speaker audio generation element is sensed with a strain gauge embedded in a part of the suspension of the audio generation element.
  • the strain gauge is configured to generate a signal based at least partly on a strain condition of the strain gauge, for example, as understood by those of skill in the art.
  • the strain gauge is embedded or connected to or mounted on a surround of the speaker. As the driver signal causes the surround of the speaker to move, the strain gauge generates a signal corresponding with a strain of the surround which indicates the position of the audio generation element.
  • the strain gauge is additionally or alternatively embedded or connected to or mounted on a spider of the speaker. As the driver signal causes the spider of the speaker to move, the strain gauge generates a signal corresponding with a strain of the spider which indicates the position of the audio generation element.
  • the strain gauge is used to characterize the non-linear behavior of the motion of the audio generation element, and the characterization is used by a processing module to cause the driving signal to compensate for the characterized nonlinearity of the speaker.
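One plausible way to turn raw strain readings into position estimates, as the characterization step above requires, is a least-squares fit over samples collected while the cone is stepped through known positions. The linear model and function names below are assumptions for illustration; a real characterization might use a higher-order fit to capture the nonlinearity.

```python
# Illustrative calibration (assumed form, not from the patent): map raw
# strain-gauge readings to cone position with a least-squares line fit.

def fit_strain_to_position(strain_samples, position_samples):
    """Return (slope, intercept) so that position ~= slope*strain + intercept."""
    n = len(strain_samples)
    mean_s = sum(strain_samples) / n
    mean_p = sum(position_samples) / n
    # covariance of strain with position, and variance of strain
    cov = sum((s - mean_s) * (p - mean_p)
              for s, p in zip(strain_samples, position_samples))
    var = sum((s - mean_s) ** 2 for s in strain_samples)
    slope = cov / var
    return slope, mean_p - slope * mean_s
```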
  • the position of the speaker audio generation element is additionally or alternatively sensed with a pressure transducer, such as a microphone element, configured to generate a signal based at least partly on a pressure or a pressure differential, for example, as understood by those of skill in the art.
  • the pressure transducer is configured to generate a signal corresponding with pressure translated thereto by the audio generation element.
  • the driver signal causes the audio generation element of the speaker to move
  • the audio generation element generates a pressure signal, for example, as understood by those of skill in the art.
  • the pressure transducer senses the pressure signal, and generates a sense signal corresponding with the sensed pressure signal. Accordingly, the sense signal indicates the position of the audio generation element.
  • FIG. 1 illustrates a block diagram of an embodiment of an audio system 200.
  • Audio system 200 (“system 200”) can include: device 201, which may, for example, be a smart-home device; network 240; and cloud server system 250.
  • Device 201 may, for example, be any of various types of smart-home devices, such as a smart home assistant device that can respond to spoken queries from persons nearby.
  • a smart home assistant device may selectively or continuously listen for a spoken passphrase, which triggers the smart home assistant device to capture and analyze a spoken query.
  • Various forms of smart-home devices which can function as device 201 are detailed in relation to FIG. 2.
  • the audio processing components 212, 214, and 215 may be incorporated into an electronic device other than a smart-home device.
  • Device 201 can include: network interface 203, processing module 210, speaker driver 215, display screen 216, speaker 217, and microphone 218.
  • Processing module 210 can represent a monolithic integrated circuit. Therefore, all components of processing module 210 may be implemented within a single package that can be affixed to a printed circuit board of device 201. In addition to other modules, for example, understood by those of skill in the art, processing module 210 may include audio processing components, such as speaker position estimation engine 212 and audio data processor 214.
  • the monolithic integrated circuit also includes speaker driver 215.
  • Audio data processor 214 may be configured to receive audio data information, for example, from a memory (not shown), from network interface 203, and/or from one or more other sources. Based at least partly on the received audio data information, audio data processor 214 may be configured to generate a digital audio signal for speaker driver 215. Using techniques and/or components having functionality understood by those of skill in the art, audio data processor 214 may, for example, perform audio processing functions such as volume or gain control, equalization, and/or one or more other functions known to those of skill in the art. Audio data processor 214 may include, for example, digital processing elements such as filters, amplifiers, time to frequency domain converters, frequency to time domain converters, and other digital circuits, for example, known to those of skill in the art.
  • Speaker driver 215 is configured to receive the digital audio signal from audio data processor 214. Based at least partly on the digital audio signal, speaker driver 215 is configured to generate a driving signal for speaker 217. Using techniques and/or components having functionality understood by those of skill in the art, speaker driver 215 may, for example, convert the digital audio signal into an analog audio signal and generate the driving signal for speaker 217 based at least partly on the analog audio signal. In some embodiments, speaker driver 215 may be configured to, for example, perform audio processing functions such as volume or gain control, equalization, and/or one or more other functions known to those of skill in the art. Speaker driver 215 may include, for example, analog processing elements such as one or more digital to analog converters, filters, amplifiers, and other analog circuits, for example, known to those of skill in the art.
  • Speaker 217 is configured to receive the driving signal from speaker driver 215. Based at least partly on the received driving signal, using an audio generation element understood by those of skill in the art, speaker 217 is configured to generate compression sound waves, for example, as audible sounds.
  • Speaker 217 is also configured to generate a position state signal for speaker position estimation engine 212.
  • the position state signal provides an indication of a mechanical state or position of the audio generation element.
  • speaker 217 is configured to sense the mechanical state or position of the audio generation element, and to generate an electrical signal corresponding with the sensed mechanical state or position of the audio generation element as the position state signal for speaker position estimation engine 212.
  • Speaker position estimation engine 212 is configured to receive the position state signal from speaker 217. Based at least partly on the position state signal, speaker position estimation engine 212 is configured to generate a speaker state signal for audio data processor 214. Using techniques and/or components having functionality understood by those of skill in the art, speaker position estimation engine 212 may, for example, convert an analog position state signal from speaker 217 into a digital position state signal and generate the speaker state signal for audio data processor 214 based at least partly on the digital position state signal. Speaker position estimation engine 212 may include, for example, analog and/or digital processing elements such as one or more analog-to-digital converters, filters, amplifiers, and other analog circuits, for example, known to those of skill in the art.
  • speaker position estimation engine 212 includes an analog amplifier configured to receive the position state signal from speaker 217 and to generate an amplified signal. In some embodiments, speaker position estimation engine 212 also includes an analog to digital converter configured to receive the amplified signal and to generate a digital version of the amplified signal, for example as the speaker state signal for audio data processor 214.
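The amplify-then-digitize front end described above can be sketched numerically. The resolution, reference voltage, and volts-per-millimeter scale below are invented for the example; the patent does not specify them.

```python
# Assumed illustration of the estimation engine's front end: an amplified
# analog position signal is quantized by an ADC and rescaled to a position
# estimate. All constants are hypothetical.

def adc_counts_to_position(counts, n_bits=12, v_ref=3.3, volts_per_mm=0.5):
    """Convert raw ADC counts to an estimated cone position in millimeters."""
    volts = counts * v_ref / (2 ** n_bits - 1)  # ADC count -> volts
    return volts / volts_per_mm                 # volts -> position estimate
```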
  • Audio data processor 214 may also be configured to receive the speaker state signal from speaker position estimation engine 212, where the speaker state signal provides an indication of the state of speaker 217. Audio data processor 214 may also be configured to generate the digital audio signal for speaker driver 215 based partly on the speaker state signal from speaker position estimation engine 212. For example, audio data processor 214 may be configured to determine, based at least partly on the speaker state signal from speaker position estimation engine 212, that the audio generation element of speaker 217 has experienced an over-excursion condition, such that the distortion effects of the nonlinear speaker movement are greater than an acceptable threshold.
  • the audio data processor 214 may, for example, reduce a gain of one or more audio data processing elements, reduce a gain of a driving signal generation element, apply targeted equalization, use a sophisticated model in the time domain to limit such over-excursion conditions, and/or take one or more other mitigating actions.
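A simple way the over-excursion determination above could be made is to check what fraction of recent position samples exceed a distortion threshold. The function, the 75% excursion threshold, and the 5% rate limit are assumptions for illustration only.

```python
# Hypothetical over-excursion detector: flag the condition when too many
# recent position samples lie beyond frac_limit * x_max.

def detect_over_excursion(positions, x_max, frac_limit=0.75, rate_limit=0.05):
    """Return True when the rate of near-limit samples exceeds rate_limit."""
    over = sum(1 for p in positions if abs(p) > frac_limit * x_max)
    return over / len(positions) > rate_limit
```

When this returns True, the processor would apply one of the mitigations named above, such as reducing gain or applying targeted equalization.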
  • processing module 210 is configured to cause a digital characterization audio signal to be provided to audio data processor 214 to characterize the nonlinear behavior of speaker 217. For example, a digital ramp signal corresponding with values between minimum and maximum digital values, may be provided to audio data processor 214. In response to the digital characterization audio signal, audio data processor 214 is configured to generate a digital audio signal for speaker driver 215.
  • speaker driver 215 is configured to generate a driving signal for speaker 217 which causes the audio generation element of speaker 217 to stepwise travel from a first position, corresponding, for example, with a minimum position, to a second position, corresponding, for example, with a maximum position, stepping, for example, through each of a number of intermediate positions.
  • the first position, the second position, and the intermediate positions correspond with all possible values of the digital characterization audio signal.
  • the first position, the second position, and the intermediate positions correspond with a subset of the possible values of the digital characterization audio signal.
  • the driving signal for speaker 217 also causes speaker 217 to generate a position state signal value for speaker position estimation engine 212 for each of the first position, the second position, and the intermediate positions, and speaker position estimation engine 212 is configured to generate a corresponding speaker state signal for audio data processor 214 for each of the first position, the second position, and the intermediate positions.
  • processing module 210 may be configured to store the speaker state signals corresponding with each of the first position, the second position, and the intermediate positions in a memory. In some embodiments, processing module 210 is configured to store data corresponding with the speaker state signals corresponding with each of the first position, the second position, and the intermediate positions in a memory. In some embodiments, processing module 210 is configured to generate a conversion mapping based on the stored speaker state signals or the stored data corresponding with the speaker state signals. In some embodiments, audio data processor 214 is configured to generate the digital audio signal for speaker driver 215 based partly on the conversion mapping.
  • each particular digital value of the audio data information may be converted to a digital value based on the conversion mapping.
  • the converted digital values are generated for the conversion mapping using principles understood by those of skill in the art to compensate or partially compensate for the nonlinearity of the speaker 217.
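A minimal sketch of such a conversion mapping, under assumed details: the characterization sweep records the position reached for each drive value, and playback inverts that table by linear interpolation so that the commanded position tracks the intended signal despite the speaker's compressive nonlinearity. Names and data below are illustrative.

```python
import bisect

# Hypothetical characterization-based conversion mapping (not the patent's
# implementation): pair each sweep drive value with its measured position,
# then look up the drive needed for a target position by interpolation.

def build_mapping(drive_values, measured_positions):
    """Return (position, drive) pairs sorted by position."""
    return sorted(zip(measured_positions, drive_values))

def convert(target_position, mapping):
    """Drive value estimated to reach target_position, with clamping."""
    positions = [m[0] for m in mapping]
    i = bisect.bisect_left(positions, target_position)
    if i == 0:
        return mapping[0][1]       # clamp below the characterized range
    if i == len(mapping):
        return mapping[-1][1]      # clamp above the characterized range
    (p0, d0), (p1, d1) = mapping[i - 1], mapping[i]
    t = (target_position - p0) / (p1 - p0)
    return d0 + t * (d1 - d0)      # linear interpolation between sweep points
```

Applied per sample, this is one way "each particular digital value" of the audio data could be converted to at least partially compensate for the measured nonlinearity.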
  • system 200 includes a monolithic IC that performs all of the audio functions described herein, and in other embodiments, the components of processing module 210 performing the described functions may be split among multiple components or chips or packages.
  • Device 201 can include network interface 203.
  • Network interface 203 can allow device 201 to communicate via one or more wired and/or wireless networks.
  • network interface 203 may allow device 201 to communicate via a wireless local area network, such as a wireless network that operates in accordance with an IEEE 802.11 standard.
  • Network interface 203 may also communicate via one or more mesh networking protocols, such as Thread, Zigbee, or Z-Wave.
  • Network interface 203 may permit device 201 to communicate with network 240.
  • Network 240 can include one or more private and/or public networks, such as the Internet.
  • Network 240 may be used such that device 201 can communicate with the cloud server system 250.
  • Cloud server system 250 may, in some embodiments, perform some of the processing functions described herein as being performed by processing module 210. Additionally or alternatively, cloud server system 250 may be used to relay notifications and/or store data produced by device 201 for example, in association with a user account.
  • Display screen 216, speaker 217, and microphone 218 may permit device 201 to interact with persons nearby.
  • Display screen 216 may be a touchscreen display that presents information pertinent to other smart-home devices that have been linked with device 201 and results obtained in response to a query posed by user via microphone 218.
  • device 201 may not have display screen 216.
  • some forms of smart home assistants, which respond to auditory queries, use speech as the primary input and output interface.
  • Microphone 218 can be used for a person to pose a spoken query to device 201.
  • the spoken query may be analyzed locally or may be transmitted by device 201 to cloud server system 250 for analysis.
  • a result of the spoken query may be transmitted back to device 201 by cloud server system 250 to be output via speaker 217 using recorded or synthesized speech.
  • Speaker 217 and microphone 218 may further be used to interact with a person.
  • Processing module 210 may include one or more special-purpose or general-purpose processors.
  • Such special-purpose processors may include processors that are specifically designed to perform the functions detailed herein.
  • Such special-purpose processors may be ASICs or FPGAs which are general-purpose components that are physically and electrically configured to perform the functions detailed herein.
  • Such general-purpose processors may execute special-purpose software that is stored using one or more non-transitory processor-readable mediums, such as random access memory (RAM), flash memory, a hard disk drive (HDD), or a solid state drive (SSD).
  • the components that are presented as part of processing module 210 can be implemented as individual hardware and/or software components or may be implemented together, such as in the form of software that is executed by one or more processors.
  • FIG. 2 illustrates an embodiment of a smart home environment 300 in which various smart-home devices may include the componentry of device 201 to perform the functions described herein.
  • various smart-home devices including those located indoors or outdoors, may benefit from the ability to perform the functions described herein.
  • the smart home environment 300 includes a structure 350 (e.g., a house, daycare, office building, apartment, condominium, garage, or mobile home) with various integrated devices. It will be appreciated that devices may also be integrated into a smart home environment 300 that does not include an entire structure 350, such as an apartment or condominium. Further, the smart home environment 300 may control and/or be coupled to devices outside of the actual structure 350. Indeed, several devices in the smart home environment 300 need not be physically within the structure 350.
  • “smart home environments” may refer to smart environments for homes such as a single-family house, but the scope of the present teachings is not so limited.
  • the present teachings are also applicable, without limitation, to duplexes, townhomes, multi-unit apartment buildings, hotels, retail stores, office buildings, industrial buildings, and more generally any living space or workspace.
  • the customer may be the landlord with respect to purchasing the unit, the installer may be a local apartment supervisor, a first user may be the tenant, and a second user may again be the landlord with respect to remote control functionality.
  • while the identity of the person performing the action may be germane to a particular advantage provided by one or more of the implementations, such identity should not be construed in the descriptions that follow as necessarily limiting the scope of the present teachings to those particular individuals having those particular identities.
  • the depicted structure 350 includes a plurality of rooms 352, separated at least partly from each other via walls 354.
  • the walls 354 may include interior walls or exterior walls.
  • Each room may further include a floor 356 and a ceiling 358.
  • Devices may be mounted on, integrated with and/or supported by a wall 354, floor 356 or ceiling 358.
  • the integrated devices of the smart home environment 300 include intelligent, multi-sensing, network-connected devices that integrate seamlessly with each other in a smart home network and/or with a central server or a cloud-computing system to provide a variety of useful smart home functions.
  • the smart home environment 300 may include one or more intelligent, multi-sensing, network-connected thermostats 302 (hereinafter referred to as “smart thermostats 302”), one or more intelligent, network-connected, multi-sensing hazard detection units 304 (hereinafter referred to as “smart hazard detectors 304”), one or more intelligent, multi-sensing, network-connected entryway interface devices 306 and 320 and one or more intelligent, multi-sensing, network-connected alarm systems 322 (hereinafter referred to as “smart alarm systems 322”). Each of these devices may have the functionality of device 201 incorporated.
  • the one or more smart thermostats 302 detect ambient climate characteristics (e.g., temperature and/or humidity) and control an HVAC system 303 accordingly.
  • a respective smart thermostat 302 includes an ambient temperature sensor.
  • a smart hazard detector may detect smoke, carbon monoxide, and/or some other hazard present in the environment.
  • the one or more smart hazard detectors 304 may include thermal radiation sensors directed at respective heat sources (e.g., a stove, oven, other appliances, a fireplace, etc.).
  • a smart hazard detector 304 in a kitchen 353 includes a thermal radiation sensor directed at a network-connected appliance 312.
  • a thermal radiation sensor may determine the temperature of the respective heat source (or a portion thereof) at which it is directed and may provide corresponding black-body radiation data as output.
  • the smart doorbell 306 and/or the smart door lock 320 may detect a person’s approach to or departure from a location (e.g., an outer door), control doorbell/door locking functionality (e.g., receive user inputs from a portable electronic device 366-1 to actuate the bolt of the smart door lock 320), announce a person’s approach or departure via audio or visual means, and/or control settings on a security system (e.g., to activate or deactivate the security system when occupants go and come).
  • the smart doorbell 306 includes some or all of the components and features of the camera 318-1.
  • the smart doorbell 306 includes a camera 318-1, and, therefore, is also called “doorbell camera 306” in this document.
  • Cameras 318-1 and/or 318-2 may function as a streaming video camera and the streaming audio device detailed in relation to various embodiments herein.
  • Cameras 318 may be mounted in a fixed location, such as indoors on a wall, or may be moveable and placed on a surface, as illustrated with camera 318-2.
  • Various embodiments of cameras 318 may be installed indoors or outdoors. Each of these types of devices may have the functionality of device 201 incorporated.
  • the smart alarm system 322 may detect the presence of an individual within close proximity (e.g., using built-in IR sensors), sound an alarm (e.g., through a built-in speaker, or by sending commands to one or more external speakers), and send notifications to entities or users within/outside of the smart home environment 300.
  • the smart alarm system 322 also includes one or more input devices or sensors (e.g., keypad, biometric scanner, NFC transceiver, microphone) for verifying the identity of a user, and one or more output devices (e.g., display, speaker).
  • the smart alarm system 322 may also be set to an armed mode, such that detection of a trigger condition or event causes the alarm to be sounded unless a disarming action is performed.
  • Each of these devices may have the functionality of device 201 incorporated.
  • the smart home environment 300 includes one or more intelligent, multi-sensing, network-connected wall switches 308 (hereinafter referred to as “smart wall switches 308”), along with one or more intelligent, multi-sensing, network-connected wall plug interfaces 310 (hereinafter referred to as “smart wall plugs 310”).
  • the smart wall switches 308 may detect ambient lighting conditions, detect room-occupancy states, and control a power and/or dim state of one or more lights. In some instances, smart wall switches 308 may also control a power state or speed of a fan, such as a ceiling fan.
  • the smart wall plugs 310 may detect occupancy of a room or enclosure and control the supply of power to one or more wall plugs (e.g., such that power is not supplied to the plug if nobody is at home). Each of these types of devices may have the functionality of device 201 incorporated.
  • the smart home environment 300 of FIG. 2 includes a plurality of intelligent, multi-sensing, network-connected appliances 312 (hereinafter referred to as “smart appliances 312”), such as refrigerators, stoves, ovens, televisions, washers, dryers, lights, stereos, intercom systems, wall clock, garage-door openers, floor fans, ceiling fans, wall air conditioners, pool heaters, irrigation systems, security systems, space heaters, window AC units, motorized duct vents, and so forth.
  • When plugged in, an appliance may announce itself to the smart home network, such as by indicating what type of appliance it is, and it may automatically integrate with the controls of the smart home. Such communication by the appliance to the smart home may be facilitated by either a wired or wireless communication protocol.
  • the smart home may also include a variety of non-communicating legacy appliances 340, such as old conventional washer/dryers, refrigerators, and the like, which may be controlled by smart wall plugs 310.
  • the smart home environment 300 may further include a variety of partially communicating legacy appliances 342, such as infrared (“IR”) controlled wall air conditioners or other IR-controlled devices, which may be controlled by IR signals provided by the smart hazard detectors 304 or the smart wall switches 308.
  • the smart home environment 300 includes one or more network-connected cameras 318 that are configured to provide video monitoring and security in the smart home environment 300.
  • the cameras 318 may be used to determine occupancy of the structure 350 and/or particular rooms 352 in the structure 350, and thus may act as occupancy sensors.
  • video captured by the cameras 318 may be processed to identify the presence of an occupant in the structure 350 (e.g., in a particular room 352).
  • Specific individuals may be identified based, for example, on their appearance (e.g., height, face) and/or movement (e.g., their walk/gait).
  • Cameras 318 may additionally include one or more sensors (e.g., IR sensors, motion detectors), input devices (e.g., microphone for capturing audio), and output devices (e.g., speaker for outputting audio).
  • the cameras 318 are each configured to operate in a day mode and in a low-light mode (e.g., a night mode).
  • the cameras 318 each include one or more IR illuminators for providing illumination while the camera is operating in the low-light mode.
  • the cameras 318 include one or more outdoor cameras.
  • the outdoor cameras include additional features and/or components such as weatherproofing and/or solar ray compensation. Such cameras may have the functionality of device 201 incorporated.
  • the smart home environment 300 may additionally or alternatively include one or more other occupancy sensors (e.g., the smart doorbell 306, smart door locks 320, touch screens, IR sensors, microphones, ambient light sensors, motion detectors, smart nightlights 370, etc.).
  • the smart home environment 300 includes radio-frequency identification (RFID) readers (e.g., in each room 352 or a portion thereof) that determine occupancy based on RFID tags located on or embedded in occupants.
  • RFID readers may be integrated into the smart hazard detectors 304.
  • Each of these devices may have the functionality of device 201 incorporated.
  • Smart home assistant 319 may have one or more microphones that continuously listen to an ambient environment. Smart home assistant 319 may be able to respond to verbal queries posed by a user, possibly preceded by a triggering phrase. Smart home assistant 319 may stream audio and, possibly, video if a camera is integrated as part of the device, to a cloud-based server system 364 (which represents an embodiment of cloud-based host system 200 of FIG. 2). Smart home assistant 319 may be a smart device through which non-auditory discomfort alerts may be output and/or an audio stream from the streaming video camera can be output. As previously noted, smart home assistant 319 may have the functionality of device 201 incorporated.
  • one or more of the smart-home devices of FIG. 2 may further allow a user to interact with the device even if the user is not proximate to the device.
  • a user may communicate with a device using a computer (e.g., a desktop computer, laptop computer, or tablet) or other portable electronic device 366 (e.g., a mobile phone, such as a smart phone).
  • a webpage or application may be configured to receive communications from the user and control the device based on the communications and/or to present information about the device’s operation to the user.
  • the user may view a current set point temperature for a device (e.g., a stove) and adjust it using a computer.
  • the user may be in the structure during this remote communication or outside the structure.
  • users may control smart devices in the smart home environment 300 using a network-connected computer or portable electronic device 366.
  • Some or all of the occupants (e.g., individuals who live in the home) may register their portable electronic devices 366 with the smart home environment 300.
  • An occupant may use their registered portable electronic device 366 to remotely control the smart devices of the home, such as when the occupant is at work or on vacation.
  • the occupant may also use their registered device to control the smart devices when the occupant is actually located inside the home, such as when the occupant is sitting on a couch inside the home.
  • the smart home environment 300 may make inferences about which individuals live in the home and are therefore occupants and which portable electronic devices 366 are associated with those individuals. As such, the smart home environment may “learn” who is an occupant and permit the portable electronic devices 366 associated with those individuals to control the smart devices of the home.
  • smart thermostat 302, smart hazard detector 304, smart doorbell 306, smart wall switch 308, smart wall plug 310, network-connected appliances 312, camera 318, smart home assistant 319, smart door lock 320, and/or smart alarm system 322 are capable of data communications and information sharing with other smart devices, a central server or cloud-computing system, and/or other devices that are network-connected.
  • Data communications may be carried out using any of a variety of custom or standard wireless protocols (e.g., IEEE 802.15.4, Wi-Fi, ZigBee, 6LoWPAN, Thread, Z-Wave, Bluetooth Smart, ISA100.11a, WirelessHART, MiWi, etc.) and/or any of a variety of custom or standard wired protocols (e.g., Ethernet, HomePlug, etc.), or any other suitable communication protocol, including communication protocols not yet developed as of the filing date of this document.
  • the smart devices serve as wireless or wired repeaters.
  • a first one of the smart devices communicates with a second one of the smart devices via a wireless router.
  • the smart devices may further communicate with each other via a connection (e.g., network interface 360) to a network, such as the Internet.
  • the smart devices may communicate with a cloud-based server system 364 (also called a cloud-based server system, central server system, and/or a cloud-computing system herein).
  • Cloud-based server system 364 may be associated with a manufacturer, support entity, or service provider associated with the smart device(s).
  • a user is able to contact customer support using a smart device itself rather than needing to use other communication means, such as a telephone or Internet-connected computer.
  • software updates are automatically sent from cloud-based server system 364 to smart devices (e.g., when available, when purchased, or at routine intervals).
  • the network interface 360 includes a conventional network device (e.g., a router), and the smart home environment 300 of FIG. 2 includes a hub device 380 that is communicatively coupled to the network(s) 362 directly or via the network interface 360.
  • the hub device 380 is further communicatively coupled to one or more of the above intelligent, multi-sensing, network-connected devices (e.g., smart devices of the smart home environment 300).
  • Each of these smart devices optionally communicates with the hub device 380 using one or more radio communication networks available at least in the smart home environment 300 (e.g., ZigBee, Z-Wave, Insteon, Bluetooth, Wi-Fi and other radio communication networks).
  • the hub device 380 and devices coupled with/to the hub device can be controlled and/or interacted with via an application running on a smart phone, household controller, laptop, tablet computer, game console or similar electronic device.
  • a user of such controller application can view the status of the hub device or coupled smart devices, configure the hub device to interoperate with smart devices newly introduced to the home network, commission new smart devices, and adjust or view settings of connected smart devices, etc.
  • the hub device extends capabilities of low capability smart devices to match capabilities of the highly capable smart devices of the same type, integrates functionality of multiple different device types, even across different communication protocols, and is configured to streamline adding of new devices and commissioning of the hub device.
  • hub device 380 further includes a local storage device for storing data related to, or output by, smart devices of smart home environment 300.
  • the data includes one or more of: video data output by a camera device, metadata output by a smart device, settings information for a smart device, usage logs for a smart device, and the like.
  • smart home environment 300 includes a local storage device 390 for storing data related to, or output by, smart devices of smart home environment 300.
  • the data includes one or more of: video data output by a camera device (e.g., cameras 318 or smart doorbell 306), metadata output by a smart device, settings information for a smart device, usage logs for a smart device, and the like.
  • local storage device 390 is communicatively coupled to one or more smart devices via a smart home network.
  • local storage device 390 is selectively coupled to one or more smart devices via a wired and/or wireless communication network.
  • local storage device 390 is used to store video data when external network conditions are poor.
  • local storage device 390 is used when an encoding bitrate of cameras 318 exceeds the available bandwidth of the external network (e.g., network(s) 362).
  • local storage device 390 temporarily stores video data from one or more cameras (e.g., cameras 318) prior to transferring the video data to a server system (e.g., cloud-based server system 364).
  • The smart home environment 300 may include service robots 368, each configured to carry out, in an autonomous manner, any of a variety of household tasks.
  • the service robots 368 can be respectively configured to perform floor sweeping, floor washing, etc.
  • a service robot may follow a person from room to room and position itself such that the person can be monitored while in the room. The service robot may stop in a location within the room where it will likely be out of the way, but still has a relatively clear field-of-view of the room.
  • Service robots 368 may have the functionality of device 201 incorporated therein. Such an arrangement may have the advantage of allowing one service robot with the functionality of device 201 incorporated to perform the functions described herein.
  • FIG. 3 illustrates a cross-section of speaker 400 at rest according to some embodiments.
  • Speaker 400 includes a magnet 410 including magnetic pole 415, coil and coil form 420, spider 430, anchor 440, diaphragm 450, and surround 460.
  • the coil of coil and coil form 420 includes a conductor which winds around the coil form.
  • the conductor may be electrically connected with a speaker driver which provides electrical signals thereto.
  • In response to the current of the electrical signals, the coil generates a magnetic field which interacts with the magnetic field of magnet 410 and causes a force to be exerted on the coil and coil form 420 with respect to the magnet 410.
  • coil and coil form 420 moves relative to the magnet 410.
  • Coil current traveling in a first direction generates a force which induces the coil and coil form 420 to move in an upward direction in the orientation of the figure, and coil current traveling in a second, opposite, direction generates a force which induces the coil and coil form 420 to move in a downward direction in the orientation of the figure.
  • the spider 430 is connected to the coil and coil form 420 and is connected to the magnet 410, for example, as illustrated.
  • the spider 430 mechanically resists the movement of the coil and coil form 420.
  • the coil form 420 is mechanically coupled to the diaphragm 450 such that the movement of coil and coil form 420 induces a corresponding movement in the diaphragm 450.
  • Anchor 440 is substantially fixed with respect to magnet 410, and accordingly does not move in response to the movement of coil and coil form 420.
  • Surround 460 is mechanically connected to anchor 440 and to diaphragm 450, and is configured to conform to allow movement of the diaphragm 450 with the movement of the coil and coil form 420.
  • Speaker 400, as illustrated in the figure, has diaphragm 450 at rest because the signal received by the coil and coil form 420 does not cause the coil and coil form 420 to experience an upward or a downward force in the orientation of the figure.
  • FIG. 4 illustrates a top-down view of speaker drum 401 and a cross-sectional view of a portion of speaker drum 401 according to some embodiments.
  • Speaker drum 401 includes anchor 440, surround 460, diaphragm 450, strain gauges 465, and conductors 467. In alternative embodiments, speaker drum 401 does not include the strain gauges 465 and conductors 467 connected thereto.
  • Anchor 440 is configured to be fixed in speaker 400 with respect to the magnet 410. Surround 460 flexibly connects anchor 440 to diaphragm 450.
  • Strain gauges 465 are flexible and are connected to surround 460, for example, as illustrated in the cross-sectional view.
  • each conductor 467 is connected to one of the strain gauges 465, for example, as illustrated in the cross-sectional view.
  • Each conductor 467 may include multiple conductors electrically isolated from one another, where each of the conductors is electrically connected with a different electrical connection of the strain gauge 465 connected thereto. As understood by those of skill in the art, other configurations may be implemented.
  • In response to movement of diaphragm 450, surround 460 flexes, and the strain gauges 465 connected thereto also flex because of their connection to surround 460.
  • Strain gauges 465 each receive an input electrical signal through the conductor 467 connected thereto and generate an output electrical signal representing the mechanical strain thereof, as understood by those of skill in the art. Accordingly, when the strain gauges 465 flex with the flexing of the surround 460, the output electrical signals represent a flexing state of the surround. Because the flexing state of the surround corresponds with the position of the diaphragm 450, the output electrical signals of the strain gauges 465 correspond with the position of the diaphragm 450.
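The conversion from a gauge's output electrical signal to a strain value is not specified above; as an illustration only, a quarter-bridge readout could be modeled as below. The function name, bridge topology, and default gauge factor are assumptions, not from the source.

```python
def strain_from_bridge(v_out, v_in, gauge_factor=2.0):
    """Estimate strain from a hypothetical quarter Wheatstone bridge.

    For small strains, a quarter bridge gives approximately
        v_out / v_in = (gauge_factor * strain) / 4,
    so the strain is recovered as 4 * v_out / (v_in * gauge_factor).
    """
    return 4.0 * v_out / (v_in * gauge_factor)
```

A 5 mV output on a 10 V excitation with a gauge factor of 2 would correspond to a strain of about 0.001 (1000 microstrain) under this model.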
  • the output electrical signal of the strain gauges 465 is correlated with the position of the diaphragm 450 with a calibration process.
  • the speaker may be driven to the at rest position, for example, as illustrated in FIG. 3, and the output electrical signals of the strain gauges 465 may be measured and stored in a memory as corresponding with the at rest position.
  • the speaker may be driven to the maximally extended position illustrated, for example, in FIG. 6, discussed below, and the output electrical signals of the strain gauges 465 may be measured and stored in a memory as corresponding with the maximally extended position.
  • the speaker may be driven to the minimally extended position illustrated, for example, in FIG. 7, discussed below, and the output electrical signals of the strain gauges 465 may be measured and stored in a memory as corresponding with the minimally extended position.
  • the speaker may be driven to other positions, for example, between the maximally and minimally extended positions, and the output electrical signals of the strain gauges 465 may be measured and stored in a memory as corresponding with those other positions.
  • the calibration process may occur as part of a manufacturing process used to make the speaker 400 or to make audio system 200 having speaker 400.
  • audio system 200 is configured to perform the calibration process, for example, in response to a startup condition or in response to another condition of audio system 200.
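The calibration process described above, in which output signals are stored against known diaphragm positions and intermediate positions are recovered by interpolation, might be sketched as follows. The class and method names are hypothetical, and piecewise-linear interpolation is only one possible mapping.

```python
import bisect

class PositionCalibration:
    """Map a (combined) strain-gauge output signal to diaphragm position
    using stored calibration points and piecewise-linear interpolation."""

    def __init__(self):
        self._points = []  # kept sorted as (signal, position_mm) pairs

    def record(self, signal, position_mm):
        # Store one calibration point, e.g. at rest, maximally extended,
        # minimally extended, or any intermediate driven position.
        bisect.insort(self._points, (signal, position_mm))

    def position(self, signal):
        # Clamp outside the calibrated range, interpolate inside it.
        signals = [s for s, _ in self._points]
        positions = [p for _, p in self._points]
        if signal <= signals[0]:
            return positions[0]
        if signal >= signals[-1]:
            return positions[-1]
        i = bisect.bisect_left(signals, signal)
        s0, s1 = signals[i - 1], signals[i]
        p0, p1 = positions[i - 1], positions[i]
        t = (signal - s0) / (s1 - s0)
        return p0 + t * (p1 - p0)
```

For example, after recording the at rest, maximally extended, and minimally extended points, a signal halfway between the rest and maximum readings maps to half the maximum excursion.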
  • In the illustrated embodiment, a plurality of strain gauges 465 are shown; in other embodiments, another number of strain gauges is used. In some embodiments, the strain gauges 465 are evenly distributed around surround 460, such that the angular spacing between any pair of adjacent strain gauges 465 is identical or substantially identical.
  • the conductors 467 may be electrically connected, for example, to speaker position estimation engine 212 such that the electrical signals generated by the strain gauges 465 are transmitted to the speaker position estimation engine 212.
  • FIG. 5 illustrates a top-down view of spider 430 and a cross-sectional view of a portion of spider 430 according to some embodiments.
  • Spider 430 includes strain gauges 465 and conductors 467. In alternative embodiments, spider 430 does not include strain gauges 465 and conductors 467.
  • the outer portion 432 of spider 430 is configured to be fixed in speaker 400 with respect to the magnet 410, and flexibly connects magnet 410 to the inner portion 434 of spider 430, which is connected to coil and coil form 420. Accordingly, as coil and coil form 420 moves in response to an audio signal, the inner portion 434 of spider 430 correspondingly moves, and the outer portion 432 of spider 430 remains fixed. As a result, spider 430 provides a restorative force resisting movements induced by the signal received by coil and coil form 420.
  • Strain gauges 465 are flexible and are connected to spider 430, for example, as illustrated in the cross-sectional view.
  • each conductor 467 is connected to one of the strain gauges 465, for example, as illustrated in the cross-sectional view.
  • Each conductor 467 may include multiple conductors electrically isolated from one another, where each of the conductors is electrically connected with a different electrical connection of the strain gauge 465 connected thereto. As understood by those of skill in the art, other configurations may be implemented.
  • In response to movement of diaphragm 450, spider 430 flexes, and the strain gauges 465 connected thereto also flex because of their connection to spider 430. Strain gauges 465 each receive an input electrical signal through the conductor 467 connected thereto and generate an output electrical signal representing the mechanical strain thereof, as understood by those of skill in the art. Accordingly, when the strain gauges 465 flex with the flexing of the spider 430, the output electrical signals represent a flexing state of the spider 430. Because the flexing state of the spider 430 corresponds with the position of the diaphragm 450, the output electrical signals of the strain gauges 465 correspond with the position of the diaphragm 450.
  • In the illustrated embodiment, a plurality of strain gauges 465 are shown; in other embodiments, another number of strain gauges is used. In some embodiments, the strain gauges 465 are evenly distributed around spider 430, such that the angular spacing between any pair of adjacent strain gauges 465 is identical or substantially identical.
  • the conductors 467 may be electrically connected, for example, to speaker position estimation engine 212 such that the electrical signals generated by the strain gauges 465 are transmitted to the speaker position estimation engine 212.
  • strain gauges 465 are attached to surround 460, for example, as illustrated in FIG. 4, and no strain gauges are attached to spider 430. In some embodiments of speaker 400, strain gauges 465 are attached to spider 430, for example, as illustrated in FIG. 5, and no strain gauges are attached to surround 460. In some embodiments of speaker 400, strain gauges 465 are attached to surround 460, for example, as illustrated in FIG. 4, and strain gauges 465 are attached to spider 430, for example, as illustrated in FIG. 5.
  • FIG. 6 illustrates speaker 400 maximally extended according to some embodiments.
  • coil and coil form 420 and diaphragm 450 are extended upward in the orientation of the figure, for example, as a result of coil and coil form 420 receiving an electrical signal of a first polarity from a speaker driver.
  • any strain gauges 465 attached to the surround 460 provide output electrical signals indicating the upward extended position of diaphragm 450.
  • any strain gauges 465 attached to the spider 430 provide output electrical signals indicating the upward extended position of diaphragm 450.
  • FIG. 7 illustrates speaker 400 minimally extended according to some embodiments.
  • coil and coil form 420 and diaphragm 450 are extended downward in the orientation of the figure, for example, as a result of coil and coil form 420 receiving an electrical signal of a second polarity opposite the first polarity from a speaker driver.
  • any strain gauges 465 attached to the surround 460 provide output electrical signals indicating the downward extended position of diaphragm 450.
  • spider 430 is flexed, for example, as illustrated in the figure. Accordingly, any strain gauges 465 attached to the spider 430 provide output electrical signals indicating the downward extended position of diaphragm 450.
  • FIG. 8 illustrates an alternative configuration of speaker 400 at rest according to some embodiments.
  • the illustrated embodiment of speaker 400 includes features similar or identical to those of the embodiment of speaker 400 illustrated in FIG. 3. Accordingly, the functionality described with reference to FIG. 3 is similar or identical to the functionality of speaker 400 illustrated in FIG. 8.
  • speaker 400 of FIG. 8 additionally includes cavity 475 in magnet pole 415 and pressure transducer 480 within cavity 475.
  • Pressure transducer 480 receives an input electrical signal through a conductor (not shown) connected thereto and generates an output electrical signal representing a pressure sensed thereby, as understood by those of skill in the art. Accordingly, in response to the pressure changing in response to the movement of coil and coil form 420 and diaphragm 450, pressure transducer 480 senses the changed pressure and correspondingly changes the output electrical signal. Accordingly, the output electrical signal corresponds with the pressure sensed by pressure transducer 480. In addition, because the position state of the coil and coil form 420 and diaphragm 450 corresponds with the sensed pressure, the output electrical signal of the pressure transducer 480 corresponds with the position of coil and coil form 420 and diaphragm 450.
  • the conductor may pass through an opening in any of the illustrated elements.
  • the conductor passes through a hole (not shown) in the magnetic pole 415.
  • the conductor may be electrically connected, for example, to speaker position estimation engine 212 such that the electrical signals generated by the pressure transducer 480 are transmitted to the speaker position estimation engine 212.
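As one illustrative model only, if the cavity behind the diaphragm is approximately sealed, the sensed pressure could be related to diaphragm displacement through Boyle's law. The formula, parameter names, and the sealed-cavity and isothermal assumptions are not from the source.

```python
def diaphragm_position_from_pressure(p_sensed, p_rest,
                                     cavity_volume_mm3, effective_area_mm2):
    """Estimate diaphragm displacement (mm) from sensed cavity pressure.

    Assumes an approximately sealed cavity and isothermal compression
    (Boyle's law): p_rest * v_rest = p_sensed * v_sensed.  A positive
    result means the diaphragm moved outward (cavity expanded, pressure
    dropped); a negative result means it moved inward.
    """
    v_sensed = p_rest * cavity_volume_mm3 / p_sensed
    return (v_sensed - cavity_volume_mm3) / effective_area_mm2
```

At the rest pressure the estimate is zero; halving the cavity pressure in this model corresponds to the cavity doubling in volume, i.e. an outward displacement of one cavity-volume worth of travel over the effective area.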
  • FIG. 9 illustrates speaker 400 maximally extended according to some embodiments.
  • coil and coil form 420 and diaphragm 450 are extended upward in the orientation of the figure, for example, as a result of coil and coil form 420 receiving an electrical signal of a first polarity from a speaker driver.
  • the pressure transducer provides output electrical signals indicating the upward extended position of diaphragm 450.
  • FIG. 10 illustrates speaker 400 minimally extended according to some embodiments.
  • coil and coil form 420 and diaphragm 450 are extended downward in the orientation of the figure, for example, as a result of coil and coil form 420 receiving an electrical signal of a second polarity opposite the first polarity from a speaker driver.
  • Because the output electrical signals indicate pressure, and the pressure corresponds with the position of the diaphragm 450, the output electrical signals indicate the position of the diaphragm 450 throughout the range of movement of the diaphragm 450.
  • FIG. 11 illustrates an alternative configuration of speaker 400 at rest according to some embodiments.
  • the illustrated embodiment of speaker 400 includes features similar or identical to those of the embodiment of speaker 400 illustrated in FIG. 3. Accordingly, the functionality described with reference to FIG. 3 is similar or identical to the functionality of speaker 400 illustrated in FIG. 11.
  • speaker 400 of FIG. 11 additionally includes cavity 475 in magnet pole 415, hole 470 extending from cavity 475 through magnet pole 415, and pressure transducer 480 located at the end of the hole 470 on the lower side of magnet pole 415 in the orientation illustrated.
  • Pressure transducer 480 receives an input electrical signal through a conductor (not shown) connected thereto and generates an output electrical signal representing a pressure sensed thereby, as understood by those of skill in the art. Accordingly, in response to the pressure changing in response to the movement of coil and coil form 420 and diaphragm 450, pressure transducer 480 senses the changed pressure and correspondingly changes the output electrical signal. Accordingly, the output electrical signal corresponds with the pressure sensed by pressure transducer 480. In addition, because the position state of the coil and coil form 420 and diaphragm 450 corresponds with the sensed pressure, the output electrical signal of the pressure transducer 480 corresponds with the position of coil and coil form 420 and diaphragm 450.
  • In the device 201 illustrated in FIG. 1, the conductor may be electrically connected, for example, to speaker position estimation engine 212 such that the electrical signals generated by the pressure transducer 480 are transmitted to the speaker position estimation engine 212.
  • FIG. 12 illustrates speaker 400 maximally extended according to some embodiments.
  • coil and coil form 420 and diaphragm 450 are extended upward in the orientation of the figure, for example, as a result of coil and coil form 420 receiving an electrical signal of a first polarity from a speaker driver.
  • FIG. 13 illustrates speaker 400 minimally extended according to some embodiments.
  • coil and coil form 420 and diaphragm 450 are extended downward in the orientation of the figure, for example, as a result of coil and coil form 420 receiving an electrical signal of a second polarity opposite the first polarity from a speaker driver.
  • Because the output electrical signals indicate pressure, and the pressure corresponds with the position of the diaphragm 450, the output electrical signals indicate the position of the diaphragm 450 throughout the range of movement of the diaphragm 450.
  • In some embodiments, both a pressure transducer 480, for example, as described with reference to FIGs. 8-13, and one or more strain gauges 465, for example, as described with reference to FIGs. 3-7, are used.
  • the sense data transmitted by the sensors may be combined by a speaker position estimation engine, for example, by averaging the transmitted sense data.
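The averaging mentioned above could be sketched as a simple, optionally weighted, mean of per-sensor position estimates. The function name and the weighting option are illustrative, not from the source.

```python
def fuse_position_estimates(estimates, weights=None):
    """Combine per-sensor position estimates (e.g., from several strain
    gauges and a pressure transducer) into one value by averaging.

    With no weights, all sensors count equally; weights allow a more
    trusted sensor to dominate the combined estimate.
    """
    if weights is None:
        weights = [1.0] * len(estimates)
    total_weight = sum(weights)
    return sum(e * w for e, w in zip(estimates, weights)) / total_weight
```

For instance, equally weighting estimates of 1.0 mm and 3.0 mm yields 2.0 mm, while weighting the first estimate three times as heavily pulls the result toward it.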
  • FIG. 14 illustrates a method of using an audio system, where the audio system has, for example, a processing module, a speaker driver, and a speaker with an audio generation element.
  • The speaker may sense a position of the audio generation element.
  • The position of the audio generation element may be sensed using techniques similar or identical to those described with reference to the embodiments of FIGs. 3-10. Additionally or alternatively, the position of the audio generation element may be sensed using techniques similar or identical to those described with reference to the embodiments of FIGs. 11-13. In some embodiments, other techniques are used to sense the position of the audio generation element.
  • The speaker may generate a position state signal indicating the sensed position.
  • The position state signal may be generated using techniques similar or identical to those described with reference to the embodiments of FIGs. 3-10. Additionally or alternatively, the position state signal may be generated using techniques similar or identical to those described with reference to the embodiments of FIGs. 11-13. In some embodiments, other techniques are used to generate the position state signal.
  • The processing module may receive audio data information.
  • The processing module may receive the audio data information, for example, using techniques similar or identical to those described with reference to audio system 200. In some embodiments, other techniques are used to receive the audio data information.
  • The processing module may receive the position state signal.
  • The processing module may receive the position state signal, for example, using techniques similar or identical to those described with reference to audio system 200. In some embodiments, other techniques are used to receive the position state signal.
  • The processing module may generate an audio signal based at least partly on the received audio data information and on the received position state signal.
  • The processing module may generate the audio signal using, for example, techniques similar or identical to those described with reference to audio system 200. In some embodiments, other techniques are used to generate the audio signal.
  • The speaker driver may generate a driving signal based at least partly on the received audio signal.
  • The speaker driver may generate the driving signal using, for example, techniques similar or identical to those described with reference to audio system 200. In some embodiments, other techniques are used to generate the driving signal.
  • The speaker may receive the driving signal from the speaker driver.
  • The speaker may receive the driving signal using, for example, techniques similar or identical to those described with reference to audio system 200. In some embodiments, other techniques are used to receive the driving signal.
  • The speaker may generate compression sound waves corresponding with the driving signal.
  • The speaker may generate the compression sound waves using, for example, techniques similar or identical to those described with reference to audio system 200. In some embodiments, other techniques are used to generate the compression sound waves.
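The sense, process, and drive steps above can be sketched as a simple loop. The names, the excursion limit, and the attenuation rule below are assumptions chosen only to make the flow concrete; the method does not prescribe how the position state signal influences the generated audio signal.

```python
# Illustrative sketch of the FIG. 14 flow: the processing module attenuates
# the audio signal as the sensed excursion nears an (assumed) limit.

MAX_EXCURSION_MM = 1.0  # assumed mechanical limit of the audio generation element

def generate_audio_signal(audio_sample, sensed_position_mm):
    """Processing-module step: scale the sample by the remaining headroom."""
    headroom = max(0.0, 1.0 - abs(sensed_position_mm) / MAX_EXCURSION_MM)
    return audio_sample * headroom

def run_audio_system(audio_data, sense_position):
    """Sense the position before each sample, then generate the output."""
    output = []
    for sample in audio_data:
        position = sense_position()  # speaker senses position, reports a state signal
        output.append(generate_audio_signal(sample, position))  # processed sample sent to driver
    return output
```

For example, `run_audio_system([1.0, 1.0], lambda: 0.5)` returns `[0.5, 0.5]`: with the diaphragm sensed at half of the assumed excursion limit, each sample is scaled to half amplitude.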

Landscapes

  • Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Acoustics & Sound (AREA)
  • Signal Processing (AREA)
  • Circuit For Audible Band Transducer (AREA)

Abstract

An audio system is disclosed. The audio system includes an enclosure, a speaker within the enclosure that includes an audio generation element and is configured to generate a position state signal indicating a position of the audio generation element, a processing module configured to receive audio data information, to receive the position state signal, and to generate an audio signal based at least in part on the received audio data information and the received position state signal, and a speaker driver communicatively coupled to the processing module and configured to receive the audio signal and to generate a driving signal in response to the audio signal. The speaker is communicatively coupled to the speaker driver to receive the driving signal and is configured to generate compression sound waves in response to the driving signal.
PCT/US2023/020718 2023-05-02 2023-05-02 Speaker position detection Pending WO2024228700A1 (fr)

Priority Applications (1)

Application Number Priority Date Filing Date Title
PCT/US2023/020718 WO2024228700A1 (fr) 2023-05-02 2023-05-02 Speaker position detection


Publications (1)

Publication Number Publication Date
WO2024228700A1 true WO2024228700A1 (fr) 2024-11-07

Family

ID=86604454

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2023/020718 Pending WO2024228700A1 (fr) Speaker position detection

Country Status (1)

Country Link
WO (1) WO2024228700A1 (fr)

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20160302018A1 (en) * 2015-04-09 2016-10-13 Audera Acoustics Inc. Acoustic transducer systems with position sensing
WO2022183276A1 (fr) * 2021-03-01 2022-09-09 Audera Acoustics Inc. Systèmes de transducteurs acoustiques et procédés de fonctionnement de systèmes de transducteurs acoustiques pour optimiser les performances d'intervention



Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 23726776

Country of ref document: EP

Kind code of ref document: A1