WO2024048816A1 - Device and method for transmitting and receiving a signal in a wireless communication system - Google Patents
- Publication number
- WO2024048816A1 (PCT/KR2022/013085)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- subtask
- data
- layer
- information
- task
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Ceased
Classifications
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/20—Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F40/00—Handling natural language data
- G06F40/30—Semantic analysis
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04W—WIRELESS COMMUNICATION NETWORKS
- H04W76/00—Connection management
- H04W76/10—Connection setup
Definitions
- the following description is about a wireless communication system, and relates to an apparatus and method for transmitting and receiving signals in a wireless communication system.
- a method and device for semantic source coding of signals for transmission and reception in semantic communication can be provided.
- Wireless access systems are being widely deployed to provide various types of communication services such as voice and data.
- a wireless access system is a multiple access system that can support communication with multiple users by sharing available system resources (bandwidth, transmission power, etc.).
- multiple access systems include code division multiple access (CDMA) systems, frequency division multiple access (FDMA) systems, time division multiple access (TDMA) systems, orthogonal frequency division multiple access (OFDMA) systems, single carrier frequency division multiple access (SC-FDMA) systems, etc.
- enhanced mobile broadband (eMBB) communication technology is being proposed as an improvement over the existing radio access technology (RAT).
- RAT radio access technology
- a communication system that takes into account reliability- and latency-sensitive services/UEs (user equipment), as well as mMTC (massive machine type communications), which connects multiple devices and objects to provide a variety of services anytime and anywhere, is being proposed.
- mMTC massive machine type communications
- a method of operating a first device in a wireless communication system includes receiving a synchronization signal from a second device, performing a synchronization procedure with the second device based on the synchronization signal to establish a connection, receiving control information and first data from the second device, and performing a task based on the first data, wherein the control information includes semantic control information (SCI), and the task can be performed through the first data, the SCI, and shared information.
- SCI semantic control information
- a method of operating a second device in a wireless communication system includes transmitting a synchronization signal to the first device, performing a synchronization procedure with the first device based on the synchronization signal to establish a connection, and transmitting control information and first data to the first device, wherein the control information includes semantic control information (SCI), the first data is used to perform a task, and the task can be performed through the first data, the SCI, and shared information.
- a first device in a wireless communication system includes a transceiver and a processor connected to the transceiver, wherein the processor receives a synchronization signal from the second device, performs a synchronization procedure with the second device based on the synchronization signal to establish a connection, receives control information and first data from the second device, and performs control to perform a task based on the first data, wherein the control information includes semantic control information (SCI), and the task can be performed through the first data, the SCI, and shared information.
- a second device in a wireless communication system includes a transceiver and a processor connected to the transceiver, wherein the processor transmits a synchronization signal to the first device, performs a synchronization procedure with the first device based on the synchronization signal to establish a connection, and transmits control information and first data to the first device, wherein the control information includes semantic control information (SCI), the first data is used to perform a task, and the task can be performed through the first data, the SCI, and shared information.
- the at least one processor receives a synchronization signal from the second device, performs a synchronization procedure with the second device based on the synchronization signal to establish a connection, receives control information and first data from the second device, and performs control to perform a task based on the first data, wherein the control information includes semantic control information (SCI), and the task can be performed through the first data, the SCI, and shared information.
- At least one instruction includes: receiving a synchronization signal from the second device, performing a synchronization procedure with the second device based on the synchronization signal to establish a connection, receiving control information and first data from the second device, and performing control to perform a task based on the first data, wherein the control information includes semantic control information (SCI), and the task can be performed through the first data, the SCI, and shared information.
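The claimed device operations above (synchronization, connection establishment, delivery of SCI plus first data, then task execution) can be sketched as a toy message exchange. All class, field, and value names below (`SecondDevice`, `nn_model`, `"sync"`, etc.) are illustrative assumptions, not identifiers from the disclosure:

```python
from dataclasses import dataclass

@dataclass
class SCI:
    """Semantic control information (field names are assumptions)."""
    nn_model: str       # NN model selection method
    task_type: str      # type of task to perform
    layer_depth: int    # depth of layers needed for this task type

class SecondDevice:
    """Transmitting device: sends a sync signal, then SCI and first data."""
    def __init__(self, shared_information):
        self.shared_information = shared_information

    def send_sync_signal(self):
        return "sync"

    def send_control_and_data(self):
        sci = SCI(nn_model="model-A", task_type="classification", layer_depth=3)
        first_data = [0.1, 0.2, 0.3]   # stand-in for extracted feature information
        return sci, first_data

class FirstDevice:
    """Receiving device: synchronizes, connects, then performs the task."""
    def __init__(self, shared_information):
        self.shared_information = shared_information
        self.connected = False

    def synchronize(self, sync_signal):
        # Synchronization procedure based on the received sync signal.
        self.connected = (sync_signal == "sync")

    def perform_task(self, sci, first_data):
        assert self.connected, "connection must be established first"
        # The task is performed through the first data, SCI, and shared info.
        return {
            "task": sci.task_type,
            "used_layers": sci.layer_depth,
            "inputs": len(first_data),
            "knowledge": self.shared_information,
        }

shared = "background-knowledge"   # background knowledge shared by both devices
tx, rx = SecondDevice(shared), FirstDevice(shared)
rx.synchronize(tx.send_sync_signal())
sci, data = tx.send_control_and_data()
result = rx.perform_task(sci, data)
```

The point of the sketch is the ordering: the connection must exist before control information and data are exchanged, and the task is a function of all three of first data, SCI, and shared information.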
- the shared information is background knowledge shared by the first device and the second device based on the semantic layer, and the first data may be feature information extracted from the second data for the task based on the semantic layer.
- a task is composed of at least one subtask, and the first data, which is feature information, may be extracted from the second data for the task through semantic encoding based on the shared information.
- concatenated subtask layers are configured based on the task, and the shared information may include a subtask state transition matrix set indicating relationship information between the output states of each subtask layer of the concatenated subtask layers.
- the extracted first data may be composed of feature vectors extracted from consecutive subtask layers.
- through the first subtask state transition matrix of the first subtask layer, correlation information between the output state of the second subtask layer, which is the subtask layer preceding the first subtask layer, and the output state of the first subtask layer may be confirmed, and the output state of the second subtask layer may be confirmed.
- the first feature vector of the first subtask layer may be extracted based on the second feature vector of the second subtask layer and the first data set of the first subtask layer.
- the output state of the second subtask layer and the second feature vector may be determined based on at least one subtask layer located before the second subtask layer.
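As a rough illustration of the subtask state transition matrix set described above, the sketch below looks up, for a given output state of the previous subtask layer, the most correlated output state of the current layer. The matrix values, layer names, and state indices are invented for illustration and are not from the disclosure:

```python
# Toy subtask state transition matrix set: for each subtask layer, a matrix M
# where M[prev_state][cur_state] indicates how strongly the previous layer's
# output state correlates with the current layer's output state.
# Here layer 1 has 2 output states and layer 2 has 3 output states.
transition_matrix_set = {
    "layer2": [  # rows: layer-1 output state, cols: layer-2 output state
        [0.7, 0.2, 0.1],
        [0.1, 0.3, 0.6],
    ],
}

def most_correlated_state(matrix, prev_state):
    """Given the previous subtask layer's output state, pick the current
    layer's output state with the highest correlation value."""
    row = matrix[prev_state]
    return max(range(len(row)), key=row.__getitem__)

# If layer 1 produced output state 0, the most correlated layer-2 state is 0;
# if it produced state 1, the most correlated layer-2 state is 2.
s0 = most_correlated_state(transition_matrix_set["layer2"], 0)
s1 = most_correlated_state(transition_matrix_set["layer2"], 1)
```

This mirrors the text's claim that the output state of a subtask layer is confirmed from the output state of the layer before it via the transition matrix.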
- consecutive subtask layers form a concatenated sub-DNN (deep neural network) structure, and each feature vector in each of the consecutive subtask layers can be extracted through the corresponding sub-DNN.
- DNN deep neural network
- each of the decoupled subtasks may use a different sub DNN.
- when a task is performed through the first data received based on the shared information, the task may be performed through semantic decoding in the concatenated subtask layers.
- consecutive subtask layers form a concatenated sub-DNN (deep neural network) structure, and semantic decoding may be performed in each of the consecutive subtask layers through the corresponding sub-DNN.
- when semantic decoding is performed in a first subtask layer among the consecutive subtask layers, semantic decoding may be performed through the semantic decoding output information of the second subtask layer, which is the subtask layer preceding the first subtask layer, and the first feature vector received for the first subtask layer.
- each decoupled subtask may use a different sub DNN.
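A minimal stand-in for the concatenated sub-DNN encoder described above: each decoupled subtask layer gets its own small "sub-DNN" (here just an affine map with ReLU, purely illustrative) that consumes the previous layer's feature vector and emits its own, so that semantic encoding yields one feature vector per consecutive subtask layer. A decoder would chain per-layer sub-DNNs over the received feature vectors in the same fashion. Weights and dimensions are assumptions, not from the disclosure:

```python
def make_sub_dnn(weight, bias):
    """Build a tiny per-subtask 'sub-DNN': one affine map followed by ReLU."""
    def sub_dnn(feature_vector):
        return [max(0.0, weight * x + bias) for x in feature_vector]
    return sub_dnn

# One independent sub-DNN per decoupled subtask layer (illustrative weights).
sub_dnns = [make_sub_dnn(2.0, 0.0), make_sub_dnn(1.0, -0.5), make_sub_dnn(0.5, 0.1)]

def semantic_encode(second_data):
    """Extract one feature vector per consecutive subtask layer: each layer's
    sub-DNN takes the previous layer's feature vector as input."""
    feature_vectors = []
    x = second_data
    for sub_dnn in sub_dnns:      # consecutive subtask layers
        x = sub_dnn(x)            # feature vector of this subtask layer
        feature_vectors.append(x)
    return feature_vectors

features = semantic_encode([1.0, -1.0, 0.5])   # toy "second data"
```

Because each subtask uses a separate small network, any one sub-DNN can be sized (or swapped) for its own subtask, which is the advantage the disclosure claims for decoupled subtasks.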
- SCI is control information for configuring a semantic encoder and a semantic decoder, and may include at least one piece of information about a neural network (NN) model selection method, a task type, and the depth of the layers required to perform the corresponding type of task.
- NN neural network
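The SCI fields named above (NN model selection method, task type, required layer depth) could be carried in a structure like the following sketch; the field names, function name, and example values are assumptions for illustration only:

```python
from dataclasses import dataclass

@dataclass
class SemanticControlInformation:
    nn_model_selection: str   # how the NN model is selected
    task_type: str            # type of task to perform
    layer_depth: int          # depth of layers needed for this task type

def configure_codec(sci: SemanticControlInformation):
    """Derive a (hypothetical) semantic encoder/decoder configuration from SCI."""
    return {
        "model": sci.nn_model_selection,
        "task": sci.task_type,
        "num_subtask_layers": sci.layer_depth,
    }

cfg = configure_codec(
    SemanticControlInformation("lightweight-cnn", "image-classification", 4)
)
```

The design point is that the receiver does not need the full model description: the SCI only needs enough information for both ends to instantiate matching encoder/decoder settings on top of their shared background knowledge.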
- the present disclosure can improve communication performance by performing semantic source coding using background knowledge in a wireless communication system.
- the present disclosure can provide the advantage of utilizing a small deep neural network (DNN) of an optimal size for each decoupled subtask in a wireless communication system.
- FIG. 1 is a diagram showing an example of a communication system applicable to the present disclosure.
- Figure 2 is a diagram showing an example of a wireless device applicable to the present disclosure.
- Figure 3 is a diagram showing another example of a wireless device applicable to the present disclosure.
- Figure 4 is a diagram showing an example of a portable device applicable to the present disclosure.
- Figure 5 is a diagram showing an example of a vehicle or autonomous vehicle applicable to the present disclosure.
- Figure 6 is a diagram showing an example of AI (Artificial Intelligence) applicable to the present disclosure.
- AI Artificial Intelligence
- Figure 7 is a diagram showing a method of processing a transmission signal applicable to the present disclosure.
- Figure 8 is a diagram showing an example of a communication structure that can be provided in a 6G system applicable to the present disclosure.
- Figure 9 is a diagram showing an electromagnetic spectrum applicable to the present disclosure.
- FIG. 10 is a diagram showing a THz communication method applicable to the present disclosure.
- Figure 11(a) shows an example of existing communication systems according to an embodiment of the present disclosure.
- Figure 11(b) shows an example of communication systems according to an embodiment of the present disclosure.
- Figure 12 shows an example of a protocol stack for semantic communication according to an embodiment of the present disclosure.
- Figure 13 shows an example of a subtask matrix set according to an embodiment of the present disclosure.
- Figure 14 shows an example of a semantic encoder and decoder structure based on background knowledge according to an embodiment of the present disclosure.
- Figure 15 shows an example of a structure of concatenated layers according to an embodiment of the present disclosure.
- Figure 16 shows an example of a semantic decoder according to an embodiment of the present disclosure.
- Figure 17 shows an example of a semantic encoding and decoding procedure according to an embodiment of the present disclosure.
- each component or feature may be considered optional unless explicitly stated otherwise.
- Each component or feature may be implemented in a form that is not combined with other components or features. Additionally, some components and/or features may be combined to configure an embodiment of the present disclosure. The order of operations described in embodiments of the present disclosure may be changed. Some configurations or features of one embodiment may be included in another embodiment or replaced with corresponding configurations or features of another embodiment.
- the base station is meant as a terminal node of the network that directly communicates with the mobile station. Certain operations described in this document as being performed by the base station may, in some cases, be performed by an upper node of the base station.
- 'base station' can be replaced by terms such as fixed station, Node B, eNB (eNode B), gNB (gNode B), ng-eNB, advanced base station (ABS), or access point.
- a terminal can be replaced with terms such as user equipment (UE), mobile station (MS), subscriber station (SS), mobile subscriber station (MSS), mobile terminal, or advanced mobile station (AMS).
- UE user equipment
- MS mobile station
- SS subscriber station
- MSS mobile subscriber station
- AMS advanced mobile station
- the transmitting end refers to a fixed and/or mobile node that provides a data service or a voice service
- the receiving end refers to a fixed and/or mobile node that receives a data service or a voice service. Therefore, in the case of uplink, the mobile station can be the transmitting end and the base station can be the receiving end. Likewise, in the case of downlink, the mobile station can be the receiving end and the base station can be the transmitting end.
- Embodiments of the present disclosure may be supported by at least one standard document disclosed in at least one of the wireless access systems, such as the IEEE 802.xx system, the 3GPP (3rd Generation Partnership Project) system, the 3GPP LTE (Long Term Evolution) system, the 3GPP 5G (5th generation) NR (New Radio) system, and the 3GPP2 system. In particular, embodiments of the present disclosure may be supported by the 3GPP TS (technical specification) 38.211, 3GPP TS 38.212, 3GPP TS 38.213, 3GPP TS 38.321, and 3GPP TS 38.331 documents.
- 3GPP TS technical specification
- embodiments of the present disclosure can be applied to other wireless access systems and are not limited to the above-described systems. As an example, it may be applicable to systems applied after the 3GPP 5G NR system and is not limited to a specific system.
- CDMA code division multiple access
- FDMA frequency division multiple access
- TDMA time division multiple access
- OFDMA orthogonal frequency division multiple access
- SC-FDMA single carrier frequency division multiple access
- LTE refers to 3GPP TS 36.xxx Release 8 and later technology.
- LTE technology after 3GPP TS 36.xxx Release 10 may be referred to as LTE-A
- LTE technology after 3GPP TS 36.xxx Release 13 may be referred to as LTE-A pro.
- 3GPP NR may refer to technology after TS 38.xxx Release 15.
- 3GPP 6G may refer to technology after TS Release 17 and/or Release 18. “xxx” refers to the standard document detail number.
- LTE/NR/6G can be collectively referred to as a 3GPP system.
- FIG. 1 is a diagram illustrating an example of a communication system applied to the present disclosure.
- the communication system 100 applied to the present disclosure includes a wireless device, a base station, and a network.
- a wireless device refers to a device that performs communication using wireless access technology (e.g., 5G NR, LTE) and may be referred to as a communication/wireless/5G device.
- wireless devices include robots (100a), vehicles (100b-1, 100b-2), extended reality (XR) devices (100c), hand-held devices (100d), home appliances (100e), IoT (Internet of Things) devices (100f), and AI (artificial intelligence) devices/servers (100g).
- vehicles may include vehicles equipped with wireless communication functions, autonomous vehicles, vehicles capable of inter-vehicle communication, etc.
- the vehicles 100b-1 and 100b-2 may include an unmanned aerial vehicle (UAV) (eg, a drone).
- UAV unmanned aerial vehicle
- the XR device 100c includes augmented reality (AR)/virtual reality (VR)/mixed reality (MR) devices and can be implemented in the form of a head-mounted device (HMD), a head-up display (HUD) installed in a vehicle, a television, a smartphone, a computer, a wearable device, a home appliance, digital signage, a vehicle, a robot, etc.
- the mobile device 100d may include a smartphone, smart pad, wearable device (eg, smart watch, smart glasses), computer (eg, laptop, etc.), etc.
- Home appliances 100e may include a TV, refrigerator, washing machine, etc.
- IoT device 100f may include sensors, smart meters, etc.
- the base station 120 and the network 130 may also be implemented as wireless devices, and a specific wireless device 120a may operate as a base station/network node for other wireless devices.
- Wireless devices 100a to 100f may be connected to the network 130 through the base station 120.
- AI technology may be applied to the wireless devices 100a to 100f, and the wireless devices 100a to 100f may be connected to the AI server 100g through the network 130.
- the network 130 may be configured using a 3G network, 4G (eg, LTE) network, or 5G (eg, NR) network.
- Wireless devices 100a to 100f may communicate with each other through the base station 120/network 130, but may also communicate directly (e.g., sidelink communication) without going through the base station 120/network 130.
- vehicles 100b-1 and 100b-2 may communicate directly (eg, vehicle to vehicle (V2V)/vehicle to everything (V2X) communication).
- the IoT device 100f may communicate directly with other IoT devices (eg, sensor) or other wireless devices 100a to 100f.
- Wireless communication/connection may be established between wireless device and wireless device (100a to 100f), between wireless device (100a to 100f) and base station (120), and between base station (120) and base station (120).
- wireless communication/connection can be achieved through wireless access technology (e.g., 5G NR) in various ways, such as uplink/downlink communication 150a, sidelink communication 150b (or D2D communication), and inter-base station communication 150c (e.g., relay, integrated access backhaul (IAB)).
- IAB integrated access backhaul
- through the wireless communication/connections 150a, 150b, and 150c, a wireless device and a base station/wireless device, and a base station and a base station, can transmit/receive wireless signals to each other.
- wireless communication/connection 150a, 150b, and 150c may transmit/receive signals through various physical channels.
- to transmit/receive wireless signals, at least some of various configuration information setting processes, various signal processing processes (e.g., channel encoding/decoding, modulation/demodulation, resource mapping/demapping, etc.), and resource allocation processes may be performed.
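As a toy illustration of the signal processing steps named above, the sketch below chains a repetition code (standing in for channel coding), BPSK mapping (standing in for modulation), and a simple grid fill (standing in for resource mapping). Actual NR processing (LDPC/polar coding, QAM, OFDM resource grids) is far more involved; everything here is a simplified assumption:

```python
def channel_encode(bits):
    # Rate-1/3 repetition code as a stand-in for channel coding.
    return [b for b in bits for _ in range(3)]

def modulate(bits):
    # BPSK mapping: bit 0 -> +1.0, bit 1 -> -1.0.
    return [1.0 if b == 0 else -1.0 for b in bits]

def map_resources(symbols, num_subcarriers=4):
    # Fill a toy "resource grid" row by row, num_subcarriers symbols per row.
    return [symbols[i:i + num_subcarriers]
            for i in range(0, len(symbols), num_subcarriers)]

# Encode two information bits, modulate them, and map onto the grid.
grid = map_resources(modulate(channel_encode([1, 0])))
```

The receive side would apply the inverse steps in reverse order (resource demapping, demodulation, channel decoding), matching the encode/decode pairs listed in the text.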
- FIG. 2 is a diagram illustrating an example of a wireless device applicable to the present disclosure.
- the first wireless device 200a and the second wireless device 200b can transmit and receive wireless signals through various wireless access technologies (eg, LTE, NR).
- {first wireless device 200a, second wireless device 200b} may correspond to {wireless device 100x, base station 120} and/or {wireless device 100x, wireless device 100x} of FIG. 1.
- the first wireless device 200a includes one or more processors 202a and one or more memories 204a, and may further include one or more transceivers 206a and/or one or more antennas 208a.
- Processor 202a controls memory 204a and/or transceiver 206a and may be configured to implement the descriptions, functions, procedures, suggestions, methods and/or operational flowcharts disclosed herein.
- the processor 202a may process information in the memory 204a to generate first information/signal and then transmit a wireless signal including the first information/signal through the transceiver 206a.
- the processor 202a may receive a wireless signal including the second information/signal through the transceiver 206a and then store information obtained from signal processing of the second information/signal in the memory 204a.
- the memory 204a may be connected to the processor 202a and may store various information related to the operation of the processor 202a.
- memory 204a may store software code containing instructions for performing some or all of the processes controlled by processor 202a, or for performing the descriptions, functions, procedures, suggestions, methods, and/or operational flowcharts disclosed herein.
- the processor 202a and the memory 204a may be part of a communication modem/circuit/chip designed to implement wireless communication technology (eg, LTE, NR).
- Transceiver 206a may be coupled to processor 202a and may transmit and/or receive wireless signals via one or more antennas 208a.
- Transceiver 206a may include a transmitter and/or receiver.
- the transceiver 206a may be used interchangeably with a radio frequency (RF) unit.
- RF radio frequency
- a wireless device may mean a communication modem/circuit/chip.
- the second wireless device 200b includes one or more processors 202b, one or more memories 204b, and may further include one or more transceivers 206b and/or one or more antennas 208b.
- Processor 202b controls memory 204b and/or transceiver 206b and may be configured to implement the descriptions, functions, procedures, suggestions, methods and/or operational flowcharts disclosed herein.
- the processor 202b may process information in the memory 204b to generate third information/signal and then transmit a wireless signal including the third information/signal through the transceiver 206b.
- the processor 202b may receive a wireless signal including the fourth information/signal through the transceiver 206b and then store information obtained from signal processing of the fourth information/signal in the memory 204b.
- the memory 204b may be connected to the processor 202b and may store various information related to the operation of the processor 202b. For example, memory 204b may store software code containing instructions for performing some or all of the processes controlled by processor 202b, or for performing the descriptions, functions, procedures, suggestions, methods, and/or operational flowcharts disclosed herein.
- the processor 202b and the memory 204b may be part of a communication modem/circuit/chip designed to implement wireless communication technology (eg, LTE, NR).
- Transceiver 206b may be coupled to processor 202b and may transmit and/or receive wireless signals via one or more antennas 208b.
- the transceiver 206b may include a transmitter and/or a receiver.
- the transceiver 206b may be used interchangeably with an RF unit.
- a wireless device may mean a communication modem/circuit/chip.
- one or more protocol layers may be implemented by one or more processors 202a and 202b.
- one or more processors 202a and 202b may implement one or more layers (e.g., physical (PHY), media access control (MAC), radio link control (RLC), packet data convergence protocol (PDCP), and radio resource control (RRC)) and functional layers such as the service data adaptation protocol (SDAP).
- SDAP service data adaptation protocol
- One or more processors 202a and 202b may generate one or more protocol data units (PDUs) and/or one or more service data units (SDUs) according to the descriptions, functions, procedures, suggestions, methods, and/or operational flowcharts disclosed in this document.
- One or more processors 202a and 202b may generate messages, control information, data or information according to the descriptions, functions, procedures, suggestions, methods and/or operational flowcharts disclosed in this document.
- One or more processors 202a and 202b may generate signals (e.g., baseband signals) containing PDUs, SDUs, messages, control information, data, or information according to the functions, procedures, proposals, and/or methods disclosed herein, and provide them to one or more transceivers 206a and 206b.
- One or more processors 202a and 202b may receive signals (e.g., baseband signals) from one or more transceivers 206a and 206b and obtain PDUs, SDUs, messages, control information, data, or information according to the descriptions, functions, procedures, suggestions, methods, and/or operational flowcharts disclosed herein.
- One or more processors 202a, 202b may be referred to as a controller, microcontroller, microprocessor, or microcomputer.
- One or more processors 202a and 202b may be implemented by hardware, firmware, software, or a combination thereof. As an example, one or more processors 202a and 202b may include at least one of application specific integrated circuits (ASICs), digital signal processors (DSPs), digital signal processing devices (DSPDs), programmable logic devices (PLDs), and field programmable gate arrays (FPGAs).
- the descriptions, functions, procedures, suggestions, methods and/or operational flowcharts disclosed in this document may be implemented using firmware or software, and the firmware or software may be implemented to include modules, procedures, functions, etc.
- Firmware or software configured to perform the descriptions, functions, procedures, suggestions, methods, and/or operation flowcharts disclosed in this document may be included in one or more processors 202a and 202b, or stored in one or more memories 204a and 204b and driven by one or more processors 202a and 202b.
- the descriptions, functions, procedures, suggestions, methods and/or operational flowcharts disclosed in this document may be implemented using firmware or software in the form of codes, instructions and/or sets of instructions.
- One or more memories 204a and 204b may be connected to one or more processors 202a and 202b and may store various types of data, signals, messages, information, programs, codes, instructions and/or commands.
- One or more memories 204a and 204b may be composed of read only memory (ROM), random access memory (RAM), erasable programmable read only memory (EPROM), flash memory, hard drives, registers, cache memory, computer-readable storage media, and/or a combination thereof.
- One or more memories 204a and 204b may be located internal to and/or external to one or more processors 202a and 202b. Additionally, one or more memories 204a and 204b may be connected to one or more processors 202a and 202b through various technologies, such as wired or wireless connections.
- One or more transceivers 206a and 206b may transmit user data, control information, wireless signals/channels, etc., mentioned in the methods and/or operation flowcharts of this document, to one or more other devices.
- One or more transceivers 206a and 206b may receive user data, control information, wireless signals/channels, etc., referred to in the descriptions, functions, procedures, suggestions, methods, and/or operational flowcharts disclosed herein, from one or more other devices.
- one or more transceivers 206a and 206b may be connected to one or more processors 202a and 202b and may transmit and receive wireless signals.
- one or more processors 202a and 202b may control one or more transceivers 206a and 206b to transmit user data, control information, or wireless signals to one or more other devices. Additionally, one or more processors 202a and 202b may control one or more transceivers 206a and 206b to receive user data, control information, or wireless signals from one or more other devices. In addition, one or more transceivers 206a and 206b may be connected to one or more antennas 208a and 208b and may, through the one or more antennas 208a and 208b, transmit and receive the user data, control information, wireless signals/channels, etc., mentioned in the descriptions and functions disclosed in this document.
- one or more antennas may be multiple physical antennas or multiple logical antennas (eg, antenna ports).
- One or more transceivers 206a and 206b may convert received wireless signals/channels, etc., from RF band signals into baseband signals so that the received user data, control information, wireless signals/channels, etc., can be processed using one or more processors 202a and 202b.
- One or more transceivers (206a, 206b) may convert user data, control information, wireless signals/channels, etc. processed using one or more processors (202a, 202b) from a baseband signal to an RF band signal.
- one or more transceivers 206a, 206b may include (analog) oscillators and/or filters.
- FIG. 3 is a diagram illustrating another example of a wireless device applied to the present disclosure.
- the wireless device 300 corresponds to the wireless devices 200a and 200b of FIG. 2 and may be composed of various elements, components, units/parts, and/or modules.
- the wireless device 300 may include a communication unit 310, a control unit 320, a memory unit 330, and an additional element 340.
- the communication unit may include communication circuitry 312 and transceiver(s) 314.
- communication circuitry 312 may include one or more processors 202a and 202b and/or one or more memories 204a and 204b of FIG. 2 .
- transceiver(s) 314 may include one or more transceivers 206a, 206b and/or one or more antennas 208a, 208b of FIG. 2.
- the control unit 320 is electrically connected to the communication unit 310, the memory unit 330, and the additional element 340 and controls overall operations of the wireless device.
- the control unit 320 may control the electrical/mechanical operation of the wireless device based on the program/code/command/information stored in the memory unit 330.
- the control unit 320 may transmit information stored in the memory unit 330 to the outside (e.g., another communication device) through the communication unit 310 via a wireless/wired interface, or store in the memory unit 330 information received from the outside (e.g., another communication device) through the communication unit 310 via a wireless/wired interface.
- the additional element 340 may be configured in various ways depending on the type of wireless device.
- the additional element 340 may include at least one of a power unit/battery, an input/output unit, a driving unit, and a computing unit.
- the wireless device 300 may be implemented in the form of robots (FIG. 1, 100a), vehicles (FIG. 1, 100b-1, 100b-2), XR devices (FIG. 1, 100c), portable devices (FIG. 1, 100d), home appliances (FIG. 1, 100e), IoT devices (FIG. 1, 100f), digital broadcasting terminals, hologram devices, public safety devices, MTC devices, medical devices, fintech devices (or financial devices), security devices, climate/environmental devices, AI servers/devices (FIG. 1, 140), base stations (FIG. 1, 120), network nodes, etc.
- Wireless devices can be mobile or used in fixed locations depending on the usage/service.
- various elements, components, units/parts, and/or modules within the wireless device 300 may be entirely interconnected through a wired interface, or at least some of them may be wirelessly connected through the communication unit 310.
- For example, the control unit 320 and the communication unit 310 may be connected by wire, and the control unit 320 and first units (e.g., 130, 140) may be wirelessly connected through the communication unit 310.
- each element, component, unit/part, and/or module within the wireless device 300 may further include one or more elements.
- the control unit 320 may be comprised of one or more processor sets.
- control unit 320 may be composed of a set of a communication control processor, an application processor, an electronic control unit (ECU), a graphics processing processor, and a memory control processor.
- the memory unit 330 may be comprised of RAM, dynamic RAM (DRAM), ROM, flash memory, volatile memory, non-volatile memory, and/or a combination thereof.
- FIG. 4 is a diagram illustrating an example of a portable device to which the present disclosure is applied.
- Portable devices may include smartphones, smart pads, wearable devices (e.g., smart watches, smart glasses), and portable computers (e.g., laptops, etc.).
- a mobile device may be referred to as a mobile station (MS), user terminal (UT), mobile subscriber station (MSS), subscriber station (SS), advanced mobile station (AMS), or wireless terminal (WT).
- the portable device 400 may include an antenna unit 408, a communication unit 410, a control unit 420, a memory unit 430, a power supply unit 440a, an interface unit 440b, and an input/output unit 440c.
- the antenna unit 408 may be configured as part of the communication unit 410.
- Blocks 410 to 430/440a to 440c correspond to blocks 310 to 330/340 in FIG. 3, respectively.
- the communication unit 410 can transmit and receive signals (eg, data, control signals, etc.) with other wireless devices and base stations.
- the control unit 420 can control the components of the portable device 400 to perform various operations.
- the control unit 420 may include an application processor (AP).
- the memory unit 430 may store data/parameters/programs/codes/commands necessary for driving the portable device 400. Additionally, the memory unit 430 can store input/output data/information, etc.
- the power supply unit 440a supplies power to the portable device 400 and may include a wired/wireless charging circuit, a battery, etc.
- the interface unit 440b may support connection between the mobile device 400 and other external devices.
- the interface unit 440b may include various ports (eg, audio input/output ports, video input/output ports) for connection to external devices.
- the input/output unit 440c may receive or output image information/signals, audio information/signals, data, and/or information input from the user.
- the input/output unit 440c may include a camera, a microphone, a user input unit, a display unit 440d, a speaker, and/or a haptic module.
- the input/output unit 440c acquires information/signals (e.g., touch, text, voice, image, video) input from the user, and the acquired information/signals may be stored in the memory unit 430.
- the communication unit 410 can convert the information/signal stored in the memory into a wireless signal and transmit the converted wireless signal directly to another wireless device or to a base station. Additionally, the communication unit 410 may receive a wireless signal from another wireless device or a base station and then restore the received wireless signal to the original information/signal.
- the restored information/signal may be stored in the memory unit 430 and then output in various forms (eg, text, voice, image, video, haptic) through the input/output unit 440c.
- FIG. 5 is a diagram illustrating an example of a vehicle or autonomous vehicle applied to the present disclosure.
- a vehicle or autonomous vehicle can be implemented as a mobile robot, vehicle, train, aerial vehicle (AV), ship, etc., and is not limited to the form of a vehicle.
- the vehicle or autonomous vehicle 500 may include an antenna unit 508, a communication unit 510, a control unit 520, a drive unit 540a, a power supply unit 540b, a sensor unit 540c, and an autonomous driving unit 540d.
- the antenna unit 508 may be configured as part of the communication unit 510. Blocks 510/530/540a to 540d correspond to blocks 410/430/440a to 440c in FIG. 4, respectively.
- the communication unit 510 may transmit and receive signals (e.g., data, control signals, etc.) with external devices such as other vehicles, infrastructure (e.g., base stations, road side units, etc.), and servers.
- the control unit 520 may control elements of the vehicle or autonomous vehicle 500 to perform various operations.
- the control unit 520 may include an electronic control unit (ECU).
- FIG. 6 is a diagram showing an example of an AI device applied to the present disclosure.
- AI devices may be implemented as fixed or movable devices, such as TVs, projectors, smartphones, PCs, laptops, digital broadcasting terminals, tablet PCs, wearable devices, set-top boxes (STBs), radios, washing machines, refrigerators, digital signage, robots, vehicles, etc.
- the AI device 600 may include a communication unit 610, a control unit 620, a memory unit 630, an input/output unit 640a/640b, a learning processor unit 640c, and a sensor unit 640d. Blocks 610 to 630/640a to 640d may correspond to blocks 310 to 330/340 of FIG. 3, respectively.
- the communication unit 610 can transmit and receive wired and wireless signals (e.g., sensor information, user input, learning model, control signal, etc.) with external devices using wired and wireless communication technology. To this end, the communication unit 610 may transmit information in the memory unit 630 to an external device or transfer a signal received from an external device to the memory unit 630.
- the control unit 620 may determine at least one executable operation of the AI device 600 based on information determined or generated using a data analysis algorithm or a machine learning algorithm. The control unit 620 can then control the components of the AI device 600 to perform the determined operation. For example, the control unit 620 may request, search, receive, or utilize data from the learning processor unit 640c or the memory unit 630, and may control the components of the AI device 600 to execute an operation that is predicted or determined to be desirable among the executable operations.
- the control unit 620 collects history information, including the operation content of the AI device 600 or user feedback on the operation, and stores it in the memory unit 630 or the learning processor unit 640c, or transmits it to an external device such as the AI server (FIG. 1, 140). The collected history information can be used to update the learning model.
- the memory unit 630 can store data supporting various functions of the AI device 600.
- the memory unit 630 may store data obtained from the input unit 640a, data obtained from the communication unit 610, output data from the learning processor unit 640c, and data obtained from the sensor unit 640d. Additionally, the memory unit 630 may store control information and/or software code necessary for the operation/execution of the control unit 620.
- the input unit 640a can obtain various types of data from outside the AI device 600.
- the input unit 640a may obtain training data for model training and input data to which the learning model will be applied.
- the input unit 640a may include a camera, microphone, and/or a user input unit.
- the output unit 640b may generate output related to vision, hearing, or tactile sensation.
- the output unit 640b may include a display unit, a speaker, and/or a haptic module.
- the sensor unit 640d may obtain at least one of internal information of the AI device 600, surrounding environment information of the AI device 600, and user information using various sensors.
- the sensor unit 640d may include a proximity sensor, an illumination sensor, an acceleration sensor, a magnetic sensor, a gyro sensor, an inertial sensor, an RGB sensor, an IR sensor, a fingerprint recognition sensor, an ultrasonic sensor, an optical sensor, a microphone, and/or a radar.
- the learning processor unit 640c can train a model composed of an artificial neural network using training data.
- the learning processor unit 640c may perform AI processing together with the learning processor unit of the AI server (FIG. 1, 140).
- the learning processor unit 640c may process information received from an external device through the communication unit 610 and/or information stored in the memory unit 630. Additionally, the output value of the learning processor unit 640c may be transmitted to an external device through the communication unit 610 and/or stored in the memory unit 630.
- FIG. 7 is a diagram illustrating a method of processing a transmission signal applied to the present disclosure.
- the transmission signal may be processed by a signal processing circuit.
- the signal processing circuit 700 may include a scrambler 710, a modulator 720, a layer mapper 730, a precoder 740, a resource mapper 750, and a signal generator 760.
- the operation/function of FIG. 7 may be performed in the processors 202a and 202b and/or transceivers 206a and 206b of FIG. 2.
- the hardware elements of FIG. 7 may be implemented in the processors 202a and 202b and/or transceivers 206a and 206b of FIG. 2.
- blocks 710 to 760 may be implemented in processors 202a and 202b of FIG. 2. Additionally, blocks 710 to 750 may be implemented in the processors 202a and 202b of FIG. 2, and block 760 may be implemented in the transceivers 206a and 206b of FIG. 2, and are not limited to the above-described embodiment.
- the codeword can be converted into a wireless signal through the signal processing circuit 700 of FIG. 7.
- a codeword is an encoded bit sequence of an information block.
- the information block may include a transport block (eg, UL-SCH transport block, DL-SCH transport block).
- Wireless signals may be transmitted through various physical channels (eg, PUSCH, PDSCH).
- the codeword may be converted into a scrambled bit sequence by the scrambler 710.
- the scramble sequence used for scrambling is generated based on an initialization value, and the initialization value may include ID information of the wireless device.
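As a minimal sketch of the scrambling step above: the scrambling sequence is generated from an initialization value that includes device ID information, and scrambling is a bitwise XOR with that sequence. The code below uses a generic pseudo-random generator as a stand-in for the actual sequence generator (3GPP specifies a Gold sequence), and the device ID value is hypothetical.

```python
import numpy as np

def scrambling_sequence(init_value: int, length: int) -> np.ndarray:
    # Stand-in PRNG seeded by the initialization value (e.g., derived from
    # the wireless device's ID); a real system would use a Gold sequence.
    rng = np.random.default_rng(init_value)
    return rng.integers(0, 2, size=length, dtype=np.int8)

def scramble(bits: np.ndarray, init_value: int) -> np.ndarray:
    # Scrambling is a bitwise XOR with the generated sequence.
    return bits ^ scrambling_sequence(init_value, len(bits))

codeword = np.array([1, 0, 1, 1, 0, 0, 1, 0], dtype=np.int8)
device_id = 0x2A  # hypothetical ID information of the wireless device
scrambled = scramble(codeword, device_id)
# XOR with the same sequence at the receiver descrambles the bits.
recovered = scramble(scrambled, device_id)
```

Because XOR is its own inverse, applying the same sequence twice restores the original codeword.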
- the scrambled bit sequence may be modulated into a modulation symbol sequence by the modulator 720.
- Modulation methods may include pi/2-binary phase shift keying (pi/2-BPSK), m-phase shift keying (m-PSK), and m-quadrature amplitude modulation (m-QAM).
- the complex modulation symbol sequence may be mapped to one or more transport layers by the layer mapper 730.
- the modulation symbols of each transport layer may be mapped to corresponding antenna port(s) by the precoder 740 (precoding).
- the output z of the precoder 740 can be obtained by multiplying the output y of the layer mapper 730 with the precoding matrix W of N*M.
- N is the number of antenna ports and M is the number of transport layers.
- the precoder 740 may perform precoding after performing transform precoding (eg, discrete Fourier transform (DFT) transform) on complex modulation symbols. Additionally, the precoder 740 may perform precoding without performing transform precoding.
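The precoding relation z = Wy described above can be sketched in a few lines of numpy. Here the N×M precoding matrix W is random for illustration only; a real system would select W from a standardized codebook.

```python
import numpy as np

rng = np.random.default_rng(0)
N, M = 4, 2          # N antenna ports, M transport layers
num_symbols = 6      # complex modulation symbols per layer

# Output y of the layer mapper: one row of modulation symbols per layer (M x T).
y = (rng.standard_normal((M, num_symbols))
     + 1j * rng.standard_normal((M, num_symbols))) / np.sqrt(2)

# Hypothetical N x M precoding matrix W (a codebook entry in practice).
W = rng.standard_normal((N, M)) + 1j * rng.standard_normal((N, M))

# Precoder output z = W y maps the M layers onto the N antenna ports.
z = W @ y
```

Each column of z is the set of samples sent on the N antenna ports for one symbol period.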
- the resource mapper 750 can map the modulation symbols of each antenna port to time-frequency resources.
- a time-frequency resource may include a plurality of symbols (eg, CP-OFDMA symbol, DFT-s-OFDMA symbol) in the time domain and a plurality of subcarriers in the frequency domain.
- the signal generator 760 generates a wireless signal from the mapped modulation symbols, and the generated wireless signal can be transmitted to another device through each antenna.
- the signal generator 760 may include an inverse fast Fourier transform (IFFT) module, a cyclic prefix (CP) inserter, a digital-to-analog converter (DAC), a frequency up-converter, etc.
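The IFFT and CP-insertion steps of the signal generator can be sketched as follows (a simplified baseband model with hypothetical FFT size and CP length; the DAC and up-conversion stages are omitted).

```python
import numpy as np

rng = np.random.default_rng(1)
n_fft, cp_len = 64, 16  # hypothetical FFT size and cyclic prefix length

# Frequency-domain modulation symbols mapped to subcarriers by the resource mapper
# (QPSK-like constellation points for illustration).
symbols = (rng.choice([-1, 1], n_fft) + 1j * rng.choice([-1, 1], n_fft)) / np.sqrt(2)

# The IFFT module converts the subcarrier symbols to a time-domain OFDM symbol.
time_signal = np.fft.ifft(symbols)

# The cyclic prefix inserter copies the tail of the symbol to its front.
ofdm_symbol = np.concatenate([time_signal[-cp_len:], time_signal])
```

The prefix is an exact copy of the symbol's tail, which is what makes CP removal and FFT at the receiver sufficient to undo this stage over an ideal channel.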
- the signal processing process for a received signal in the wireless device (e.g., 200a and 200b in FIG. 2) may be configured as the reverse of the signal processing process (710 to 760) of FIG. 7.
- the received wireless signal can be converted into a baseband signal through a signal restorer.
- the signal restorer may include a frequency down-converter, an analog-to-digital converter (ADC), a CP remover, and a fast Fourier transform (FFT) module. Thereafter, the baseband signal can be restored to a codeword through a resource de-mapping process, postcoding process, demodulation process, and de-scrambling process.
- a signal processing circuit for a received signal may include a signal restorer, resource de-mapper, postcoder, demodulator, de-scrambler, and decoder.
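As an illustrative stand-in for the demodulation step of the receive chain (not the patent's specific demapper), QPSK hard-decision demodulation maps each received complex sample back to two bits from the signs of its real and imaginary parts:

```python
import numpy as np

rng = np.random.default_rng(3)

# Transmit: map bit pairs to QPSK symbols (bit 0 -> +1, bit 1 -> -1 per axis).
bits = rng.integers(0, 2, size=20)
symbols = ((1 - 2 * bits[0::2]) + 1j * (1 - 2 * bits[1::2])) / np.sqrt(2)

# Channel: add a small amount of complex Gaussian noise.
noisy = symbols + 0.05 * (rng.standard_normal(10) + 1j * rng.standard_normal(10))

# Receive: hard-decision demodulation by the sign of each component.
demod = np.empty(20, dtype=int)
demod[0::2] = (noisy.real < 0).astype(int)
demod[1::2] = (noisy.imag < 0).astype(int)
```

At this noise level the decision regions are comfortably separated, so the demodulated bits match the transmitted ones.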
- 6G (wireless communication) systems aim at (i) very high data rates per device, (ii) a very large number of connected devices, (iii) global connectivity, (iv) very low latency, (v) reduced energy consumption of battery-free IoT devices, (vi) ultra-reliable connectivity, and (vii) connected intelligence with machine learning capabilities.
- the vision of the 6G system can be summarized in four aspects: "intelligent connectivity", "deep connectivity", "holographic connectivity", and "ubiquitous connectivity", and the 6G system can satisfy the requirements shown in Table 1 below. In other words, Table 1 shows the requirements of the 6G system.
- the 6G system can have key factors such as enhanced mobile broadband (eMBB), ultra-reliable low latency communications (URLLC), massive machine type communications (mMTC), AI integrated communication, tactile communication, the tactile internet, high throughput, high network capacity, high energy efficiency, low backhaul and access network congestion, and enhanced data security.
- FIG. 8 is a diagram illustrating an example of a communication structure that can be provided in a 6G system applicable to the present disclosure.
- the 6G system is expected to have simultaneous wireless communication connectivity 50 times higher than that of the 5G wireless communication system.
- URLLC a key feature of 5G, is expected to become an even more mainstream technology in 6G communications by providing end-to-end delays of less than 1ms.
- the 6G system will have much better volume spectrum efficiency, unlike the frequently used area spectrum efficiency.
- 6G systems can provide very long battery life and advanced battery technologies for energy harvesting, so mobile devices in 6G systems may not need to be separately charged.
- The most important and newly introduced technology in the 6G system is AI.
- AI was not involved in the 4G system.
- 5G systems will support partial or very limited AI.
- 6G systems will be AI-enabled for full automation.
- Advances in machine learning will create more intelligent networks for real-time communications in 6G.
- Introducing AI in communications can simplify and improve real-time data transmission.
- AI can use numerous analytics to determine how complex target tasks are performed; that is, AI can increase efficiency and reduce processing delays.
- Time-consuming tasks can be performed instantly by using AI.
- AI can also play an important role in M2M, machine-to-human and human-to-machine communications. Additionally, AI can enable rapid communication in BCI (brain computer interface).
- AI-based communication systems can be supported by metamaterials, intelligent structures, intelligent networks, intelligent devices, intelligent cognitive radios, self-sustaining wireless networks, and machine learning.
- AI-based physical layer transmission means applying AI-driven signal processing and communication mechanisms, rather than traditional communication frameworks, in terms of fundamental signal processing and communication. For example, it may include deep learning-based channel coding and decoding, deep learning-based signal estimation and detection, deep learning-based MIMO (multiple input multiple output) mechanisms, and AI-based resource scheduling and allocation.
- Machine learning can be used for channel estimation and channel tracking, and can be used for power allocation, interference cancellation, etc. in the physical layer of the DL (downlink). Machine learning can also be used for antenna selection, power control, and symbol detection in MIMO systems.
- Deep learning-based AI algorithms require a large amount of training data to optimize training parameters.
- a lot of training data is used offline. This means that static training on data collected in a specific channel environment may conflict with the dynamic characteristics and diversity of the wireless channel.
- signals of the physical layer of wireless communication are complex signals.
- more research is needed on neural networks that detect complex domain signals.
- Machine learning refers to a series of operations that train machines to create machines that can perform tasks that are difficult or impossible for humans to perform.
- Machine learning requires data and a learning model.
- data learning methods can be broadly divided into three types: supervised learning, unsupervised learning, and reinforcement learning.
- Neural network learning is intended to minimize errors in the output. Neural network learning is the process of repeatedly inputting learning data into the neural network, calculating the error between the neural network's output and the target for the learning data, and backpropagating that error from the output layer toward the input layer to update the weight of each node in the neural network so as to reduce the error.
- Supervised learning uses training data in which the correct answer is labeled, while in unsupervised learning the correct answer may not be labeled in the training data. For example, in the case of supervised learning for data classification, the learning data may be data in which each training example is labeled with a category. Labeled learning data is input to the neural network, and the error can be calculated by comparing the output (category) of the neural network with the label of the learning data. The calculated error is backpropagated in the reverse direction (i.e., from the output layer to the input layer) in the neural network, and the connection weight of each node in each layer of the neural network can be updated according to the backpropagation. The amount of change in each updated node's connection weight may be determined according to the learning rate.
- the neural network's calculation of input data and backpropagation of errors can constitute a learning cycle (epoch).
- the learning rate may be applied differently depending on the number of repetitions of the learning cycle of the neural network. For example, in the early stages of neural network training, a high learning rate can be used to ensure that the neural network quickly achieves a certain level of performance to increase efficiency, and in the later stages of training, a low learning rate can be used to increase accuracy.
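The training loop described above (forward pass, output error, gradient update, high learning rate early and low learning rate late) can be sketched for the most basic linear model; the data, schedule values, and epoch counts below are illustrative choices, not values from the disclosure.

```python
import numpy as np

rng = np.random.default_rng(0)

# Labeled training data (supervised learning): targets follow y = 3x + 1.
x = rng.standard_normal(100)
y = 3.0 * x + 1.0
w, b = 0.0, 0.0

for epoch in range(200):
    lr = 0.5 if epoch < 100 else 0.05  # high early for speed, low late for accuracy
    pred = w * x + b
    err = pred - y                     # error between output and target
    # For this one-layer linear model, "backpropagation" reduces to the
    # gradients of the mean squared error with respect to each weight.
    w -= lr * 2 * np.mean(err * x)
    b -= lr * 2 * np.mean(err)

loss = np.mean((w * x + b - y) ** 2)
```

Each pass over the data (forward computation plus error backpropagation) is one learning cycle (epoch); after 200 epochs the fitted weights approach w = 3, b = 1.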
- Learning methods may vary depending on the characteristics of the data. For example, in a communication system, when the goal is to accurately predict data transmitted from a transmitter at a receiver, it is preferable to perform learning using supervised learning rather than unsupervised learning or reinforcement learning.
- the learning model mimics the human brain, and the most basic learning model can be considered a linear model.
- deep learning is a machine learning paradigm that uses a highly complex neural network structure, such as artificial neural networks, as the learning model.
- Neural network cores used as learning methods are broadly divided into deep neural networks (DNN), convolutional deep neural networks (CNN), and recurrent neural networks (RNN; e.g., the recurrent Boltzmann machine), and these learning models can be applied.
- THz communication can be applied in the 6G system.
- the data transfer rate can be increased by increasing the bandwidth. This can be accomplished by using sub-THz communications with wide bandwidth and applying advanced massive MIMO technology.
- FIG. 9 is a diagram showing an electromagnetic spectrum applicable to the present disclosure.
- THz waves, also known as submillimeter radiation, typically represent the frequency band between 0.1 THz and 10 THz, with corresponding wavelengths in the range of 0.03 mm to 3 mm.
- the 100GHz-300GHz band range (Sub THz band) is considered the main part of the THz band for cellular communications. Adding the Sub-THz band to the mmWave band increases 6G cellular communication capacity.
- 300 GHz-3 THz is in the far infrared (IR) frequency band.
- Although the 300 GHz-3 THz band is part of the optical band, it is at the border of the optical band, immediately behind the RF band. Therefore, this 300 GHz-3 THz band shows similarities to RF.
- Key characteristics of THz communications include (i) widely available bandwidth to support very high data rates and (ii) high path loss occurring at high frequencies (making highly directional antennas indispensable).
- the narrow beamwidth produced by a highly directional antenna reduces interference.
- the small wavelength of THz signals allows a much larger number of antenna elements to be integrated into devices and BSs operating in this band. This enables the use of advanced adaptive array techniques that can overcome range limitations.
- FIG. 10 is a diagram illustrating a THz communication method applicable to the present disclosure.
- THz waves are located between the RF (radio frequency)/millimeter (mm) and infrared bands. (i) Compared to visible light/infrared, they penetrate non-metallic/non-polarized materials better, and (ii) compared to RF/millimeter waves, they have a shorter wavelength and high straightness, so beam focusing may be possible.
- Semantic communication is a communication system that efficiently transmits and receives semantic information using common information (e.g., background knowledge) shared between a transmitter and a receiver.
- Semantic communication, a highly efficient communication method that considers the meaning of transmitted data, can transmit and process data while keeping pace with exponentially increasing data traffic.
- FIG. 11 shows examples of communication systems according to an embodiment of the present disclosure.
- In existing communication, the destination 1120 decodes the encoded signal received from the source 1110 back into the original signal without error.
- semantic communication focuses on the meaning to be conveyed through signals, such as when people exchange information through the 'meaning' of words when communicating.
- the core of semantic communication is for the destination to extract the “meaning” of the information transmitted from the source. Semantic information can be successfully “interpreted” based on a consensus knowledge base (KB) 1150 between the source 1130 and the destination.
- Semantic communication can be used for online meetings, augmented reality (AR), virtual reality (VR), etc. because it can significantly reduce the energy and wireless resources required to transmit data.
- Research is in progress on semantic information theory (e.g., the definition of semantic entropy and the definition of semantic mutual information), on loss function metrics defined based on semantic entropy and channel capacity in a broad sense, and on constructing semantic encoders and decoders using DNNs based on the defined metrics.
- In terms of the semantic encoder, semantic communication focuses on extracting features representing the whole data sequence from the embedding vector.
- In terms of the semantic decoder, the focus is on producing output suitable for the task purpose from latent vectors.
- Semantic encoders can optimally perform compression by extracting features by considering all components of the data.
- As the size of the embedding vector, which is the input of the semantic encoder and semantic decoder, becomes large, the size of each DNN model inevitably becomes large (a large-scale model).
- there is a disadvantage in that it is difficult to rank the importance of the extracted features or to determine in which extracted feature an error occurred.
- the present disclosure proposes a new semantic source coding method in consideration of the above-mentioned points. Specifically, the present disclosure divides a task into a plurality of subtasks using background knowledge, and proposes a semantic source coding method using a semantic layered encoder/decoder.
- FIG. 12 shows an example of a protocol stack for semantic communication according to an embodiment of the present disclosure.
- In the protocol stack of the user data plane, there may be a MAC layer, RLC layer, PDCP layer, and SDAP layer above the PHY layer.
- the MAC layer, RLC layer, PDCP layer, and SDAP layer may be layer 2.
- the SDAP layer may not exist in an existing wireless communication system (e.g., LTE), but embodiments are not limited thereto.
- the semantic layer 1210 may exist above the SDAP layer.
- the semantic layer 1210 can be used to extract features suitable for the task purpose from original data.
- the semantic layer 1210 exists above the SDAP layer and can transmit extracted features to the SDAP layer. Afterwards, the extracted features can be transmitted from the transmitting end to the receiving end, and a task can be performed at the receiving end according to the characteristics. In other words, features can be used to perform tasks, and existing data can be processed into a form to convey meaning by utilizing background knowledge through the semantic layer.
- semantic communication may be a method of inferring meaning through the background knowledge of the transmitting end and the receiving end, and may be a form of communication in which data is processed by extracting features suitable for each task purpose.
- the settings for semantic inference may be different for each task, and the feature extraction method may be different.
- the semantic layer 1210 exists above the SDAP layer, but may be configured in relation to each layer within layer 2.
- the syntax error conditions required for semantic inference may be different for each task. That is, considering semantic inference, the extent to which syntax errors are allowed can be set differently for each task, and HARQ/ARQ feedback settings can be set differently considering the task.
- When features are extracted in the semantic layer 1210 in consideration of performing a specific task and the processed data (or extracted features) is passed to the lower layer for transmission, the operation of the semantic layer 1210 may differ depending on the task; taking this into account, the semantic layer 1210 may be configured in association with the layers in layer 2.
- the semantic layer 1220 may be combined with layer 2. Specifically, layer merging can be considered based on artificial intelligence (AI).
- the semantic layer 1220 does not exist as a separate layer, but may be combined with layer 2.
- the semantic layer 1220 may be combined with each of the layers in layer 2.
- the semantic layer 1220 can be combined and operated through AI within layer 2, and through this, each layer within layer 2 can perform operations considering semantic communication.
- the semantic layer may exist in conjunction with or related to the existing protocol stack, and may not be limited to a specific embodiment.
- The following describes how feature information is extracted from original data using background knowledge through a semantic encoder at the transmitting end based on the semantic layer and transmitted to the receiving end, and how the task is performed at the receiving end using the received feature information and background knowledge through a semantic decoder.
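As an illustrative sketch of this flow (not the patent's implementation): the transmitter keeps only the features of the original data that matter for the task, using background knowledge shared with the receiver, and the receiver performs the task from those features. The knowledge base contents, task name, and functions below are hypothetical.

```python
# Shared background knowledge: task-relevant vocabulary (hypothetical example).
BACKGROUND_KNOWLEDGE = {"animal": {"cat", "dog", "sparrow"}}

def semantic_encode(original_data: str, task: str) -> list[str]:
    # Semantic encoder: keep only the features relevant to the task purpose,
    # discarding everything else before transmission.
    relevant = BACKGROUND_KNOWLEDGE[task]
    return [w for w in original_data.split() if w in relevant]

def semantic_decode(features: list[str], task: str) -> str:
    # Semantic decoder: perform the task from the received features
    # (here: report which known entities were observed).
    return ",".join(sorted(set(features)))

features = semantic_encode("a dog chased a cat near the dog park", "animal")
result = semantic_decode(features, "animal")
```

Only three words out of nine are transmitted, illustrating how shared background knowledge lets the receiver complete the task from a heavily compressed representation.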
- control information for the configuration of semantic encoders and decoders (e.g., NN (neural network) model selection method, layer depth information)
- the information bottleneck theory is a theory derived from the rate distortion theory.
- the information bottleneck theory can be expressed as Equation 1 below based on relevant information.
- Relevant information is a label or output that corresponds to the purpose of the task.
- In Equation 1, X is the original data signal space, S is the compressed signal (e.g., feature) space, and Y is the relevant information signal space. β is a Lagrange multiplier and has a positive value.
- the signal S is a compressed latent vector that extracts only the features of the relevant information Y from the existing data X. If Equation 1 is reformulated, it becomes Equation 2 below.
- D_KL(a‖b) is the Kullback-Leibler divergence, which indicates how similar the probability distributions a and b are.
- p(s|x) corresponds to the semantic encoder, and p(y|s) corresponds to the semantic decoder.
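The equations themselves are rendered only as images in the source. For reference, a standard information-bottleneck objective consistent with the definitions above (X original data, S compressed features, Y relevant information, β > 0 a Lagrange multiplier) would take the form:

```latex
% Equation 1 (standard information-bottleneck form; a reconstruction, not verbatim):
\min_{p(s \mid x)} \; I(X; S) \;-\; \beta \, I(S; Y), \qquad \beta > 0

% The first term can be rewritten with the KL divergence (cf. Equation 2),
% which is where p(s|x), the semantic encoder distribution, appears explicitly:
I(X; S) \;=\; \mathbb{E}_{x \sim p(x)}\!\left[ D_{\mathrm{KL}}\!\big( p(s \mid x) \,\|\, p(s) \big) \right]
```

Minimizing I(X;S) compresses the representation, while maximizing I(S;Y) (weighted by β) preserves the features relevant to the task.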
- a task can be divided into multiple subtasks. It is assumed that the divided subtasks have an arbitrary relationship with each other.
- a task can be divided into subtasks of different domains, so it can be a multi-layer subtask. Each divided subtask may correspond to one layer.
- a task can be expressed as a tuple of divided subtasks. Additionally, data and labels related to subtasks can be expressed as tuples.
- the output (e.g., state) of each subtask layer may depend on the outputs of the previous task layers. That is, task layer outputs decoupled from previous task layers pass through the current subtask layer and can be converted into outputs for different subtask layers. In the following, two examples are described to help understand the task.
- a factory automation situation can be assumed.
- the task may be deciding 'what, where, and how' the robot will act; that is, 'what to do, where to do it, and how to do it.'
- a task can be divided into subtasks called ‘what’, ‘where’ and ‘how’.
- a task can be expressed as a subtask tuple (A, B, C).
- A may represent a subtask related to an object
- B may represent a subtask related to a place
- C may represent a subtask related to an action.
- candidate locations may be determined (or separated) depending on the object
- actions may be determined (or separated) depending on the determined location.
- For example, the place to cook is determined as the second subtask according to the cooking ingredients determined by the first subtask, and the method of cooking at that specific location using the corresponding cooking ingredients can be determined as the third subtask.
- the task can be partially divided into subtasks that determine the animal's appearance.
- the subtasks of the corresponding task may be tasks that identify the upper level (eg, head), middle level (eg, torso), and lower level (eg, leg).
- a task can be expressed as a subtask tuple (A, B, C).
- A may represent a subtask related to a high-level appearance
- B may represent a subtask related to a middle-level appearance
- C may represent a subtask related to a low-level appearance.
- subtask B may determine the differences between mammals and birds. For example, it is possible to infer distinctive features and perform classification by focusing on the color/texture characteristics of body fur in the case of mammals, and on the color/texture characteristics of feathers and the shape of the wings in the case of birds. Accordingly, data and labels can be decoupled by including feature information based on these differences.
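The class-dependent feature decoupling described in the animal example can be sketched minimally as follows. The feature names and the mapping are illustrative assumptions, not part of the disclosure.

```python
# Illustrative assumption: at subtask B, the features used for classification
# are decoupled by class (fur for mammals; feathers and wings for birds).
FEATURES_BY_CLASS = {
    "mammal": ["fur_color", "fur_texture"],
    "bird": ["feather_color", "feather_texture", "wing_shape"],
}

def features_for(cls):
    """Return the decoupled feature subset relevant to the given class."""
    return FEATURES_BY_CLASS[cls]
```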
- a method is proposed in which the transmitting end and receiving end share background knowledge, that is, information containing the relationships between subtask layer states, and perform semantic layered encoding and decoding based on that background knowledge. The relationship information between subtask layer states can be expressed as a set of subtask state transition matrices (hereinafter referred to as the ‘subtask matrix set’) described below.
- the subtask matrix set can be used as background knowledge and represents the relationship between subtask layers. Equation 3 below expresses the subtask matrix set.
- Equation 3 includes the subtask state transition matrix of the m-th layer (hereinafter referred to as the ‘subtask matrix’).
- the row index of the subtask matrix is the index of the possible subtask state combinations up to the (m-1)-th layer
- the column index is the index of the available states of the m-th subtask. If a 1 exists at a given row index and column index, a transition between the corresponding subtask states is possible.
- the subtask tuple corresponding to the j-th row index can be expressed as Equation 4 below.
- in Equation 4, each element means the subtask output state of the i-th layer.
- the subtask output states of the m-th layer can be coupled as shown in Equation 5 below.
- in Equation 5, the support operator applied to the n-th row vector of the subtask matrix means the set of indices at which the nonzero elements of the vector are located. Therefore, Equation 5 gives an index set of the possible outputs of the m-th subtask layer, characterized by the output type and the number of decoupled subsets. At this time, subtasks included in coupling subsets with different subtask output states in the (m-1)-th layer may not be able to transition to the same subtask state in the m-th layer.
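A subtask state transition matrix and the support-set operation described around Equations 3 and 5 can be illustrated with a toy example. The matrix values below are illustrative assumptions; `supp` and `reachable_states` are hypothetical helper names.

```python
# Hedged sketch of a subtask matrix T_m: rows index the subtask-state
# combinations up to layer m-1, columns index the states of the m-th
# subtask, and a 1 marks a feasible transition (values are illustrative).
T_m = [
    [1, 1, 0],  # combination 0 may transition to states {0, 1}
    [0, 1, 1],  # combination 1 may transition to states {1, 2}
    [0, 0, 1],  # combination 2 may transition only to state {2}
]

def supp(a):
    """Index set of the nonzero elements of vector a."""
    return {i for i, v in enumerate(a) if v != 0}

def reachable_states(T, n):
    """Possible m-th layer output states given row (combination) index n."""
    return supp(T[n])
```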
- Figure 13 shows an example of a subtask matrix set 1300 according to an embodiment of the present disclosure.
- the first subtask layer 1310, the second subtask layer 1320, the third subtask layer 1330, and the fourth subtask layer 1340 each correspond to one subtask matrix of the set.
- Figure 14 shows an example of a semantic encoder and decoder structure based on background knowledge according to an embodiment of the present disclosure.
- the entire data space is expressed as X (1410)
- the label space is expressed as Y (1430)
- the compressed feature space is expressed as S (1420).
- the label space may include relevant information.
- samples included in each space may be partial vector components for each subtask, or may be a concatenation of a partial data sequence and a partial label sequence corresponding to each subtask.
- the semantic encoder maps data vectors to M-domain compressed signal (feature) vectors.
- a domain may be a subtask.
- the output of the current subtask layer is influenced by the data corresponding to the current subtask layer and the output state sequence of the previously accumulated subtask layers. Since the encoder shares the subtask matrix set as background knowledge, the encoder can be decomposed and approximated as shown in Equation 6 below.
- in Equation 6, when training the semantic encoder, the m-th conditional pdf is used. To obtain it, the probability distribution and the corresponding input vector set must be secured. The probability distribution and input vector set can be obtained from the output samples of the DNNs and the data samples corresponding to the conditional pdfs determined in the previous subtask layers.
- the compressed feature vector for the m-th subtask layer can be described as a feature vector that is progressively generated by recursively using the data vector of the current subtask layer and the successive feature vectors generated from the previous subtask layers.
- the semantic encoder is divided.
- a large-scale single DNN may be divided into a concatenation of small-scale DNNs.
- for the DNN corresponding to the conditional pdf of each layer, the DNN structure can be slimmed by constructing a different DNN structure for each decoupled subtask state set. The same can be similarly applied at the decoder stage.
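The per-state "slimming" above can be sketched as routing the input of a layer to a small sub-network selected by the decoupled state set of the previous layer. The routing table, state sets, and sub-network functions are illustrative assumptions.

```python
# Sketch of slimming one large network into small per-layer sub-networks,
# choosing a different sub-network for each decoupled subtask state set.
def subnet_a(x):
    return x + 1  # toy stand-in for one small DNN

def subnet_b(x):
    return x * 2  # toy stand-in for another small DNN

# one small network per decoupled subtask state set of this layer
SUBNET_BY_STATE_SET = {
    frozenset({0, 1}): subnet_a,
    frozenset({2}): subnet_b,
}

def layer_forward(prev_state, x):
    """Route x through the sub-network matching the previous layer's state."""
    for state_set, subnet in SUBNET_BY_STATE_SET.items():
        if prev_state in state_set:
            return subnet(x)
    raise ValueError("unknown subtask state")
```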
- Equation 7 indicates that the decoder sequentially performs work corresponding to the subtask for each subtask layer.
- the encoder or decoder gradually performs tasks corresponding to subtasks.
- the encoder and decoder generate compressed feature vectors for each m-th subtask layer. At this time, when only some of the feature vectors generated in previous subtask layers are needed, or when the task output feature vectors of specific subtask layers are used as input due to coupling characteristics, it can be expressed as shown in Equation 8 below.
- in Equation 8, when a partial feature vector is extracted from each layer, the variable can be expressed as shown in Equation 9 below.
- in Equation 8, when a whole feature vector is selected for each layer, the variable can be expressed as shown in Equation 10 below.
- the extracted vector is a partial vector of the corresponding layer's full feature vector.
- the layer index set used as input for layer m is specified. Additionally, when the subtask for each layer must perform multiple operations (e.g., reconstruction and classification), the related information is also configured in multiple forms (e.g., existing data and labels).
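The two input-selection modes described around Equations 8-10 — extracting partial feature vectors from every layer versus selecting whole feature vectors of chosen layers — can be sketched as follows. The function names and index choices are illustrative assumptions.

```python
# Sketch of the two input-construction modes for layer m.
def extract_partial(features, index_sets):
    """Equation 9 style: take chosen components from each layer's vector."""
    return [[f[i] for i in idx] for f, idx in zip(features, index_sets)]

def select_layers(features, layer_index_set):
    """Equation 10 style: take whole feature vectors of selected layers."""
    return [features[l] for l in sorted(layer_index_set)]
```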
- Figure 15 shows an example of a deep neural network (DNN) structure of a semantic encoder according to an embodiment of the present disclosure.
- the semantic encoder may include a plurality of layers 1510, 1520, 1530, 1540, and 1550.
- each of the layers 1510, 1520, 1530, 1540, and 1550 may include a concatenation of DNNs and the task states coupled at each layer.
- each circle represents a coupled task state.
- the first circle from the top in layer 4 (1540) may represent the first coupled task state, the second circle from the top the second coupled task state, and the third circle from the top the third coupled task state.
- layer 1 (1510) may correspond to information about the object
- layer 2 (1520) and layer 3 (1530) may correspond to information about the place
- layer 4 (1540) and layer 5 (1550) may correspond to information about the action.
- layer 1 (1510) may correspond to the upper part information
- layer 2 (1520) and layer 3 (1530) may correspond to the middle part information
- layer 4 (1540) and layer 5 (1550) may correspond to the lower part information.
- Figure 16 shows an example of a semantic decoder according to an embodiment of the present disclosure.
- the semantic decoder of FIG. 16 may be a semantic decoder for the multi-layer task of FIG. 14.
- Lines 1610 and 1620 depicted in FIG. 16 represent flows of a received feature vector and an estimated relevant information vector, respectively.
- in FIG. 16, the indicated parameters correspond to the DNN of the i-th layer.
- the semantic decoder can generate an embedding input vector by concatenating the relevant information vector obtained from the previous subtask and the feature vector received from the current subtask.
- the semantic decoder can obtain a relevant information vector for the current subtask layer using the generated embedding input vector.
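The decoder step just described — concatenating the previous subtask's relevant information vector with the currently received feature vector to form the embedding input — can be sketched minimally. The stand-in decoder function is a toy, not the disclosure's DNN.

```python
# Sketch of one decoder layer: build the embedding input by concatenating
# the relevant information vector obtained from the previous subtask with
# the feature vector received for the current subtask, then decode it.
def embed_input(prev_info, received_feature):
    return prev_info + received_feature  # list concatenation

def decode_layer(prev_info, received_feature, decoder_fn):
    """decoder_fn stands in for the per-layer decoding DNN."""
    return decoder_fn(embed_input(prev_info, received_feature))
```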
- semantic control information (SCI)
- SCI exchange protocol
- control information is required to generate features including semantic information.
- a server and an AI device can exchange information about the type of task and the layer depth required to perform that type of task.
- Table 2 below illustrates the format of semantic control information (SCI).
- SCI can be used to exchange information about tasks between servers and AI devices. For example, when the connection configuration between the server and the AI device is completed, the AI device or server can report information on the tasks it is currently performing using SCI. The server or AI device may generate feature information by performing semantic layered encoding based on the corresponding SCI and transmit the generated feature information. The AI device or server can then perform the task based on the received feature information and task information.
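Since the Table 2 format itself is not reproduced here, a minimal SCI message can only be sketched under assumptions: the fields below (task type and layer depth) are taken from the surrounding text, and the class and function names are hypothetical.

```python
# Hypothetical SCI (semantic control information) message sketch.
from dataclasses import dataclass

@dataclass
class SCI:
    task_type: str    # e.g., "classification" or "reconstruction"
    layer_depth: int  # number of subtask layers needed for this task

def report_sci(current_task, depth):
    """An AI device reports its current task after connection setup."""
    return SCI(task_type=current_task, layer_depth=depth)
```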
- Figure 17 shows an example of a semantic encoding and decoding procedure according to an embodiment of the present disclosure.
- the terminal may receive a synchronization signal from the base station.
- the terminal may perform a synchronization procedure with the base station based on the synchronization signal.
- the terminal may receive channel coding-related information through higher layer signaling.
- control information including data coding information based on channel coding-related information may be received from the base station.
- the higher layer may be a semantic layer.
- the semantic layer can be used to extract features suitable for task purposes from existing data. Additionally, control information for semantic communication can be transmitted between the terminal and the base station through the semantic layer.
- Channel coding-related information may include background knowledge including relationship information between subtask layer states of each subtask layer. Background knowledge may be information shared between the terminal and the base station.
- the subtask layers may form a successive structure divided into layers to perform the task. Additionally, each of the subtask layers may include concatenated DNNs and the states coupled at each layer.
- in step S1709, data may be decoded based on the channel coding-related information and the data coding information.
- Data coding information may be vector information of the subtask layers in a concatenation structure.
- the terminal can decode data by generating an embedding input vector, concatenating the relevant information vector obtained from the previous subtask with the feature vector received for the current subtask.
Abstract
The present disclosure can provide a method for operating a first device in a wireless communication system. The method for operating a first device may comprise the steps of: receiving a synchronization signal from a second device; performing a synchronization procedure with the second device on the basis of the synchronization signal; receiving channel coding-related information through higher layer signaling; receiving, from the second device, control information including data coding information based on the channel coding-related information; and decoding data on the basis of the channel coding-related information and the data coding information. The channel coding-related information is based on shared information, and the shared information may include background knowledge including relationship information between the subtask layer states of each of the concatenated subtask layers.
Priority Applications (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| PCT/KR2022/013085 WO2024048816A1 (fr) | 2022-09-01 | 2022-09-01 | Dispositif et procédé pour émettre et recevoir un signal dans un système de communication sans fil |
Publications (1)
| Publication Number | Publication Date |
|---|---|
| WO2024048816A1 true WO2024048816A1 (fr) | 2024-03-07 |
Family
ID=90098040
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| PCT/KR2022/013085 Ceased WO2024048816A1 (fr) | 2022-09-01 | 2022-09-01 | Dispositif et procédé pour émettre et recevoir un signal dans un système de communication sans fil |
Country Status (1)
| Country | Link |
|---|---|
| WO (1) | WO2024048816A1 (fr) |
Cited By (2)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN118629393A (zh) * | 2024-08-12 | 2024-09-10 | 香港中文大学(深圳) | 面向语音合成的生成式语义通信的方法、系统和计算机设备 |
| CN118631393A (zh) * | 2024-06-04 | 2024-09-10 | 浙江大学 | 一种多模态的轻量化语义通信方法 |
Citations (4)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| WO2019161249A1 (fr) * | 2018-02-15 | 2019-08-22 | DMAI, Inc. | Système et procédé de construction de scène visuelle d'après une communication d'utilisateur |
| KR102082411B1 (ko) * | 2018-10-11 | 2020-02-27 | 연세대학교 산학협력단 | 스트림 데이터와 링크드 데이터 간의 조인 뷰 스케줄링 방법 및 장치 |
| US20210064828A1 (en) * | 2019-05-02 | 2021-03-04 | Google Llc | Adapting automated assistants for use with multiple languages |
| WO2021185118A1 (fr) * | 2020-03-16 | 2021-09-23 | 华为技术有限公司 | Procédé de communication et dispositif terminal |
Non-Patent Citations (1)
| Title |
|---|
| YANG WANTING; LIEW ZI QIN; LIM WEI YANG BRYAN; XIONG ZEHUI; NIYATO DUSIT; CHI XUEFEN; CAO XIANBIN; LETAIEF KHALED B.: "Semantic Communication Meets Edge Intelligence", IEEE WIRELESS COMMUNICATIONS, COORDINATED SCIENCE LABORATORY; DEPT. ELECTRICAL AND COMPUTER ENGINEERING; UNIVERSITY OF ILLINOIS AT URBANA-CHAMPAIGN, US, vol. 29, no. 5, 1 October 2022 (2022-10-01), US , pages 28 - 35, XP011929941, ISSN: 1536-1284, DOI: 10.1109/MWC.004.2200050 * |
Similar Documents
| Publication | Publication Date | Title |
|---|---|---|
| WO2022250221A1 (fr) | Procédé et dispositif d'émission d'un signal dans un système de communication sans fil | |
| WO2024167035A1 (fr) | Appareil et procédé pour effectuer une mise à jour de connaissances d'arrière-plan sur la base d'une représentation sémantique dans une communication sémantique | |
| WO2024048816A1 (fr) | Dispositif et procédé pour émettre et recevoir un signal dans un système de communication sans fil | |
| WO2024038926A1 (fr) | Dispositif et procédé pour émettre et recevoir un signal dans un système de communication sans fil | |
| WO2022092905A1 (fr) | Appareil et procédé de transmission de signal dans un système de communication sans fil | |
| WO2022119424A1 (fr) | Dispositif et procédé de transmission de signal dans un système de communication sans fil | |
| WO2022050565A1 (fr) | Appareil et procédé de transfert intercellulaire dans un système de communication sans fil | |
| WO2023022251A1 (fr) | Procédé et appareil permettant de transmettre un signal dans un système de communication sans fil | |
| WO2024034695A1 (fr) | Appareil et procédé de génération de signaux d'émission et de réception dans un système de communication sans fil | |
| WO2024048803A1 (fr) | Dispositif et procédé de configuration de paramètres associés à une opération de réception discontinue dans un système de communication sans fil | |
| WO2023219192A1 (fr) | Appareil et procédé permettant d'estimer un canal associé à une surface réfléchissante intelligente dans un système de communication sans fil | |
| WO2023219193A1 (fr) | Dispositif et procédé d'estimation de canal dans un système de communication sans fil | |
| WO2022124729A1 (fr) | Dispositif et procédé d'émission d'un signal dans un système de communication sans fil | |
| WO2023042941A1 (fr) | Procédé et appareil de transmission de signal dans un système de communication sans fil | |
| WO2024117296A1 (fr) | Procédé et appareil d'émission et de réception de signaux dans un système de communication sans fil faisant intervenir un émetteur-récepteur ayant des paramètres réglables | |
| WO2024150847A1 (fr) | Procédé et dispositif d'émission/réception de signal dans un système de communication sans fil | |
| WO2024150846A1 (fr) | Procédé et appareil d'émission et de réception de signal dans un système de communication sans fil | |
| WO2025089456A1 (fr) | Dispositif et procédé pour effectuer un balayage de faisceau à l'aide d'une surface intelligente reconfigurable dans système de communication sans fil | |
| WO2023113282A1 (fr) | Appareil et procédé pour effectuer un apprentissage en ligne d'un modèle d'émetteur-récepteur dans un système de communication sans fil | |
| WO2024122667A1 (fr) | Appareil et procédé pour réaliser un apprentissage pour un récepteur basé sur un modèle d'ensemble dans un système de communication sans fil | |
| WO2022231084A1 (fr) | Procédé et dispositif d'émission d'un signal dans un système de communication sans fil | |
| WO2024111685A1 (fr) | Dispositif et procédé permettant d'effectuer une communication radar à l'aide d'un saut de fréquence dans un système de communication sans fil | |
| WO2025028685A1 (fr) | Appareil et procédé de mesure de canal utilisant une surface intelligente reconfigurable dans un système de communication sans fil | |
| WO2024143594A1 (fr) | Appareil et procédé de commande de signaux à l'aide d'une métasurface dans un système de communication sans fil | |
| WO2024111693A1 (fr) | Appareil et procédé pour effectuer une communication radar à l'aide de réseau de réception virtuel dans système de communication sans fil |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| 121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 22957495 Country of ref document: EP Kind code of ref document: A1 |
|
| NENP | Non-entry into the national phase |
Ref country code: DE |
|
| 122 | Ep: pct application non-entry in european phase |
Ref document number: 22957495 Country of ref document: EP Kind code of ref document: A1 |