
WO2025206439A1 - Method and apparatus for transmitting and receiving a signal in a wireless communication system - Google Patents

Method and apparatus for transmitting and receiving a signal in a wireless communication system

Info

Publication number
WO2025206439A1
WO2025206439A1 (PCT/KR2024/004102)
Authority
WO
WIPO (PCT)
Prior art keywords
signal
sub
frequency
bands
present disclosure
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
PCT/KR2024/004102
Other languages
English (en)
Korean (ko)
Inventor
장지환
홍성철
한가원
강성현
정재훈
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
LG Electronics Inc
Korea Advanced Institute of Science and Technology KAIST
Original Assignee
LG Electronics Inc
Korea Advanced Institute of Science and Technology KAIST
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by LG Electronics Inc and Korea Advanced Institute of Science and Technology (KAIST)
Priority to PCT/KR2024/004102
Publication of WO2025206439A1

Classifications

    • H: ELECTRICITY
    • H03: ELECTRONIC CIRCUITRY
    • H03M: CODING; DECODING; CODE CONVERSION IN GENERAL
    • H03M1/00: Analogue/digital conversion; Digital/analogue conversion
    • H03M1/06: Continuously compensating for, or preventing, undesired influence of physical parameters
    • H03M1/08: Continuously compensating for, or preventing, undesired influence of physical parameters of noise
    • H03M1/12: Analogue/digital converters

Definitions

  • Wireless access systems are widely deployed to provide various types of communication services, such as voice and data.
  • wireless access systems are multiple access systems that support communications with multiple users by sharing available system resources (e.g., bandwidth, transmission power).
  • multiple access systems include code division multiple access (CDMA), frequency division multiple access (FDMA), time division multiple access (TDMA), orthogonal frequency division multiple access (OFDMA), and single-carrier frequency division multiple access (SC-FDMA).
  • enhanced mobile broadband (eMBB) communication technologies are being proposed, improving upon existing radio access technology (RAT).
  • massive machine type communications (mMTC) which connects numerous devices and objects to provide diverse services anytime and anywhere, as well as communication systems that consider reliability and latency-sensitive services/user equipment (UE), are being proposed.
  • OFDM: Orthogonal Frequency Division Multiplexing
  • the present disclosure provides a method and device for implementing a radar with high range resolution while using an ADC with a low sampling rate.
  • the present disclosure provides a method and device for unfolding a sub-sampled signal, and a method and device for removing noise.
  • the present disclosure provides a method and device for transmitting and receiving a signal by dividing a sub-band and applying a phase code from the signal design stage to omit a noise removal step.
  • a method of operating a first node in a communication system includes the steps of receiving at least one signal from a plurality of nodes constituting a plurality of paths in a network, performing sub-sampling on the signal, and restoring the signal based on the sub-sampled signal, wherein a sampling frequency of the sub-sampling may be smaller than a frequency of the signal.
  • the method of operating the first node may further include a step of performing unfolding on the sub-sampled signal to obtain a signal composed of a plurality of sub-bands.
  • the step of removing the symbol mismatch noise may include the steps of generating a reconstructed signal based on the target information and the signal composed of the plurality of sub-bands, and applying the reconstructed signal to the signal on which the unfolding has been performed, thereby recursively removing the symbol mismatch noise of the unfolded signal.
  • the signal may be generated by dividing a frequency domain of an original signal of the signal into a first number of frequency sub-bands, applying a first phase code to the first number of frequency sub-bands, and dividing a time domain of the original signal of the signal into a second number of time sub-bands, and applying a second phase code to the second number of time sub-bands.
  • the product of the first number and the second number may be a value obtained by dividing the frequency of the signal by the sub-sampling frequency.
  • a method of operating a second node in a communication system includes a step of generating a signal and a step of transmitting the signal to a first node that is at least one of a plurality of nodes constituting a plurality of paths in a network, wherein the signal is sub-sampled by the first node, and the signal is restored based on the sub-sampled signal, wherein a sampling frequency of the sub-sampling may be smaller than a frequency of the transmission signal.
  • the signal composed of the plurality of sub-bands includes symbol mismatch noise for the signal, and the signal can be restored by removing the symbol mismatch noise from the sub-sampled signal.
  • the symbol mismatch noise can be removed based on at least one of target information for the signal on which the unfolding is performed and a signal composed of the plurality of sub-bands.
  • the symbol mismatch noise can be removed by recursively applying a signal reconstructed based on the target information and the signal composed of the plurality of sub-bands to the unfolded signal.
  • the signal may be generated by dividing a frequency domain of an original signal of the signal into a first number of frequency sub-bands, applying a first phase code to the first number of frequency sub-bands, and dividing a time domain of the original signal of the signal into a second number of time sub-bands, and applying a second phase code to the second number of time sub-bands.
  • the product of the first number and the second number may be a value obtained by dividing the frequency of the signal by the sub-sampling frequency.
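The arithmetic relation stated above, and the aliasing effect that makes "unfolding" necessary, can be illustrated with a small sketch. All numbers below are assumed for illustration; this is not the patented reconstruction method itself:

```python
import numpy as np

# Illustrative numbers (assumed): the disclosure states that
# (first number of frequency sub-bands) x (second number of time sub-bands)
# equals (signal frequency) / (sub-sampling frequency).
signal_freq = 1.6e9        # assumed signal frequency
sub_sampling_freq = 1e8    # assumed ADC rate, below the signal frequency

undersampling_factor = int(signal_freq // sub_sampling_freq)  # 16
n_freq_subbands = 4        # "first number" (assumed)
n_time_subbands = 4        # "second number" (assumed)
assert n_freq_subbands * n_time_subbands == undersampling_factor

# Why unfolding is needed: sub-sampling aliases (folds) high-frequency content
# into the sampled band. A tone at f0, sampled at rate fs, is sample-for-sample
# identical to a tone at (f0 mod fs).
fs = 100.0                 # assumed sub-sampling rate
f0 = 340.0                 # assumed tone frequency, well above fs/2
n = np.arange(100)
folded = np.cos(2 * np.pi * f0 * n / fs)
baseband = np.cos(2 * np.pi * (f0 % fs) * n / fs)  # 40 Hz image
assert np.allclose(folded, baseband)
```

The sketch only shows the folding ambiguity; the disclosure's contribution is resolving it via the sub-band phase codes applied at the signal design stage.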
  • a first node operating in a communication system includes: a transceiver; at least one processor; and at least one memory operably connectable to the at least one processor and storing instructions that, when executed by the at least one processor, perform operations, wherein the operations may include all steps of a method of operating the first node according to various embodiments of the present disclosure.
  • a second node operating in a communication system includes: a transceiver; at least one processor; and at least one memory operably connectable to the at least one processor and storing instructions that, when executed by the at least one processor, perform operations, wherein the operations may include all steps of a method of operating the second node according to various embodiments of the present disclosure.
  • a control device for controlling a first node in a communication system includes at least one processor; and at least one memory operably connected to the at least one processor, wherein the at least one memory stores instructions for performing operations based on being executed by the at least one processor, and the operations may include all steps of an operating method of the first node according to various embodiments of the present disclosure.
  • a control device for controlling a second node in a communication system includes at least one processor; and at least one memory operably connected to the at least one processor, wherein the at least one memory stores instructions for performing operations based on being executed by the at least one processor, wherein the operations may include all steps of an operating method of the second node according to various embodiments of the present disclosure.
  • one or more non-transitory computer-readable media storing one or more instructions, wherein the one or more instructions, when executed by one or more processors, perform operations, the operations including all steps of a method of operating a first node according to various embodiments of the present disclosure.
  • one or more non-transitory computer-readable media storing one or more instructions, wherein the one or more instructions, when executed by one or more processors, perform operations, the operations including all steps of a method of operating a second node according to various embodiments of the present disclosure.
  • FIG. 2 is a diagram illustrating the system structure of a New Generation Radio Access Network (NG-RAN).
  • a slash (/) or a comma may mean “and/or.”
  • A/B may mean “A and/or B.”
  • A/B may mean “only A,” “only B,” or “both A and B.”
  • A, B, C may mean “A, B, or C.”
  • “at least one of A, B and C” can mean “only A,” “only B,” “only C,” or “any combination of A, B and C.” Additionally, “at least one of A, B or C” or “at least one of A, B and/or C” can mean “at least one of A, B and C.”
  • parentheses used in various embodiments of the present disclosure may mean "for example." Specifically, when indicated as "control information (PDCCH)," "PDCCH" may be proposed as an example of "control information." In other words, "control information" in various embodiments of the present disclosure is not limited to "PDCCH," and "PDCCH" may be proposed as an example of "control information." Furthermore, even when indicated as "control information (i.e., PDCCH)," "PDCCH" may be proposed as an example of "control information."
  • 3GPP: 3rd Generation Partnership Project
  • LTE: Long Term Evolution
  • E-UMTS: Evolved UMTS
  • LTE-A: LTE-Advanced / LTE-A Pro
  • 3GPP NR: New Radio or New Radio Access Technology
  • 3GPP 6G may be an evolved version of 3GPP NR.
  • LTE refers to technology after 3GPP TS 36.xxx Release 8.
  • LTE technology after 3GPP TS 36.xxx Release 10 is referred to as LTE-A
  • LTE technology after 3GPP TS 36.xxx Release 13 is referred to as LTE-A pro
  • 3GPP NR refers to technology after 3GPP TS 38.xxx.
  • 3GPP 6G may refer to technology after TS Release 17 and/or Release 18.
  • “xxx” refers to a standard document detail number.
  • LTE/NR/6G may be collectively referred to as a 3GPP system.
  • RRC: Radio Resource Control
  • When a terminal is powered on or enters a new cell, it performs an initial cell search operation, such as synchronizing with the base station (S11). To this end, the terminal receives a Primary Synchronization Signal (PSS) and a Secondary Synchronization Signal (SSS) from the base station to synchronize with the base station and obtain information such as a cell ID. Afterwards, the terminal can receive a Physical Broadcast Channel (PBCH) from the base station to obtain broadcast information within the cell. Meanwhile, the terminal can receive a Downlink Reference Signal (DL RS) during the initial cell search phase to check the downlink channel status.
  • the terminal may then perform a random access (RACH) procedure with the base station (S13 to S16).
  • the terminal may transmit a specific sequence as a preamble via a physical random access channel (PRACH) (S13 and S15) and receive a response message (RAR (Random Access Response) message) to the preamble via a PDCCH and a corresponding PDSCH.
  • a contention resolution procedure may additionally be performed (S16).
  • the terminal that has performed the procedure described above can then perform PDCCH/PDSCH reception (S17) and physical uplink shared channel (PUSCH)/physical uplink control channel (PUCCH) transmission (S18) as general uplink/downlink signal transmission procedures.
  • the terminal can receive downlink control information (DCI) through the PDCCH.
  • the DCI includes control information such as resource allocation information for the terminal, and different formats can be applied depending on the purpose of use.
  • control information that the terminal transmits to the base station via the uplink or that the terminal receives from the base station may include downlink/uplink ACK/NACK signals, CQI (Channel Quality Indicator), PMI (Precoding Matrix Index), RI (Rank Indicator), etc.
  • the terminal may transmit the above-described control information such as CQI/PMI/RI via PUSCH and/or PUCCH.
  • PDSCH carries downlink data (e.g., DL-shared channel transport block, DL-SCH TB) and applies modulation methods such as Quadrature Phase Shift Keying (QPSK), 16 Quadrature Amplitude Modulation (QAM), 64 QAM, and 256 QAM.
  • Codewords are generated by encoding the TBs.
  • PDSCH can carry multiple codewords. Scrambling and modulation mapping are performed for each codeword, and modulation symbols generated from each codeword are mapped to one or more layers (Layer mapping). Each layer is mapped to resources along with a Demodulation Reference Signal (DMRS), generated as an OFDM symbol signal, and transmitted through the corresponding antenna port.
  • the PDCCH carries downlink control information (DCI) and employs modulation methods such as QPSK.
  • a PDCCH consists of 1, 2, 4, 8, or 16 Control Channel Elements (CCEs), depending on the Aggregation Level (AL).
  • Each CCE is comprised of six Resource Element Groups (REGs). Each REG is defined by one OFDM symbol and one (P)RB.
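The sizing just described is easy to check numerically. The sketch below assumes one (P)RB spans 12 subcarriers, so one REG (1 OFDM symbol x 1 RB) holds 12 resource elements; that 12-subcarrier figure is an assumption added here, not stated in the text above:

```python
# Sketch: resource elements spanned by one PDCCH candidate, from the
# CCE/REG structure described above.
RES_PER_REG = 12   # 1 OFDM symbol x 1 (P)RB = 12 resource elements (assumed)
REGS_PER_CCE = 6   # each CCE comprises six REGs

def pdcch_resource_elements(aggregation_level: int) -> int:
    """REs occupied by a PDCCH candidate at the given aggregation level (AL)."""
    assert aggregation_level in (1, 2, 4, 8, 16)  # ALs listed above
    return aggregation_level * REGS_PER_CCE * RES_PER_REG

# AL 1 -> 72 REs, AL 16 -> 1152 REs
sizes = {al: pdcch_resource_elements(al) for al in (1, 2, 4, 8, 16)}
```

Higher aggregation levels thus trade control-channel capacity for coding redundancy and coverage.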
  • PUSCH transmissions can be dynamically scheduled by UL grants in DCI, or semi-statically scheduled (configured grant) based on higher layer (e.g., RRC) signaling (and/or Layer 1 (L1) signaling (e.g., PDCCH)).
  • PUSCH transmissions can be performed in a codebook-based or non-codebook-based manner.
  • NR (new RAT): new radio access technology
  • the NG-RAN may include a gNB and/or an eNB that provides user plane and control plane protocol termination to the UE.
  • FIG. 1 illustrates a case where only a gNB is included.
  • the gNB and eNB are connected to each other via an Xn interface.
  • the gNB and eNB are connected to the 5th generation core network (5G Core Network: 5GC) via the NG interface.
  • the gNB is connected to the access and mobility management function (AMF) via the NG-C interface
  • the gNB is connected to the user plane function (UPF) via the NG-U interface.
  • Figure 3 is a diagram illustrating the functional division between NG-RAN and 5GC.
  • the gNB can provide functions such as inter-cell radio resource management (Inter Cell RRM), radio bearer management (RB control), connection mobility control (Connection Mobility Control), radio admission control (Radio Admission Control), measurement configuration and provision, and dynamic resource allocation.
  • the AMF can provide functions such as NAS security and idle state mobility processing.
  • the UPF can provide functions such as mobility anchoring and PDU processing.
  • SMF: Session Management Function
  • Figure 4 is a diagram illustrating an example of a 5G usage scenario.
  • the three key requirement areas for 5G include (1) enhanced mobile broadband (eMBB), (2) massive machine type communication (mMTC), and (3) ultra-reliable and low latency communications (URLLC).
  • KPI: key performance indicator
  • eMBB focuses on improving data speeds, latency, user density, and overall capacity and coverage of mobile broadband connections. It targets throughputs of around 10 Gbps. eMBB significantly exceeds basic mobile internet access, enabling rich interactive experiences, media and entertainment applications in the cloud, and augmented reality. Data is a key driver of 5G, and for the first time, dedicated voice services may not be available in the 5G era. In 5G, voice is expected to be handled as an application, simply using the data connection provided by the communication system. The increased traffic volume is primarily due to the increasing content size and the growing number of applications that require high data rates. Streaming services (audio and video), interactive video, and mobile internet connectivity will become more prevalent as more devices connect to the internet.
  • Cloud storage and applications are rapidly growing on mobile communication platforms, and this can be applied to both work and entertainment.
  • Cloud storage is a particular use case driving the growth of uplink data rates.
  • 5G is also used for remote work in the cloud, requiring significantly lower end-to-end latency to maintain a superior user experience when tactile interfaces are used.
  • cloud gaming and video streaming are other key factors driving the demand for mobile broadband.
  • Entertainment is essential on smartphones and tablets, regardless of location, including in highly mobile environments like trains, cars, and airplanes.
  • Another use case is augmented reality and information retrieval for entertainment, where augmented reality requires extremely low latency and instantaneous data volumes.
  • mMTC is designed to enable communication between a large number of low-cost, battery-powered devices, supporting applications such as smart metering, logistics, field, and body sensors.
  • mMTC targets a battery life of approximately 10 years and/or a population of approximately 1 million devices per square kilometer.
  • mMTC enables seamless connectivity of embedded sensors across all sectors and is one of the most anticipated 5G use cases.
  • the number of IoT devices is projected to reach 20.4 billion by 2020.
  • Industrial IoT is one area where 5G will play a key role, enabling smart cities, asset tracking, smart utilities, agriculture, and security infrastructure.
  • URLLC is ideal for vehicle communications, industrial control, factory automation, remote surgery, smart grids, and public safety applications by enabling devices and machines to communicate with high reliability, very low latency, and high availability.
  • URLLC targets latency on the order of 1 ms.
  • URLLC encompasses new services that will transform industries through ultra-reliable, low-latency links, such as remote control of critical infrastructure and autonomous vehicles. This level of reliability and latency is essential for smart grid control, industrial automation, robotics, and drone control and coordination.
  • 5G can complement fiber-to-the-home (FTTH) and cable-based broadband (or DOCSIS) by delivering streams rated at hundreds of megabits per second to gigabits per second. These high speeds may be required to deliver TV at resolutions beyond 4K (6K, 8K, and beyond), as well as virtual reality (VR) and augmented reality (AR).
  • VR and AR applications include near-immersive sports events. Certain applications may require specialized network configurations. For example, for VR gaming, a gaming company may need to integrate its core servers with the network operator's edge network servers to minimize latency.
  • Automotive is expected to be a significant new driver for 5G, with numerous use cases for in-vehicle mobile communications. For example, passenger entertainment demands both high capacity and high mobile broadband, as future users will consistently expect high-quality connectivity regardless of their location and speed.
  • Another automotive application is augmented reality dashboards.
  • An AR dashboard allows drivers to identify objects in the dark on top of what they see through the windshield. The AR dashboard overlays information to inform the driver about the distance and movement of objects.
  • wireless modules will enable vehicle-to-vehicle communication, information exchange between vehicles and supporting infrastructure, and information exchange between vehicles and other connected devices (e.g., devices accompanying pedestrians).
  • Safety systems can guide drivers to safer driving behaviors, reducing the risk of accidents.
  • Smart grids interconnect these sensors using digital information and communication technologies to collect and act on information. This information can include the behavior of suppliers and consumers, enabling smart grids to improve efficiency, reliability, economic efficiency, sustainable production, and the automated distribution of fuels like electricity. Smart grids can also be viewed as another low-latency sensor network.
  • Telecommunications systems can support telemedicine, which provides clinical care in remote locations. This can help reduce distance barriers and improve access to health services that are otherwise unavailable in remote rural areas. It can also be used to save lives in critical care and emergency situations.
  • Mobile-based wireless sensor networks can provide remote monitoring and sensors for parameters such as heart rate and blood pressure.
  • Logistics and freight tracking are important use cases for mobile communications, enabling the tracking of inventory and packages anywhere using location-based information systems. Logistics and freight tracking typically require low data rates but may require wide-range and reliable location information.
  • 6G: next-generation communications
  • the 6G (wireless communication) system aims to achieve (i) very high data rates per device, (ii) a very large number of connected devices, (iii) global connectivity, (iv) very low latency, (v) low energy consumption for battery-free IoT devices, (vi) ultra-reliable connectivity, and (vii) connected intelligence with machine learning capabilities.
  • the vision of the 6G system can be divided into four aspects: intelligent connectivity, deep connectivity, holographic connectivity, and ubiquitous connectivity, and the 6G system can satisfy the requirements as shown in Table 1 below.
  • Table 1 is a table showing an example of the requirements of a 6G system.
  • 6G systems can have key factors such as enhanced mobile broadband (eMBB), ultra-reliable low latency communications (URLLC), massive machine-type communication (mMTC), AI integrated communication, tactile internet, high throughput, high network capacity, high energy efficiency, low backhaul and access network congestion, and enhanced data security.
  • Figure 5 is a diagram illustrating an example of a communication structure that can be provided in a 6G system.
  • 6G systems are expected to have 50 times the simultaneous wireless connectivity of 5G systems.
  • URLLC, a key feature of 5G, will become even more crucial in 6G communications by providing end-to-end latency of less than 1 ms.
  • 6G systems will have significantly higher volumetric spectral efficiency, compared to the commonly used area spectral efficiency.
  • 6G systems can offer extremely long battery life and advanced battery technologies for energy harvesting, eliminating the need for separate charging for mobile devices in 6G systems.
  • New network characteristics in 6G may include:
  • 6G is expected to integrate with satellites to provide a global mobile network.
  • the integration of terrestrial, satellite, and airborne networks into a single wireless communications system is crucial for 6G.
  • Connected intelligence: Unlike previous generations of wireless communication systems, 6G is revolutionary, upgrading the wireless evolution from "connected objects" to "connected intelligence." AI can be applied at every stage of the communication process (or at every signal processing step, as described below).
  • 6G wireless networks will transfer power to charge the batteries of devices such as smartphones and sensors. Therefore, wireless information and energy transfer (WIET) will be integrated.
  • Small cell networks: The concept of small cell networks was introduced to improve received signal quality in cellular systems by increasing throughput, energy efficiency, and spectral efficiency. Consequently, small cell networks are essential for 5G and beyond-5G (5GB) communication systems. Accordingly, 6G communication systems also adopt the characteristics of small cell networks.
  • Ultra-dense heterogeneous networks will be another key feature of 6G communication systems.
  • Multi-tier networks comprised of heterogeneous networks improve overall QoS and reduce costs.
  • High-capacity backhaul: Backhaul connections must provide high capacity to support high-volume traffic.
  • High-speed fiber optics and free-space optics (FSO) systems may be potential solutions to this problem.
  • High-precision localization (or location-based services) through communications is a key feature of 6G wireless communication systems. Therefore, radar systems will be integrated with 6G networks.
  • Softwarization and virtualization are two critical features that form the foundation of the design process for 5GB networks, ensuring flexibility, reconfigurability, and programmability. Furthermore, billions of devices can share a common physical infrastructure.
  • AI: The most crucial and newly introduced technology for 6G systems is AI. 4G systems did not involve AI. 5G systems will support partial or very limited AI. However, 6G systems will fully support AI for automation. Advances in machine learning will create more intelligent networks for real-time communications in 6G. Incorporating AI into communications can streamline and improve real-time data transmission. AI can use numerous analyses to determine how complex target tasks should be performed. In other words, AI can increase efficiency and reduce processing delays.
  • AI can also play a crucial role in machine-to-machine (M2M), machine-to-human, and human-to-machine communications. Furthermore, AI can facilitate rapid communication in brain-computer interfaces (BCIs). AI-based communication systems can be supported by metamaterials, intelligent structures, intelligent networks, intelligent devices, intelligent cognitive radios, self-sustaining wireless networks, and machine learning.
  • AI-based physical layer transmission refers to applying AI-based signal processing and communication mechanisms, rather than traditional communication frameworks, to the fundamental signal processing and communication functions. For example, this may include deep learning-based channel coding and decoding, deep learning-based signal estimation and detection, deep learning-based MIMO mechanisms, and AI-based resource scheduling and allocation.
  • Machine learning can be used for channel estimation and channel tracking, as well as for power allocation and interference cancellation in the physical layer of the downlink (DL). Furthermore, machine learning can be used for antenna selection, power control, and symbol detection in MIMO systems.
  • Deep learning-based AI algorithms require a large amount of training data to optimize training parameters.
  • However, a large amount of training data is used offline. This means that static training on data from specific channel environments can conflict with the dynamic characteristics and diversity of the wireless channel.
  • Machine learning refers to a series of operations that train machines to perform tasks that humans can or cannot perform. Machine learning requires data and a learning model. Data learning methods in machine learning can be broadly categorized into three types: supervised learning, unsupervised learning, and reinforcement learning.
  • Neural network training aims to minimize output errors. It involves repeatedly inputting training data into a neural network, calculating the neural network output and target error for the training data, and backpropagating the neural network error from the output layer to the input layer to update the weights of each node in the neural network to reduce the error.
  • Supervised learning uses labeled training data, while unsupervised learning may not have labeled training data.
  • the training data may be data in which each training data category is labeled.
  • Labeled training data is input to a neural network, and the error can be calculated by comparing the output (categories) of the neural network with the training data labels.
  • the calculated error is backpropagated through the neural network in the backward direction (i.e., from the output layer to the input layer), and the connection weights of each node in each layer of the neural network can be updated through backpropagation.
  • the amount of change in the connection weights of each updated node can be determined by the learning rate.
  • the neural network's calculation of the input data and the backpropagation of the error can constitute a learning cycle (epoch).
  • the learning rate can be applied differently depending on the number of iterations of the neural network's learning cycle. For example, in the early stages of training a neural network, a high learning rate can be used to quickly allow the network to achieve a certain level of performance, thereby increasing efficiency. In the later stages of training, a low learning rate can be used to increase accuracy.
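The training cycle described above (forward calculation, error against the labels, backpropagation of the error to update connection weights, with a high learning rate early and a low one late) can be sketched for the simplest linear model. The data, model size, and decay schedule below are illustrative assumptions:

```python
import numpy as np

# Minimal sketch of the supervised training loop described above.
rng = np.random.default_rng(0)
X = rng.normal(size=(64, 3))            # training inputs
w_true = np.array([1.5, -2.0, 0.5])
y = X @ w_true                          # labels (supervised learning)

w = np.zeros(3)
for epoch in range(300):
    lr = 0.1 if epoch < 200 else 0.01   # high early, low late (as described)
    y_hat = X @ w                       # forward calculation of the input data
    err = y_hat - y                     # output vs. label error
    grad = X.T @ err / len(X)           # error propagated back to the weights
    w -= lr * grad                      # weight update; one learning cycle (epoch)
```

For a deep network the gradient would be propagated layer by layer from the output layer to the input layer, but the update rule per weight is the same.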
  • Learning methods may vary depending on the characteristics of the data. For example, if the goal is to accurately predict data transmitted by a transmitter in a communication system, supervised learning is preferable to unsupervised learning or reinforcement learning.
  • the learning model corresponds to the human brain, and the most basic linear model can be thought of, but the machine learning paradigm that uses highly complex neural network structures, such as artificial neural networks, as learning models is called deep learning.
  • the neural network cores used in learning methods are mainly divided into deep neural networks (DNN), convolutional neural networks (CNN), and recurrent neural networks (RNN).
  • DNN deep neural networks
  • CNN convolutional deep neural networks
  • RNN recurrent boltzmann machines
  • An artificial neural network is an example of a network of multiple perceptrons.
  • Figure 6 is a schematic diagram illustrating an example of a perceptron structure.
  • a large-scale artificial neural network structure can extend the simplified perceptron structure illustrated in Fig. 6 to apply the input vector to perceptrons of different dimensions. For convenience of explanation, input values or output values are called nodes.
  • the perceptron structure illustrated in Fig. 6 can be explained as consisting of a total of three layers based on input and output values.
  • An artificial neural network in which there are H perceptrons of (d+1) dimensions between the 1st layer and the 2nd layer, and K perceptrons of (H+1) dimensions between the 2nd layer and the 3rd layer can be expressed as in Fig. 7.
  • Figure 7 is a schematic diagram illustrating an example of a multilayer perceptron structure.
  • the layer where the input vector is located is called the input layer
  • the layer where the final output value is located is called the output layer
  • all layers located between the input layer and the output layer are called hidden layers.
  • the example in Fig. 7 shows three layers, but when counting the number of layers in an actual artificial neural network, the input layer is excluded, so it can be viewed as a total of two layers.
  • An artificial neural network is composed of perceptrons, which are basic blocks, connected in two dimensions.
  • the aforementioned input, hidden, and output layers can be applied jointly not only to multilayer perceptrons but also to various artificial neural network structures, such as CNNs and RNNs, which will be described later.
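As a concrete sketch of the Fig. 7 structure, the following NumPy snippet builds H perceptrons of (d+1) dimensions (weight vector plus bias) between layers 1 and 2, and K perceptrons of (H+1) dimensions between layers 2 and 3; the dimensions and activation function are illustrative choices, not taken from the disclosure:

```python
import numpy as np

# Sketch of the Fig. 7 multilayer perceptron: H perceptrons of (d+1)
# dimensions between layers 1-2, K perceptrons of (H+1) dimensions between
# layers 2-3. The "+1" in each case is the bias term.
rng = np.random.default_rng(0)
d, H, K = 4, 8, 3
W1, b1 = rng.standard_normal((H, d)), rng.standard_normal(H)
W2, b2 = rng.standard_normal((K, H)), rng.standard_normal(K)

def forward(x):
    hidden = np.tanh(W1 @ x + b1)   # hidden-layer node values
    return W2 @ hidden + b2         # output-layer node values (K of them)

y = forward(rng.standard_normal(d))
```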
  • the machine learning paradigm that uses sufficiently deep artificial neural networks as learning models is called deep learning.
  • the artificial neural network used for deep learning is called a deep neural network (DNN).
  • Figure 8 is a schematic diagram illustrating an example of a deep neural network.
  • the deep neural network illustrated in Figure 8 is a multilayer perceptron consisting of eight hidden layers plus an output layer.
  • the multilayer perceptron structure is referred to as a fully connected neural network.
  • a fully connected neural network there is no connection between nodes located in the same layer, and there is a connection only between nodes located in adjacent layers.
  • DNN has a fully connected neural network structure and is composed of a combination of multiple hidden layers and activation functions, and can be usefully applied to identify correlation characteristics between inputs and outputs.
  • the correlation characteristic can mean the joint probability of inputs and outputs.
  • the recurrent neural network processes the input data sequence in a predetermined temporal order.
  • Recurrent neural networks are designed to be useful for processing sequence data (e.g., natural language processing).
  • Various deep learning techniques, such as DNN, CNN, RNN, Restricted Boltzmann Machine (RBM), Deep Belief Network (DBN), and Deep Q-Network, can be applied to fields such as computer vision, speech recognition, natural language processing, and speech/signal processing.
  • AI-based physical layer transmission refers to the application of AI-based signal processing and communication mechanisms, rather than traditional communication frameworks, in the fundamental signal processing and communication mechanisms. For example, this may include deep learning-based channel coding and decoding, deep learning-based signal estimation and detection, deep learning-based MIMO mechanisms, and AI-based resource scheduling and allocation.
  • THz waves, also known as sub-millimeter waves, typically refer to the frequency range between 0.1 THz and 10 THz, with corresponding wavelengths ranging from 0.03 mm to 3 mm.
  • the 100 GHz to 300 GHz band (sub-THz band) is considered a key part of the THz band for cellular communications. Adding the sub-THz band to the mmWave band will increase the capacity of 6G cellular communications.
  • 300 GHz to 3 THz lies in the far infrared (IR) frequency band. While part of the optical band, the 300 GHz to 3 THz band lies at the boundary of the optical band, immediately following the RF band. Therefore, this 300 GHz to 3 THz band exhibits similarities to RF.
  • Key characteristics of THz communications include (i) the widely available bandwidth to support very high data rates and (ii) the high path loss that occurs at high frequencies (requiring highly directional antennas).
  • the narrow beamwidths generated by highly directional antennas reduce interference.
  • the small wavelength of THz signals allows for a significantly larger number of antenna elements to be integrated into devices and base stations operating in this band. This enables the use of advanced adaptive array technologies to overcome range limitations.
  • 3D BS will be provided via low-orbit satellites and UAVs. Adding a new dimension in altitude and associated degrees of freedom, 3D connections differ significantly from existing 2D networks.
  • Unmanned Aerial Vehicles will be a key element in 6G wireless communications. In most cases, high-speed wireless connections will be provided using UAV technology.
  • BS entities are installed on UAVs to provide cellular connectivity.
  • UAVs offer specific capabilities not found in fixed BS infrastructure, such as easy deployment, robust line-of-sight links, and controlled mobility. During emergencies such as natural disasters, deploying terrestrial communication infrastructure is not economically feasible, and sometimes, volatile environments make it impossible to provide services. UAVs can easily handle these situations.
  • UAVs will become a new paradigm in wireless communications. This technology facilitates three fundamental requirements for wireless networks: enhanced mobile broadband (eMBB), URLLC, and mMTC.
  • eMBB enhanced mobile broadband
  • URLLC ultra-reliable low-latency communication
  • mMTC massive machine-type communication
  • UAVs can also support various purposes, such as enhancing network connectivity, fire detection, disaster emergency services, security and surveillance, pollution monitoring, parking monitoring, and accident monitoring. Therefore, UAV technology is recognized as one of the most important technologies for 6G communication.
  • Tight integration of multiple frequencies and heterogeneous communication technologies is crucial in 6G systems. As a result, users will be able to seamlessly move from one network to another without requiring any manual configuration on their devices. The best network will be automatically selected from the available communication technologies. This will break the limitations of the cell concept in wireless communications. Currently, user movement from one cell to another in dense networks results in excessive handovers, leading to handover failures, handover delays, data loss, and a ping-pong effect. 6G cell-free communications will overcome all of these challenges and provide better QoS. Cell-free communications will be achieved through multi-connectivity and multi-tier hybrid technologies, as well as heterogeneous radios on devices.
  • WIET uses the same fields and waves as wireless communication systems. Specifically, sensors and smartphones will be charged using wireless power transfer during communication. WIET is a promising technology for extending the life of battery-powered wireless systems. Therefore, battery-less devices will be supported by 6G communications.
  • Autonomous wireless networks are capable of continuously sensing dynamically changing environmental conditions and exchanging information between different nodes.
  • sensing will be tightly integrated with communications to support autonomous systems.
  • each access network will be connected to backhaul connections, such as fiber optics and FSO networks. To accommodate the massive number of access networks, there will be tight integration between access and backhaul networks.
  • Big data analytics is a complex process for analyzing diverse, large-scale data sets, or "big data." This process uncovers hidden patterns, unknown correlations, and customer trends, ensuring complete data management. Big data is collected from various sources, such as video, social networks, images, and sensors. This technology is widely used to process massive amounts of data in 6G systems.
  • THz-band signals propagate in nearly straight lines, so obstacles can create many shadow areas.
  • LIS technology which enables expanded communication coverage, enhanced communication stability, and additional value-added services by installing LIS near these shadow areas, is becoming increasingly important.
  • LIS is an artificial surface made of electromagnetic materials that can alter the propagation of incoming and outgoing radio waves. While LIS can be viewed as an extension of massive MIMO, it differs from massive MIMO in its array structure and operating mechanism. LIS operates as a reconfigurable reflector with passive elements, passively reflecting signals without using active RF chains, which offers the advantage of low power consumption. In addition, because each passive reflector in LIS can independently adjust the phase shift of the incoming signal, LIS can be advantageously adapted to wireless communication channels. By appropriately adjusting the phase shifts via the LIS controller, the reflected signals can be combined at the target receiver to boost the received signal power.
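The phase-alignment idea can be illustrated with a toy NumPy model (the channel model, element count, and variable names are assumptions for illustration, not the disclosure's design): choosing each reflector's phase shift to cancel the phase of its cascaded channel makes all reflections add coherently at the receiver.

```python
import numpy as np

# Toy model: N passive LIS elements with cascaded channel coefficients
# h_n * g_n (Tx -> element -> Rx). Setting each element's phase shift to
# cancel the channel phase makes all reflections add coherently, so the
# received power reaches (sum_n |h_n * g_n|)^2.
rng = np.random.default_rng(1)
N = 64
h = rng.standard_normal(N) + 1j * rng.standard_normal(N)   # Tx -> LIS channels
g = rng.standard_normal(N) + 1j * rng.standard_normal(N)   # LIS -> Rx channels

random_phases = np.exp(1j * rng.uniform(0, 2 * np.pi, N))  # uncontrolled surface
aligned_phases = np.exp(-1j * np.angle(h * g))             # LIS controller choice

p_random = abs(np.sum(h * g * random_phases)) ** 2
p_aligned = abs(np.sum(h * g * aligned_phases)) ** 2       # coherent combining
```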
  • THz Terahertz
  • THz waves are located between the RF (Radio Frequency)/millimeter (mm) and infrared bands; (i) compared to visible/infrared light, they penetrate non-metallic/non-polarizable materials well, and (ii) compared to RF/millimeter waves, they have a shorter wavelength, so they propagate in nearly straight lines and the beams can be focused.
  • the photon energy of THz waves is only a few meV, they have the characteristic of being harmless to the human body.
  • the frequency bands expected to be used for THz wireless communication may be the D-band (110 GHz to 170 GHz) or H-band (220 GHz to 325 GHz), which have low propagation loss due to molecular absorption in the air. Discussions on standardization of THz wireless communication are being centered around the IEEE 802.15 THz working group in addition to 3GPP, and standard documents issued by the IEEE 802.15 Task Group (TG3d, TG3e) may specify or supplement the contents described in various embodiments of the present disclosure. THz wireless communication can be applied to wireless cognition, sensing, imaging, wireless communication, THz navigation, etc.
  • Figure 14 is a diagram illustrating an example of a THz communication application.
  • THz wireless communication scenarios can be categorized into macro networks, micro networks, and nanoscale networks.
  • THz wireless communication can be applied to vehicle-to-vehicle and backhaul/fronthaul connections.
  • THz wireless communication can be applied to fixed point-to-point or multi-point connections, such as indoor small cells, wireless connections in data centers, and near-field communications, such as kiosk downloads.
  • Table 2 below shows examples of technologies that can be used in THz waves.
  • Transceiver device: available but immature (UTC-PD, RTD, and SBD)
  • Modulation and coding: low-order modulation techniques (OOK, QPSK), LDPC, Reed-Solomon, Hamming, Polar, Turbo
  • Antenna: omni and directional, phased array with a low number of antenna elements
  • Bandwidth: 69 GHz (or 23 GHz) at 300 GHz
  • Channel models: partially available
  • Data rate: 100 Gbps
  • Outdoor deployment: no
  • Free-space loss: high
  • Coverage: low
  • Radio measurements: 300 GHz indoor
  • Device size: a few micrometers
  • Fig. 15 is a diagram illustrating an example of an electronic component-based THz wireless communication transmitter and receiver.
  • Methods for generating THz using electronic components include a method using semiconductor components such as a resonant tunneling diode (RTD), a method using a local oscillator and a multiplier, a MMIC (Monolithic Microwave Integrated Circuits) method using an integrated circuit based on a compound semiconductor HEMT (High Electron Mobility Transistor), and a method using a Si-CMOS-based integrated circuit.
  • a multiplier (a doubler, tripler, etc.) is essential.
  • the multiplier is a circuit whose output frequency is N times that of the input; it is matched to the desired harmonic frequency, and all remaining frequencies are filtered out.
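As a numerical illustration of harmonic generation (a simplified stand-in for a real multiplier circuit, with hypothetical parameters), a cubic nonlinearity applied to a tone at f_in produces energy at 3·f_in; a band-pass stage would then keep that harmonic and filter out the rest:

```python
import numpy as np

# cos(x)**3 = 0.75*cos(x) + 0.25*cos(3x): a cubic nonlinearity generates a
# 3rd harmonic. In a real tripler, a filter matched to 3*f_in keeps that
# harmonic and rejects the remaining frequencies. Parameters are illustrative.
fs, n = 1024, 1024
t = np.arange(n) / fs
f_in = 32.0
y = np.cos(2 * np.pi * f_in * t) ** 3      # nonlinear device output

spec = np.abs(np.fft.rfft(y)) / (n / 2)    # normalized amplitude spectrum
fundamental = spec[int(f_in)]              # ~0.75 at f_in
third_harmonic = spec[int(3 * f_in)]       # ~0.25 at 3*f_in (desired output)
```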
  • beamforming can be implemented by applying an array antenna or the like to the antenna of Fig. 15.
  • IF represents intermediate frequency
  • tripler and multiplier represent frequency multipliers
  • PA represents power amplifier
  • LNA low noise amplifier
  • PLL phase-locked loop.
  • Fig. 17 is a diagram illustrating an example of an optical element-based THz wireless communication transceiver.
  • Optical component-based THz wireless communication technology refers to a method of generating and modulating THz signals using optical components.
  • Optical component-based THz signal generation technology generates an ultra-high-speed optical signal using a laser and an optical modulator, and converts it into a THz signal using an ultra-high-speed photodetector. Compared to technologies that use only electronic components, this technology can easily increase the frequency, generate high-power signals, and obtain flat response characteristics over a wide frequency band.
  • optical component-based THz signal generation requires a laser diode, a wideband optical modulator, and an ultra-high-speed photodetector.
  • the phase of a signal can be changed by passing the optical source of a laser through an optical waveguide. Data is then loaded by changing the electrical characteristics through a microwave contact, etc., so the optical modulator output is formed as a modulated waveform.
  • An opto-electrical modulator (O/E converter) can generate THz pulses by optical rectification operation by a nonlinear crystal, photoelectric conversion by a photoconductive antenna, emission from a bunch of relativistic electrons, etc. Terahertz pulses generated in the above manner can have a length in units of femtoseconds to picoseconds.
  • An optical/electronic converter (O/E converter) performs down conversion by utilizing the non-linearity of the device.
  • Effective down-conversion from the infrared band (IR band) to the terahertz band (THz band) depends on how to utilize the nonlinearity of the optical/electrical converter (O/E converter).
  • O/E converter optical/electrical converter
  • a terahertz transmission and reception system can be implemented using a single optical-to-electrical converter.
  • the number of optical-to-electrical converters may be equal to the number of carriers. This phenomenon will be particularly noticeable in a multi-carrier system that utilizes multiple broadbands according to the aforementioned spectrum usage plan.
  • a frame structure for the multi-carrier system may be considered.
  • a signal down-converted using an optical-to-electrical converter may be transmitted in a specific resource region (e.g., a specific frame).
  • the frequency region of the specific resource region may include multiple chunks. Each chunk may be composed of at least one component carrier (CC).
  • a node may be utilized as a component that refers to elements that constitute multiple paths in a network, such as a transmitter, receiver, terminal, or base station.
  • radar is gaining attention as a core technology for advanced sensors.
  • Research into radar technology is actively underway to enable its use in diverse fields, including autonomous vehicle safety systems, human biometric data collection, and smart building security.
  • Today, radar is evolving toward reducing hardware size while simultaneously improving range and angle resolution.
  • a wide signal bandwidth may be required.
  • signals with a bandwidth of 2 to 4 GHz may be used.
  • Analog radars, such as Frequency-Modulated Continuous-Wave (FMCW) radars, mix the received signal with the transmitted signal to produce an intermediate frequency (IF) signal, which is then sampled by an ADC. Therefore, the receiver may require an ADC with a sampling frequency higher than the IF bandwidth, which may be in the MHz range.
  • FMCW Frequency-Modulated Continuous-Wave
  • FIG. 20 is a diagram for explaining a stepped-carrier OFDM method applicable to the present disclosure.
  • Figure 20 illustrates resource utilization and frequency waveforms of the stepped carrier OFDM scheme.
  • the stepped-carrier OFDM scheme according to Figure 20 may be a scheme that divides a wide-bandwidth OFDM signal into sub-bands and transmits them separately over time.
  • the Local Oscillator (LO) of the transmitter and receiver is stepped to transmit and receive each sub-band separately over time, so the sampling rate of the ADC and DAC can be reduced to the bandwidth of one sub-band.
  • Because the sub-band signals transmitted separately are all integrated and processed at the receiver, a radar range resolution equivalent to the entire bandwidth can be obtained.
  • the stepped carrier OFDM scheme transmits signals by time-interleaving, the maximum unambiguous velocity of the radar may be reduced.
  • a phase-locked loop (PLL) with a fast settling time may be required.
  • the stepping of LOs causes a phase offset between OFDM subsymbols, which may require phase-offset calibration to compensate for.
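The resolution benefit above can be checked with simple arithmetic (the sub-band bandwidth and step count below are hypothetical numbers): stitching L sub-bands of bandwidth B_sub restores the range resolution of the full bandwidth while the ADC/DAC only needs to cover one sub-band.

```python
# Sketch with assumed numbers: stepped-carrier OFDM sends L sub-bands of
# bandwidth B_sub one after another; the receiver stitches them, so range
# resolution follows the total bandwidth while the ADC only sees B_sub.
c = 3e8                        # speed of light, m/s
B_sub, L = 0.5e9, 8            # 0.5 GHz per sub-band, 8 LO steps (hypothetical)
B_total = L * B_sub            # 4 GHz effective bandwidth after stitching

res_sub = c / (2 * B_sub)      # resolution if only one sub-band were processed
res_total = c / (2 * B_total)  # resolution after integrating all sub-bands
```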
  • FIG. 21 and FIG. 22 are diagrams for explaining a frequency comb OFDM method applicable to the present disclosure.
  • Figure 21 illustrates a frequency waveform of the frequency comb OFDM method
  • Figure 22 illustrates a hardware system block diagram of the frequency comb OFDM method.
  • Frequency comb OFDM can be a method that uses multiple LOs at the hardware level to divide the bandwidth of the transmit and receive signal into multiple sub-bands for transmission and reception. Since the sub-bands are not separated in time, it can have a higher maximum unambiguous Doppler (velocity) than the aforementioned stepped-carrier OFDM. Furthermore, since the bandwidth of the signal transmitted during one repetition period is equal to the entire bandwidth, it can have high range resolution. However, since the wide bandwidth must be supported without increasing the ADC sampling rate, only every L-th subcarrier is used in the baseband OFDM signal, which can reduce the maximum unambiguous range. Moreover, because multiple sub-bands are mixed in parallel for transmission and reception, the complexity of the supporting hardware can increase significantly. Finally, because multiple carrier frequencies are used, precise phase-offset compensation between the different carrier frequencies is required, and this compensation has the problem of sensitively affecting the dynamic range.
  • FIG. 23 and FIG. 24 are diagrams for explaining a subcarrier aliasing OFDM method applicable to the present disclosure.
  • Fig. 23 illustrates an operation method of a subcarrier aliasing OFDM scheme
  • Fig. 24 illustrates a block diagram of a radar signal processing method of the subcarrier aliasing OFDM scheme.
  • the subcarrier aliasing OFDM scheme can use only every L-th subcarrier of an OFDM signal as an active subcarrier, and transmit the remaining subcarriers as zero subcarriers where no signal exists.
  • the sampling frequency of the ADC at the receiving end is lower than the RF bandwidth, the subcarrier containing the signal can be aliased to the zero subcarrier position, and the transmitted OFDM signal can be restored.
  • the design of the transmitted signal can become complicated because the positions of the subcarrier containing the signal and the zero subcarrier change depending on the sampling frequency of the ADC. Additionally, the maximum measurable range is reduced due to zero subcarriers, and a problem may arise where the rate of reduction in the maximum measurable range becomes greater than the rate of reduction in the ADC sampling frequency.
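A minimal NumPy sketch of the aliasing idea (illustrative sizes; N and L are chosen here so that L and N/L are coprime, which keeps the folded active-subcarrier positions from colliding — this dependence of the positions on the sampling ratio is exactly the design complication just described):

```python
import numpy as np

# Only every L-th subcarrier is active; decimating the time signal by L folds
# the spectrum so active subcarrier m*L lands on low-rate bin (m*L) mod M,
# a former zero-subcarrier position, and the data can still be recovered.
N, L = 24, 3                 # gcd(L, N//L) = 1 here, so folds do not collide
M = N // L
rng = np.random.default_rng(2)
syms = np.exp(1j * rng.uniform(0, 2 * np.pi, M))  # unit-modulus data symbols

X = np.zeros(N, dtype=complex)
X[::L] = syms                # active subcarriers, zero subcarriers in between
x = np.fft.ifft(X)           # full-rate OFDM time-domain signal

x_sub = x[::L]               # ADC sampling at 1/L of the full rate
X_sub = np.fft.fft(x_sub)    # folded spectrum: each fold is scaled by 1/L

recovered = np.empty(M, dtype=complex)
for m in range(M):
    recovered[m] = L * X_sub[(m * L) % M]   # undo the fold position and scale
```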
  • FIG. 25 is a diagram illustrating a possible radar structure according to the present disclosure.
  • the basic structure of a radar according to one embodiment of the present disclosure may be expressed as shown in FIG. 25.
  • the radar according to the present disclosure may be an OFDM radar using a sub-Nyquist sampling method.
  • L can be a positive integer.
  • FIG. 26 and FIG. 27 are drawings for explaining a possible sub-sampling method according to the present disclosure.
  • Fig. 26 illustrates the result of subsampling a signal by setting the sampling rate of the ADC to F_S.
  • aliasing may occur, and when the entire signal is composed of L sub-bands with a bandwidth of F_S, a signal may be received in which the L sub-bands are folded and overlapped.
  • the received signal Z can be expressed according to the following mathematical equation 1.
  • C_j can represent the transmission symbol of the j-th sub-band
  • X_j can represent the target information corresponding to the j-th sub-band
  • W_j can represent the white Gaussian noise corresponding to the j-th sub-band.
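A quick numerical check of this folding relation (sizes are illustrative): decimating the time-domain signal by L makes each received bin the scaled sum of the L sub-band bins that alias onto it, matching the structure Z = Σ_j C_j·X_j + W of mathematical equation 1 with the per-sub-band product C_j·X_j lumped into one sub-band spectrum.

```python
import numpy as np

# Column q of S.reshape(L, M) holds the L sub-band bins (q, q+M, ..., q+(L-1)M)
# that fold onto the same low-rate bin; decimating the time signal by L sums
# them with a 1/L scale, producing the folded received signal Z.
rng = np.random.default_rng(3)
L, M = 4, 16
N = L * M
S = rng.standard_normal(N) + 1j * rng.standard_normal(N)  # full-band spectrum

x = np.fft.ifft(S)            # full-rate time-domain signal
Z = np.fft.fft(x[::L])        # bins after sub-Nyquist sampling (rate / L)
```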
  • Fig. 27 illustrates a process of unfolding a folded signal according to an embodiment of the present disclosure.
  • the received signal may include target information. Therefore, the folded sub-band OFDM symbol may be a signal in the form of L overlapping signals with added target information. This signal may be divided into each sub-band OFDM symbol and unfolded to the frequency position of the original sub-band. For example, the unfolding process may be performed based on the following mathematical expression 2.
  • the operation result for the signal corresponding to each sub-band among the folded signals yields the target information, which can be expressed as X.
  • Figure 28 is a drawing for explaining noise according to the present disclosure.
  • Figure 29 illustrates a process for removing symbol mismatch noise and restoring the original baseband OFDM signal.
  • Target information is extracted from the unfolded signal, and then the signal can be reconstructed using the subband OFDM symbol and the extracted target information.
  • the symbol mismatch noise included in the signal can be reduced by feeding back the reconstructed signal to the unfolded signal.
  • the feedback process can be performed by subtracting the reconstructed signal from the unfolded signal. By repeating this feedback process until the noise level converges to a desired level, the symbol mismatch noise can be removed.
  • the feedback process of Figure 29 can be performed recursively to remove the symbol mismatch noise of the signal.
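The feedback loop can be sketched with a simplified stand-in model (an assumption-laden toy, not the disclosure's exact algorithm): here the folded signal is Z = Σ_j C_j·a_j with known unit-modulus sub-band symbols C_j and one unknown target gain a_j per sub-band; each pass subtracts the reconstructed contributions of the other sub-bands before re-estimating a_j, so the mismatch residual shrinks until it converges.

```python
import numpy as np

# Toy successive-cancellation loop: reconstruct each sub-band's contribution
# from the current estimates, subtract it from the folded signal, and
# re-estimate. The leakage from the other sub-bands (the "mismatch" term)
# shrinks on every iteration.
rng = np.random.default_rng(4)
L, n = 4, 256
C = np.exp(1j * rng.uniform(0, 2 * np.pi, (L, n)))   # known sub-band symbols
a_true = rng.standard_normal(L) + 1j * rng.standard_normal(L)
Z = (C * a_true[:, None]).sum(axis=0)                # folded signal (noise-free)

a_hat = np.zeros(L, dtype=complex)
for _ in range(30):
    for j in range(L):
        # remove the reconstructed contributions of every other sub-band
        resid = Z - (C * a_hat[:, None]).sum(axis=0) + C[j] * a_hat[j]
        a_hat[j] = (np.conj(C[j]) * resid).mean()    # re-estimate sub-band j
err = float(np.max(np.abs(a_hat - a_true)))          # residual mismatch
```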
  • Figure 32 illustrates the resource allocation of an OFDM signal designed to ensure no symbol mismatch and orthogonality between subbands using phase codes.
  • the OFDM signal is divided into L_C sub-bands, and orthogonality can be secured by applying a phase code along the frequency axis.
  • these sub-bands are further divided into L_S sub-bands, and orthogonality can be secured by applying a phase code along the time axis.
  • the phase code of the p-th frequency axis can be defined based on the following mathematical expression 3.
  • the phase code of the p-th frequency axis is expressed for each individual sub-band
  • P_p can be divided into L_S along the frequency axis and expressed as in the following mathematical expression 5.
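Since the exact form of mathematical expression 3 is not reproduced here, the following shows one standard choice with the stated property (an assumption for illustration): a DFT-style phase code P_p[k] = exp(-j·2π·p·k/L_C), whose L_C codes are mutually orthogonal and can therefore secure orthogonality between sub-bands.

```python
import numpy as np

# Hypothetical DFT-style phase code: row p is P_p[k] = exp(-1j*2*pi*p*k/L_C).
# The Gram matrix of the L_C codes is the identity, i.e. the codes assigned
# to different sub-bands are mutually orthogonal over one period.
L_C = 4
k = np.arange(L_C)
P = np.exp(-2j * np.pi * np.outer(np.arange(L_C), k) / L_C)

G = P @ P.conj().T / L_C   # normalized Gram matrix of the codes
```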
  • Figure 33 is another drawing for explaining a noise removal method according to the present disclosure.
  • Fig. 33 illustrates the noise level when subsampling is performed on a signal waveform according to the present disclosure. According to Fig. 33, it can be confirmed that symbol mismatch noise in the signal after subsampling is not detected, but ambiguous peaks are detected.
  • the maximum measurable range and maximum measurable speed can be reduced by factors of L_C and L_S, respectively. Since the L_C and L_S values can be selected in various ways when designing a signal, they can be flexibly determined by taking the range and speed reduction into account.
  • FIG. 34 is a diagram illustrating an example of a method for a first node to transmit and receive a signal in a system applicable to the present disclosure.
  • each of the first node, the second node, and the plurality of nodes may correspond to at least one of a transmitter, a receiver, a terminal, or a base station in a wireless communication system.
  • the first node may be a terminal
  • the second node may be a base station.
  • the signal composed of the plurality of sub-bands includes symbol mismatch noise for the signal
  • the step of restoring the signal may include the step of removing the symbol mismatch noise from the sub-sampled signal.
  • the signal may be generated by dividing a frequency domain of an original signal of the signal into a first number of frequency sub-bands, applying a first phase code to the first number of frequency sub-bands, and dividing a time domain of the original signal of the signal into a second number of time sub-bands, and applying a second phase code to the second number of time sub-bands.
  • the product of the first number and the second number may be a value obtained by dividing the frequency of the signal by the subsampling frequency.
  • a first node may be provided in a communication system.
  • the first node may include a transceiver and at least one processor, wherein the at least one processor may be configured to perform the operating method of the first node according to FIG. 34.
  • a device for controlling a first node in a communication system may be provided.
  • the device may include at least one processor and at least one memory operably connected to the at least one processor.
  • the at least one memory may be configured to store instructions for performing an operating method of the first node according to FIG. 34 based on instructions executed by the at least one processor.
  • one or more non-transitory computer-readable media storing one or more instructions may be provided.
  • the one or more instructions, when executed by one or more processors, perform operations, and the operations may include the operating method of the first node according to FIG. 34.
  • FIG. 35 is a diagram illustrating an example of a method for a second node to transmit and receive signals in a system applicable to the present disclosure.
  • an operation method of a second node in a communication system applicable to the present disclosure may include a step of generating a signal (S3510) and a step of transmitting the signal to a first node, which is at least one of a plurality of nodes constituting a plurality of paths in a network (S3520).
  • the signal is sub-sampled by the first node, and the signal is restored based on the sub-sampled signal, wherein a sampling frequency of the sub-sampling may be lower than a frequency of the transmission signal.
  • the signal composed of the plurality of sub-bands includes symbol mismatch noise for the signal, and the signal can be restored by removing the symbol mismatch noise from the sub-sampled signal.
  • the symbol mismatch noise can be removed based on at least one of target information for the signal on which the unfolding is performed and a signal composed of the plurality of sub-bands.
  • the symbol mismatch noise can be removed by recursively applying a signal reconstructed based on the target information and the signal composed of the plurality of sub-bands to the unfolded signal.
  • the signal may be generated by dividing a frequency domain of an original signal of the signal into a first number of frequency sub-bands, applying a first phase code to the first number of frequency sub-bands, and dividing a time domain of the original signal of the signal into a second number of time sub-bands, and applying a second phase code to the second number of time sub-bands.
  • the product of the first number and the second number may be a value obtained by dividing the frequency of the signal by the subsampling frequency.
  • a second node may be provided in a communication system.
  • the second node may include a transceiver and at least one processor, wherein the at least one processor may be configured to perform the operating method of the second node according to FIG. 35.
  • a device for controlling a second node in a communication system may be provided.
  • the device may include at least one processor and at least one memory operably connected to the at least one processor.
  • the at least one memory may be configured to store instructions for performing an operating method of the second node according to FIG. 35 based on instructions executed by the at least one processor.
  • one or more non-transitory computer-readable media storing one or more instructions may be provided.
  • the one or more instructions, when executed by one or more processors, perform operations, and the operations may include the operating method of a second node according to FIG. 35.
  • FIG. 36 illustrates a communication system (1) applicable to various embodiments of the present disclosure.
  • a communication system (1) applied to various embodiments of the present disclosure includes a wireless device, a base station, and a network.
  • the wireless device refers to a device that performs communication using a wireless access technology (e.g., 5G NR (New RAT), LTE (Long Term Evolution), 6G wireless communication) and may be referred to as a communication/wireless/5G device/6G device.
  • 5G NR New RAT
  • LTE Long Term Evolution
  • 6G wireless communication e.g., 6G wireless communication
  • the wireless device may include a robot (100a), a vehicle (100b-1, 100b-2), an XR (eXtended Reality) device (100c), a hand-held device (100d), a home appliance (100e), an IoT (Internet of Things) device (100f), and an AI device/server (400).
  • the vehicle may include a vehicle equipped with a wireless communication function, an autonomous vehicle, a vehicle capable of performing vehicle-to-vehicle communication, etc.
  • the vehicle may include an Unmanned Aerial Vehicle (UAV) (e.g., a drone).
  • UAV Unmanned Aerial Vehicle
  • XR devices include AR (Augmented Reality)/VR (Virtual Reality)/MR (Mixed Reality) devices, and may be implemented in the form of a Head-Mounted Device (HMD), a Head-Up Display (HUD) installed in a vehicle, a television, a smartphone, a computer, a wearable device, a home appliance, digital signage, a vehicle, a robot, etc.
  • Mobile devices may include a smartphone, a smart pad, a wearable device (e.g., a smart watch, smart glasses), a computer (e.g., a laptop, etc.), etc.
  • Home appliances may include a TV, a refrigerator, a washing machine, etc.
  • IoT devices may include a sensor, a smart meter, etc.
  • a base station and a network may also be implemented as a wireless device, and a specific wireless device (200a) may act as a base station/network node to other wireless devices.
  • Wireless devices (100a to 100f) can be connected to a network (300) via a base station (200). Artificial Intelligence (AI) technology can be applied to the wireless devices (100a to 100f), and the wireless devices (100a to 100f) can be connected to an AI server (400) via the network (300).
  • the network (300) can be configured using a 3G network, a 4G (e.g., LTE) network, a 5G (e.g., NR) network, or a 6G network.
  • the wireless devices (100a to 100f) can communicate with each other via the base station (200)/network (300), but can also communicate directly (e.g., sidelink communication) without going through the base station/network.
  • vehicles can communicate directly (e.g., V2V (Vehicle to Vehicle)/V2X (Vehicle to everything) communication).
  • IoT devices can communicate directly with other IoT devices (e.g., sensors) or other wireless devices (100a to 100f).
  • Wireless communication/connection can be established between wireless devices (100a to 100f) and base stations (200), and between base stations (200).
  • wireless communication/connection can be achieved through various wireless access technologies (e.g., 5G NR) such as uplink/downlink communication (150a), sidelink communication (150b) (or D2D communication), and base station-to-base station communication (150c) (e.g., relay, IAB (Integrated Access Backhaul)).
  • wireless devices and base stations/wireless devices, and base stations and base stations can transmit/receive wireless signals to each other.
  • wireless communication/connection can transmit/receive signals through various physical channels.
  • various configuration information setting processes for transmitting/receiving wireless signals, various signal processing processes (e.g., channel encoding/decoding, modulation/demodulation, resource mapping/demapping, etc.), and resource allocation processes can be performed based on the various proposals of the embodiments of the present disclosure.
  • NR supports multiple numerologies (or subcarrier spacing (SCS)) to support various 5G services.
  • an SCS of 15 kHz supports a wide area in traditional cellular bands
  • an SCS of 30 kHz/60 kHz supports dense urban areas, lower latency, and wider carrier bandwidth
  • an SCS of 60 kHz or higher supports bandwidths in frequencies above 24.25 GHz to overcome phase noise.
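The numerology scaling above (each doubling of the SCS halves the symbol duration) can be sketched numerically. This is an illustrative calculation, not a 3GPP API; the 14-symbols-per-slot and 1 ms subframe figures are standard NR assumptions not stated in this passage.

```python
# Sketch: NR numerology scaling (mu = 0..4), following the 15 kHz * 2^mu pattern.

def numerology(mu: int) -> dict:
    scs_khz = 15 * 2 ** mu                        # subcarrier spacing
    slots_per_subframe = 2 ** mu                  # slot shrinks as mu grows
    symbol_us = 1000 / (14 * slots_per_subframe)  # 14 OFDM symbols per slot, 1 ms subframe
    return {"scs_khz": scs_khz,
            "slots_per_subframe": slots_per_subframe,
            "symbol_duration_us": round(symbol_us, 2)}

for mu in range(3):
    print(mu, numerology(mu))
```

Note how the 30/60 kHz rows show shorter symbols (lower latency), matching the dense-urban use case described above.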
  • the NR frequency band can be defined by two types of frequency ranges (FR1, FR2).
  • the numerical values of the frequency ranges can be changed, and for example, the frequency ranges of the two types (FR1, FR2) can be as shown in Table 3 below.
  • FR1 can mean the "sub 6 GHz range"
  • FR2 can mean the "above 6 GHz range" and can be called millimeter wave (mmW).
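As a sketch of the FR1/FR2 split described above, the boundary values below (410 MHz to 7.125 GHz for FR1, 24.25 to 52.6 GHz for FR2) are commonly cited NR figures and are assumptions here, since the passage itself notes the numerical values can change and Table 3 is not reproduced.

```python
# Sketch: classifying a carrier frequency into FR1/FR2 using commonly cited
# NR boundaries. These exact limits are an assumption, not from this passage.

def frequency_range(freq_ghz: float) -> str:
    if 0.410 <= freq_ghz <= 7.125:
        return "FR1"            # "sub 6 GHz range"
    if 24.25 <= freq_ghz <= 52.6:
        return "FR2"            # "above 6 GHz range" (mmWave)
    return "undefined"

print(frequency_range(3.5))     # a mid-band carrier
print(frequency_range(28.0))    # a mmWave carrier
```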
  • the memory (104) may be connected to the processor (102) and may store various information related to the operation of the processor (102). For example, the memory (104) may store software code including instructions that, when executed, perform some or all of the processes controlled by the processor (102), or carry out the descriptions, functions, procedures, proposals, methods, and/or operation flowcharts disclosed in this document.
  • the processor (102) and the memory (104) may be part of a communication modem/circuit/chip designed to implement a wireless communication technology (e.g., LTE, NR).
  • the transceiver (106) may be connected to the processor (102) and may transmit and/or receive wireless signals via one or more antennas (108).
  • the transceiver (106) may include a transmitter and/or a receiver.
  • the transceiver (106) may be used interchangeably with an RF (Radio Frequency) unit.
  • a wireless device may mean a communication modem/circuit/chip.
  • One or more processors (102, 202) may generate messages, control information, data, or information according to the descriptions, functions, procedures, proposals, methods, and/or operation flowcharts disclosed in this document.
  • One or more processors (102, 202) can generate signals (e.g., baseband signals) including PDUs, SDUs, messages, control information, data or information according to the functions, procedures, proposals and/or methods disclosed herein, and provide the signals to one or more transceivers (106, 206).
  • One or more processors (102, 202) may be referred to as a controller, a microcontroller, a microprocessor, or a microcomputer.
  • One or more processors (102, 202) may be implemented by hardware, firmware, software, or a combination thereof.
  • One or more processors (102, 202) may include Application Specific Integrated Circuits (ASICs), Digital Signal Processors (DSPs), Digital Signal Processing Devices (DSPDs), Programmable Logic Devices (PLDs), and/or Field Programmable Gate Arrays (FPGAs).
  • the descriptions, functions, procedures, proposals, methods, and/or operational flowcharts disclosed in this document may be implemented using firmware or software, and the firmware or software may be implemented to include modules, procedures, functions, etc.
  • the descriptions, functions, procedures, proposals, methods, and/or operation flowcharts disclosed in this document may be implemented using firmware or software stored in one or more memories (104, 204) and executed by one or more processors (102, 202).
  • the descriptions, functions, procedures, proposals, methods, and/or operation flowcharts disclosed in this document may be implemented using firmware or software in the form of codes, instructions, and/or sets of instructions.
  • One or more memories (104, 204) may be coupled to one or more processors (102, 202) and may store various forms of data, signals, messages, information, programs, codes, instructions, and/or commands.
  • the one or more memories (104, 204) may be configured as ROM, RAM, EPROM, flash memory, hard drives, registers, cache memory, computer-readable storage media, and/or combinations thereof.
  • the one or more memories (104, 204) may be located internally and/or externally to the one or more processors (102, 202). Additionally, the one or more memories (104, 204) may be coupled to the one or more processors (102, 202) via various technologies, such as wired or wireless connections.
  • One or more transceivers (106, 206) can transmit user data, control information, wireless signals/channels, etc., as mentioned in the methods and/or flowcharts of this document, to one or more other devices.
  • One or more transceivers (106, 206) can receive user data, control information, wireless signals/channels, etc., as mentioned in the descriptions, functions, procedures, proposals, methods and/or flowcharts of this document, from one or more other devices.
  • one or more transceivers (106, 206) can be connected to one or more processors (102, 202) and can transmit and receive wireless signals.
  • one or more processors (102, 202) can control one or more transceivers (106, 206) to transmit user data, control information, or wireless signals to one or more other devices. Additionally, one or more processors (102, 202) may control one or more transceivers (106, 206) to receive user data, control information, or wireless signals from one or more other devices.
  • one or more transceivers (106, 206) may be coupled to one or more antennas (108, 208), and one or more transceivers (106, 206) may be configured to transmit and receive user data, control information, wireless signals/channels, or the like, as referred to in the descriptions, functions, procedures, proposals, methods, and/or operational flowcharts disclosed herein, via one or more antennas (108, 208).
  • one or more antennas may be multiple physical antennas or multiple logical antennas (e.g., antenna ports).
  • One or more transceivers (106, 206) may convert received user data, control information, wireless signals/channels, etc. from RF band signals to baseband signals for processing by one or more processors (102, 202).
  • One or more transceivers (106, 206) may convert user data, control information, wireless signals/channels, etc. processed by one or more processors (102, 202) from baseband signals to RF band signals.
  • one or more transceivers (106, 206) may include an (analog) oscillator and/or a filter.
  • FIG. 38 illustrates another example of a wireless device that can be applied to various embodiments of the present disclosure.
  • the wireless device may include at least one processor (102, 202), at least one memory (104, 204), at least one transceiver (106, 206), and one or more antennas (108, 208).
  • the difference between the example of the wireless device described in FIG. 37 and the example of the wireless device in FIG. 38 is that in FIG. 37, the processor (102, 202) and the memory (104, 204) are separated, but in the example of FIG. 38, the memory (104, 204) is included in the processor (102, 202).
  • Figure 39 illustrates a signal processing circuit for a transmission signal.
  • the signal processing circuit (1000) may include a scrambler (1010), a modulator (1020), a layer mapper (1030), a precoder (1040), a resource mapper (1050), and a signal generator (1060).
  • the operations/functions of FIG. 39 may be performed in the processor (102, 202) and/or the transceiver (106, 206) of FIG. 37.
  • the hardware elements of FIG. 39 may be implemented in the processor (102, 202) and/or the transceiver (106, 206) of FIG. 37.
  • blocks 1010 to 1060 may be implemented in the processor (102, 202) of FIG. 37.
  • blocks 1010 to 1050 may be implemented in the processor (102, 202) of FIG. 37
  • block 1060 may be implemented in the transceiver (106, 206) of FIG. 37.
  • the codeword can be converted into a wireless signal through the signal processing circuit (1000) of FIG. 39.
  • the codeword is an encoded bit sequence of an information block.
  • the information block can include a transport block (e.g., an UL-SCH transport block, a DL-SCH transport block).
  • the wireless signal can be transmitted through various physical channels (e.g., a PUSCH or a PDSCH).
  • the codeword can be converted into a bit sequence scrambled by a scrambler (1010).
  • the scrambling sequence used for scrambling is generated based on an initialization value, and the initialization value may include ID information of the wireless device, etc.
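A minimal sketch of the scrambling step above: the bit sequence is XORed with a pseudo-random sequence derived from an initialization value. A simple 31-bit LFSR stands in for the actual scrambling-sequence generator (NR's real generator is a Gold sequence), and the seed value is arbitrary.

```python
# Sketch: bit scrambling with a pseudo-random sequence seeded by an
# initialization value (e.g., derived from a device ID). The LFSR taps and
# seed below are illustrative, not the 3GPP-specified generator.

def prbs(seed: int, n: int) -> list[int]:
    state = seed & 0x7FFFFFFF
    out = []
    for _ in range(n):
        bit = ((state >> 30) ^ (state >> 27)) & 1   # feedback from two taps
        state = ((state << 1) | bit) & 0x7FFFFFFF
        out.append(bit)
    return out

def scramble(bits: list[int], init: int) -> list[int]:
    return [b ^ c for b, c in zip(bits, prbs(init, len(bits)))]

data = [1, 0, 1, 1, 0, 0, 1, 0]
tx = scramble(data, init=0x6A09E667)    # arbitrary seed
rx = scramble(tx, init=0x6A09E667)      # XOR with the same sequence descrambles
print(rx == data)                       # True
```

Because scrambling is a self-inverse XOR, descrambling at the receiver is the same operation with the same initialization value.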
  • the scrambled bit sequence can be modulated into a modulation symbol sequence by a modulator (1020).
  • the modulation method may include pi/2-BPSK (pi/2-Binary Phase Shift Keying), m-PSK (m-Phase Shift Keying), m-QAM (m-Quadrature Amplitude Modulation), etc.
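For illustration, here is a Gray-mapped QPSK modulator, one member of the m-PSK family named above; the exact mapping and the 1/√2 normalization follow common practice and are assumptions, not taken from this passage.

```python
# Sketch: Gray-mapped QPSK - each pair of bits becomes one complex
# modulation symbol with unit average energy.
import math

QPSK = {
    (0, 0): complex(1, 1), (0, 1): complex(1, -1),
    (1, 0): complex(-1, 1), (1, 1): complex(-1, -1),
}

def modulate_qpsk(bits):
    assert len(bits) % 2 == 0
    scale = 1 / math.sqrt(2)                      # normalize symbol energy to 1
    return [QPSK[(bits[i], bits[i + 1])] * scale
            for i in range(0, len(bits), 2)]

syms = modulate_qpsk([0, 0, 1, 1])
print(syms)   # two unit-energy symbols on the QPSK constellation
```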
  • the complex modulation symbol sequence can be mapped to one or more transmission layers by a layer mapper (1030).
  • the modulation symbols of each transmission layer can be mapped to the corresponding antenna port(s) by a precoder (1040) (precoding).
  • the output z of the precoder (1040) can be obtained by multiplying the output y of the layer mapper (1030) by an N×M precoding matrix W.
  • N is the number of antenna ports
  • M is the number of transmission layers.
  • the precoder (1040) can perform precoding after performing transform precoding (e.g., DFT transform) on complex modulation symbols.
  • the precoder (1040) can perform precoding without performing transform precoding.
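The precoding relation described above, z = W·y with N antenna ports and M transmission layers, can be sketched with NumPy; the random unit-norm W below is purely illustrative.

```python
# Sketch: precoding maps M layer symbols onto N antenna ports via the
# N x M matrix W. The particular W here is random and only for illustration.
import numpy as np

N, M = 4, 2                        # antenna ports, transmission layers
rng = np.random.default_rng(0)
W = rng.standard_normal((N, M)) + 1j * rng.standard_normal((N, M))
W /= np.linalg.norm(W)             # normalize total transmit power

y = np.array([1 + 1j, 1 - 1j])     # one modulation symbol per layer
z = W @ y                          # per-antenna-port output symbols
print(z.shape)                     # (4,)
```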
  • the resource mapper (1050) can map modulation symbols of each antenna port to time-frequency resources.
  • the time-frequency resources can include multiple symbols (e.g., CP-OFDMA symbols, DFT-s-OFDMA symbols) in the time domain and multiple subcarriers in the frequency domain.
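A toy resource mapper illustrating the step above: modulation symbols are placed onto a subcarrier-by-OFDM-symbol grid. The frequency-first fill order is an assumption for illustration.

```python
# Sketch: fill a (subcarriers x OFDM symbols) grid frequency-first.
import numpy as np

def map_resources(symbols, n_subcarriers, n_ofdm_symbols):
    grid = np.zeros((n_subcarriers, n_ofdm_symbols), dtype=complex)
    for k, s in enumerate(symbols):
        grid[k % n_subcarriers, k // n_subcarriers] = s
    return grid

grid = map_resources([1, 2, 3, 4, 5], n_subcarriers=4, n_ofdm_symbols=2)
print(grid[0, 1])   # the 5th symbol wraps into the second OFDM symbol
```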
  • the signal generator (1060) generates a wireless signal from the mapped modulation symbols, and the generated wireless signal can be transmitted to another device through each antenna.
  • the signal generator (1060) can include an Inverse Fast Fourier Transform (IFFT) module, a Cyclic Prefix (CP) inserter, a Digital-to-Analog Converter (DAC), a frequency up-converter, etc.
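The IFFT and CP-insertion steps of the signal generator can be sketched as follows; the FFT size and CP length are illustrative choices, not values from this document.

```python
# Sketch: OFDM signal generation - IFFT over the mapped subcarriers, then
# cyclic-prefix insertion (the last cp_len time samples are copied to the front).
import numpy as np

def ofdm_symbol(freq_bins: np.ndarray, cp_len: int) -> np.ndarray:
    time = np.fft.ifft(freq_bins)                    # frequency -> time domain
    return np.concatenate([time[-cp_len:], time])    # prepend cyclic prefix

bins = np.zeros(64, dtype=complex)
bins[1] = 1.0                                        # a single active subcarrier
tx = ofdm_symbol(bins, cp_len=16)
print(len(tx))                                       # 64 + 16 = 80
```

The CP makes the first and last 16 samples identical, which is what lets the receiver tolerate multipath delay spread.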
  • the signal processing process for receiving signals in a wireless device can be configured in reverse order of the signal processing process (1010 to 1060) of FIG. 39.
  • A wireless device (e.g., 100, 200 of FIG. 37) can receive wireless signals, and the received wireless signals can be converted into baseband signals through a signal restorer.
  • the signal restorer can include a frequency down-converter, an analog-to-digital converter (ADC), a CP remover, and a fast Fourier transform (FFT) module.
  • the baseband signal can be restored to a codeword through a resource demapper process, a postcoding process, a demodulation process, and a descrambling process.
  • a signal processing circuit for a received signal may include a signal restorer, a resource de-mapper, a postcoder, a demodulator, a de-scrambler, and a decoder.
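The receive-side reversal described above (CP removal, then FFT back to the frequency domain) can be checked against a toy transmitter in a round trip; sizes are illustrative.

```python
# Sketch: OFDM receive chain - drop the cyclic prefix, FFT to frequency
# domain, and verify the mapped subcarrier values come back.
import numpy as np

def ofdm_demod(rx: np.ndarray, fft_size: int, cp_len: int) -> np.ndarray:
    return np.fft.fft(rx[cp_len:cp_len + fft_size])   # remove CP, time -> freq

# Toy transmitter (IFFT + CP insertion) for the round trip.
bins = np.zeros(64, dtype=complex)
bins[[1, 5, 9]] = [1 + 1j, -1, 2j]
time = np.fft.ifft(bins)
tx = np.concatenate([time[-16:], time])

recovered = ofdm_demod(tx, fft_size=64, cp_len=16)
print(np.allclose(recovered, bins))                   # True
```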
  • the input/output unit (140c) obtains information/signals (e.g., touch, text, voice, image, video) input by the user, and the obtained information/signals can be stored in the memory unit (130).
  • the communication unit (110) converts the information/signals stored in the memory into wireless signals, and can directly transmit the converted wireless signals to other wireless devices or to a base station.
  • the communication unit (110) can receive wireless signals from other wireless devices or base stations, and then restore the received wireless signals to the original information/signals.
  • the restored information/signals can be stored in the memory unit (130) and then output in various forms (e.g., text, voice, image, video, haptic) through the input/output unit (140c).
  • a vehicle or autonomous vehicle may include an antenna unit (108), a communication unit (110), a control unit (120), a driving unit (140a), a power supply unit (140b), a sensor unit (140c), and an autonomous driving unit (140d).
  • the antenna unit (108) may be configured as a part of the communication unit (110).
  • Blocks 110/130/140a to 140d correspond to blocks 110/130/140 of FIG. 40, respectively.
  • the communication unit (110) can transmit and receive signals (e.g., data, control signals, etc.) with external devices such as other vehicles, base stations (e.g., base stations, road side units, etc.), and servers.
  • the control unit (120) can control elements of the vehicle or autonomous vehicle (100) to perform various operations.
  • the control unit (120) can include an ECU (Electronic Control Unit).
  • the drive unit (140a) can drive the vehicle or autonomous vehicle (100) on the ground.
  • the drive unit (140a) can include an engine, a motor, a power train, wheels, brakes, a steering device, etc.
  • the power supply unit (140b) supplies power to the vehicle or autonomous vehicle (100) and can include a wired/wireless charging circuit, a battery, etc.
  • the sensor unit (140c) can obtain vehicle status, surrounding environment information, user information, etc.
  • the sensor unit (140c) may include an IMU (inertial measurement unit) sensor, a collision sensor, a wheel sensor, a speed sensor, an incline sensor, a weight detection sensor, a heading sensor, a position module, a vehicle forward/backward sensor, a battery sensor, a fuel sensor, a tire sensor, a steering sensor, a temperature sensor, a humidity sensor, an ultrasonic sensor, an illuminance sensor, a pedal position sensor, etc.
  • the autonomous driving unit (140d) may implement a technology for maintaining a driving lane, a technology for automatically controlling speed such as adaptive cruise control, a technology for automatically driving along a set path, a technology for automatically setting a path and driving when a destination is set, etc.
  • the communication unit (110) can receive map data, traffic information data, etc. from an external server.
  • the autonomous driving unit (140d) can generate an autonomous driving route and driving plan based on the acquired data.
  • the control unit (120) can control the drive unit (140a) so that the vehicle or autonomous vehicle (100) moves along the autonomous driving route according to the driving plan (e.g., speed/direction control).
  • the communication unit (110) can aperiodically/periodically acquire the latest traffic information data from an external server and can acquire surrounding traffic information data from surrounding vehicles.
  • the sensor unit (140c) can acquire vehicle status and surrounding environment information.
  • the autonomous driving unit (140d) can update the autonomous driving route and driving plan based on newly acquired data/information.
  • the communication unit (110) can transmit information regarding the vehicle location, autonomous driving route, driving plan, etc. to the external server.
  • External servers can predict traffic information data in advance using AI technology or other technologies based on information collected from vehicles or autonomous vehicles, and provide the predicted traffic information data to the vehicles or autonomous vehicles.
  • Figure 43 illustrates a vehicle applicable to various embodiments of the present disclosure.
  • the vehicle may also be implemented as a means of transportation, a train, an aircraft, a ship, or the like.
  • the vehicle (100) may include a communication unit (110), a control unit (120), a memory unit (130), an input/output unit (140a), and a position measurement unit (140b).
  • blocks 110 to 130/140a to 140b correspond to blocks 110 to 130/140 of FIG. 40, respectively.
  • the communication unit (110) can transmit and receive signals (e.g., data, control signals, etc.) with other vehicles or external devices such as base stations.
  • the control unit (120) can control components of the vehicle (100) to perform various operations.
  • the memory unit (130) can store data/parameters/programs/codes/commands that support various functions of the vehicle (100).
  • the input/output unit (140a) can output AR/VR objects based on information in the memory unit (130).
  • the input/output unit (140a) can include a HUD.
  • the position measurement unit (140b) can obtain position information of the vehicle (100).
  • the position information can include absolute position information of the vehicle (100), position information within a driving line, acceleration information, position information with respect to surrounding vehicles, etc.
  • the position measurement unit (140b) can include GPS and various sensors.
  • the communication unit (110) of the vehicle (100) can receive map information, traffic information, etc. from an external server and store them in the memory unit (130).
  • the location measurement unit (140b) can obtain vehicle location information through GPS and various sensors and store the information in the memory unit (130).
  • the control unit (120) can create a virtual object based on the map information, traffic information, and vehicle location information, and the input/output unit (140a) can display the created virtual object on the vehicle window (1410, 1420).
  • the control unit (120) can determine whether the vehicle (100) is being driven normally within the driving line based on the vehicle location information.
  • If the vehicle (100) deviates abnormally from the driving line, the control unit (120) can display a warning on the vehicle window through the input/output unit (140a). Additionally, the control unit (120) can broadcast a warning message regarding driving abnormalities to surrounding vehicles through the communication unit (110). Depending on the situation, the control unit (120) can transmit vehicle location information and information regarding driving/vehicle abnormalities to relevant authorities through the communication unit (110).
  • Figure 44 illustrates an XR device applicable to various embodiments of the present disclosure.
  • the XR device may be implemented as an HMD, a head-up display (HUD) installed in a vehicle, a television, a smartphone, a computer, a wearable device, a home appliance, digital signage, a vehicle, a robot, and the like.
  • the XR device (100a) may include a communication unit (110), a control unit (120), a memory unit (130), an input/output unit (140a), a sensor unit (140b), and a power supply unit (140c).
  • blocks 110 to 130/140a to 140c correspond to blocks 110 to 130/140 of FIG. 40, respectively.
  • the communication unit (110) can transmit and receive signals (e.g., media data, control signals, etc.) with external devices such as other wireless devices, portable devices, or media servers.
  • the media data can include videos, images, sounds, etc.
  • the control unit (120) can control components of the XR device (100a) to perform various operations.
  • the control unit (120) can be configured to control and/or perform procedures such as video/image acquisition, (video/image) encoding, metadata generation and processing, etc.
  • the memory unit (130) can store data/parameters/programs/codes/commands required for driving the XR device (100a)/generating XR objects.
  • the input/output unit (140a) can obtain control information, data, etc.
  • the input/output unit (140a) can include a camera, a microphone, a user input unit, a display unit, a speaker, and/or a haptic module, etc.
  • the sensor unit (140b) can obtain the XR device status, surrounding environment information, user information, etc.
  • the sensor unit (140b) may include a proximity sensor, an illuminance sensor, an acceleration sensor, a magnetic sensor, a gyro sensor, an inertial sensor, an RGB sensor, an IR sensor, a fingerprint recognition sensor, an ultrasonic sensor, a light sensor, a microphone, and/or a radar.
  • the power supply unit (140c) supplies power to the XR device (100a) and may include a wired/wireless charging circuit, a battery, etc.
  • the memory unit (130) of the XR device (100a) may include information (e.g., data, etc.) required for creating an XR object (e.g., AR/VR/MR object).
  • the input/output unit (140a) may obtain a command to operate the XR device (100a) from the user, and the control unit (120) may operate the XR device (100a) according to the user's operating command. For example, when a user attempts to watch a movie, news, etc. through the XR device (100a), the control unit (120) may transmit content request information to another device (e.g., a mobile device (100b)) or a media server through the communication unit (110).
  • the communication unit (110) may download/stream content such as movies and news from another device (e.g., a mobile device (100b)) or a media server to the memory unit (130).
  • the control unit (120) controls and/or performs procedures such as video/image acquisition, (video/image) encoding, and metadata generation/processing for content, and can generate/output an XR object based on information about surrounding space or real objects acquired through the input/output unit (140a)/sensor unit (140b).
  • the XR device (100a) is wirelessly connected to the mobile device (100b) through the communication unit (110), and the operation of the XR device (100a) can be controlled by the mobile device (100b).
  • the mobile device (100b) can act as a controller for the XR device (100a).
  • the XR device (100a) can obtain three-dimensional position information of the mobile device (100b), and then generate and output an XR object corresponding to the mobile device (100b).
  • Figure 45 illustrates robots applicable to various embodiments of the present disclosure. Robots may be classified as industrial, medical, household, or military robots, depending on their intended use or field.
  • the robot (100) may include a communication unit (110), a control unit (120), a memory unit (130), an input/output unit (140a), a sensor unit (140b), and a driving unit (140c).
  • blocks 110 to 130/140a to 140c correspond to blocks 110 to 130/140 of FIG. 40, respectively.
  • the communication unit (110) can transmit and receive signals (e.g., driving information, control signals, etc.) with external devices such as other wireless devices, other robots, or control servers.
  • the control unit (120) can control components of the robot (100) to perform various operations.
  • the memory unit (130) can store data/parameters/programs/codes/commands that support various functions of the robot (100).
  • the input/output unit (140a) can obtain information from the outside of the robot (100) and output information to the outside of the robot (100).
  • the input/output unit (140a) can include a camera, a microphone, a user input unit, a display unit, a speaker, and/or a haptic module.
  • the sensor unit (140b) can obtain internal information of the robot (100), surrounding environment information, user information, etc.
  • the sensor unit (140b) may include a proximity sensor, an illuminance sensor, an acceleration sensor, a magnetic sensor, a gyro sensor, an inertial sensor, an IR sensor, a fingerprint recognition sensor, an ultrasonic sensor, a light sensor, a microphone, a radar, etc.
  • the driving unit (140c) may perform various physical operations such as moving the robot joints. In addition, the driving unit (140c) may enable the robot (100) to drive on the ground or fly in the air.
  • the driving unit (140c) may include an actuator, a motor, wheels, brakes, propellers, etc.
  • FIG. 46 illustrates an AI device applicable to various embodiments of the present disclosure.
  • AI devices can be implemented as fixed or mobile devices, such as TVs, projectors, smartphones, PCs, laptops, digital broadcasting terminals, tablet PCs, wearable devices, set-top boxes (STBs), radios, washing machines, refrigerators, digital signage, robots, and vehicles.
  • the AI device (100) may include a communication unit (110), a control unit (120), a memory unit (130), an input/output unit (140a/140b), a learning processor unit (140c), and a sensor unit (140d).
  • Blocks 110 to 130/140a to 140d correspond to blocks 110 to 130/140 of FIG. 40, respectively.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Mobile Radio Communication Systems (AREA)

Abstract

According to various embodiments, the present disclosure relates to a method of operating a first node in a communication system, comprising the steps of: receiving at least one signal from a plurality of nodes constituting a plurality of paths in a network; performing sub-sampling on the signal; and restoring the signal based on the sub-sampled signal, wherein the sampling frequency of the sub-sampling may be lower than the frequency of the signal.
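The core idea of the abstract, sampling below the signal frequency yet still being able to identify and restore the signal, can be illustrated with classic bandpass (alias-based) sub-sampling; the frequencies below are illustrative and not taken from the disclosure.

```python
# Sketch: a tone whose frequency exceeds the sampling rate still lands at a
# predictable alias frequency, so knowledge of the band lets a receiver
# identify the original. All frequency values here are arbitrary examples.
import numpy as np

f_sig = 70.0          # signal frequency (arbitrary units)
f_s = 16.0            # sub-sampling rate, well below f_sig
n = np.arange(256)
samples = np.cos(2 * np.pi * f_sig * n / f_s)   # sample cos(2*pi*f_sig*t) at rate f_s

# The alias falls at f_sig mod f_s, folded into [0, f_s/2].
alias = f_sig % f_s
if alias > f_s / 2:
    alias = f_s - alias
print(alias)          # 70 mod 16 = 6.0

# The sub-sampled sequence is indistinguishable from a tone at the alias frequency:
expected = np.cos(2 * np.pi * alias * n / f_s)
print(np.allclose(samples, expected))   # True
```

Recovering the original then amounts to mapping the alias back to the known sub-band, which is why the sampling rate can be lower than the signal frequency as the abstract states.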
PCT/KR2024/004102 2024-03-29 2024-03-29 Procédé et appareil d'émission et de réception de signal dans un système de communication sans fil Pending WO2025206439A1 (fr)

Priority Applications (1)

Application Number Priority Date Filing Date Title
PCT/KR2024/004102 WO2025206439A1 (fr) 2024-03-29 2024-03-29 Procédé et appareil d'émission et de réception de signal dans un système de communication sans fil

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/KR2024/004102 WO2025206439A1 (fr) 2024-03-29 2024-03-29 Procédé et appareil d'émission et de réception de signal dans un système de communication sans fil

Publications (1)

Publication Number Publication Date
WO2025206439A1 true WO2025206439A1 (fr) 2025-10-02

Family

ID=97217932

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/KR2024/004102 Pending WO2025206439A1 (fr) 2024-03-29 2024-03-29 Procédé et appareil d'émission et de réception de signal dans un système de communication sans fil

Country Status (1)

Country Link
WO (1) WO2025206439A1 (fr)


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 24931416

Country of ref document: EP

Kind code of ref document: A1