
WO2025206432A1 - Method and apparatus for transmitting/receiving a signal in a wireless communication system - Google Patents

Method and apparatus for transmitting/receiving a signal in a wireless communication system

Info

Publication number
WO2025206432A1
Authority
WO
WIPO (PCT)
Prior art keywords
terminal
pilot signal
region
area
information
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
PCT/KR2024/003981
Other languages
English (en)
Korean (ko)
Inventor
김현민
김봉회
김기준
이동순
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
LG Electronics Inc
Original Assignee
LG Electronics Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by LG Electronics Inc filed Critical LG Electronics Inc
Priority to PCT/KR2024/003981 priority Critical patent/WO2025206432A1/fr
Publication of WO2025206432A1 publication Critical patent/WO2025206432A1/fr
Pending legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04BTRANSMISSION
    • H04B7/00Radio transmission systems, i.e. using radiation field
    • H04B7/02Diversity systems; Multi-antenna system, i.e. transmission or reception using multiple antennas
    • H04B7/04Diversity systems; Multi-antenna system, i.e. transmission or reception using multiple antennas using two or more spaced independent antennas
    • H04B7/0408Diversity systems; Multi-antenna system, i.e. transmission or reception using multiple antennas using two or more spaced independent antennas using two or more beams, i.e. beam diversity
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04BTRANSMISSION
    • H04B7/00Radio transmission systems, i.e. using radiation field
    • H04B7/02Diversity systems; Multi-antenna system, i.e. transmission or reception using multiple antennas
    • H04B7/04Diversity systems; Multi-antenna system, i.e. transmission or reception using multiple antennas using two or more spaced independent antennas
    • H04B7/06Diversity systems; Multi-antenna system, i.e. transmission or reception using multiple antennas using two or more spaced independent antennas at the transmitting station
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04WWIRELESS COMMUNICATION NETWORKS
    • H04W24/00Supervisory, monitoring or testing arrangements
    • H04W24/08Testing, supervising or monitoring using real traffic
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04WWIRELESS COMMUNICATION NETWORKS
    • H04W74/00Wireless channel access
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04WWIRELESS COMMUNICATION NETWORKS
    • H04W8/00Network data management
    • H04W8/22Processing or transfer of terminal data, e.g. status or physical capabilities
    • H04W8/24Transfer of terminal data

Definitions

  • the present disclosure relates to a method and device for beam refinement in a wireless communication system. Specifically, a method for performing beam refinement using multiple beams covering a narrow area is proposed.
  • Wireless access systems are widely deployed to provide various types of communication services, such as voice and data.
  • wireless access systems are multiple access systems that support communications with multiple users by sharing available system resources (e.g., bandwidth, transmission power).
  • multiple access systems include code division multiple access (CDMA), frequency division multiple access (FDMA), time division multiple access (TDMA), orthogonal frequency division multiple access (OFDMA), and single-carrier frequency division multiple access (SC-FDMA).
  • CDMA code division multiple access
  • FDMA frequency division multiple access
  • TDMA time division multiple access
  • OFDMA orthogonal frequency division multiple access
  • SC-FDMA single-carrier frequency division multiple access
  • the present disclosure provides a device and method for performing at least one of beamforming, beam refinement, and beam measurement in a wireless communication system.
  • the present disclosure provides a device and method for performing beam measurement using multiple beams covering a narrow area.
  • a method performed by a user equipment (UE) in a communication system supporting a plurality of beams includes the steps of receiving pilot signal configuration information from a base station (BS), receiving a plurality of pilot signals from the BS based on the pilot signal configuration information, and transmitting beam measurement results measured based on the plurality of pilot signals to the BS, wherein the plurality of pilot signals are for beam measurement and are transmitted based on the plurality of beams, and at least two of the plurality of beams may have an overlapping area.
  • BS base station
  • the method further comprises a step of transmitting UE capability information to the base station, wherein the UE capability information may include information instructing the UE to transmit a beam measurement result based on a pilot signal.
  • the terminal can transmit both measurement results for the first pilot signal and the second pilot signal.
  • based on the terminal being located in the second area, the terminal can transmit a measurement result for the first pilot signal, and based on the terminal being located in the third area, the terminal can transmit a measurement result for the second pilot signal.
  • FIG. 16 is a diagram illustrating an example of a method for generating a THz signal based on an optical element.
  • Fig. 17 is a diagram illustrating an example of an optical element-based THz wireless communication transceiver.
  • Fig. 18 is a diagram illustrating the structure of a photon source-based transmitter.
  • Figure 20 is a drawing for explaining the length elements of a beam applicable to the present disclosure.
  • FIG. 21 is a diagram for explaining the dependence between the frequency and angle of a beam applicable to the present disclosure.
  • FIG. 23 is a diagram illustrating an example of beamforming applicable to the present disclosure.
  • FIG. 24 is a drawing illustrating an example of beam refinement applicable to the present disclosure.
  • FIG. 25 is a drawing for explaining an example of code implementing a beam refinement method applicable to the present disclosure.
  • FIG. 26 is another drawing illustrating an example of a beam refinement method applicable to the present disclosure.
  • Figure 27 is a drawing for explaining the effect of the beam refinement method according to the present disclosure.
  • FIG. 28 is a drawing illustrating another example of a beam refinement method applicable to the present disclosure.
  • Figure 33 is another drawing for explaining the effect of the beam refinement method according to the present disclosure.
  • Figure 35 is another drawing for explaining the effect of the beam refinement method according to the present disclosure.
  • Figure 37 is a diagram for explaining frequency resource allocation applicable to the present disclosure.
  • FIG. 41 illustrates a wireless device that can be applied to various embodiments of the present disclosure.
  • FIG. 42 illustrates another example of a wireless device that can be applied to various embodiments of the present disclosure.
  • Figure 43 illustrates a signal processing circuit for a transmission signal.
  • FIG. 44 illustrates another example of a wireless device applicable to various embodiments of the present disclosure.
  • FIG. 45 illustrates a portable device applicable to various embodiments of the present disclosure.
  • FIG. 47 illustrates a vehicle applicable to various embodiments of the present disclosure.
  • FIG. 49 illustrates a robot applicable to various embodiments of the present disclosure.
  • a or B may mean “only A,” “only B,” or “both A and B.” In other words, in various embodiments of the present disclosure, “A or B” may be interpreted as “A and/or B.” For example, in various embodiments of the present disclosure, “A, B or C” may mean “only A,” “only B,” “only C,” or “any combination of A, B and C.”
  • “at least one of A, B and C” can mean “only A,” “only B,” “only C,” or “any combination of A, B and C.” Additionally, “at least one of A, B or C” or “at least one of A, B and/or C” can mean “at least one of A, B and C.”
  • parentheses used in various embodiments of the present disclosure may mean "for example." Specifically, when indicated as "control information (PDCCH)", "PDCCH" may be proposed as an example of "control information." In other words, "control information" in various embodiments of the present disclosure is not limited to "PDCCH", and "PDCCH" may be proposed as an example of "control information." Furthermore, even when indicated as "control information (i.e., PDCCH)", "PDCCH" may be proposed as an example of "control information."
  • CDMA can be implemented using wireless technologies such as UTRA (Universal Terrestrial Radio Access) or CDMA2000.
  • TDMA can be implemented using wireless technologies such as GSM (Global System for Mobile communications)/GPRS (General Packet Radio Service)/EDGE (Enhanced Data Rates for GSM Evolution).
  • OFDMA can be implemented using wireless technologies such as IEEE 802.11 (Wi-Fi), IEEE 802.16 (WiMAX), IEEE 802.20, and E-UTRA (Evolved UTRA).
  • UTRA is a part of UMTS (Universal Mobile Telecommunications System).
  • 3GPP 3rd Generation Partnership Project
  • LTE Long Term Evolution
  • E-UMTS Evolved UMTS
  • LTE-A Advanced/LTE-A pro
  • 3GPP NR New Radio or New Radio Access Technology
  • 3GPP 6G may be an evolved version of 3GPP NR.
  • LTE refers to technology after 3GPP TS 36.xxx Release 8.
  • LTE technology after 3GPP TS 36.xxx Release 10 is referred to as LTE-A
  • LTE technology after 3GPP TS 36.xxx Release 13 is referred to as LTE-A pro
  • 3GPP NR refers to technology after 3GPP TS 38.xxx.
  • 3GPP 6G may refer to technology after TS Release 17 and/or Release 18.
  • “xxx” refers to a standard document detail number.
  • LTE/NR/6G may be collectively referred to as a 3GPP system.
  • RRC Radio Resource Control
  • Figure 1 is a diagram illustrating an example of physical channels and general signal transmission used in a 3GPP system.
  • a terminal receives information from a base station via the downlink (DL) and transmits it to the base station via the uplink (UL).
  • the information transmitted and received between the base station and the terminal includes data and various control information, and various physical channels exist depending on the type and purpose of the information being transmitted and received.
  • a terminal that has completed initial cell search can obtain more specific system information by receiving a physical downlink control channel (PDCCH) and a physical downlink shared channel (PDSCH) based on information contained in the PDCCH (S12).
  • PDCCH physical downlink control channel
  • PDSCH physical downlink shared channel
  • the terminal that has performed the procedure described above can then perform PDCCH/PDSCH reception (S17) and physical uplink shared channel (PUSCH)/physical uplink control channel (PUCCH) transmission (S18) as general uplink/downlink signal transmission procedures.
  • the terminal can receive downlink control information (DCI) through the PDCCH.
  • DCI downlink control information
  • the DCI includes control information such as resource allocation information for the terminal, and different formats can be applied depending on the purpose of use.
  • control information that the terminal transmits to the base station via the uplink or that the terminal receives from the base station may include downlink/uplink ACK/NACK signals, CQI (Channel Quality Indicator), PMI (Precoding Matrix Index), RI (Rank Indicator), etc.
  • the terminal may transmit the above-described control information such as CQI/PMI/RI via PUSCH and/or PUCCH.
  • PDSCH Physical Downlink Shared Channel
  • PDSCH carries downlink data (e.g., DL-shared channel transport block, DL-SCH TB) and applies modulation methods such as Quadrature Phase Shift Keying (QPSK), 16 Quadrature Amplitude Modulation (QAM), 64 QAM, and 256 QAM.
  • Codewords are generated by encoding the TBs.
  • PDSCH can carry multiple codewords. Scrambling and modulation mapping are performed for each codeword, and modulation symbols generated from each codeword are mapped to one or more layers (Layer mapping). Each layer is mapped to resources along with a Demodulation Reference Signal (DMRS), generated as an OFDM symbol signal, and transmitted through the corresponding antenna port.
  • DMRS Demodulation Reference Signal
  • PUSCH Physical Uplink Shared Channel
  • PUSCH carries uplink data (e.g., UL-shared channel transport block, UL-SCH TB) and/or uplink control information (UCI), and is transmitted based on a CP-OFDM (Cyclic Prefix - Orthogonal Frequency Division Multiplexing) waveform, a DFT-s-OFDM (Discrete Fourier Transform - spread - Orthogonal Frequency Division Multiplexing) waveform, etc.
  • CP-OFDM Cyclic Prefix - Orthogonal Frequency Division Multiplexing
  • DFT-s-OFDM Discrete Fourier Transform - spread - Orthogonal Frequency Division Multiplexing
  • PUSCH transmissions can be dynamically scheduled by UL grants in DCI, or semi-statically scheduled (configured grant) based on higher layer (e.g., RRC) signaling (and/or Layer 1 (L1) signaling (e.g., PDCCH)).
  • PUSCH transmissions can be performed in a codebook-based or non-codebook-based manner.
  • PUCCH carries uplink control information, HARQ-ACK and/or scheduling request (SR), and can be divided into multiple PUCCHs depending on the PUCCH transmission length.
  • new radio access technology (new RAT, NR).
  • FIG. 2 is a diagram illustrating the system structure of a New Generation Radio Access Network (NG-RAN).
  • NG-RAN New Generation Radio Access Network
  • the NG-RAN may include a gNB and/or an eNB that provides user plane and control plane protocol termination to the UE.
  • FIG. 2 illustrates a case where only a gNB is included.
  • the gNB and eNB are connected to each other via an Xn interface.
  • the gNB and eNB are connected to the 5th generation core network (5G Core Network: 5GC) via the NG interface.
  • 5GC 5th generation core network (5G Core Network)
  • the gNB is connected to the access and mobility management function (AMF) via the NG-C interface
  • the gNB is connected to the user plane function (UPF) via the NG-U interface.
  • AMF access and mobility management function
  • UPF user plane function
  • the gNB can provide functions such as inter-cell radio resource management (Inter Cell RRM), radio bearer management (RB control), connection mobility control (Connection Mobility Control), radio admission control (Radio Admission Control), measurement configuration and provision, and dynamic resource allocation.
  • the AMF can provide functions such as NAS security and idle state mobility processing.
  • the UPF can provide functions such as mobility anchoring and PDU processing.
  • the SMF Session Management Function
  • the 5G usage scenario illustrated in FIG. 4 is merely exemplary, and the technical features of various embodiments of the present disclosure can also be applied to other 5G usage scenarios not illustrated in FIG. 4.
  • eMBB focuses on improving data speeds, latency, user density, and overall capacity and coverage of mobile broadband connections. It targets throughputs of around 10 Gbps. eMBB significantly exceeds basic mobile internet access, enabling rich interactive experiences, media and entertainment applications in the cloud, and augmented reality. Data is a key driver of 5G, and for the first time, dedicated voice services may not be available in the 5G era. In 5G, voice is expected to be handled as an application, simply using the data connection provided by the communication system. The increased traffic volume is primarily due to the increasing content size and the growing number of applications that require high data rates. Streaming services (audio and video), interactive video, and mobile internet connectivity will become more prevalent as more devices connect to the internet.
  • Cloud storage and applications are rapidly growing on mobile communication platforms, and this can be applied to both work and entertainment.
  • Cloud storage is a particular use case driving the growth of uplink data rates.
  • 5G is also used for remote work in the cloud, requiring significantly lower end-to-end latency to maintain a superior user experience when tactile interfaces are used.
  • cloud gaming and video streaming are other key factors driving the demand for mobile broadband.
  • Entertainment is essential on smartphones and tablets, regardless of location, including in highly mobile environments like trains, cars, and airplanes.
  • Another use case is augmented reality and information retrieval for entertainment, where augmented reality requires extremely low latency and instantaneous data volumes.
  • URLLC is ideal for vehicle communications, industrial control, factory automation, remote surgery, smart grids, and public safety applications by enabling devices and machines to communicate with high reliability, very low latency, and high availability.
  • URLLC targets latency on the order of 1 ms.
  • URLLC encompasses new services that will transform industries through ultra-reliable, low-latency links, such as remote control of critical infrastructure and autonomous vehicles. This level of reliability and latency is essential for smart grid control, industrial automation, robotics, and drone control and coordination.
  • Automotive is expected to be a significant new driver for 5G, with numerous use cases for in-vehicle mobile communications. For example, passenger entertainment demands both high capacity and high mobile broadband, as future users will consistently expect high-quality connectivity regardless of their location and speed.
  • Another automotive application is augmented reality dashboards.
  • An AR dashboard allows drivers to identify objects in the dark on top of what they see through the windshield. The AR dashboard overlays information to inform the driver about the distance and movement of objects.
  • wireless modules will enable vehicle-to-vehicle communication, information exchange between vehicles and supporting infrastructure, and information exchange between vehicles and other connected devices (e.g., devices accompanying pedestrians).
  • Safety systems can guide drivers to safer driving behaviors, reducing the risk of accidents.
  • Wireless and mobile communications are becoming increasingly important in industrial applications. Wiring is expensive to install and maintain. Therefore, the potential to replace cables with reconfigurable wireless links presents an attractive opportunity for many industries. However, achieving this requires wireless connections to operate with similar latency, reliability, and capacity to cables, while simplifying their management. Low latency and extremely low error rates are new requirements for 5G connectivity.
  • Logistics and freight tracking are important use cases for mobile communications, enabling the tracking of inventory and packages anywhere using location-based information systems. Logistics and freight tracking typically require low data rates but may require wide-range and reliable location information.
  • next-generation communications e.g., 6G
  • 6G next-generation communications
  • the 6G (wireless communication) system aims to achieve (i) very high data rates per device, (ii) a very large number of connected devices, (iii) global connectivity, (iv) very low latency, (v) low energy consumption for battery-free IoT devices, (vi) ultra-reliable connectivity, and (vii) connected intelligence with machine learning capabilities.
  • the vision of the 6G system can be divided into four aspects: intelligent connectivity, deep connectivity, holographic connectivity, and ubiquitous connectivity, and the 6G system can satisfy the requirements as shown in Table 1 below.
  • Table 1 is a table showing an example of the requirements of a 6G system.
  • 6G systems can have key factors such as enhanced mobile broadband (eMBB), ultra-reliable low latency communications (URLLC), massive machine-type communication (mMTC), AI integrated communication, tactile internet, high throughput, high network capacity, high energy efficiency, low backhaul and access network congestion, and enhanced data security.
  • eMBB enhanced mobile broadband
  • URLLC ultra-reliable low latency communications
  • mMTC massive machine-type communication
  • AI integrated communication
  • 6G systems are expected to have 50 times the simultaneous wireless connectivity of 5G systems.
  • URLLC, a key feature of 5G, will become even more crucial in 6G communications by providing end-to-end latency of less than 1 ms.
  • 6G systems will have significantly higher volumetric spectral efficiency, compared to the commonly used area spectral efficiency.
  • 6G systems can offer extremely long battery life and advanced battery technologies for energy harvesting, eliminating the need for separate charging for mobile devices in 6G systems.
  • New network characteristics in 6G may include:
  • 6G wireless networks will transfer power to charge the batteries of devices such as smartphones and sensors. Therefore, wireless information and energy transfer (WIET) will be integrated.
  • WIET wireless information and energy transfer
  • Ultra-dense heterogeneous networks will be another key feature of 6G communication systems.
  • Multi-tier networks comprised of heterogeneous networks improve overall QoS and reduce costs.
  • High-precision localization (or location-based services) through communications is a key feature of 6G wireless communication systems. Therefore, radar systems will be integrated with 6G networks.
  • Softwarization and virtualization are two critical features that form the foundation of the design process for 5G and beyond (5GB) networks to ensure flexibility, reconfigurability, and programmability. Furthermore, billions of devices can share a common physical infrastructure.
  • The most crucial and newly introduced technology for 6G systems is AI. 4G systems did not involve AI. 5G systems will support partial or very limited AI. However, 6G systems will fully support AI for automation. Advances in machine learning will create more intelligent networks for real-time communications in 6G. Incorporating AI into communications can streamline and improve real-time data transmission. AI can use numerous analyses to determine how complex target tasks should be performed. In other words, AI can increase efficiency and reduce processing delays.
  • AI can also play a crucial role in machine-to-machine (M2M), machine-to-human, and human-to-machine communications. Furthermore, AI can facilitate rapid communication in brain-computer interfaces (BCIs). AI-based communication systems can be supported by metamaterials, intelligent structures, intelligent networks, intelligent devices, intelligent cognitive radios, self-sustaining wireless networks, and machine learning.
  • M2M machine-to-machine
  • BCIs brain-computer interfaces
  • Machine learning can be used for channel estimation and channel tracking, as well as for power allocation and interference cancellation in the physical layer of the downlink (DL). Furthermore, machine learning can be used for antenna selection, power control, and symbol detection in MIMO systems.
  • Deep learning-based AI algorithms require a large amount of training data to optimize training parameters.
  • a large amount of training data is used offline. This means that static training on training data in specific channel environments can lead to conflicts with the dynamic characteristics and diversity of the wireless channel.
  • Machine learning refers to a series of operations that train machines to perform tasks that humans can or cannot perform. Machine learning requires data and a learning model. Data learning methods in machine learning can be broadly categorized into three types: supervised learning, unsupervised learning, and reinforcement learning.
  • Learning methods may vary depending on the characteristics of the data. For example, if the goal is to accurately predict data transmitted by a transmitter in a communication system, supervised learning is preferable to unsupervised learning or reinforcement learning.
  • the learning model corresponds to the human brain, and the most basic linear model can be thought of, but the machine learning paradigm that uses highly complex neural network structures, such as artificial neural networks, as learning models is called deep learning.
  • the neural network cores used in learning methods are mainly divided into deep neural networks (DNN), convolutional deep neural networks (CNN), and recurrent neural networks (RNN).
  • DNN deep neural networks
  • CNN convolutional deep neural networks
  • RNN recurrent neural networks
  • An artificial neural network is an example of a network of multiple perceptrons.
  • the perceptron structure illustrated in Fig. 6 can be explained as consisting of a total of three layers based on input and output values.
  • An artificial neural network in which there are H perceptrons of (d+1) dimensions between the 1st layer and the 2nd layer, and K perceptrons of (H+1) dimensions between the 2nd layer and the 3rd layer can be expressed as in Fig. 7.
  • Figure 7 is a schematic diagram illustrating an example of a multilayer perceptron structure.
  • the layer where the input vector is located is called the input layer
  • the layer where the final output value is located is called the output layer
  • all layers located between the input layer and the output layer are called hidden layers.
  • the example in Fig. 7 shows three layers, but when counting the number of layers in an actual artificial neural network, the input layer is excluded, so it can be viewed as a total of two layers.
  • An artificial neural network is composed of perceptrons, which are basic blocks, connected in two dimensions.
  • the aforementioned input, hidden, and output layers can be applied jointly not only to multilayer perceptrons but also to various artificial neural network structures, such as CNNs and RNNs, which will be described later.
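  • As an illustration of the multilayer perceptron described above (d-dimensional input, H perceptrons of (d+1) dimensions, K perceptrons of (H+1) dimensions), a minimal forward-pass sketch follows; the sigmoid activation, random weights, and example dimensions are assumptions for illustration, not part of the disclosure:

    import numpy as np

    rng = np.random.default_rng(0)
    d, H, K = 4, 8, 3                        # input, hidden, and output dimensions (example values)

    W1 = rng.standard_normal((H, d + 1))     # H perceptrons of (d+1) dimensions (weights + bias)
    W2 = rng.standard_normal((K, H + 1))     # K perceptrons of (H+1) dimensions (weights + bias)

    def sigmoid(v):
        return 1.0 / (1.0 + np.exp(-v))

    def forward(x):
        h = sigmoid(W1 @ np.append(x, 1.0))  # hidden layer: weighted sum plus bias input, then activation
        y = sigmoid(W2 @ np.append(h, 1.0))  # output layer
        return y

    print(forward(rng.standard_normal(d)))   # K output values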
  • the machine learning paradigm that uses sufficiently deep artificial neural networks as learning models is called deep learning.
  • the artificial neural network used for deep learning is called a deep neural network (DNN).
  • Figure 8 is a schematic diagram illustrating an example of a deep neural network.
  • the deep neural network illustrated in Figure 8 is a multilayer perceptron consisting of eight hidden layers plus an output layer.
  • the multilayer perceptron structure is referred to as a fully connected neural network.
  • in a fully connected neural network, there is no connection between nodes located in the same layer, and there is a connection only between nodes located in adjacent layers.
  • DNN has a fully connected neural network structure and is composed of a combination of multiple hidden layers and activation functions, and can be usefully applied to identify correlation characteristics between inputs and outputs.
  • the correlation characteristic can mean the joint probability of inputs and outputs.
  • Figure 9 is a schematic diagram illustrating an example of a convolutional neural network.
  • the convolutional neural network structure of Fig. 9 assumes a case where nodes are arranged two-dimensionally, with w nodes in width and h nodes in height.
  • since a weight is attached to each connection from one input node to the hidden layer, a total of h×w weights must be considered per input node. Since there are h×w nodes in the input layer, a total of h²w² weights are required between two adjacent layers.
  • the convolutional neural network of Fig. 9 has a problem in that the number of weights increases exponentially with the number of connections. Therefore, instead of considering the connections of all nodes between adjacent layers, it assumes that there are small filters, and performs weighted sum and activation function operations on the overlapping portions of the filters, as in Fig. 10.
  • Figure 10 is a schematic diagram illustrating an example of a filter operation in a convolutional neural network.
  • Each filter has a weight corresponding to its size, and weight learning can be performed to extract and output a specific feature on the image as a factor.
  • weight learning can be performed to extract and output a specific feature on the image as a factor.
  • a 3x3 filter is applied to the upper left 3x3 region of the input layer, and the output value resulting from performing weighted sum and activation function operations on the corresponding node is stored in z22.
  • the above filter performs weighted sum and activation function operations while scanning the input layer, moving at a certain horizontal and vertical interval, and places the output value at the position of the current filter.
  • This operation method is similar to the convolution operation for images in the field of computer vision, so a deep neural network with this structure is called a convolutional neural network (CNN), and the hidden layer generated as a result of the convolution operation is called a convolutional layer.
  • a neural network with multiple convolutional layers is called a deep convolutional neural network (DCNN).
  • the number of weights can be reduced by calculating a weighted sum that includes only the nodes located in the area covered by the filter, starting from the node where the current filter is located. This allows a single filter to focus on features within a local area. Accordingly, CNNs can be effectively applied to image data processing where physical distance in a two-dimensional area is an important criterion for judgment. Meanwhile, CNNs can apply multiple filters immediately before the convolutional layer, and can generate multiple output results through the convolution operation of each filter.
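  • A minimal sketch of the filter operation described above (a single 3x3 filter scanned over an h x w input with stride 1); the input values and filter weights are illustrative assumptions:

    import numpy as np

    h, w = 6, 6
    x = np.arange(h * w, dtype=float).reshape(h, w)  # input-layer nodes arranged two-dimensionally
    filt = np.ones((3, 3)) / 9.0                     # one 3x3 filter (illustrative weights)

    out = np.zeros((h - 2, w - 2))                   # convolutional layer (no padding, stride 1)
    for i in range(h - 2):
        for j in range(w - 2):
            # weighted sum over only the nodes covered by the filter at this position
            out[i, j] = np.sum(x[i:i + 3, j:j + 3] * filt)

    print(out.shape)  # (4, 4); the filter reuses 9 weights instead of the h²w² fully connected weights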
  • a structure that applies a method of inputting one element of the data sequence at each timestep and inputting the output vector (hidden vector) of the hidden layer output at a specific timestep together with the immediately following element in the sequence is called a recurrent neural network structure.
  • Figure 11 is a schematic diagram illustrating an example of a neural network structure in which a recurrent loop exists.
  • a recurrent neural network is a structure that inputs elements (x1(t), x2(t), ..., xd(t)) of a data sequence at a time point t into a fully connected neural network, and then inputs the hidden vectors (z1(t-1), z2(t-1), ..., zH(t-1)) of the immediately preceding time point t-1 together and applies a weighted sum and activation function.
  • the reason for transmitting the hidden vector to the next time point in this way is because the information in the input vectors of the preceding time points is considered to be accumulated in the hidden vector of the current time point.
  • the recurrent neural network operates in a predetermined order of time for the input data sequence.
  • the hidden vector (z1(1), z2(1), ..., zH(1)) is input together with the input vector (x1(2), x2(2), ..., xd(2)) at time point 2, and the vector (z1(2), z2(2), ..., zH(2)) of the hidden layer is determined through a weighted sum and an activation function. This process is repeated at time point 2, time point 3, ..., time point T.
  • Recurrent neural networks are designed to be useful for processing sequence data (e.g., natural language processing).
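  • A minimal sketch of the recurrence described above, feeding each sequence element together with the hidden vector of the preceding time point; the dimensions, random weights, and tanh activation are illustrative assumptions:

    import numpy as np

    rng = np.random.default_rng(0)
    d, Hdim, T = 3, 5, 4                    # input dimension, hidden dimension, sequence length
    Wx = rng.standard_normal((Hdim, d))     # weights applied to the input x(t)
    Wz = rng.standard_normal((Hdim, Hdim))  # weights applied to the previous hidden vector z(t-1)

    z = np.zeros(Hdim)                      # hidden vector before time point 1
    for t in range(T):
        x_t = rng.standard_normal(d)        # element of the data sequence at time point t
        z = np.tanh(Wx @ x_t + Wz @ z)      # weighted sum of x(t) and z(t-1), then activation

    print(z)                                # hidden vector after the last time point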
  • various deep learning techniques such as DNN, CNN, RNN, restricted Boltzmann machine (RBM), deep belief network (DBN), and deep Q-network exist, and these can be applied to fields such as computer vision, speech recognition, natural language processing, and speech/signal processing.
  • AI-based physical layer transmission refers to the application of AI-based signal processing and communication mechanisms, rather than traditional communication frameworks, in the fundamental signal processing and communication mechanisms. For example, this may include deep learning-based channel coding and decoding, deep learning-based signal estimation and detection, deep learning-based MIMO mechanisms, and AI-based resource scheduling and allocation.
  • THz waves, also known as sub-millimeter waves, typically refer to the frequency range between 0.1 THz and 10 THz, with corresponding wavelengths ranging from 0.03 mm to 3 mm.
  • the 100 GHz to 300 GHz band (sub-THz band) is considered a key part of the THz band for cellular communications. Adding the sub-THz band to the mmWave band will increase the capacity of 6G cellular communications.
  • 300 GHz to 3 THz lies in the far infrared (IR) frequency band. While part of the optical band, the 300 GHz to 3 THz band lies at the boundary of the optical band, immediately following the RF band. Therefore, this 300 GHz to 3 THz band exhibits similarities to RF.
  • key characteristics of THz communications include (i) the widely available bandwidth to support very high data rates and (ii) the high path loss that occurs at high frequencies (requiring highly directional antennas).
  • the narrow beamwidths generated by highly directional antennas reduce interference.
  • the small wavelength of THz signals allows for a significantly larger number of antenna elements to be integrated into devices and base stations operating in this band. This enables the use of advanced adaptive array technologies to overcome range limitations.
  • OWC technology is designed for 6G communications, in addition to RF-based communications for all possible device-to-access networks. These networks connect to network-to-backhaul/fronthaul networks.
  • OWC technology has already been used in 4G communication systems, but it will be used more widely to meet the demands of 6G communication systems.
  • OWC technologies such as light fidelity, visible light communication, optical camera communication, and wideband-based FSO communication are already well-known. Communications based on optical wireless technology can provide very high data rates, low latency, and secure communications.
  • LiDAR can also be used for ultra-high-resolution 4D mapping in 6G communications based on wideband.
  • FSO can be a promising technology for providing backhaul connectivity in 6G systems, in conjunction with fiber-optic networks.
  • FSO supports high-capacity backhaul connectivity for remote and non-remote areas, such as the ocean, space, underwater, and isolated islands.
  • FSO also supports cellular base station (BS) connections.
  • BS base station
  • One of the key technologies for improving spectral efficiency is the application of MIMO technology. As MIMO technology improves, spectral efficiency also improves. Therefore, massive MIMO technology will be crucial in 6G systems. Because MIMO technology utilizes multiple paths, multiplexing technology must be considered to ensure that data signals can be transmitted along more than one path, as well as beam generation and operation technologies suitable for the THz band.
  • Blockchain will become a crucial technology for managing massive amounts of data in future communication systems.
  • Blockchain is a form of distributed ledger technology.
  • a distributed ledger is a database distributed across numerous nodes or computing devices. Each node replicates and stores an identical copy of the ledger.
  • Blockchains are managed by a peer-to-peer network and can exist without being managed by a central authority or server. Data on a blockchain is collected and organized into blocks. Blocks are linked together and protected using cryptography.
  • Blockchain perfectly complements large-scale IoT with its inherently enhanced interoperability, security, privacy, reliability, and scalability. Therefore, blockchain technology offers several features, such as interoperability between devices, traceability of large amounts of data, autonomous interaction with other IoT systems, and the massive connectivity stability of 6G communication systems.
  • 3D BS will be provided via low-orbit satellites and UAVs. Adding a new dimension in altitude and associated degrees of freedom, 3D connections differ significantly from existing 2D networks.
  • Unsupervised reinforcement learning holds promise in the context of 6G networks. Supervised learning approaches cannot label the massive amounts of data generated by 6G networks. Unsupervised learning does not require labeling. Therefore, this technology can be used to autonomously build representations of complex networks. Combining reinforcement learning and unsupervised learning allows for truly autonomous network operation.
  • Unmanned Aerial Vehicles will be a key element in 6G wireless communications. In most cases, high-speed wireless connections will be provided using UAV technology.
  • BS entities are installed on UAVs to provide cellular connectivity.
  • UAVs offer specific capabilities not found in fixed BS infrastructure, such as easy deployment, robust line-of-sight links, and controlled mobility. During emergencies such as natural disasters, deploying terrestrial communication infrastructure is not economically feasible, and sometimes, volatile environments make it impossible to provide services. UAVs can easily handle these situations.
  • UAVs will become a new paradigm in wireless communications. This technology facilitates three fundamental requirements for wireless networks: enhanced mobile broadband (eMBB), URLLC, and mMTC.
  • eMBB enhanced mobile broadband
  • URLLC ultra-reliable low latency communications
  • mMTC massive machine type communications
  • UAVs can also support various purposes, such as enhancing network connectivity, fire detection, disaster emergency services, security and surveillance, pollution monitoring, parking monitoring, and accident monitoring. Therefore, UAV technology is recognized as one of the most important technologies for 6G communications.
  • Tight integration of multiple frequencies and heterogeneous communication technologies is crucial in 6G systems. As a result, users will be able to seamlessly move from one network to another without requiring any manual configuration on their devices. The best network will be automatically selected from available communication technologies. This will break the limitations of the cell concept in wireless communications. Currently, user movement from one cell to another in dense networks results in excessive handovers, resulting in handover failures, handover delays, data loss, and a ping-pong effect. 6G cell-free communications will overcome all of these challenges and provide better QoS. Cell-free communications will be achieved through multi-connectivity and multi-tier hybrid technologies, as well as heterogeneous radios on devices.
  • WIET uses the same fields and waves as wireless communication systems. Specifically, sensors and smartphones will be charged using wireless power transfer during communication. WIET is a promising technology for extending the life of battery-powered wireless systems. Therefore, battery-less devices will be supported by 6G communications.
  • each access network will be connected to backhaul connections, such as fiber optics and FSO networks. To accommodate the massive number of access networks, there will be tight integration between access and backhaul networks.
  • Beamforming is a signal processing procedure that adjusts an antenna array to transmit a wireless signal in a specific direction. It is a subset of smart antennas or advanced antenna systems. Beamforming technology offers several advantages, including high signal-to-noise ratio, interference avoidance and rejection, and high network efficiency.
  • Holographic beamforming (HBF) is a novel beamforming method that differs significantly from MIMO systems because it uses software-defined antennas. HBF will be a highly effective approach for efficient and flexible signal transmission and reception in multi-antenna communication devices in 6G.
  • THz-band signals have strong linearity, which can create many shadow areas due to obstacles.
  • LIS technology, which enables expanded communication coverage, enhanced communication stability, and additional value-added services by installing LIS near these shadow areas, is becoming increasingly important.
  • LIS is an artificial surface made of electromagnetic materials that can alter the propagation of incoming and outgoing radio waves. While LIS can be viewed as an extension of massive MIMO, it differs from massive MIMO in its array structure and operating mechanism. Furthermore, LIS operates as a reconfigurable reflector with passive elements, passively reflecting signals without using active RF chains, which offers the advantage of low power consumption. Furthermore, because each passive reflector in LIS must independently adjust the phase shift of the incoming signal, this can be advantageous for wireless communication channels. By appropriately adjusting the phase shift via the LIS controller, the reflected signal can be collected at the target receiver to boost the received signal power.
  • THz waves are located between the RF (Radio Frequency)/millimeter (mm) and infrared bands, and (i) compared to visible light/infrared light, they penetrate non-metallic/non-polarizable materials well, and compared to RF/millimeter waves, they have a shorter wavelength, so they have high linearity and can focus beams.
  • since the photon energy of THz waves is only a few meV, they have the characteristic of being harmless to the human body.
  • the frequency bands expected to be used for THz wireless communication may be the D-band (110 GHz to 170 GHz) or H-band (220 GHz to 325 GHz), which have low propagation loss due to molecular absorption in the air. Discussions on standardization of THz wireless communication are being centered around the IEEE 802.15 THz working group in addition to 3GPP, and standard documents issued by the IEEE 802.15 Task Group (TG3d, TG3e) may specify or supplement the contents described in various embodiments of the present disclosure. THz wireless communication can be applied to wireless cognition, sensing, imaging, wireless communication, THz navigation, etc.
  • Figure 14 is a diagram illustrating an example of a THz communication application.
  • Table 2 below shows examples of technologies that can be used in THz waves.
  • THz wireless communications can be categorized based on the methods used to generate and receive THz waves.
  • THz generation methods can be categorized as either optical or electronic-based.
  • Methods for generating THz using electronic components include a method using semiconductor components such as a resonant tunneling diode (RTD), a method using a local oscillator and a multiplier, a MMIC (Monolithic Microwave Integrated Circuits) method using an integrated circuit based on a compound semiconductor HEMT (High Electron Mobility Transistor), and a method using a Si-CMOS-based integrated circuit.
  • a multiplier (doubler, tripler, multiplier)
  • a multiplier is essential.
  • the multiplier is a circuit whose output frequency is N times the input frequency; the output is matched to the desired harmonic frequency, and all remaining frequencies are filtered out.
  • beamforming can be implemented by applying an array antenna or the like to the antenna of Fig. 15.
  • IF represents intermediate frequency
  • tripler and multiplier represent frequency multipliers
  • PA represents power amplifier
  • LNA low noise amplifier
  • PLL phase-locked loop.
  • Fig. 17 is a diagram illustrating an example of an optical element-based THz wireless communication transceiver.
  • an optical coupler refers to a semiconductor device that transmits an electrical signal using optical waves to provide electrical isolation and coupling between circuits or systems
  • a UTC-PD Uni-Travelling Carrier Photo-Detector
  • the UTC-PD is capable of detecting light at 150 GHz or higher.
  • an EDFA Erbium-Doped Fiber Amplifier
  • a PD Photo Detector
  • an OSA represents an optical module (Optical Sub-Assembly) that modularizes various optical communication functions (photoelectric conversion, electro-optical conversion, etc.) into a single component
  • a DSO represents a digital storage oscilloscope.
  • Fig. 18 is a diagram illustrating the structure of a photon source-based transmitter.
  • Figure 19 is a drawing showing the structure of an optical modulator.
  • the available bandwidth can be classified based on the oxygen attenuation of 10^-2 dB/km in the spectrum up to 1 THz. Accordingly, a framework in which the available bandwidth is composed of multiple band chunks can be considered. As an example of the above framework, if the THz pulse length for one carrier is set to 50 ps, the bandwidth (BW) becomes approximately 20 GHz.
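  • As a quick numeric check of the 50 ps example above (assuming the simple reciprocal relation between pulse length and bandwidth):

    pulse_length = 50e-12           # THz pulse length for one carrier [s]
    bandwidth = 1.0 / pulse_length  # [Hz], assuming BW ~ 1 / pulse length
    print(bandwidth / 1e9)          # 20.0 GHz, matching the "approximately 20 GHz" above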
  • Effective down-conversion from the infrared band (IR band) to the terahertz band (THz band) depends on how to utilize the nonlinearity of the optical/electrical converter (O/E converter).
  • O/E converter optical/electrical converter
  • a terahertz transmission and reception system can be implemented using a single optical-to-electrical converter.
  • the number of optical-to-electrical converters may be equal to the number of carriers. This phenomenon will be particularly noticeable in a multi-carrier system that utilizes multiple broadbands according to the aforementioned spectrum usage plan.
  • a frame structure for the multi-carrier system may be considered.
  • a signal down-converted using an optical-to-electrical converter may be transmitted in a specific resource region (e.g., a specific frame).
  • the frequency region of the specific resource region may include multiple chunks. Each chunk may be composed of at least one component carrier (CC).
  • the present disclosure can provide a method for performing effective downlink beam refinement or base station transmit beam refinement (BS Tx beam refinement) using multiple beams.
  • BS Tx beam refinement base station transmit beam refinement
  • the present disclosure can provide a beam refinement method that is more effective than a beam refinement method based on a conventional exhaustive beamforming method.
  • the propagation path loss can be derived based on the following mathematical equation 1.
  • Pathloss = FreeSpacePathLoss + 10·log(d) + AT [dB] + shadow fading
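  • A minimal numeric sketch of Equation 1; the free-space term is taken as the standard 20·log10(4·pi·d·f/c) form, and the distance, frequency, atmospheric absorption, and shadow fading values below are assumptions for illustration, not values from the disclosure:

    import math

    def pathloss_db(d_m, f_hz, at_db, shadow_db):
        """Equation 1: Pathloss = FreeSpacePathLoss + 10*log10(d) + AT + shadow fading [dB]."""
        fspl = 20 * math.log10(4 * math.pi * d_m * f_hz / 3e8)  # assumed free-space path loss form
        return fspl + 10 * math.log10(d_m) + at_db + shadow_db

    # Example (assumed values): 100 m link at 150 GHz, 1 dB molecular absorption, 4 dB shadow fading.
    print(round(pathloss_db(100.0, 150e9, 1.0, 4.0), 1))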
  • the HPBW of the beam can be derived based on the following mathematical expression 2.
  • f represents the frequency
  • N represents the number of antennas
  • d represents the antenna spacing
  • θ represents the beam steering angle.
  • the HPBW can be obtained as approximately 1.5 degrees based on Equation 2.
  • FIG. 21 is a diagram for explaining the dependence between the frequency and angle of a beam applicable to the present disclosure.
  • Beamforming based on a phased array can be a method that performs beamforming in the direction of a desired angle by linearly adding phase values to each antenna element.
  • This method can be designed based on a center frequency, but the subcarrier positions of the actual transmitted signal can be spaced at a certain interval from the center frequency. Since the phase component varies depending on the frequency location, a beam can be generated in which an additional phase component is added to the phase component by the phased array as the offset interval from the center frequency increases. This additional phase can cause a slight angle steer compared to the desired target beam angle.
  • f_m represents the subcarrier position of the signal actually transmitted
  • f_c represents the center frequency
  • Figure 21 shows the degree to which the beam angle is distorted depending on the frequency position.
  • FIG. 22 is a drawing for explaining the relationship between the number of beams and antennas applicable to the present disclosure.
  • FIG. 23 is a diagram illustrating an example of beamforming applicable to the present disclosure.
  • Figure 23 illustrates an example of exhaustive beamforming.
  • a CSI with four Transmission Configuration Indications can be configured for a single SSB index.
  • a maximum of 64 CSIs can be configured for a single SSB.
  • the 64 CSIs configured for each SSB can be configured as follows.
  • the maximum number of CSI ports is 32, so the maximum number of CSIs that can be QCLed on SSB is 32. Since the base station cannot know where the terminal is located within the SSB beam, beam refinement can be performed by transmitting 32 CSIs, analyzing 32 measurement results, and finally selecting the CSI, assuming 32 CSIs are configured.
  • the present disclosure describes a beam refinement method that can achieve the same results using fewer resources than conventional methods.
  • FIG. 24 is a drawing illustrating an example of beam refinement applicable to the present disclosure.
  • Exhaustive beamforming schemes can have a complexity of N².
  • This can be a complexity that includes both base station transmit beam refinement and terminal receive beam refinement.
  • base station transmit beam refinement can have a complexity of N.
  • the present disclosure proposes a method for performing beam refinement by additionally utilizing one or more overlapping regions formed by multiple beams when performing beamforming using multiple beams. Furthermore, the present disclosure proposes a method for minimizing the number of beam measurements by utilizing the overlapping region.
  • the beams available in the present disclosure can be configured with a relatively wider area than those in the prior art.
  • FIG. 25 is a drawing for explaining an example of code implementing a beam refinement method applicable to the present disclosure.
  • the pseudo code of Fig. 25 is an example of a code that implements an algorithm for determining the number of measurements in the beam refinement operation of a base station.
  • FIG. 26 is another drawing illustrating an example of a beam refinement method applicable to the present disclosure.
  • when measuring 9 areas using 2 beams, the entire area can first be divided into 3 areas and measured using 2 beams with 1 overlapping area. Next, each of the 3 areas that were first divided can be divided again into 3 areas, for a total of 9 areas, and measurement can be performed.
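  • A minimal counting sketch of this procedure; it assumes each round of k overlapping beams distinguishes 2k-1 sub-areas (consistent with the 2-beam/3-area example above and the 3/5, 4/7, 5/9 examples later), and is an illustration rather than the Fig. 25 pseudo code itself:

    def proposed_measurements(num_areas, beams_per_round=2):
        """Pilot transmissions needed when each round of `beams_per_round` overlapping
        beams distinguishes (2 * beams_per_round - 1) sub-areas."""
        areas_per_round = 2 * beams_per_round - 1   # e.g. 2 beams -> 3 distinguishable areas per round
        rounds, reach = 0, 1
        while reach < num_areas:                    # rounds needed to resolve num_areas areas
            reach *= areas_per_round
            rounds += 1
        return rounds * beams_per_round

    print(proposed_measurements(9, 2))  # 2 rounds x 2 beams = 4 pilot transmissions for the 9-area example
    print(9)                            # an exhaustive sweep of the same 9 areas would need 9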
  • the overhead of the system can be reduced if a beam corresponding to the least common multiple of 2 and 3 is used.
  • the reduced overhead can be determined according to the following mathematical expression 4.
  • the number of beams can be used to mean the same thing as the number of measurement areas that multiple beams should measure or the beam resolution.
  • the overhead of the system may be greater than in the conventional method.
  • Figure 27 is a drawing for explaining the effect of the beam refinement method according to the present disclosure.
  • Figure 27 illustrates the simulation results of the required number of beam measurements as the number of beams increases.
  • the linear segments in Figure 27 represent the number of beam measurements based on the conventional exhaustive method. Referring to Figure 27, it can be confirmed that, except when the number of beams is 3, 6, or 9, the proposed method records a higher number of measurements than the conventional method.
  • FIGS. 28 to 30 are drawings illustrating some examples of beam refinement methods applicable to the present disclosure.
  • FIGS. 28 to 30 illustrate examples of performing beam refinement according to the present disclosure when the number of beams is 3, 4, and 5, respectively.
  • when 3 beams are used as in FIG. 28, beam refinement can be performed on 5 or fewer beam areas.
  • when 4 beams are used as in FIG. 29, beam refinement can be performed on 7 or fewer areas.
  • when 5 beams are used as in FIG. 30, beam refinement can be performed on 9 or fewer areas.
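  • The three examples above follow a simple pattern, shown below as a sketch (a reading of FIGS. 28 to 30, not a formula stated explicitly in the text): k mutually overlapping beams yield k beam-only areas plus k-1 overlap areas, i.e. up to 2k-1 distinguishable beam areas.

    def max_areas(num_beams):
        # k beam-only areas + (k - 1) overlap areas = 2k - 1 distinguishable areas
        return 2 * num_beams - 1

    for k in (3, 4, 5):
        print(k, max_areas(k))  # 3 -> 5, 4 -> 7, 5 -> 9, matching FIGS. 28 to 30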
  • the beams can be designed so that the 4 beams overlap in at least one area.
  • Figures 31 to 34 are several drawings for explaining the effect of the beam refinement method according to the present disclosure.
  • Figures 31 to 33 illustrate the results of simulating the number of required beam measurements or beam measurement overhead when the number of beams is 3, 4, and 5, respectively.
  • the number of beams for which the remainder of the number of beam areas divided by the number of beams is 0 has a high priority, so the number of beams can be initially determined to be 4.
  • when using three beams, a gain can be obtained for up to five beam areas. Furthermore, when using four beams, a gain can be obtained for up to seven beam areas. Furthermore, when using five beams, a gain can be obtained for up to 18 beam areas.
  • both gain-producing and non-gain-producing beam segments can occur in areas exceeding the aforementioned number of beams. Therefore, beam refinement can be performed by selecting the number of beams with gain from the aforementioned combinations.
  • Figure 34 shows the simulation results of the number of beam measurements or beam measurement overhead when the number of beams is increased to 64 when using 5 beams.
  • Referring to Figure 34, it can be seen that as the beam measurement area or the number of beams increases, the difference in beam measurement overhead compared to the exhaustive method becomes larger.
  • in some sections, the present disclosure and the exhaustive method generate the same beam measurement overhead; this can be resolved by increasing the number of beams beyond the numbers described above.
  • Figure 35 is another drawing for explaining the effect of the beam refinement method according to the present disclosure.
  • FIG. 35 is a diagram illustrating hierarchical beamforming.
  • the hierarchical beamforming method may refer to a method of performing measurement and reporting by dividing each beam in half until a desired beam resolution area is reached. For example, when performing measurements on 12 beam areas, beam widths corresponding to 6 areas can be set and 2 beams can be transmitted. After receiving measurement reports for the 2 transmitted beams, the base station can select the beam with the best quality. Next, the base station can set beam widths corresponding to 3 areas and transmit 2 beams again.
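  • A minimal counting sketch of this hierarchical procedure under the above description (2 beams per round, halving the covered area each round until the desired resolution is reached); the function name and rounding are illustrative assumptions:

    import math

    def hierarchical_measurements(num_areas):
        """2 beams transmitted per round, each round halving the search area."""
        rounds = math.ceil(math.log2(num_areas))
        return 2 * rounds

    print(hierarchical_measurements(12))  # about 2 beams x 4 rounds = 8 pilot transmissions for 12 areas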
  • Figure 35 illustrates simulation results comparing the hierarchical beamforming method with the method proposed in this disclosure.
  • the dashed line represents the overhead result of the method proposed in this disclosure
  • the solid line represents the overhead result of the method that performs measurement beamforming by adaptively using 2 to 5 beams
  • the dashed line represents the overhead result of the method that performs hierarchical beamforming using 2 beams. According to Figure 35, it can be confirmed that the proposed method has a smaller overhead than the hierarchical beamforming method.
  • Figure 36 is a drawing showing a flow chart of a beam refinement method.
  • Fig. 36(a) illustrates a flow chart for a beam refinement method according to a conventional method
  • Fig. 36(b) illustrates a flow chart for a beam refinement method according to the present disclosure
  • Fig. 36(a) illustrates a flow chart for a beam refinement method according to an exhaustive beamforming method.
  • a prerequisite for beam refinement may be the completion of the RACH procedure.
  • beam refinement may be performed after contention resolution of the RACH procedure is performed.
  • the RACH procedure may include operations of transmitting and receiving RACH message 3 (Msg 3) and RACH message 4 (Msg 4) between the terminal and the base station.
  • Contention resolution may be performed by Msg 4.
  • Operations defined in the existing 3GPP standard may be performed based on Msg 3 and Msg 4.
  • the terminal may use UE capability information to inform the base station whether CSI-based beam refinement or pilot signal-based beam refinement is to be performed.
  • the base station may additionally include pilot signal configuration information in Msg 4.
  • Pilot configuration = {ID, time resource (symbol index), frequency resource (start RB, end RB)}
  • the base station can assign a unique ID and time and frequency resources to the pilot signal so that the terminal can distinguish the measurement results when performing the report.
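The configuration triple above can be represented as a small data structure. The following is a minimal sketch in which the class and field names (PilotConfig, pilot_id, symbol_index, start_rb, end_rb) and the example values are assumptions for illustration, not signalling defined in the disclosure.

```python
from dataclasses import dataclass

@dataclass
class PilotConfig:
    """Sketch of the pilot signal configuration carried in Msg 4.

    Mirrors the triple described above:
    {ID, time resource (symbol index), frequency resource (start RB, end RB)}.
    All field names are assumptions for illustration.
    """
    pilot_id: int      # unique ID so the terminal can distinguish results in its report
    symbol_index: int  # time resource: OFDM symbol index carrying the pilot
    start_rb: int      # frequency resource: first resource block
    end_rb: int        # frequency resource: last resource block

# Example with illustrative values: two pilots on the same symbol, disjoint RB ranges.
configs = [PilotConfig(pilot_id=0, symbol_index=3, start_rb=10, end_rb=12),
           PilotConfig(pilot_id=1, symbol_index=3, start_rb=20, end_rb=22)]
```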
  • Figure 37 is a diagram for explaining frequency resource allocation applicable to the present disclosure.
  • the carrier position of a pilot signal can be calculated based on the following mathematical expression 6 and code.
  • Resource allocation can be done in units of resource blocks (RBs) or resource elements (REs). Since REs can vary in size based on subcarrier spacing (SCS), conditional conversion operations may be required. For example, if the center frequency is 150 GHz, the target angle is 60 degrees, the SSB beam width is 1.6 degrees, and the resolution within the SSB beam is 0.1 degrees, the determined pilot signal can be mapped to a frequency range determined based on the following mathematical expression 7.
  • Pilot signal resource 1 = 150 / sin(60°) × sin(60.1°) = 150.1509 GHz
  • 8 (or n1) pilot signals can be allocated above the center frequency.
  • 8 (or n1) pilot signals can be allocated to positions of 314 RE, 628 RE, 942 RE, …, 314·n1 RE.
  • 8 (or n2) pilot signals can be allocated below the center frequency.
  • 8 (or n2) pilot signals can be allocated to positions of 315 RE, 630 RE, 945 RE, …, 315·n2 RE. The entire frequency band required for this can be approximately 2.4 GHz.
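The worked example above can be reproduced with a short calculation. The sketch below uses the relation f_pilot = f_center · sin(target + offset) / sin(target) implied by the example, and assumes a 480 kHz subcarrier spacing only because that value reproduces the 314-RE step quoted above; the function name and the SCS value are assumptions, not parameters fixed by the disclosure.

```python
import math

def pilot_frequency_ghz(center_ghz: float, target_deg: float, offset_deg: float) -> float:
    """Frequency that steers a pilot toward (target + offset) degrees.

    Sketch of the relation used in the worked example above:
    f_pilot = f_center / sin(target) * sin(target + offset).
    """
    return (center_ghz / math.sin(math.radians(target_deg))
            * math.sin(math.radians(target_deg + offset_deg)))

# Worked example from the text: 150 GHz center, 60-degree target, 0.1-degree resolution.
f1 = pilot_frequency_ghz(150.0, 60.0, 0.1)
print(round(f1, 4))                      # -> 150.1509 (GHz)

# Assuming a 480 kHz subcarrier spacing (an assumption chosen because it
# reproduces the 314-RE step above), the offset from the center frequency is:
scs_ghz = 480e3 / 1e9
print(round((f1 - 150.0) / scs_ghz))     # -> ~314 resource elements above the center
```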
  • the resources of the time domain can be determined based on the following mathematical expression 8.
  • the base station can receive the corresponding pilot results when the SSB beam is divided into three regions. At this time, the base station can determine that the terminal is in the first region when both the first pilot signal and the second pilot signal are measured or reported, determine that the terminal is in the second region when the first pilot signal is measured or reported, and determine that the terminal is in the third region when the second pilot signal is measured or reported. At this time, one of the first to third regions can be a region where two beams overlap.
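The base-station-side decision rule described above can be written as a short mapping. In the sketch below the return values (1, 2, 3) and the handling of an empty report are assumptions for illustration.

```python
def region_from_reports(first_reported: bool, second_reported: bool) -> int:
    """Map the reported pilot signals to one of the three regions described above.

    Both pilots reported -> first region (overlap); only the first pilot ->
    second region; only the second pilot -> third region. Return codes and the
    error handling are assumptions.
    """
    if first_reported and second_reported:
        return 1   # first region: the two beams overlap here
    if first_reported:
        return 2   # second region: covered only by the first beam
    if second_reported:
        return 3   # third region: covered only by the second beam
    raise ValueError("no pilot signal was measured or reported")

print(region_from_reports(True, True))    # -> 1 (overlap region)
print(region_from_reports(False, True))   # -> 3 (second beam only)
```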
  • the terminal can determine the beam section in which it is located among the 1st to 9th (or nth) beam sections and transmit a reference signal (e.g., CSI-RS) for data transmission and reception using that beam.
  • the method further comprises a step of transmitting UE capability information to the base station, wherein the UE capability information may include information indicating that the UE will transmit a beam measurement result based on a pilot signal.
  • the plurality of beams may include a first beam and a second beam whose regions are recursively set
  • the terminal may be located in one of a first region in which the first beam and the second beam overlap, a second region excluding the first region in the region of the first beam, and a third region excluding the first region in the second beam
  • the plurality of pilot signals may include a first pilot signal corresponding to a measurement result for the first beam and a second pilot signal corresponding to a measurement result for the second beam.
  • based on the terminal being located in the first region, the terminal can transmit measurement results for both the first pilot signal and the second pilot signal.
  • based on the terminal being located in the second region, the terminal can transmit a measurement result for the first pilot signal, and based on the terminal being located in the third region, the terminal can transmit a measurement result for the second pilot signal.
  • based on the terminal being located in the first region, the terminal may be located in one of a fourth region, which is included in the first region and in which the first beam and the second beam overlap, a fifth region, which is included in the first region and excludes the fourth region within the region of the first beam, and a sixth region, which is included in the first region and excludes the fourth region within the region of the second beam, and the plurality of pilot signals may include a third pilot signal corresponding to a measurement result for the first beam and a fourth pilot signal corresponding to a measurement result for the second beam (see the refinement sketch below).
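Applying the same rule inside the overlap region gives the recursive refinement just described. The following is a minimal sketch; the string labels and the per-round report format are illustrative assumptions.

```python
def refine_region(reports_per_round):
    """Narrow down the terminal location over successive refinement rounds.

    Each round applies the overlap / first-only / second-only rule from the
    previous sketch; a later round repeats the rule inside the region selected
    in the earlier round (e.g., first region, then fourth/fifth/sixth region).
    """
    path = []
    for first_reported, second_reported in reports_per_round:
        if first_reported and second_reported:
            path.append("overlap")       # e.g., first region, then fourth region
        elif first_reported:
            path.append("first-only")    # e.g., second region, then fifth region
        elif second_reported:
            path.append("second-only")   # e.g., third region, then sixth region
        else:
            raise ValueError("no pilot signal was reported in this round")
    return path

# Two rounds: overlap region first, then the first-beam-only part of it (fifth region).
print(refine_region([(True, True), (True, False)]))  # -> ['overlap', 'first-only']
```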
  • a terminal may be provided in a communication system.
  • the terminal may include a transceiver and at least one processor, wherein the at least one processor may be configured to perform the operating method of the terminal according to FIG. 38.
  • a device for controlling a terminal in a communication system may be provided.
  • the device may include at least one processor and at least one memory operably connected to the at least one processor.
  • the at least one memory may be configured to store instructions for performing the operating method of the terminal according to FIG. 38 based on instructions executed by the at least one processor.
  • one or more non-transitory computer-readable media storing one or more commands may be provided.
  • the one or more commands, when executed by one or more processors, perform operations, and the operations may include an operating method of a terminal according to FIG. 38.
  • FIG. 39 is a diagram illustrating an example of a method for a base station to transmit and receive signals in a system applicable to the present disclosure.
  • the method includes a step (S3910) of transmitting pilot signal configuration information to a user equipment (UE), a step (S3920) of transmitting a plurality of pilot signals to the UE based on the pilot signal configuration information, and a step (S3930) of receiving beam measurement results measured based on the plurality of pilot signals from the UE, wherein the plurality of pilot signals are for beam measurement results measured based on the plurality of beams, and at least two of the plurality of beams may have an overlapping area.
  • the method further comprises the step of receiving UE capability information from the terminal, wherein the UE capability information may include information indicating that the terminal will transmit a beam measurement result based on a pilot signal.
  • the plurality of beams may include a first beam and a second beam whose regions are recursively set
  • the terminal may be located in one of a first region in which the first beam and the second beam overlap, a second region excluding the first region in the region of the first beam, and a third region excluding the first region in the second beam
  • the plurality of pilot signals may include a first pilot signal corresponding to a measurement result for the first beam and a second pilot signal corresponding to a measurement result for the second beam.
  • FIG. 40 illustrates a communication system (1) applicable to various embodiments of the present disclosure.
  • XR devices include AR (Augmented Reality)/VR (Virtual Reality)/MR (Mixed Reality) devices, and may be implemented in the form of a Head-Mounted Device (HMD), a Head-Up Display (HUD) installed in a vehicle, a television, a smartphone, a computer, a wearable device, a home appliance, digital signage, a vehicle, a robot, etc.
  • Mobile devices may include a smartphone, a smart pad, a wearable device (e.g., a smart watch, smart glasses), a computer (e.g., a laptop, etc.), etc.
  • Home appliances may include a TV, a refrigerator, a washing machine, etc.
  • IoT devices may include a sensor, a smart meter, etc.
  • a base station and a network may also be implemented as a wireless device, and a specific wireless device (200a) may act as a base station/network node to other wireless devices.
  • vehicles can communicate directly (e.g., V2V (Vehicle to Vehicle)/V2X (Vehicle to everything) communication).
  • IoT devices can communicate directly with other IoT devices (e.g., sensors) or other wireless devices (100a to 100f).
  • FR1 may include a band from 410 MHz to 7125 MHz, as shown in Table 4 below. That is, FR1 may include a frequency band above 6 GHz (or 5850, 5900, 5925 MHz, etc.). For example, the frequency band above 6 GHz (or 5850, 5900, 5925 MHz, etc.) included within FR1 may include an unlicensed band. The unlicensed band may be used for various purposes, such as for vehicular communications (e.g., autonomous driving).
  • the communication system (1) can support terahertz (THz) wireless communication.
  • the frequency band expected to be used for THz wireless communication may be a D-band (110 GHz to 170 GHz) or H-band (220 GHz to 325 GHz) band where propagation loss due to absorption of molecules in the air is small.
  • FIG. 41 illustrates a wireless device that can be applied to various embodiments of the present disclosure.
  • the memory (104) may be connected to the processor (102) and may store various information related to the operation of the processor (102). For example, the memory (104) may store software code including commands for performing some or all of the processes controlled by the processor (102), or for performing the descriptions, functions, procedures, proposals, methods, and/or operation flowcharts disclosed in this document.
  • the processor (102) and the memory (104) may be part of a communication modem/circuit/chip designed to implement a wireless communication technology (e.g., LTE, NR).
  • the transceiver (106) may be connected to the processor (102) and may transmit and/or receive wireless signals via one or more antennas (108).
  • the transceiver (106) may include a transmitter and/or a receiver.
  • the transceiver (106) may be used interchangeably with an RF (Radio Frequency) unit.
  • a wireless device may mean a communication modem/circuit/chip.
  • the second wireless device (200) includes one or more processors (202), one or more memories (204), and may further include one or more transceivers (206) and/or one or more antennas (208).
  • the processor (202) controls the memories (204) and/or the transceivers (206), and may be configured to implement the descriptions, functions, procedures, proposals, methods, and/or operational flowcharts disclosed in this document.
  • the processor (202) may process information in the memory (204) to generate third information/signals, and then transmit a wireless signal including the third information/signals via the transceivers (206).
  • the processor (202) may receive a wireless signal including fourth information/signals via the transceivers (206), and then store information obtained from signal processing of the fourth information/signals in the memory (204).
  • the memory (204) may be connected to the processor (202) and may store various information related to the operation of the processor (202). For example, the memory (204) may store software code including commands for performing some or all of the processes controlled by the processor (202), or for performing the descriptions, functions, procedures, proposals, methods, and/or operation flowcharts disclosed in this document.
  • the processor (202) and the memory (204) may be part of a communication modem/circuit/chip designed to implement wireless communication technology (e.g., LTE, NR).
  • the transceiver (206) may be connected to the processor (202) and may transmit and/or receive wireless signals via one or more antennas (208).
  • the transceiver (206) may include a transmitter and/or a receiver.
  • the transceiver (206) may be used interchangeably with an RF unit.
  • a wireless device may also mean a communication modem/circuit/chip.
  • One or more processors (102, 202) may generate messages, control information, data, or information according to the descriptions, functions, procedures, proposals, methods, and/or operation flowcharts disclosed in this document.
  • One or more processors (102, 202) can generate signals (e.g., baseband signals) including PDUs, SDUs, messages, control information, data or information according to the functions, procedures, proposals and/or methods disclosed herein, and provide the signals to one or more transceivers (106, 206).
  • the descriptions, functions, procedures, proposals, methods, and/or operational flowcharts disclosed in this document may be implemented using firmware or software, and the firmware or software may be implemented to include modules, procedures, functions, etc.
  • the descriptions, functions, procedures, suggestions, methods and/or operation flowcharts disclosed in this document may be implemented using firmware or software configured to perform them, or may be stored in one or more memories (104, 204) and executed by one or more processors (102, 202).
  • the descriptions, functions, procedures, suggestions, methods and/or operation flowcharts disclosed in this document may be implemented using firmware or software in the form of codes, instructions and/or sets of instructions.
  • FIG. 42 illustrates another example of a wireless device that can be applied to various embodiments of the present disclosure.
  • the wireless device may include at least one processor (102, 202), at least one memory (104, 204), at least one transceiver (106, 206), and one or more antennas (108, 208).
  • the codeword can be converted into a scrambled bit sequence by a scrambler (1010).
  • the scramble sequence used for scrambling is generated based on an initialization value, and the initialization value may include ID information of the wireless device, etc.
  • the scrambled bit sequence can be modulated into a modulation symbol sequence by a modulator (1020).
  • the modulation method may include pi/2-BPSK (pi/2-Binary Phase Shift Keying), m-PSK (m-Phase Shift Keying), m-QAM (m-Quadrature Amplitude Modulation), etc.
  • the complex modulation symbol sequence can be mapped to one or more transmission layers by a layer mapper (1030).
  • the modulation symbols of each transmission layer can be mapped to the corresponding antenna port(s) by a precoder (1040) (precoding).
  • the output z of the precoder (1040) can be obtained by multiplying the output y of the layer mapper (1030) by a precoding matrix W of N*M.
  • N is the number of antenna ports
  • M is the number of transmission layers.
  • the precoder (1040) can perform precoding after performing transform precoding (e.g., DFT transform) on complex modulation symbols.
  • the precoder (1040) can perform precoding without performing transform precoding.
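As a numeric illustration of the precoding step described above (z obtained by multiplying the layer mapper output y by an N×M precoding matrix W), the following sketch uses randomly generated values; the dimensions, the matrix entries, and the normalization are assumptions for illustration only.

```python
import numpy as np

# Sketch of the precoding step described above: z = W @ y, where W is an
# N x M precoding matrix, N is the number of antenna ports and M is the number
# of transmission layers. All numeric values here are illustrative assumptions.
N, M = 4, 2                                       # antenna ports, transmission layers
rng = np.random.default_rng(0)
W = (rng.standard_normal((N, M)) + 1j * rng.standard_normal((N, M))) / np.sqrt(2 * N)
y = rng.standard_normal((M, 14)) + 1j * rng.standard_normal((M, 14))   # M layers x 14 symbols
z = W @ y                                          # per-antenna-port symbol streams
print(z.shape)                                     # -> (4, 14): N antenna ports x 14 symbols
```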
  • the transceiver(s) (114) may include one or more transceivers (106, 206) and/or one or more antennas (108, 208) of FIG. 41.
  • the control unit (120) is electrically connected to the communication unit (110), the memory unit (130), and the additional elements (140) and controls the overall operation of the wireless device.
  • the control unit (120) may control the electrical/mechanical operation of the wireless device based on the program/code/command/information stored in the memory unit (130).
  • Wireless devices may be mobile or stationary depending on the use/service.
  • the mobile device may include a smartphone, a smart pad, a wearable device (e.g., a smartwatch, smartglasses), or a portable computer (e.g., a laptop, etc.).
  • the mobile device may be referred to as a Mobile Station (MS), a User Terminal (UT), a Mobile Subscriber Station (MSS), a Subscriber Station (SS), an Advanced Mobile Station (AMS), or a Wireless Terminal (WT).
  • FIG. 46 illustrates a vehicle or autonomous vehicle applicable to various embodiments of the present disclosure.
  • Vehicles or autonomous vehicles can be implemented as mobile robots, cars, trains, manned or unmanned aerial vehicles (AVs), ships, etc.
  • the communication unit (110) can transmit information regarding the vehicle location, autonomous driving route, driving plan, etc. to the external server.
  • External servers can predict traffic information data in advance using AI technology or other technologies based on information collected from vehicles or autonomous vehicles, and provide the predicted traffic information data to the vehicles or autonomous vehicles.
  • the communication unit (110) can transmit and receive signals (e.g., data, control signals, etc.) with other vehicles or external devices such as base stations.
  • the control unit (120) can control components of the vehicle (100) to perform various operations.
  • the memory unit (130) can store data/parameters/programs/codes/commands that support various functions of the vehicle (100).
  • the input/output unit (140a) can output AR/VR objects based on information in the memory unit (130).
  • the input/output unit (140a) can include a HUD.
  • the position measurement unit (140b) can obtain position information of the vehicle (100).
  • the position information can include absolute position information of the vehicle (100), position information within a driving line, acceleration information, position information with respect to surrounding vehicles, etc.
  • the position measurement unit (140b) can include GPS and various sensors.
  • the communication unit (110) of the vehicle (100) can receive map information, traffic information, etc. from an external server and store them in the memory unit (130).
  • the location measurement unit (140b) can obtain vehicle location information through GPS and various sensors and store the information in the memory unit (130).
  • the control unit (120) can create a virtual object based on the map information, traffic information, and vehicle location information, and the input/output unit (140a) can display the created virtual object on the vehicle window (1410, 1420).
  • the control unit (120) can determine whether the vehicle (100) is being driven normally within the driving line based on the vehicle location information.
  • based on the determination, the control unit (120) can display a warning on the vehicle window through the input/output unit (140a). Additionally, the control unit (120) can broadcast a warning message regarding driving abnormalities to surrounding vehicles through the communication unit (110). Depending on the situation, the control unit (120) can transmit vehicle location information and information regarding driving/vehicle abnormalities to relevant authorities through the communication unit (110).
  • the communication unit (110) can transmit and receive signals (e.g., media data, control signals, etc.) with external devices such as other wireless devices, portable devices, or media servers.
  • the media data can include videos, images, sounds, etc.
  • the control unit (120) can control components of the XR device (100a) to perform various operations.
  • the control unit (120) can be configured to control and/or perform procedures such as video/image acquisition, (video/image) encoding, metadata generation and processing, etc.
  • the memory unit (130) can store data/parameters/programs/codes/commands required for driving the XR device (100a)/generating XR objects.
  • the input/output unit (140a) can obtain control information, data, etc.
  • the input/output unit (140a) can include a camera, a microphone, a user input unit, a display unit, a speaker, and/or a haptic module, etc.
  • the sensor unit (140b) can obtain the XR device status, surrounding environment information, user information, etc.
  • the sensor unit (140b) may include a proximity sensor, an illuminance sensor, an acceleration sensor, a magnetic sensor, a gyro sensor, an inertial sensor, an RGB sensor, an IR sensor, a fingerprint recognition sensor, an ultrasonic sensor, a light sensor, a microphone, and/or a radar.
  • the power supply unit (140c) supplies power to the XR device (100a) and may include a wired/wireless charging circuit, a battery, etc.
  • the XR device (100a) is wirelessly connected to the mobile device (100b) through the communication unit (110), and the operation of the XR device (100a) can be controlled by the mobile device (100b).
  • the mobile device (100b) can act as a controller for the XR device (100a).
  • the XR device (100a) can obtain three-dimensional position information of the mobile device (100b), and then generate and output an XR object corresponding to the mobile device (100b).
  • the sensor unit (140b) can obtain internal information of the robot (100), surrounding environment information, user information, etc.
  • the sensor unit (140b) may include a proximity sensor, an illuminance sensor, an acceleration sensor, a magnetic sensor, a gyro sensor, an inertial sensor, an IR sensor, a fingerprint recognition sensor, an ultrasonic sensor, a light sensor, a microphone, a radar, etc.
  • the driving unit (140c) may perform various physical operations such as moving the robot joints. In addition, the driving unit (140c) may enable the robot (100) to drive on the ground or fly in the air.
  • the driving unit (140c) may include an actuator, a motor, wheels, brakes, propellers, etc.
  • the memory unit (130) can store data that supports various functions of the AI device (100).
  • the memory unit (130) can store data obtained from the input unit (140a), data obtained from the communication unit (110), output data of the learning processor unit (140c), and data obtained from the sensing unit (140).
  • the memory unit (130) can store control information and/or software codes necessary for the operation/execution of the control unit (120).
  • the sensing unit (140) may include a proximity sensor, an illuminance sensor, an acceleration sensor, a magnetic sensor, a gyro sensor, an inertial sensor, an RGB sensor, an IR sensor, a fingerprint recognition sensor, an ultrasonic sensor, a light sensor, a microphone, and/or a radar, etc.
  • the learning processor unit (140c) can train a model composed of an artificial neural network using learning data.
  • the learning processor unit (140c) can perform AI processing together with the learning processor unit of the AI server ( Figure W1, 400).
  • the learning processor unit (140c) can process information received from an external device via the communication unit (110) and/or information stored in the memory unit (130).
  • the output value of the learning processor unit (140c) can be transmitted to an external device via the communication unit (110) and/or stored in the memory unit (130).
  • the claims described in the various embodiments of the present disclosure may be combined in various ways.
  • the technical features of the method claims of the various embodiments of the present disclosure may be combined and implemented as a device, and the technical features of the device claims of the various embodiments of the present disclosure may be combined and implemented as a method.
  • the technical features of the method claims and the technical features of the device claims of the various embodiments of the present disclosure may be combined and implemented as a device, and the technical features of the method claims and the technical features of the device claims of the various embodiments of the present disclosure may be combined and implemented as a method.

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Databases & Information Systems (AREA)
  • Mobile Radio Communication Systems (AREA)

Abstract

According to various embodiments, the present disclosure relates to a method performed by a user equipment (UE) in a communication system supporting a plurality of beams, the method comprising the steps of: receiving pilot signal configuration information from a base station (BS); receiving a plurality of pilot signals from the BS based on the pilot signal configuration information; and transmitting, to the BS, a beam measurement result measured based on the plurality of pilot signals, wherein the plurality of pilot signals relate to the beam measurement result measured based on the plurality of beams, and at least two of the plurality of beams may have an overlapping region.
PCT/KR2024/003981 2024-03-28 2024-03-28 Procédé et appareil d'émission/de réception d'un signal dans un système de communication sans fil Pending WO2025206432A1 (fr)

Priority Applications (1)

Application Number Priority Date Filing Date Title
PCT/KR2024/003981 WO2025206432A1 (fr) 2024-03-28 2024-03-28 Procédé et appareil d'émission/de réception d'un signal dans un système de communication sans fil

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/KR2024/003981 WO2025206432A1 (fr) 2024-03-28 2024-03-28 Procédé et appareil d'émission/de réception d'un signal dans un système de communication sans fil

Publications (1)

Publication Number Publication Date
WO2025206432A1 true WO2025206432A1 (fr) 2025-10-02

Family

ID=97217826

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/KR2024/003981 Pending WO2025206432A1 (fr) 2024-03-28 2024-03-28 Procédé et appareil d'émission/de réception d'un signal dans un système de communication sans fil

Country Status (1)

Country Link
WO (1) WO2025206432A1 (fr)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20170311177A1 (en) * 2016-04-26 2017-10-26 Haig A. Sarkissian System and method for increasing cellular site capacity
US20180324738A1 (en) * 2017-05-05 2018-11-08 Futurewei Technologies, Inc. System and Method for Network Positioning of Devices in a Beamformed Communications System
US20230208495A1 (en) * 2021-12-23 2023-06-29 Lenovo (Singapore) Pte. Ltd. Configuring information for location determination
US20230261729A1 (en) * 2020-08-21 2023-08-17 Qualcomm Incorporated Beam indications for facilitating multicast access by reduced capability user equipment
US20230345274A1 (en) * 2022-04-04 2023-10-26 David E. Newman Beam Adjustment by Incremental Feedback in 5G and 6G

Similar Documents

Publication Publication Date Title
WO2022014728A1 (fr) Procédé et appareil pour effectuer un codage de canal par un équipement utilisateur et une station de base dans un système de communication sans fil
WO2022244903A1 (fr) Procédé et dispositif d'exécution d'un apprentissage fédéré dans un système de communication sans fil
WO2022149641A1 (fr) Procédé et appareil d'apprentissage fédéré basés sur une configuration de serveur multi-antenne et d'utilisateur à antenne unique
WO2024101461A1 (fr) Appareil et procédé permettant de régler la synchronisation de transmission et de réaliser une association entre un ap et un ue dans un système d-mimo
WO2022119021A1 (fr) Procédé et dispositif d'adaptation d'un système basé sur une classe d'apprentissage à la technologie ai mimo
WO2021235563A1 (fr) Procédé de distribution de clés quantiques prêtes à l'emploi basé sur des trajets multiples et une division de longueur d'onde, et dispositif d'utilisation de ce procédé
WO2023068714A1 (fr) Dispositif et procédé permettant de réaliser, sur la base d'informations de canal, un regroupement de dispositifs pour un aircomp, basé sur un apprentissage fédéré, d'un environnement de données non iid dans un système de communication
WO2024195920A1 (fr) Appareil et procédé pour effectuer un codage de canal sur un canal d'interférence ayant des caractéristiques de bruit non local dans un système de communication quantique
WO2023113390A1 (fr) Appareil et procédé de prise en charge de groupement d'utilisateurs de système de précodage de bout en bout dans un système de communication sans fil
WO2023128604A1 (fr) Procédé et dispositif pour effectuer une correction d'erreurs sur un canal de pauli asymétrique dans un système de communication quantique
WO2024150852A1 (fr) Dispositif et procédé pour exécuter une attribution de ressources quantiques basée sur une configuration d'ensemble de liaisons dans un système de communication quantique
WO2024150850A1 (fr) Dispositif et procédé pour effectuer une attribution de ressources quantiques basées sur une sélection de trajet discontinu dans un système de communication quantique
WO2024101470A1 (fr) Appareil et procédé pour effectuer une modulation d'état quantique sur la base d'une communication directe sécurisée quantique dans un système de communication quantique
WO2022092351A1 (fr) Procédé et appareil permettant d'atténuer une limitation de puissance de transmission par transmission en chevauchement
WO2025206432A1 (fr) Procédé et appareil d'émission/de réception d'un signal dans un système de communication sans fil
WO2025211471A1 (fr) Appareil et procédé pour émettre et recevoir des signaux dans un réseau non terrestre
WO2025211464A1 (fr) Procédé et dispositif d'émission et de réception de signaux dans un système de communication sans fil
WO2025216338A1 (fr) Procédé et appareil d'émission et de réception de signal dans un système de communication sans fil
WO2025211472A1 (fr) Procédé et appareil d'émission et de réception de signal dans un système de communication sans fil
WO2025206440A1 (fr) Procédé et appareil de transmission et de réception de signaux dans un système de communication sans fil
WO2025249599A1 (fr) Procédé et dispositif de transmission et de réception de signaux dans un système de communication sans fil
WO2025173808A1 (fr) Dispositif et procédé de réalisation d'une commutation dans un réseau non terrestre
WO2025173806A1 (fr) Procédé et dispositif de transmission et de réception de signaux dans un système de communication sans fil
WO2025100600A1 (fr) Appareil et procédé pour effectuer une commutation dans un réseau non terrestre
WO2025211467A1 (fr) Procédé et dispositif de transmission et de réception de signaux dans un système de communication sans fil

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 24931344

Country of ref document: EP

Kind code of ref document: A1