We propose a finite-alphabet symbol-level precoding technique for massive multiuser multiple-input multiple-output (MU-MIMO) downlink systems which are equipped with finite-resolution digital-to-analog converters (DACs) of any precision. Using the idea of constructive interference (CI), we adopt a max-min fair design criterion which aims to maximize the minimum instantaneous received signal-to-noise ratio (SNR) among the user equipments (UEs) while ensuring a CI constraint for each UE under the restriction that the output of the precoder is a vector with finite-alphabet discrete elements. Due to this latter constraint, the design problem is an NP-hard quadratic program with discrete variables, and hence, is difficult to solve. In this paper, we tackle this difficulty by reformulating the problem in several steps into an equivalent continuous-domain biconvex form, including equivalent representations for discrete and binary constraints. Our final biconvex reformulation is obtained via an exact penalty approach and can efficiently be solved using a standard cyclic block coordinate descent algorithm. We evaluate the performance of the proposed finite-alphabet precoding design for DACs with different resolutions, where it is shown that employing low-resolution DACs can lead to higher power efficiencies. In particular, we focus on a setup with one-bit DACs and show through simulation results that compared to the existing schemes, the proposed design can achieve SNR gains of up to 2 dB. We further provide analytic and numerical analyses of complexity and show that our proposed algorithm is computationally efficient as it typically needs only a few tens of iterations to converge.
The use of multiple-input multiple-output (MIMO) systems such as MIMO radar allows the array elements to transmit different waveforms freely. This waveform diversity can lead to flexible transmit beampattern synthesis, which is useful in many applications such as radar/sonar and biomedical imaging. In the past literature, most attention was paid to receive beampattern design due to the stringent constraints on waveforms in the transmit beampattern case. Recently, progress has been made on MIMO transmit beampattern synthesis, but mainly for narrowband signals. In this paper, we propose a new approach that can be used to efficiently synthesize MIMO waveforms in order to match a given wideband transmit beampattern, i.e., to match a transmit energy distribution in both space and frequency. The synthesized waveforms satisfy the unit-modulus or low peak-to-average power ratio (PAR) constraints that are highly desirable in practice. Several examples are provided to investigate the performance of the proposed approach.
Energy efficiency optimization of wireless systems has become urgently important due to its impact on the global carbon footprint. In this paper, we investigate energy-efficient multicell multiuser precoding design and consider a new criterion of weighted sum energy efficiency, which is defined as the weighted sum of the energy efficiencies of multiple cells. This objective is more general than the existing criteria and can satisfy heterogeneous requirements from different kinds of cells, but it is hard to tackle due to its sum-of-ratios form. In order to address this non-convex problem, the user rate is first formulated as a polynomial optimization problem with the test conditional probabilities to be optimized. Based on that, the sum-of-ratios form of the energy-efficient precoding problem is transformed into a parameterized polynomial optimization problem, for which a closed-form solution is achieved through a two-layer optimization. We also show that the proposed iterative algorithm is guaranteed to converge. Numerical results are finally provided to confirm the effectiveness of our energy-efficient beamforming algorithm. It is observed that in the low signal-to-noise ratio (SNR) region, the optimal energy efficiency and the optimal sum rate are simultaneously achieved by our algorithm, while in the middle-to-high SNR region, a certain sum-rate loss is incurred to guarantee the weighted sum energy efficiency.
In millimeter-wave (mmWave) systems, antenna architecture limitations make it difficult to apply conventional fully digital precoding techniques but call for low-cost analog radio frequency (RF) and digital baseband hybrid precoding methods. This paper investigates joint RF-baseband hybrid precoding for the downlink of multiuser multiantenna mmWave systems with a limited number of RF chains. Two performance measures, maximizing the spectral efficiency and the energy efficiency of the system, are considered. We propose a codebook-based RF precoding design and obtain the channel state information via a beam sweep procedure. Via the codebook-based design, the original system is transformed into a virtual multiuser downlink system with the RF chain constraint. Consequently, we are able to simplify the complicated hybrid precoding optimization problems to joint codeword selection and precoder design (JWSPD) problems. Then, we propose efficient methods to address the JWSPD problems and jointly optimize the RF and baseband precoders under the two performance measures. Finally, extensive numerical results are provided to validate the effectiveness of the proposed hybrid precoders.
We study distributed filtering for a class of uncertain systems over corrupted communication channels. We propose a distributed robust Kalman filter with stochastic gains, through which upper bounds on the conditional mean square estimation errors are calculated online. We present a robust collective observability condition, under which the mean square error of the distributed filter is proved to be uniformly upper bounded if the network is strongly connected. For better performance, we modify the filter by introducing a switching fusion scheme based on a sliding window, which provides a smaller upper bound on the conditional mean square error. Numerical simulations are provided to validate the theoretical results and show that the filter scales to large networks.
Federated learning (FL) is a collaborative machine learning (ML) paradigm based on persistent communication between a central server and multiple edge devices. However, frequent communication of large ML models can consume considerable communication resources, which is especially costly for wireless network nodes. In this paper, we develop a communication-efficient protocol to reduce the number of communication instances in each round while maintaining the convergence rate and asymptotic distribution properties. First, we propose a novel communication-efficient FL algorithm that utilizes an event-triggered communication mechanism, where each edge device updates the model by using stochastic gradient descent with local sampling data and the central server aggregates all local models from the devices by using model averaging. Communication is reduced because each edge device and the central server transmit their updated models only when the difference between the current model and the last communicated model is larger than a threshold. Thresholds of the devices and server are not necessarily coordinated, and the thresholds and step sizes are not constrained to be of specific forms. Under mild conditions on the loss functions, step sizes, and thresholds, we establish asymptotic results for the proposed algorithm in three respects: convergence in expectation, almost-sure convergence, and the asymptotic distribution of the estimation error. In addition, we show that by fine-tuning the parameters, the proposed event-triggered FL algorithm can reach the same convergence rate as state-of-the-art results from existing time-driven FL. We also establish asymptotic efficiency of the estimation error in the sense of the Central Limit Theorem. Numerical simulations on linear regression and image classification problems from the literature are provided to show the effectiveness of the developed results.
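The event-triggered rule described in this abstract can be sketched in a few lines. The function names, the Euclidean-norm trigger, and plain model averaging below are illustrative assumptions, not the paper's exact protocol:

```python
import numpy as np

def event_triggered_round(local_models, last_sent, thresholds):
    """One aggregation round with event-triggered uploads: device i transmits
    its current model only if it differs from its last communicated copy by
    more than its own threshold; otherwise the server reuses the stale copy.
    (Illustrative sketch; trigger norm and averaging rule are assumptions.)"""
    uploads = 0
    for i, (w, tau) in enumerate(zip(local_models, thresholds)):
        if np.linalg.norm(w - last_sent[i]) > tau:
            last_sent[i] = w.copy()          # device i communicates its model
            uploads += 1
    aggregate = np.mean(last_sent, axis=0)   # server-side model averaging
    return aggregate, uploads
```

Note that, as in the abstract, each device's threshold is local and need not be coordinated with the others.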
We consider estimation under model misspecification where there is a model mismatch between the underlying system, which generates the data, and the model used during estimation. We propose a model misspecification framework which enables a joint treatment of the misspecification types of having fake features as well as incorrect covariance assumptions on the unknowns and the noise. We present a decomposition of the output error into components that relate to different subsets of the model parameters corresponding to underlying, fake, and missing features. Here, fake features are features which are included in the model but are not present in the underlying system. Under this framework, we characterize the estimation performance and reveal trade-offs between the number of samples, the number of fake features, and the possibly incorrect noise-level assumption. In contrast to existing work focusing on incorrect covariance assumptions or missing features, fake features are a central component of our framework. Our results show that fake features can significantly improve the estimation performance, even though they are not correlated with the features in the underlying system. In particular, we show that the estimation error can be decreased by including more fake features in the model, even to the point where the model is overparametrized, i.e., the model contains more unknowns than observations.
Distributed learning provides an attractive framework for scaling the learning task by sharing the computational load over multiple nodes in a network. Here, we investigate the performance of distributed learning for large-scale linear regression where the model parameters, i.e., the unknowns, are distributed over the network. We adopt a statistical learning approach. In contrast to works that focus on the performance on the training data, we focus on the generalization error, i.e., the performance on unseen data. We provide high-probability bounds on the generalization error for both isotropic and correlated Gaussian data as well as sub-Gaussian data. These results reveal the dependence of the generalization performance on the partitioning of the model over the network. In particular, our results show that the generalization error of the distributed solution can be substantially higher than that of the centralized solution even when the error on the training data is at the same level for both the centralized and distributed approaches. Our numerical results illustrate the performance with both real-world image data as well as synthetic data.
We consider the LMS estimation of a channel that may be well approximated by an FIR model with only a few nonzero tap coefficients within a given delay horizon or tap length n. When the number of nonzero tap coefficients m is small compared with the delay horizon n, the performance of the LMS estimator is greatly enhanced when this specific structure is exploited. We propose a consistent algorithm that performs identification of nonzero taps only. The results are illustrated via a simulation study.
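The gain from exploiting sparsity can be illustrated with a two-stage sketch: run plain LMS over the full delay horizon, detect the strongest taps, and re-adapt only on that support. This detect-then-restrict heuristic is an illustrative assumption, not the paper's consistent identification algorithm:

```python
import numpy as np

rng = np.random.default_rng(0)

def lms_fir(x, d, n, mu, active=None):
    """Plain LMS identification of an n-tap FIR channel. If `active` is
    given, adaptation is restricted to that tap-index set."""
    w = np.zeros(n)
    mask = np.ones(n)
    if active is not None:
        mask = np.zeros(n)
        mask[active] = 1.0
    for k in range(n - 1, len(x)):
        u = x[k - n + 1:k + 1][::-1]       # regressor: x[k], ..., x[k-n+1]
        e = d[k] - w @ u                   # a priori estimation error
        w += mu * e * u * mask             # update only the active taps
    return w

# Sparse channel: m = 2 nonzero taps within a delay horizon of n = 16
h = np.zeros(16)
h[[3, 9]] = [1.0, -0.5]
x = rng.standard_normal(4000)
d = np.convolve(x, h)[:len(x)] + 0.01 * rng.standard_normal(len(x))

w_full = lms_fir(x, d, 16, mu=0.01)
support = np.argsort(np.abs(w_full))[-2:]  # detect the 2 strongest taps
w_sparse = lms_fir(x, d, 16, mu=0.01, active=support)
```

Restricting adaptation to the detected support freezes the remaining taps at zero, which removes their gradient-noise contribution to the misadjustment.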
We study the error of linear regression in the face of adversarial attacks. In this framework, an adversary changes the input to the regression model in order to maximize the prediction error. We provide bounds on the prediction error in the presence of an adversary as a function of the parameter norm and the error in the absence of such an adversary. We show how these bounds make it possible to study the adversarial error using analysis from non-adversarial setups. The obtained results shed light on the robustness of overparameterized linear models to adversarial attacks. Adding features might be either a source of additional robustness or brittleness. On the one hand, we use asymptotic results to illustrate how double-descent curves can be obtained for the adversarial error. On the other hand, we derive conditions under which the adversarial error can grow to infinity as more features are added, while at the same time, the test error goes to zero. We show this behavior is caused by the fact that the norm of the parameter vector grows with the number of features. It is also established that ℓ∞- and ℓ2-adversarial attacks might behave fundamentally differently due to how the ℓ1- and ℓ2-norms of random projections concentrate. We also show how our reformulation allows for solving adversarial training as a convex optimization problem. This fact is then exploited to establish similarities between adversarial training and parameter-shrinking methods and to study how the training might affect the robustness of the estimated models.
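The dependence of the adversarial error on the parameter norm can be made concrete for ℓ2-bounded attacks on a linear model, where the worst-case error has a closed form via the dual norm. The numerical check below illustrates that mechanism only; the paper's actual bound constants are not reproduced:

```python
import numpy as np

rng = np.random.default_rng(3)

# For a linear predictor x -> x @ theta, an l2-bounded input perturbation
# yields the exact worst-case error
#   max_{||d||_2 <= eps} |(x + d) @ theta - y| = |x @ theta - y| + eps * ||theta||_2,
# so the adversarial error splits into the clean error plus a parameter-norm term.
theta = rng.standard_normal(5)
x, y, eps = rng.standard_normal(5), 0.3, 0.1

closed_form = abs(x @ theta - y) + eps * np.linalg.norm(theta)

# The maximizing perturbation aligns with theta, signed by the residual.
d_star = eps * np.sign(x @ theta - y) * theta / np.linalg.norm(theta)
attacked = abs((x + d_star) @ theta - y)
```

The identity makes the abstract's point explicit: if adding features inflates ||theta||_2 faster than it shrinks the clean residual, the adversarial error grows even as the test error vanishes.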
The effects of sampling and quantization on frequency estimation for a single sinusoid are investigated. The Cramér-Rao bound for 1-bit quantization is derived and compared with the limit of infinite-precision quantization. It is found that 1-bit quantization gives only slightly worse performance in general, however with a dramatic increase of variance at certain frequencies. This can be avoided by using four-times oversampling. The effect of sampling when using nonideal antialiasing lowpass filters is therefore investigated through derivation of the Cramér-Rao lower bounds. Finally, fast estimators for 1-bit quantization, in particular correlation-based estimators, are derived, and their performance is investigated. The paper is concluded with simulation results for four-times oversampled 1-bit quantization.
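A minimal sketch of the correlation-based estimator idea for 1-bit I/Q samples is given below. This is Nyquist-rate rather than the four-times-oversampled scheme studied in the paper, and the signal and noise parameters are illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)

# Complex sinusoid in noise, observed through 1-bit quantizers on I and Q
f0, N = 0.12, 2048                      # normalized frequency (cycles/sample)
n = np.arange(N)
s = np.exp(2j * np.pi * f0 * n)
noise = 0.5 * (rng.standard_normal(N) + 1j * rng.standard_normal(N)) / np.sqrt(2)
q = np.sign((s + noise).real) + 1j * np.sign((s + noise).imag)  # 1-bit I/Q

# Correlation-based estimate: the phase of the lag-1 sample autocorrelation
# advances by 2*pi*f0 per sample for the dominant fundamental component.
r1 = np.mean(q[1:] * np.conj(q[:-1]))
f_hat = np.angle(r1) / (2 * np.pi)
```

Hard limiting introduces odd harmonics of the sinusoid, but the fundamental dominates the lag-1 correlation, so the phase-based estimate stays close to f0 at moderate SNR.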
The basic nonlinear filtering problem for dynamical systems is considered. Approximating the optimal filter estimate by particle filter methods has become perhaps the most common and useful method in recent years. Many variants of particle filters have been suggested, and there is an extensive literature on the theoretical aspects of the quality of the approximation. Still, a clear-cut result that the approximate solution, for unbounded functions, converges to the true optimal estimate as the number of particles tends to infinity seems to be lacking. It is the purpose of this contribution to give such a basic convergence result for a rather general class of unbounded functions. Furthermore, a general framework, including many of the particle filter algorithms as special cases, is given.
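Many of the particle filter variants covered by such a framework reduce to the bootstrap filter sketched below. This is a standard textbook instance with Gaussian noises assumed for concreteness, not the paper's general framework:

```python
import numpy as np

rng = np.random.default_rng(2)

def bootstrap_pf(y, f, h, q_std, r_std, x0_particles):
    """Bootstrap particle filter for x[t+1] = f(x[t]) + v[t],
    y[t] = h(x[t]) + e[t], with v, e zero-mean Gaussian.
    Returns the filtered (weighted-mean) state estimates."""
    x = x0_particles.copy()
    N = len(x)
    estimates = []
    for yt in y:
        w = np.exp(-0.5 * ((yt - h(x)) / r_std) ** 2)   # likelihood weights
        w /= w.sum()
        estimates.append(w @ x)                         # filtered estimate
        idx = rng.choice(N, size=N, p=w)                # multinomial resampling
        x = f(x[idx]) + q_std * rng.standard_normal(N)  # propagate particles
    return np.array(estimates)
```

The weighted mean above is exactly an approximation of the optimal estimate for an unbounded function (the state itself), which is the setting of the convergence result discussed in the abstract.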
The particle filter has become an important tool in solving nonlinear filtering problems for dynamic systems. This correspondence extends our recent work, where we proved that the particle filter converges for unbounded functions, using L4-convergence. More specifically, the present contribution is that we prove that the particle filter converges for unbounded functions in the sense of Lp-convergence, for an arbitrary p ≥ 2.
Far-field microwave power transfer (MPT) will free wireless sensors and other mobile devices from the constraints imposed by finite battery capacities. Integrating MPT with wireless communications to support simultaneous wireless information and power transfer (SWIPT) allows the same spectrum to be used for dual purposes without compromising the quality of service. A novel approach is presented in this paper for realizing SWIPT in a broadband system where orthogonal frequency division multiplexing and transmit beamforming are deployed to create a set of parallel sub-channels for SWIPT, which simplifies resource allocation. Based on a proposed reconfigurable mobile architecture, different system configurations are considered by combining single-user/multi-user systems, downlink/uplink information transfer, and variable/fixed coding rates. Optimizing the power control for these configurations results in a new class of multi-user power-control problems featuring the circuit-power constraints, specifying that the transferred power must be sufficiently large to support the operation of the receiver circuitry. Solving these problems gives a set of power-control algorithms that exploit channel diversity in frequency for simultaneously enhancing the throughput and the MPT efficiency. For the system configurations with variable coding rates, the algorithms are variants of water-filling that account for the circuit-power constraints. The optimal algorithms for those configurations with fixed coding rates are shown to sequentially allocate mobiles their required power for decoding in ascending order until the entire budgeted power is spent. The required power for a mobile is derived as simple functions of the minimum signal-to-noise ratio for correct decoding, the circuit power and sub-channel gains.
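For the variable-rate configurations, the algorithms are described as variants of water-filling. The unconstrained baseline they modify can be sketched as follows; the circuit-power constraints that distinguish the paper's algorithms are not implemented here:

```python
import numpy as np

def waterfill(gains, P_total):
    """Classic water-filling over parallel sub-channels, where gains[i] is
    the channel gain divided by the noise power of sub-channel i. Returns
    the power allocation maximizing sum log(1 + gains[i] * p[i])."""
    g = np.asarray(gains, float)
    order = np.argsort(g)[::-1]          # strongest sub-channel first
    g_sorted = g[order]
    for k in range(len(g), 0, -1):       # try using the k best sub-channels
        mu = (P_total + np.sum(1.0 / g_sorted[:k])) / k   # water level
        p = mu - 1.0 / g_sorted[:k]
        if p[-1] >= 0:                   # all k powers nonnegative: done
            break
    powers = np.zeros(len(g))
    powers[order[:k]] = p
    return powers
```

Stronger sub-channels receive more power, and sub-channels whose inverse gain lies above the water level are switched off entirely; the circuit-power-constrained variants additionally enforce a minimum transferred power per receiver.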
This paper deals with the practical precoding design for a dual-hop downlink with multiple-input multiple-output (MIMO) amplify-and-forward relaying. First, assuming that full channel state information (CSI) of the two hop channels is available, a suboptimal dual-hop joint precoding scheme, i.e., precoding at both the base station and relay station, is investigated. Based on its structure, a scheme of limited feedback joint precoding using joint codebooks is then proposed, which uses a distributed codeword selection to concurrently choose two joint precoders such that the feedback delay is considerably decreased. Finally, the joint codebook design for the limited feedback joint precoding system is analyzed, and the results reveal that independent codebook designs at the base station and relay station using the conventional Grassmannian subspace packing method are able to guarantee that the overall performance of the dual-hop joint precoding scheme improves with the size of each of the two codebooks. Simulation results show that the proposed dual-hop joint precoding system using the distributed codeword selection scheme exhibits rate and BER performance close to that of the optimal centralized codeword selection scheme, while having lower computational complexity and shorter feedback delay.
Improving channel information quality at the base station (BS) is crucial for the optimization of frequency division duplexed (FDD) multi-antenna multiuser downlink systems with limited feedback. To this end, this paper proposes to estimate a particular representation of channel state information (CSI) at the BS through channel norm feedback and a newly developed channel phase codebook, where the long-term channel correlation is efficiently exploited to improve performance. In particular, the channel representation is decomposed into a gain-related part and a phase-related part, with each of them estimated separately. More specifically, the gain-related part is estimated from the channel norm and channel correlation matrix, while the phase-related part is determined using a channel phase codebook, constructed with the generalized Lloyd algorithm. Using the estimated channel representation, joint optimization of multiuser precoding and opportunistic scheduling is performed to obtain an SDMA transmit scheme. Computer simulation results confirm the advantage of the proposed scheme over state-of-the-art limited feedback SDMA schemes in correlated channel environments.
This paper studies distributed optimization schemes for multicell joint beamforming and power allocation in time-division-duplex (TDD) multicell downlink systems where only limited-capacity intercell information exchange is permitted. With an aim to maximize the worst-user signal-to-interference-and-noise ratio (SINR), we devise a hierarchical iterative algorithm to optimize downlink beamforming and intercell power allocation jointly in a distributed manner. The proposed scheme is proved to converge to the global optimum. For fast convergence and to reduce the burden of intercell parameter exchange, we further propose to exploit previous iterations adaptively. Results illustrate that the proposed scheme can achieve near-optimal performance even with a few iterations, hence providing a good tradeoff between performance and backhaul consumption. The performance under quantized parameter exchange is also examined.
Interpolation (mapping) of data from a given antenna array onto the output of a virtual array of more suitable configuration is well known in array signal processing. This operation allows arrays of any geometry to be used with fast direction-of-arrival (DOA) estimators designed for linear arrays. Conditions for preserving DOA error variance under such mappings have been derived by several authors. However, in many cases, such as omnidirectional signal surveillance over multiple octaves, systematic mapping errors will dominate over noise effects and cause significant bias in the DOA estimates. To prevent mapping errors from unduly affecting the DOA estimates, this paper uses a geometrical interpretation of a Taylor series expansion of the DOA estimator criterion function to derive an alternative design of the mapping matrix. Verifying simulations show significant bias reduction in the DOA estimates compared with previous designs. The key feature of the proposed design is that it takes into account the orthogonality between the manifold mapping errors and certain gradients of the estimator criterion function. With the new design, mapping of narrowband signals between dissimilar array geometries over wide sectors and large frequency ranges becomes feasible.
Interpolation or mapping of data from a given real array to data from a virtual array of more suitable geometry is well known in array signal processing. This operation allows arrays of any geometry to be used with fast direction-of-arrival (DOA) estimators designed for linear arrays. In an earlier companion paper [21], a first-order condition for zero DOA bias under such mapping was derived and was also used to construct a design algorithm for the mapping matrix that minimized the DOA estimate bias. This bias-minimizing theory is now extended to minimize not only bias, but also to consider finite sample effects due to noise and reduce the DOA mean-square error (MSE). An analytical first-order expression for mapped DOA MSE is derived, and a design algorithm for the transformation matrix that minimizes this MSE is proposed. Generally, DOA MSE is not reduced by minimizing the size of the mapping errors but instead by rotating these errors and the associated noise subspace into optimal directions relative to a certain gradient of the DOA estimator criterion function. The analytical MSE expression and the design algorithm are supported by simulations that show not only conspicuous MSE improvements in relevant scenarios, but also a more robust preprocessing for low signal-to-noise ratios (SNRs) as compared with the pure bias-minimizing design developed in the previous paper.
A frequency-selective algorithm for time delay estimation between two data channels is proposed, and its performance is studied. The adaptive stochastic gradient scheme provides a direct estimate of the time delay. It has a low numerical complexity, and it is an alternative to data prefiltering followed by time delay estimation when the source and noise components have different spectral characteristics.
In a recent paper, a statistical analysis of Kay's weighted linear predictor frequency estimator was carried out. Here, using a different analysis technique from that employed in that paper, the asymptotic variance of the weighted linear predictor frequency estimator is derived.
p-step-ahead predictive filters for narrowband waveforms with m distinct spectral peaks are considered. By minimization of the noise gain, the coefficients of the optimal Lth-order FIR predictor are derived, where L ≥ 2m − 1. The minimum-length FIR predictor is given by L = 2m − 1, and the feedback extension of this predictor is studied. The design of feedback gains subject to different optimization criteria is studied in detail. Generalizations to complex-valued signals, cascaded predictors, and adaptive predictors are also included. Several design and simulation examples are presented.
This paper analyzes the asymptotic tracking properties of an adaptive notch filter (ANF) with pole-zero constraints [1] for the cancellation or retrieval of multiple time-varying sine waves in additive noise. The asymptotic mean square error (MSE) is analyzed using the methods of Ljung and Gunnarsson [2] when the variations in the underlying frequencies are assumed to be sufficiently small. Closed-form expressions for the MSE are derived as functions of the tuning variables of the algorithm. The results give insight into the operational properties of the algorithm and are used in order to minimize the MSE with respect to the tuning variables. Computer simulations confirm the validity of the derived results.
The problem of estimating the frequency rate-of-change of complex-valued frequency-modulated signals from noisy observations is considered. The performance of four related estimators is studied, both analytically and by means of simulations, and their relationship to the estimators proposed by Djuric/Kay and Lang/Musicus is established.
High-resolution spectral Doppler is an important and powerful noninvasive tool for estimation of velocities in blood vessels using medical ultrasound scanners. Such estimates are typically formed using an averaged periodogram technique, with well-known limitations on the resulting spectral resolution. Recently, we have proposed techniques to instead form high-resolution data-adaptive estimates exploiting measurements along both depth and emission. The resulting velocity estimates are noticeably superior to those of the standard technique, but suffer from a high computational complexity, making it interesting to formulate computationally efficient implementations of the estimators. In this work, by exploiting the rich structure of the iterative adaptive approach (IAA) based estimator, we examine how these estimates can be efficiently implemented in a time-recursive manner using both exact and approximate formulations of the method. The resulting algorithms are shown to reduce the necessary computational load by several orders of magnitude without noticeable loss of performance.
Nuclear quadrupole resonance (NQR) offers an unequivocal method of detecting hidden narcotics and explosives. Unfortunately, the practical use of NQR is restricted by the low signal-to-noise ratio (SNR), and means to improve the SNR are vital to enable a rapid, reliable, and convenient system. In this correspondence, we develop two multichannel detectors to counter the typically present radio frequency interference. Numerical simulations indicate that the proposed methods offer significantly improved robustness to uncertainties in the parameters detailing the examined sample.
Nuclear quadrupole resonance (NQR) offers an unequivocal method of detecting and identifying land mines. Unfortunately, the practical use of NQR is restricted by the low signal-to-noise ratio (SNR), and the means to improve the SNR are vital to enable a rapid, reliable, and convenient system. In this paper, an approximate maximum-likelihood (AML) detector is developed, exploiting the temperature dependency of the NQR frequencies as a way to enhance the SNR. Numerical evaluation using both simulated and real NQR data indicates a significant gain in the probability of accurate detection as compared with the current state-of-the-art approach.
The fixed-complexity sphere decoder (FSD) has been previously proposed for multiple-input multiple-output (MIMO) detection in order to overcome the two main drawbacks of the sphere decoder (SD), namely its variable complexity and its sequential structure. Although the FSD has shown remarkable quasi-maximum-likelihood (ML) performance and has resulted in a highly optimized real-time implementation, no analytical study of its performance existed for an arbitrary MIMO system. Herein, the error probability of the FSD is analyzed, proving that it achieves the same diversity as the maximum-likelihood detector (MLD) independent of the constellation used. In addition, it can also asymptotically yield ML performance in the high-signal-to-noise ratio (SNR) regime. Those two results, together with its fixed complexity, make the FSD a very promising algorithm for uncoded MIMO detection.
Sphere decoding has been suggested by a number of authors as an efficient algorithm to solve various detection problems in digital communications. In some cases, the algorithm is referred to as an algorithm of polynomial complexity without clearly specifying what assumptions are made about the problem structure. Another claim is that although worst-case complexity is exponential, the expected complexity of the algorithm is polynomial. Herein, we study the expected complexity where the problem size is defined to be the number of symbols jointly detected, and our main result is that the expected complexity is exponential for fixed signal-to-noise ratio (SNR), contrary to previous claims. The sphere radius, which is a parameter of the algorithm, must be chosen to ensure a nonvanishing probability of solving the detection problem. This causes the exponential complexity, since the squared radius must grow linearly with the problem size. The rate of linear increase is, however, dependent on the noise variance, and thus the rate of the exponential function is strongly dependent on the SNR. Therefore, sphere decoding can be efficient for some SNRs and problems of moderate size, even though the number of operations required by the algorithm, strictly speaking, always grows as an exponential function of the problem size.
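The algorithm whose expected complexity is analyzed can be sketched as a depth-first tree search over an upper-triangular system. This is a textbook formulation; the radius-growth considerations discussed in the abstract are reflected only in the fixed initial radius parameter:

```python
import numpy as np

def sphere_decode(R, y, symbols, radius2):
    """Depth-first sphere decoder for min ||y - R s||^2, with R upper
    triangular (e.g., from a QR factorization of the channel matrix) and
    the entries of s drawn from a finite symbol alphabet. Returns
    (best_cost, best_s), or (None, None) if no lattice point lies within
    the initial squared radius."""
    n = R.shape[0]
    best = [radius2, None]               # current squared radius and point

    def search(level, partial, cost):
        if cost >= best[0]:
            return                       # prune: branch leaves the sphere
        if level < 0:
            best[0], best[1] = cost, partial[:]   # full point found; shrink
            return
        for s in symbols:
            # residual at this level given the already-fixed symbols
            r = y[level] - R[level, level] * s \
                - sum(R[level, j] * partial[n - 1 - j] for j in range(level + 1, n))
            search(level - 1, partial + [s], cost + r * r)

    search(n - 1, [], 0.0)
    return (best[0], best[1][::-1]) if best[1] is not None else (None, None)
```

The number of nodes visited by this search is exactly the complexity quantity the abstract analyzes: it depends on how many lattice points fall inside the sphere, and hence on the squared radius and the SNR.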
Model error sensitivity is an issue common to all high-resolution direction-of-arrival estimators. Much attention has been directed to the design of algorithms for minimum variance estimation taking only finite sample errors into account. Approaches to reduce the sensitivity due to array calibration errors have also appeared in the literature. Herein, one such approach is adopted that assumes that the errors due to finite samples and model errors are of comparable size. A weighted subspace fitting method for very general array perturbation models is derived. This method provides minimum variance estimates under the assumption that the prior distribution of the perturbation model is known. Interestingly, the method reduces to the WSF (MODE) estimator if no model errors are present. Vice versa, assuming that model errors dominate, the method specializes to the corresponding "model-errors-only subspace fitting method." Unlike previous techniques for model errors, the estimator can be implemented using a two-step procedure if the nominal array is uniform and linear, and it is also consistent even if the signals are fully correlated. The paper also contains a large sample analysis of one of the alternative methods, namely MAPprox. It is shown that MAPprox also provides minimum variance estimates under reasonable assumptions.
The decision feedback (DF) transceiver, combining linear precoding and DF equalization, can establish point-to-point communication over a wireless multiple-input multiple-output channel. Matching the DF-transceiver design parameters to the channel characteristics can improve system performance, but requires channel knowledge. We consider the fast-fading channel scenario, with a receiver capable of tracking the channel-state variations accurately, while the transmitter only has long-term, channel-distribution information. The receiver design problem given channel-state information is well studied in the literature. We focus on transmitter optimization, which amounts to designing a statistical precoder to assist the channel-tailored DF equalizer. We develop a design framework that encompasses a wide range of performance metrics. Common cost functions for precoder optimization are analyzed, thereby identifying a structure of typical cost functions. Transmitter design is approached for typical cost functions in general, and we derive a precoder design formulation as a convex optimization problem. Two important subclasses of cost functions are considered in more detail. First, we explore a symmetry of DF transceivers with a uniform subchannel rate allocation, and derive a simplified convex optimization problem, which can be efficiently solved even as system dimensions grow. Second, we explore the tractability of a certain class of mean square error based cost functions, and solve the transmitter design problem with a simple algorithm that identifies the convex hull of a set of points in R^2. The behavior of DF transceivers with optimal precoders is investigated by numerical means.
We propose a Bayesian framework for the received-signal-strength-based cooperative localization problem with unknown path loss exponent. Our purpose is to infer the marginal posterior of each unknown parameter: the position or the path loss exponent. This probabilistic inference problem is solved using message passing algorithms that update messages and beliefs iteratively. For numerical tractability, we combine the variable discretization and Monte-Carlo-based numerical approximation schemes. To further improve computational efficiency, we develop an auxiliary importance sampler that updates the beliefs with the help of an auxiliary variable. An important ingredient of the proposed auxiliary importance sampler is the ability to sample from a normalized likelihood function. To this end, we develop a stochastic sampling strategy that mathematically interprets and corrects an existing heuristic strategy. The proposed message passing algorithms are analyzed systematically in terms of computational complexity, demonstrating the computational efficiency of the proposed auxiliary importance sampler. Various simulations are conducted to validate the overall good performance of the proposed algorithms.
This paper shows that the in-phase and quadrature (I/Q) channel mismatch problem for complex signals in zero-IF receivers (ZIFRs) is related to the real-signal channel-mismatch problem in two-channel time-interleaved analog-to-digital converters (TI-ADCs). The problems are related in that either problem can be converted to the other via relatively simple signal processing operations. This offers more options for the estimation of and compensation for I/Q and TI-ADC channel mismatches. In particular, if there is a need to perform both I/Q and TI-ADC channel mismatch correction, one can make use of the same basic estimation and/or compensation principles for both applications, which may save computational resources. The use of TI-ADC mismatch estimation and/or compensation also offers real-signal processing techniques to the complex-signal mismatch estimation and compensation problem in ZIFRs.
This paper considers the problem of reconstructing a class of nonuniformly sampled bandlimited signals of which a special case occurs in, e.g., time-interleaved analog-to-digital converter (ADC) systems due to time-skew errors. To this end, we propose a synthesis system composed of digital fractional delay filters. The overall system (i.e., nonuniform sampling and the proposed synthesis system) can be viewed as a generalization of time-interleaved ADC systems to which the former reduces as a special case. Compared with existing reconstruction techniques, our method has major advantages from an implementation point of view. To be precise, 1) we can perform the reconstruction as well as desired (in a certain sense) by properly designing the digital fractional delay filters, and 2) if properly implemented, the fractional delay filters need not be redesigned in case the time skews are changed. The price to pay for these attractive features is that we need to use a slight oversampling. It should be stressed, however, that the oversampling factor is less than two as compared with the Nyquist rate. The paper includes error and quantization noise analysis. The former is useful in the analysis of the quantization noise and when designing practical fractional delay filters approximating the ideal filters.
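The digital fractional delay filters that form the proposed synthesis system can be illustrated with a generic Lagrange design. The sketch below is an illustrative toy, not the paper's filters: the filter order, delay value, and test signal are arbitrary choices, and Lagrange interpolation is just one standard way to approximate a fractional delay.

```python
import math

def lagrange_fd(order, d):
    """Lagrange fractional-delay FIR coefficients approximating a delay of d samples."""
    return [math.prod((d - m) / (k - m) for m in range(order + 1) if m != k)
            for k in range(order + 1)]

# Delay a slowly varying sinusoid by d = 1.4 samples with an order-3 filter.
d = 1.4
h = lagrange_fd(3, d)
f = 0.02  # cycles/sample, well below the Nyquist limit
x = [math.sin(2 * math.pi * f * n) for n in range(200)]
# FIR filtering: y[n] = sum_k h[k] * x[n-k] approximates x evaluated at time n - d
y = [sum(h[k] * x[n - k] for k in range(4)) for n in range(4, 200)]
err = max(abs(y[i] - math.sin(2 * math.pi * f * (n0 - d)))
          for i, n0 in enumerate(range(4, 200)))
```

For such a heavily oversampled input the approximation error is tiny; the paper's point is that only the delay parameter `d` changes when the time skews change, so a properly implemented filter structure need not be redesigned.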
In this correspondence, we study the achievable rate region of the multiple-input single-output (MISO) interference channel, under the assumption that all receivers treat the interference as additive Gaussian noise. Our main result is an explicit parametrization of the Pareto boundary for an arbitrary number of users and antennas. The parametrization describes the boundary in terms of a low-dimensional manifold. For the two-user case we show that a single real-valued parameter per user is sufficient to achieve all points on the Pareto boundary and that any point on the Pareto boundary corresponds to beamforming vectors that are linear combinations of the zero-forcing (ZF) and maximum-ratio transmission (MRT) beamformers. We further specialize the results to the MISO broadcast channel (BC). A numerical example illustrates the result.
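The two-user structure described above (Pareto-optimal beamformers as linear combinations of the ZF and MRT vectors) lends itself to a direct numerical illustration. The sketch below is a toy, not the correspondence's derivation: the channel realizations, noise level, and parameter grid are arbitrary, and the simple convex combination per user is one natural instance of the single-real-parameter idea; the paper's exact parametrization may differ.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4  # transmit antennas per base station
# h[i][j]: channel from transmitter i to receiver j
h = [[(rng.standard_normal(n) + 1j * rng.standard_normal(n)) / np.sqrt(2)
      for _ in range(2)] for _ in range(2)]

def mrt(h_own):
    return h_own / np.linalg.norm(h_own)

def zf(h_own, h_cross):
    # Project the own channel onto the orthogonal complement of the cross channel
    p = h_own - (h_cross.conj() @ h_own) / np.linalg.norm(h_cross) ** 2 * h_cross
    return p / np.linalg.norm(p)

def beam(lam, h_own, h_cross):
    w = lam * mrt(h_own) + (1 - lam) * zf(h_own, h_cross)
    return w / np.linalg.norm(w)

sigma2 = 0.1
lams = np.linspace(0, 1, 21)
rates = []
for l1 in lams:
    for l2 in lams:
        w1 = beam(l1, h[0][0], h[0][1])
        w2 = beam(l2, h[1][1], h[1][0])
        s1 = abs(h[0][0].conj() @ w1) ** 2 / (sigma2 + abs(h[1][0].conj() @ w2) ** 2)
        s2 = abs(h[1][1].conj() @ w2) ** 2 / (sigma2 + abs(h[0][1].conj() @ w1) ** 2)
        rates.append((np.log2(1 + s1), np.log2(1 + s2)))
```

The Pareto boundary is then the upper-right frontier of the resulting cloud of rate pairs, swept out by the two real parameters.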
The problem of weighted sum-rate maximization (WSRMax) in multicell downlink multiple-input single-output (MISO) systems is considered. The problem is known to be NP-hard. We propose a method, based on the branch and bound technique, which globally solves the nonconvex WSRMax problem with an optimality certificate. Specifically, the algorithm computes a sequence of asymptotically tight upper and lower bounds, and it terminates when the difference between them falls below a pre-specified tolerance. Novel bounding techniques via conic optimization are introduced and their efficiency is demonstrated by numerical simulations. The proposed method can be used to provide performance benchmarks for the many existing network design problems that rely on the WSRMax problem. The method proposed here can be easily extended to maximize any system performance metric that can be expressed as a Lipschitz continuous and increasing function of the signal-to-interference-plus-noise ratio.
We consider performance optimization in the uplink of a multiuser multiantenna communication system. Each user multiplexes data onto several independently encoded data streams, which are spatially precoded and conveyed over a fading narrowband multiple-input multiple-output (MIMO) channel. All users' data streams are decoded successively at the receiving base station using zero-forcing decision feedback equalization (ZF-DFE). We target the joint optimization of a decoding order and linear precoders for all users based on long-term channel information. For a class of general MIMO channel models, including the separable-correlation and double-scattering models, we show that the choice of precoder for a certain user does not affect the performance of the others. This leads to a particularly straightforward characterization of general user utility regions as a polyblock, or a convex polytope if time-sharing is allowed. We formulate the decoding-ordering problem under transmit-correlated Rayleigh fading as a linear assignment problem, enabling the use of existing efficient algorithms. Combining decoding ordering with single-user precoder optimization by means of alternating optimization, we propose an efficient iterative scheme that is verified numerically to converge fast and perform close to optimally, successfully reaping the benefits of both precoding and ordering in the MIMO uplink.
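The decoding-ordering step above is posed as a linear assignment problem between users and decoding positions. The sketch below is a toy illustration with an invented cost matrix: it brute-forces all permutations using only the standard library, whereas in practice one would use an efficient assignment solver such as the Hungarian algorithm, as the abstract suggests.

```python
from itertools import permutations

# Hypothetical cost[u][p]: negated utility of decoding user u in position p.
cost = [
    [4.0, 2.0, 3.0],
    [2.5, 3.5, 1.0],
    [3.0, 1.5, 2.5],
]

def best_order(cost):
    """Exhaustive search over all user-to-position assignments (toy sizes only)."""
    n = len(cost)
    return min(permutations(range(n)),
               key=lambda perm: sum(cost[u][perm[u]] for u in range(n)))

order = best_order(cost)
total = sum(cost[u][order[u]] for u in range(len(cost)))
```

For this cost matrix the optimum assigns user 0 to position 1, user 1 to position 2, and user 2 to position 0; the decoupling result in the abstract is what makes such per-user costs well defined.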
This paper considers a wireless multiple-input multiple-output (MIMO) communication system in a frequency-nonselective scenario with spatially uncorrelated Rayleigh fading channel coefficients and investigates the design of linear dispersive (LD) space-time block codes. Efficient LD codes are obtained by optimizing the constituent weight matrices so that an upper bound on the union bound of the codeword error probability is minimized. Interestingly, the proposed design procedure automatically generates LD codes that either correspond to, or are close to, the well-known class of orthogonal space-time block (OSTB) codes. A theoretical analysis confirms this by proving that OSTB codes are indeed optimal, when the setup under study permits their existence. Simulation results demonstrate the excellent performance of the designed codes. In particular, the importance of the codes' near-orthogonal property is illustrated by showing that low-complexity linear equalizer techniques can be used for decoding purposes while incurring a relatively moderate performance loss compared with optimal maximum-likelihood (ML) decoding.
The problem of transmit beamforming to multiple cochannel multicast groups is considered for the important special case when the channel vectors are Vandermonde. This arises when a uniform linear antenna array (ULA) is used at the transmitter under far-field line-of-sight propagation conditions, as provisioned in 802.16e and related wireless backhaul scenarios. Two design approaches are pursued: (i) minimizing the total transmitted power subject to providing at least a prescribed received signal-to-interference-plus-noise ratio (SINR) to each intended receiver; and (ii) maximizing the minimum received SINR under a total transmit power budget. Whereas these problems have been recently shown to be NP-hard, in general, it is proven here that for Vandermonde channel vectors, it is possible to recast the optimization in terms of the autocorrelation sequences of the sought beam vectors, yielding an equivalent convex reformulation. This affords efficient optimal solution using modern interior point methods. The optimal beam vectors can then be recovered using spectral factorization. Robust extensions for the case of partial channel state information, where the direction of each receiver is known to lie in an interval, are also developed. Interestingly, these also admit convex reformulation. The various optimal designs are illustrated and contrasted in a suite of pertinent numerical experiments.
The problem of transmit beamforming to multiple cochannel multicast groups is considered, when the channel state is known at the transmitter, from two viewpoints: minimizing total transmission power while guaranteeing a prescribed minimum signal-to-interference-plus-noise ratio (SINR) at each receiver; and a "fair" approach maximizing the overall minimum SINR under a total power budget. The core problem is a multicast generalization of the multiuser downlink beamforming problem; the difference is that each transmitted stream is directed to multiple receivers, each with its own channel. Such generalization is relevant and timely, e.g., in the context of the emerging WiMAX and UMTS-LTE wireless networks. The joint problem also contains single-group multicast beamforming as a special case. The latter (and therefore also the former) is NP-hard. This motivates the pursuit of computationally efficient quasi-optimal solutions. It is shown that Lagrangian relaxation coupled with suitable randomization/cochannel multicast power control yields computationally efficient high-quality approximate solutions. For a significant fraction of problem instances, the solutions generated this way are exactly optimal. Extensive numerical results using both simulated and measured wireless channels are presented to corroborate our main findings.
A common framework for maritime surface and underwater (UW) map-aided navigation is proposed as a supplement to satellite navigation based on the global positioning system (GPS). The proposed Bayesian navigation method is based on information from a distance measuring equipment (DME) which is compared with the information obtained from various databases. As a solution to the recursive Bayesian navigation problem, the particle filter is proposed. For the described system, the fundamental navigation performance expressed as the Cramér-Rao lower bound (CRLB) is analyzed and an analytic solution as a function of the position is derived. Two detailed examples of different navigation applications are discussed: surface navigation using a radar sensor and a digital sea chart, and UW navigation using a sonar sensor and a depth database. In extensive Monte Carlo simulations, the performance is shown to be close to the CRLB. The estimation performance for the surface navigation application is comparable to typical GPS performance. Experimental data are also successfully applied to the UW application.
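The recursive Bayesian solution referred to above is a particle filter matched against a map database. A minimal one-dimensional depth-map toy (invented map, noise levels, and trajectory; not the paper's radar/sonar setup) illustrates the mechanics of a bootstrap particle filter for this kind of problem:

```python
import math
import random

random.seed(1)

def depth(x):
    # Toy "depth database": a known map from position to seabed depth
    return 0.5 * x + math.sin(x)

N = 3000          # number of particles
q, r = 0.1, 0.1   # process / measurement noise standard deviations
truth = 0.0
particles = [random.gauss(0.0, 2.0) for _ in range(N)]

errors = []
for t in range(60):
    truth += 0.3 + random.gauss(0, q)          # vessel moves at speed ~0.3
    y = depth(truth) + random.gauss(0, r)      # sonar-like depth measurement
    # Predict: propagate particles through the motion model
    particles = [p + 0.3 + random.gauss(0, q) for p in particles]
    # Update: weight particles by the measurement likelihood against the map
    w = [math.exp(-0.5 * ((y - depth(p)) / r) ** 2) for p in particles]
    s = sum(w)
    w = [wi / s for wi in w] if s > 0 else [1.0 / N] * N
    est = sum(wi * p for wi, p in zip(w, particles))
    errors.append(abs(est - truth))
    # Multinomial resampling
    particles = random.choices(particles, weights=w, k=N)

final_err = sum(errors[-10:]) / 10
```

Because the map couples position to the measured depth, the filter localizes the vehicle without any GPS-like direct position measurement, which is the essence of map-aided navigation.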
In this paper, the computational complexity of the marginalized particle filter is analyzed and a general method to perform this analysis is given. The key is the introduction of the equivalent flop measure. In an extensive Monte Carlo simulation, different computational aspects are studied and compared with the derived theoretical results.
The emergence of big data has caused a dramatic shift in the operating regime for optimization algorithms. The performance bottleneck, which used to be computations, is now often communications. Several gradient compression techniques have been proposed to reduce the communication load at the price of a loss in solution accuracy. Recently, it has been shown how compression errors can be compensated for in the optimization algorithm to improve the solution accuracy. Even though convergence guarantees for error-compensated algorithms have been established, there is very limited theoretical support for quantifying the observed improvements in solution accuracy. In this paper, we show that Hessian-aided error compensation, unlike other existing schemes, avoids accumulation of compression errors on quadratic problems. We also present strong convergence guarantees of Hessian-based error compensation for stochastic gradient descent. Our numerical experiments highlight the benefits of Hessian-based error compensation, and demonstrate that similar convergence improvements are attained when only a diagonal Hessian approximation is used.
Noisy gradient algorithms have emerged as some of the most popular algorithms for distributed optimization with massive data. Choosing a proper step-size schedule is an important part of tuning these algorithms for good performance. For the algorithms to attain fast convergence and high accuracy, it is intuitive to use large step-sizes in the initial iterations, when the gradient noise is typically small compared to the algorithm steps, and to reduce the step-sizes as the algorithm progresses. This intuition has been confirmed in theory and practice for stochastic gradient descent. However, similar results are lacking for other methods using approximate gradients. This paper shows that diminishing step-size strategies can indeed be applied to a broad class of noisy gradient algorithms. Our analysis framework is based on two classes of systems that characterize the impact of the step-sizes on the convergence performance of many algorithms. Our results show that such step-size schedules enable these algorithms to enjoy the optimal rate. We exemplify our results on stochastic compression algorithms. Our experiments validate the fast convergence of these algorithms with step-decay schedules.
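The intuition about diminishing step-sizes is easy to reproduce on a scalar toy problem. The sketch below (invented noise model and schedule constants, not the paper's experiments) compares a constant step-size with a decaying one on noisy gradient descent for a quadratic:

```python
import random

def sgd(schedule, iters=600, seed=7):
    """Run noisy gradient descent on f(x) = 0.5 * x^2 with a given step schedule."""
    rng = random.Random(seed)
    x = 5.0
    tail = []
    for t in range(iters):
        g = x + rng.gauss(0, 1.0)        # exact gradient plus additive noise
        x -= schedule(t) * g
        if t >= iters - 200:
            tail.append(x * x)
    return sum(tail) / len(tail)         # trailing mean-squared error

mse_const = sgd(lambda t: 0.5)                       # constant step-size
mse_decay = sgd(lambda t: 0.5 / (1 + 0.05 * t))      # diminishing step-size
```

The constant-step run converges fast but hovers at a noise floor proportional to the step-size, whereas the decaying schedule drives the error much lower, matching the large-then-small intuition described in the abstract.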
For cyclic prefixed single-carrier (CP-SC) spectrum sharing relay systems, the joint impact of the primary transmitter and primary receiver on the performance of secondary networks is investigated. A two-hop amplify-and-forward (AF) relay protocol is employed, and its end-to-end signal-to-interference ratio (e2e-SIR) is defined. An upper bound on the cumulative distribution function (CDF) of this e2e-SIR is derived for the CP-SC system. Having investigated an asymptotic expression for the CDF, the asymptotic outage diversity and coding gain can be obtained. Also, under a total transmit power constraint, a suboptimal power allocation (sub-OPA) is derived for approximately minimizing the asymptotic outage probability. Monte Carlo simulations verify the derived outage probability and its asymptotic diversity gain with and without power allocation. It is shown that the primary transmitter has a major impact on the secondary network performance. Importantly, it is observed that when the interference from the primary transmitter proportionally increases with the interference power constraint at the primary receiver, the spectrum-sharing system results in no diversity gain for the full range of the SIR.