An experiment for estimating Moho depth is carried out based on satellite altimetry and topographic information using the Vening Meinesz–Moritz gravimetric isostatic hypothesis. In order to investigate the possibility and quality of satellite altimetry in Moho determination, the DNSC08GRA global marine gravity field model and the DTM2006 global topography model are used to obtain a global Moho depth model over the oceans with a resolution of 1° × 1°. The numerical results show that the estimated Bouguer gravity disturbance varies from 86 to 767 mGal, with a global average of 747 mGal, and the estimated Moho depth varies from 3 to 39 km with a global average of 19 km. Comparing the Bouguer gravity disturbance estimated from satellite altimetry with that derived from the gravimetric satellite-only model GOGRA04S shows that the two models agree to 13 mGal in root mean square (RMS). Similarly, the estimated Moho depths from satellite altimetry and GOGRA04S agree to 0.69 km in RMS. It is also concluded that possible mean dynamic topography in the marine gravity model does not significantly affect the Moho determination.
Different deformation monitoring networks are usually established to detect geohazards. It is important to design an optimal monitoring network that fulfils the requested precision and reliability. Generally, the same observation plan is considered during different time intervals (epochs of observation). Here, we investigate the case in which instrumental improvements in precision are exploited in two successive epochs. As a case study, we perform the optimisation procedure on a GPS monitoring network around the Lilla Edet village in southwest Sweden. The network was designed for studying possible displacements caused by landslides. The numerical results show that the optimisation procedure yields an observation plan with significantly fewer baselines in the later epoch, which saves time and cost in the project. The precision improvement in the second epoch is tested in several steps for the Lilla Edet network. For instance, assuming twice better observation precision in the second epoch decreases the number of baselines from 215 in the first epoch to 143 in the second one.
The Earth's topographic masses are compensated by isostatic adjustment. According to the isostatic hypothesis, a mountain is compensated by a mass deficiency beneath it, where the crust floats on the viscous mantle. To study the impact of the compensating masses on the topographic masses, a crustal thickness (Moho boundary) model is needed. A new gravimetric-isostatic model to estimate the Moho depth, the Vening Meinesz–Moritz model, and two well-known Moho models (CRUST2.0 and Airy–Heiskanen) are used in this study. Not all topographic masses can be compensated by a simple isostatic assumption, so other compensation mechanisms should be considered: in fact, small topographic masses can be supported by the elasticity of the larger masses and the deeper Earth layers. We discuss this issue by applying spatial and spectral analyses. Here we investigate the influence of the crustal thickness and its density in compensating the topographic potential. This study shows that the compensating potential is larger than the topographic potential at low frequencies and smaller at high frequencies. The study also illustrates that the Vening Meinesz–Moritz model compensates the topographic potential better than the other models, which makes it more suitable for interpolation of the gravity field. Two methods are presented to determine the percentage of the compensation of the topographic potential by the isostatic model. Numerical studies show that about 75% and 57% of the topographic potential is compensated by the potential beneath it in Iran and Tibet, respectively. In addition, correlation analysis shows a linear relation between the topography above sea level and the underlying topographic masses at low frequencies in the crustal models. Our investigation shows that about 580 ± 7.4 m (on average) of the topographic heights are not compensated by the variable crustal root and density.
The gravimetric model of the Moho discontinuity is usually derived from isostatic adjustment theories that consider the crust to float on the viscous mantle. Computing such a model requires some a priori information about the density contrast between the crust and the mantle and about the mean Moho depth; because our knowledge of them is poor, they are usually assumed, unrealistically, to be constant. In this paper, our idea is to improve a gravimetric Moho model computed by the Vening Meinesz–Moritz theory using a seismic model in Fennoscandia, and to estimate the error of each model through a combined adjustment with variance component estimation. Corrective surfaces of bilinear, biquadratic and bicubic type, as well as the multiquadric radial basis function, are used to model the discrepancies between the models and to estimate their errors. Numerical studies show that the bilinear surface yields negative variance components, whereas the biquadratic surface models the differences better and delivers errors of 2.7 km and 1.5 km for the gravimetric and seismic models, respectively. These errors are 2.1 km and 1.6 km for the bicubic surface, and 1 km and 1.5 km when the multiquadric radial basis function is used. The combined gravimetric models are computed based on the estimated errors and each corrective surface.
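The corrective-surface step described above amounts to a least-squares fit of a low-order surface to the model discrepancies. A minimal sketch for the bilinear case follows; the coordinates and discrepancies are synthetic stand-ins, not the Fennoscandian data used in the study.

```python
import numpy as np

# Hedged sketch of fitting a bilinear corrective surface
#   d(phi, lam) = a0 + a1*phi + a2*lam + a3*phi*lam
# to discrepancies d between a gravimetric and a seismic Moho model.
# The grid and the "true" coefficients below are synthetic.

def fit_bilinear(phi, lam, d):
    """Least-squares coefficients of the bilinear corrective surface."""
    A = np.column_stack([np.ones_like(phi), phi, lam, phi * lam])
    coeffs, *_ = np.linalg.lstsq(A, d, rcond=None)
    return coeffs

# Synthetic discrepancies generated exactly from a known bilinear surface
phi, lam = np.meshgrid(np.linspace(55, 70, 8), np.linspace(5, 30, 8))
phi, lam = phi.ravel(), lam.ravel()
true = np.array([1.0, -0.02, 0.03, 0.001])
d = true[0] + true[1] * phi + true[2] * lam + true[3] * phi * lam

est = fit_bilinear(phi, lam, d)   # recovers the coefficients exactly here
```

The biquadratic and bicubic cases differ only in the columns of the design matrix; the multiquadric variant replaces the polynomial basis with radial basis functions centred at chosen nodes.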
Estimating the variance in an ordinary adjustment model is straightforward, but if the model becomes unstable or ill-conditioned, its solution and the variance of the solution will be very sensitive to observation errors. This sensitivity can be controlled by stabilising methods, but the results will be distorted by the stabilisation. In this paper, the estimation of the variance of unit weight and of variance components is investigated for an unstable condition model stabilised by Tikhonov regularization. It is theoretically proved that the estimator of the variance or the variance components loses the minimum-variance property when the model is stabilised, but an unbiased estimate of the variance is still possible. A simple numerical example shows the performance of the theory.

The Gravity field and steady-state Ocean Circulation Explorer (GOCE) mission is dedicated to recovering spherical harmonic coefficients of the Earth's gravity field to degree and order of about 250 from its satellite gradiometric data. Since these data are contaminated with coloured noise, their inversion is not straightforward, and unsuccessful modelling of this noise leads to biases in the harmonic coefficients of the resulting Earth gravity models (EGMs). In this study, five recent GOCE EGMs, namely two direct, two time-wise and one space-wise solution, are used to degree and order 240, and their reliability is investigated with respect to EGM08, which is assumed to be a reliable EGM. As a combination strategy, the detected unreliable coefficients and their errors are replaced by the corresponding ones from EGM08. A condition adjustment model is organised for each pair of corresponding coefficients of the GOCE EGMs and EGM08, and the errors of the GOCE EGMs are calibrated by a scaling factor obtained from the a posteriori variance factor. When the factor is less than 2.5 it is multiplied by the error; otherwise the error of the EGM08 coefficient is taken as the calibrated one. Finally, a simple geoid estimator is presented that considers the EGMs and their errors, and its outcomes are compared with the corresponding geoid heights derived from the Global Positioning System (GPS) and levelling data (GPS/levelling data) over Fennoscandia. This comparison shows that some of the combined-calibrated GOCE EGMs are closer to the GPS/levelling data than the original ones.
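The per-coefficient calibration rule just described is a simple branch on the scaling factor; a sketch follows, with made-up error and factor values purely for illustration.

```python
import numpy as np

# Sketch of the calibration rule stated above: if the scaling factor
# (from the a posteriori variance factor of the condition adjustment)
# is below 2.5, the GOCE coefficient error is rescaled by it; otherwise
# the EGM08 error is adopted.  All numbers below are illustrative.

def calibrate_error(goce_err, egm08_err, scale_factor, threshold=2.5):
    """Return the calibrated error for one harmonic coefficient."""
    if scale_factor < threshold:
        return scale_factor * goce_err   # rescale the GOCE error
    return egm08_err                     # fall back on the EGM08 error

# Three example coefficients with different variance factors
goce_errs = np.array([1.0e-11, 2.0e-11, 1.5e-11])
egm08_errs = np.array([0.8e-11, 1.0e-11, 0.9e-11])
factors = np.array([1.2, 3.0, 0.7])

calibrated = np.array([calibrate_error(g, e, f)
                       for g, e, f in zip(goce_errs, egm08_errs, factors)])
```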
One of the problems in single-objective optimisation models (SOOMs) for optimising geodetic networks is the contradiction of the controlling constraints, which may lead to their violation or to infeasibility in the optimisation process. One way to solve this problem is to use a bi-objective optimisation model (BOOM) instead of SOOMs. In this paper, we use the BOOM of precision and reliability and investigate the influence of the controlling constraints in a two-dimensional simulated network. Our studies show that the unconstrained BOOM is a good model, which almost fulfils our precision and reliability demands on the network. This model is also economical, as more observables are removed from the plan, whilst adding the controlling constraints leads to including more observables that have no significant role.
There are different criteria for designing a geodetic network in an optimal way. An optimum network can be regarded as one having high precision, high reliability and low cost; accordingly, different single-objective models can be defined corresponding to these criteria, each subject to the two other criteria as constraints. Sometimes the constraints are contradictory, so that some of them are violated. In this contribution, these models are mathematically reviewed, and it is shown numerically, through a simulated network, how to prepare these mathematical models for the optimisation process. We found that the reliability model yields only small position changes compared with those obtained using the precision model. Some observations may be eliminated when using the precision and cost models, while the reliability model tends to retain the observations. In our numerical studies, no contradictions are seen in the reliability model, and this model seems more suitable for designing geodetic and deformation networks.
The problem of handling outliers in a deformation monitoring network is of special importance, because the existence of outliers may lead to false deformation parameters. One approach to detecting outliers is to use robust estimators. In this case the network points are computed by a robust method, implying that the adjustment result resists systematic observation errors and, in particular, is insensitive to gross errors and even blunders. Since there are different approaches to robust estimation, the resulting estimated networks may differ. In this article, different robust estimation methods, such as Huber's M-estimation, the “Danish” method, and the L1-norm estimation method, are reviewed and compared with the standard least squares method to assess their potential to detect outliers in the Tehran Milad tower deformation network. The numerical studies show that the L1-norm detects and downweights the outliers best, so it is selected as the favourable approach, although it lacks uniqueness. For comparison, Baarda's data snooping method can achieve similar results when the magnitude of an outlier is large enough to be detected, but the robust methods are faster than the sequential data snooping process.
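Huber's M-estimation is commonly computed by iteratively reweighted least squares (IRLS); a minimal sketch on a synthetic one-dimensional trend with one gross error follows. This is not the authors' software, and the data and tuning constant are illustrative.

```python
import numpy as np

# Huber M-estimation via IRLS for the model y = a + b*t + e, compared
# with plain least squares on data containing one gross error.

def huber_irls(A, y, k=1.5, iters=50):
    """Huber M-estimate of x in y = A x + e via IRLS."""
    x = np.linalg.lstsq(A, y, rcond=None)[0]        # LS starting value
    for _ in range(iters):
        r = y - A @ x
        s = 1.4826 * np.median(np.abs(r)) + 1e-12   # robust scale (MAD)
        u = np.abs(r) / s
        w = np.where(u <= k, 1.0, k / u)            # Huber weights
        x = np.linalg.solve(A.T @ (w[:, None] * A), A.T @ (w * y))
    return x

t = np.arange(10, dtype=float)
A = np.column_stack([np.ones_like(t), t])
y = 2.0 + 0.5 * t                                   # true line: a=2, b=0.5
y[4] += 20.0                                        # one gross error

x_ls = np.linalg.lstsq(A, y, rcond=None)[0]         # pulled by the outlier
x_hu = huber_irls(A, y)                             # downweights the outlier
```

The robust fit recovers the clean trend because the outlier's weight shrinks as `k/u` once its standardised residual exceeds the tuning constant, whereas least squares spreads the gross error over all parameters.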
In global navigation satellite system (GNSS) carrier phase data processing, cycle slips are limiting factors and generally affect the quality of the estimators. When differencing phase observations, a problem in the phase ambiguity parameterisation may arise, namely linear relations between some of the parameters. These linear relations must be considered as additional constraints in the system of observation equations; neglecting them results in poorer estimators. This becomes significant when ambiguity resolution is in demand. As a clue to detecting the problem in GNSS processing, we focus on the equivalence of using undifferenced and differenced observation equations. With differenced observables this equivalence is preserved only if we add to the differenced observation equations certain constraints that formulate the linear relations between some of the ambiguity parameters. To show the necessity of the additional constraints, an example is made using real data from a permanent station of the network of the International GNSS Service (IGS). The achieved results are notable for GNSS software developers.
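The kind of linear relation at issue can be shown with a toy example: for three receivers A, B and C, the differenced ambiguities over the three baselines satisfy N_AC = N_AB + N_BC, so treating all three as free parameters leaves the system rank deficient unless this relation is imposed as a constraint. The matrix below is a synthetic illustration of that rank argument, not the IGS data set of the study.

```python
import numpy as np

# Each row maps the undifferenced ambiguities (N_A, N_B, N_C) of one
# satellite pair to a between-receiver difference.  The third row is
# the sum of the first two, so the differencing operator has rank 2:
# one linear constraint among the differenced ambiguities is implied.
D = np.array([
    [-1.0,  1.0,  0.0],   # baseline A-B:  N_B - N_A
    [ 0.0, -1.0,  1.0],   # baseline B-C:  N_C - N_B
    [-1.0,  0.0,  1.0],   # baseline A-C:  N_C - N_A
])

rank = np.linalg.matrix_rank(D)   # 2, not 3
```

Ignoring the dependency and estimating all three differenced ambiguities freely is what degrades the estimators; adding the constraint restores the equivalence with the undifferenced formulation.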
In precise geoid modelling, the combination of terrestrial gravity data and an Earth Gravitational Model (EGM) is standard. The proper combination of these data sets is of great importance, and spectral combination is one alternative utilized here. In this method, data from satellite gravity gradiometry (SGG), terrestrial gravity and an EGM are combined in a least squares sense by minimizing the expected global mean square error. The spectral filtering process also allows the SGG data to be downward continued to the Earth's surface without solving a system of equations, which is likely to be ill-conditioned. Each practical formula is presented as a combination of one or two integral formulas and the harmonic series of the EGM. Numerical studies show that the kernels of the integral part of the geoid and gravity anomaly estimators approach zero at a spherical distance of about 5°. Also shown (by the expected root mean square errors) is the necessity of combining EGM08 with local data, such as terrestrial gravimetric data and/or SGG data, to attain 1-cm accuracy in local geoid determination.
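The core of spectral combination can be sketched per spherical-harmonic degree: for two independent data sources, weights inversely proportional to the error degree variances minimise the expected mean square error of the combined degree. The error models below are synthetic placeholders, not the actual EGM08 or SGG error spectra.

```python
import numpy as np

# Per-degree MSE-minimizing combination of two independent sources
# with error degree variances e1 and e2 (inverse-variance weighting).

def spectral_weights(e1, e2):
    """Weights for sources 1 and 2, per degree; they sum to one."""
    w1 = e2 / (e1 + e2)
    w2 = e1 / (e1 + e2)
    return w1, w2

n = np.arange(2, 91)
e_egm = 1e-4 * (n / 90.0) ** 4                 # synthetic: EGM error grows with degree
e_ter = 1e-4 * np.ones(n.shape)                # synthetic: flat terrestrial error

w_egm, w_ter = spectral_weights(e_egm, e_ter)
# the EGM dominates the combination at low degrees, terrestrial data at high degrees
```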
Isostasy is a key concept in geodesy and geophysics. The classical isostatic models of Airy/Heiskanen and Pratt/Hayford imply that the topographic mass surplus and the ocean mass deficit are balanced by mountain roots and anti-roots in the former model, and by density variations in the topography and the compensation layer below the sea bottom in the latter. In geophysics, gravity inversion is an essential topic in which isostasy comes into play. The main objective of this study is to compare geoid heights predicted from the above isostatic models, based on matched asymptotic expansions, with geoid heights observed by the Earth Gravitational Model 2008. Numerical computations were carried out both globally and in several regions, showing poor agreement between the theoretical and observed geoid heights. As an alternative, multiple regression analysis including several non-isostatic terms in addition to the isostatic terms was tested, providing only slightly better success rates. Our main conclusion is that the geoid height cannot generally be represented by the simple formulas based on matched asymptotic expansions. This is because (a) both the geoid and the isostatic compensation of the topography have regional to global contributions in addition to the purely local signal considered in the classical isostatic models, and (b) geodynamic phenomena are still likely to blur the results significantly, even though all spherical harmonic low-degree (below degree 11) gravity signals were excluded from the study.
Repeated absolute gravity measurements in Fennoscandia have revealed that the ongoing postglacial rebound can be regarded as a pure viscous flow of mantle mass of density 3390 kg/m^{3} towards the central part of the region, with a gravity-to-uplift-rate ratio of −0.167 μGal/mm. Our model estimates the rebound-induced rates of change of surface gravity and geoid height to have peaks of −1.9 μGal/yr and 1.6 mm/yr, respectively, the former being consistent with absolute gravity observations. The correlation coefficient of the spherical harmonic representations of the geoid height and uplift rate for the spectral window between degrees 10 and 70 is estimated at −0.99 ± 0.006, and the maximum remaining land uplift is estimated to be of the order of 80 m. Both the (almost) linear increase of relaxation time with degree and the linear relation between geoid height and uplift rate support a model with mass flow in the major part of the mantle and disqualify a model with flow in a thin channel below the crust. The mean viscosity of the flow in the central uplift region is estimated at 4×10^{21} Pa s.