• 1.
Egyptian National Institute of Transport.
A generic approach for in depth statistical investigation of accident characteristics and causes (2001). In: Proceedings of the conference Traffic Safety on Three Continents: International conference in Moscow, Russia, 19-21 September, 2001 / [ed] Asp, Kenneth, Linköping: Statens väg- och transportforskningsinstitut, 2001, Vol. 18A:3, 13- p. Conference paper (Other academic)

The main aim of this research is to develop a generic approach for the utilization of statistical methods to conduct in-depth investigation of road accident characteristics and causes. This approach is applied in an effort to analyse the 1998 accident database for the main rural roads in Egypt. This database is composed of traffic accident data collected for 14 road sections representing nine major roads of the Egyptian rural road network. The proposed approach is composed of two main stages of analysis. Within each stage, several analytical steps are conducted. The first stage is mainly concerned with developing cluster bar charts, where different characteristics and causes of accidents are portrayed in relation to variations in the three main accident contributing factors, namely types of roads, vehicles and drivers. The second stage is concerned with conducting in-depth statistical analysis of the collected accident data. Within this stage, four levels of statistical investigation were conducted, meant to examine a number of issues.

• 2.
Mälardalen University, School of Education, Culture and Communication.
Black-Litterman Model: Practical Asset Allocation Model Beyond Traditional Mean-Variance (2016). Independent thesis Basic level (degree of Bachelor), 10 credits / 15 HE credits. Student thesis

This paper consolidates and compares the applicability and practicality of the Black-Litterman model versus the traditional Markowitz Mean-Variance model. Although a well-known model such as Mean-Variance is academically sound and popular, it is rarely used among asset managers due to its deficiencies. To put the discussion into context, we shed light on the improvement made by Fischer Black and Robert Litterman by putting the performance and practicality of both the Black-Litterman and Markowitz Mean-Variance models to the test. We illustrate detailed mathematical derivations of how the models are constructed and bring clarity and a profound understanding of the intuition behind the models. We generate two different portfolios, composed of data from 10 Swedish equities over a 10-year period, and select the 30-day Swedish Treasury Bill as the risk-free rate. The resulting portfolios orient our discussion towards a comparison of the performance and applicability of these two models, whose differences we illustrate both theoretically and geometrically. Finally, based on the extracted results of the performance of both models, we demonstrate the superiority and practicality of the Black-Litterman model, which in our particular case outperforms the traditional Mean-Variance model.
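The Black-Litterman posterior that the thesis derives can be sketched numerically. Below is a minimal illustration of the standard "master formula", with an invented 3-asset covariance matrix, market weights and a single relative view; none of the numbers come from the thesis.

```python
import numpy as np

# Illustrative 3-asset example (all figures invented for this sketch).
Sigma = np.array([[0.040, 0.012, 0.006],
                  [0.012, 0.090, 0.015],
                  [0.006, 0.015, 0.060]])   # asset return covariance
w_mkt = np.array([0.5, 0.3, 0.2])           # market-cap weights
delta = 2.5                                  # risk-aversion coefficient
tau = 0.05                                   # prior uncertainty scaling

# Reverse optimization: equilibrium excess returns implied by the market.
pi = delta * Sigma @ w_mkt

# One view: asset 2 outperforms asset 3 by q, with view variance Omega.
P = np.array([[0.0, 1.0, -1.0]])
q = np.array([0.02])
Omega = np.array([[0.001]])

def black_litterman(Sigma, pi, P, q, Omega, tau):
    """Posterior mean of expected returns combining prior pi with views (P, q)."""
    A = np.linalg.inv(tau * Sigma)           # prior precision
    B = P.T @ np.linalg.inv(Omega) @ P       # view precision mapped to assets
    return np.linalg.solve(A + B, A @ pi + P.T @ np.linalg.inv(Omega) @ q)

mu_bl = black_litterman(Sigma, pi, P, q, Omega, tau)
```

As the view variance grows, the posterior reverts to the equilibrium prior, which is one way to sanity-check an implementation.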

• 3.
Örebro University, Swedish Business School at Örebro University.
Behov av stödundervisning i grundskolan: En designbaserad analys av longitudinella data (2008). Independent thesis Basic level (degree of Bachelor), 10 credits / 15 HE credits. Student thesis
• 4.
Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Mathematics, Mathematical Statistics.
Statistical models of breast cancer tumour growth for mammography screening data (2012). Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE credits. Student thesis
• 5.
Uppsala University, Humanistisk-samhällsvetenskapliga vetenskapsområdet, Faculty of Social Sciences, Department of Information Science.
Re: Long-term survival and mortality in prostate cancer treated with noncurative intent (1995). In: UROLOGY, Vol. 154, 460-465 p. Article in journal (Refereed)
• 6.
Umeå University, Faculty of Science and Technology, Department of Mathematics and Mathematical Statistics.
Numerical analysis for random processes and fields and related design problems (2011). Doctoral thesis, comprehensive summary (Other academic)

In this thesis, we study numerical analysis for random processes and fields. We investigate the behavior of the approximation accuracy for specific linear methods based on a finite number of observations. Furthermore, we propose techniques for optimizing performance of the methods for particular classes of random functions. The thesis consists of an introductory survey of the subject and related theory and four papers (A-D).

In paper A, we study a Hermite spline approximation of quadratic mean continuous and differentiable random processes with an isolated point singularity. We consider a piecewise polynomial approximation combining two different Hermite interpolation splines for the interval adjacent to the singularity point and for the remaining part. For locally stationary random processes, sequences of sampling designs eliminating asymptotically the effect of the singularity are constructed.

In Paper B, we focus on approximation of quadratic mean continuous real-valued random fields by a multivariate piecewise linear interpolator based on a finite number of observations placed on a hyperrectangular grid. We extend the concept of local stationarity to random fields and for the fields from this class, we provide an exact asymptotics for the approximation accuracy. Some asymptotic optimization results are also provided.

In Paper C, we investigate numerical approximation of integrals (quadrature) of random functions over the unit hypercube. We study the asymptotics of a stratified Monte Carlo quadrature based on a finite number of randomly chosen observations in strata generated by a hyperrectangular grid. For the locally stationary random fields (introduced in Paper B), we derive exact asymptotic results together with some optimization methods. Moreover, for a certain class of random functions with an isolated singularity, we construct a sequence of designs eliminating the effect of the singularity.

In Paper D, we consider a Monte Carlo pricing method for arithmetic Asian options. An estimator is constructed using a piecewise constant approximation of an underlying asset price process. For a wide class of Lévy market models, we provide upper bounds for the discretization error and the variance of the estimator. We construct an algorithm for accurate simulations with controlled discretization and Monte Carlo errors, and obtain estimates of the option price with a predetermined accuracy at a given confidence level. Additionally, for the Black-Scholes model, we optimize the performance of the estimator by using a suitable variance reduction technique.

• 7.
Umeå University, Faculty of Science and Technology, Department of Mathematics and Mathematical Statistics.
Was it snowing on lake Kassjön in January 4486 BC? Functional data analysis of sediment data (2014). In: Proceedings of the Third International Workshop on Functional and Operatorial Statistics (IWFOS 2014), Stresa, Italy, June 2014. Conference paper (Refereed)
• 8.
Umeå University, Faculty of Science and Technology, Department of Mathematics and Mathematical Statistics.
Umeå University, Faculty of Medicine, Department of Community Medicine and Rehabilitation, Physiotherapy. National Sports Institute of Malaysia. MOX – Department of Mathematics, Politecnico di Milano. Umeå University, Faculty of Social Sciences, Umeå School of Business and Economics (USBE), Statistics.
An inferential framework for domain selection in functional ANOVA (2014). In: Contributions in infinite-dimensional statistics and related topics / [ed] Bongiorno, E.G., Salinelli, E., Goia, A., Vieu, P., Esculapio, 2014. Conference paper (Refereed)

We present a procedure for performing an ANOVA test on functional data, including pairwise group comparisons in a Scheffé-like perspective. The test is based on the Interval Testing Procedure, and it selects the intervals where the groups significantly differ. The procedure is applied to the 3D kinematic motion of the knee joint collected during a functional task (one-leg hop) performed by three groups of individuals.

• 9.
Umeå University, Faculty of Science and Technology, Department of Mathematics and Mathematical Statistics.
On the error of the Monte Carlo pricing method for Asian option (2008). In: Journal of Numerical and Applied Mathematics, ISSN 0868-6912, Vol. 96, no 1, 1-10 p. Article in journal (Refereed)

We consider a Monte Carlo method to price a continuous arithmetic Asian option with a given precision. Piecewise constant approximation and plain simulation are used for a wide class of models based on Lévy processes. We give bounds on the possible discretization and simulation errors. The numbers of discretization points and simulations sufficient to obtain the requested accuracy are derived. To demonstrate the general approach, the Black-Scholes model is studied in more detail. We treat the case of continuous averaging and starting time zero, but the obtained results can be applied to the discrete case and generalized to any time before the execution date. Some numerical experiments and a comparison to a PDE-based method are also presented.
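The pricing scheme the abstract describes (piecewise-constant discretization of the average plus plain simulation) can be sketched for the Black-Scholes special case. All parameter values below are illustrative, not taken from the paper.

```python
import numpy as np

def asian_call_mc(s0=100.0, k=100.0, r=0.05, sigma=0.2, t=1.0,
                  n_steps=50, n_paths=20000, seed=1):
    """Monte Carlo price of an arithmetic-average Asian call under Black-Scholes.

    The continuous average is approximated by a discrete average over n_steps
    equidistant monitoring points; returns the price and its standard error.
    """
    rng = np.random.default_rng(seed)
    dt = t / n_steps
    z = rng.standard_normal((n_paths, n_steps))
    # exact GBM increments on the time grid
    log_paths = np.cumsum((r - 0.5 * sigma**2) * dt
                          + sigma * np.sqrt(dt) * z, axis=1)
    paths = s0 * np.exp(log_paths)
    payoff = np.maximum(paths.mean(axis=1) - k, 0.0)
    disc = np.exp(-r * t)
    price = disc * payoff.mean()
    std_err = disc * payoff.std(ddof=1) / np.sqrt(n_paths)
    return price, std_err
```

Increasing `n_paths` shrinks the statistical error while `n_steps` controls the discretization bias, mirroring the two error sources bounded in the paper.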

• 10.
Umeå University, Faculty of Science and Technology, Department of Mathematics and Mathematical Statistics.
Stratified Monte Carlo quadrature for continuous random fields (2015). In: Methodology and Computing in Applied Probability, ISSN 1387-5841, E-ISSN 1573-7713, Vol. 17, no 1, 59-72 p. Article in journal (Refereed)

We consider the problem of numerical approximation of integrals of random fields over a unit hypercube. We use a stratified Monte Carlo quadrature and measure the approximation performance by the mean squared error. The quadrature is defined by a finite number of stratified randomly chosen observations, with the partition generated by a rectangular grid (or design). We study the class of locally stationary random fields whose local behavior is like a fractional Brownian field in the mean square sense, and find the asymptotic approximation accuracy for a sequence of designs for a large number of observations. For the Hölder class of random functions, we provide an upper bound for the approximation error. Additionally, for a certain class of isotropic random functions with an isolated singularity at the origin, we construct a sequence of designs eliminating the effect of the singularity point.
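A minimal sketch of the stratified quadrature on a rectangular grid, with a deterministic integrand standing in for a realization of the random field; the grid size and test function are arbitrary choices for illustration.

```python
import numpy as np

def stratified_mc(f, grid=20, seed=0):
    """Stratified Monte Carlo quadrature over the unit square [0,1]^2.

    The square is partitioned by a grid x grid rectangular design and one
    point is drawn uniformly inside each stratum; the estimator is the
    average of f over those points.
    """
    rng = np.random.default_rng(seed)
    h = 1.0 / grid
    i, j = np.meshgrid(np.arange(grid), np.arange(grid), indexing="ij")
    x = (i.ravel() + rng.random(grid * grid)) * h   # one uniform point per cell
    y = (j.ravel() + rng.random(grid * grid)) * h
    return f(x, y).mean()

def plain_mc(f, n, seed=0):
    """Plain Monte Carlo with the same total budget, for comparison."""
    rng = np.random.default_rng(seed)
    x, y = rng.random(n), rng.random(n)
    return f(x, y).mean()

f = lambda x, y: np.sin(np.pi * x) * y   # smooth stand-in integrand
true_val = 1.0 / np.pi                   # exact integral of f over [0,1]^2
```

For a smooth integrand the stratified estimator is markedly more accurate than plain Monte Carlo at the same number of evaluations, which is the effect the paper quantifies asymptotically.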

• 11.
Umeå University, Faculty of Science and Technology, Department of Mathematics and Mathematical Statistics.
Piecewise multilinear interpolation of a random field (2013). In: Advances in Applied Probability, ISSN 0001-8678, E-ISSN 1475-6064, Vol. 45, no 4, 945-959 p. Article in journal (Refereed)

We consider a piecewise-multilinear interpolation of a continuous random field on a d-dimensional cube. The approximation performance is measured using the integrated mean square error. The piecewise-multilinear interpolator is defined by N field observations on a grid of locations (or design). We investigate the class of locally stationary random fields whose local behavior is like a fractional Brownian field, in the mean square sense, and find the asymptotic approximation accuracy for a sequence of designs for large N. Moreover, for certain classes of continuous and continuously differentiable fields, we provide the upper bound for the approximation accuracy in the uniform mean square norm.
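The interpolator itself is easy to sketch. Below is a piecewise-bilinear (d = 2) version on a uniform grid, with a smooth deterministic function standing in for a realization of the field; the empirical mean squared error is computed on a fine evaluation grid.

```python
import numpy as np

def bilinear_interp(grid, F, x, y):
    """Piecewise-bilinear interpolation of values F observed on grid x grid."""
    n = len(grid)
    ix = np.clip(np.searchsorted(grid, x) - 1, 0, n - 2)
    iy = np.clip(np.searchsorted(grid, y) - 1, 0, n - 2)
    tx = (x - grid[ix]) / (grid[ix + 1] - grid[ix])
    ty = (y - grid[iy]) / (grid[iy + 1] - grid[iy])
    return ((1 - tx) * (1 - ty) * F[ix, iy] + tx * (1 - ty) * F[ix + 1, iy]
            + (1 - tx) * ty * F[ix, iy + 1] + tx * ty * F[ix + 1, iy + 1])

def mean_sq_error(n, f, m=200):
    """Empirical mean squared error of the interpolator built on an n x n design."""
    grid = np.linspace(0.0, 1.0, n)
    F = f(*np.meshgrid(grid, grid, indexing="ij"))
    xs = np.linspace(0.0, 1.0, m)
    X, Y = np.meshgrid(xs, xs, indexing="ij")
    approx = bilinear_interp(grid, F, X.ravel(), Y.ravel())
    return np.mean((f(X.ravel(), Y.ravel()) - approx) ** 2)

f = lambda x, y: np.sin(2 * np.pi * x) * np.cos(np.pi * y)  # smooth stand-in field
```

Refining the design grid should shrink the error at the rate dictated by the field's smoothness, the quantity the paper characterizes asymptotically.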

• 12.
Umeå University, Faculty of Science and Technology, Department of Mathematics and Mathematical Statistics.
Politecnico di Milano, Italy. Oslo University, Norway.
Clustering misaligned dependent curves applied to varved lake sediment for climate reconstruction (2017). In: Stochastic environmental research and risk assessment (Print), ISSN 1436-3240, E-ISSN 1436-3259, Vol. 31, 71-85 p. Article in journal (Refereed)

In this paper we introduce a novel functional clustering method, the Bagging Voronoi K-Medoid Alignment (BVKMA) algorithm, which simultaneously clusters and aligns spatially dependent curves. It is a nonparametric statistical method that does not rely on distributional or dependency structure assumptions. The method is motivated by and applied to varved (annually laminated) sediment data from lake Kassjön in northern Sweden, aiming to draw inferences about past environmental and climate changes. The resulting clusters and their time dynamics show great potential for seasonal climate interpretation, in particular for winter climate changes.

• 13.
Uppsala University, Humanistisk-samhällsvetenskapliga vetenskapsområdet, Faculty of Social Sciences, Department of Information Science.
25 years of applied statistics (1998). In: JOURNAL OF APPLIED STATISTICS, ISSN 0266-4763, Vol. 25, no 1, 3-22 p. Article in journal (Refereed)
• 14.
Uppsala University, Humanistisk-samhällsvetenskapliga vetenskapsområdet, Faculty of Social Sciences, Department of Information Science.
Risk for endometrial cancer following breast cancer: A prospective study in Sweden (1997). In: Cancer Causes & Control, Vol. 8, 821-827 p. Article in journal (Refereed)
• 15.
Uppsala University, Humanistisk-samhällsvetenskapliga vetenskapsområdet, Faculty of Social Sciences, Department of Information Science.
A prospective study of smoking and risk of prostate cancer (1996). In: Int J Cancer, Vol. 67, 764-768 p. Article in journal (Refereed)
• 16.
Uppsala University, Humanistisk-samhällsvetenskapliga vetenskapsområdet, Faculty of Social Sciences, Department of Information Science.
Blood transfusion and non-Hodgkin lymphoma: Lack of association (1997). In: ANNALS OF INTERNAL MEDICINE, ISSN 0003-4819, Vol. 127, no 5, 365- p. Article in journal (Refereed)
• 17.
Blekinge Institute of Technology, School of Engineering.
Statistical Modelling and the Fokker-Planck Equation (2008). Independent thesis Advanced level (degree of Master (One Year)). Student thesis

A stochastic process, sometimes called a random process, is the counterpart to a deterministic process. A stochastic process is a random field whose domain is a region of space; in other words, a random function whose arguments are drawn from a range of continuously changing values. Instead of dealing with only one possible 'reality' of how the process might evolve over time (as is the case, for example, for solutions of an ordinary differential equation), in a stochastic or random process there is some indeterminacy in its future evolution, described by probability distributions. This means that even if the initial condition (or starting point) is known, there are many paths the process might take, some more probable than others. In discrete time, a stochastic process amounts to a sequence of random variables known as a time series. Over the past decades, the problems of synergetics have concerned the study of macroscopic quantitative changes of systems belonging to various disciplines such as natural science, physical science and electrical engineering. When such transitions from one state to another take place, fluctuations (i.e. random processes) may play an important role. Fluctuations are very common in a large number of fields, and nearly every system is subjected to complicated external or internal influences that are often termed noise or fluctuations. The Fokker-Planck equation has turned out to provide a powerful tool with which the effects of fluctuations or noise close to transition points can be adequately treated. For this reason, in this thesis work, analytical and numerical methods of solving the Fokker-Planck equation, its derivation and some of its applications are carefully treated. Emphasis is on both the one-variable and N-dimensional cases.

• 18.
Stockholm University, Faculty of Social Sciences, Department of Statistics.
Empirical properties of closed- and open-economy DSGE models of the Euro area (2008). In: Macroeconomic dynamics (Print), ISSN 1365-1005, E-ISSN 1469-8056, Vol. 12, 2-19 p. Article in journal (Refereed)

In this paper, we compare the empirical properties of closed- and open-economy DSGE models estimated on Euro area data. The comparison is made along several dimensions; we examine the models in terms of their marginal likelihoods, forecasting performance, variance decompositions, and their transmission mechanisms of monetary policy.

• 19.
Stockholm University, Faculty of Social Sciences, Department of Statistics.
Forecasting performance of an open economy DSGE model (2007). In: Econometric Reviews, ISSN 0747-4938, E-ISSN 1532-4168, Vol. 26, no 2-4, 289-328 p. Article in journal (Refereed)

This paper analyzes the forecasting performance of an open economy dynamic stochastic general equilibrium (DSGE) model, estimated with Bayesian methods, for the Euro area during 1994Q1-2002Q4. We compare the DSGE model and a few variants of this model to various reduced-form forecasting models such as vector autoregressions (VARs) and vector error correction models (VECMs), estimated both by maximum likelihood and by two different Bayesian approaches, and to traditional benchmark models, e.g., the random walk. The accuracy of point forecasts, interval forecasts and the predictive distribution as a whole is assessed in an out-of-sample rolling event evaluation using several univariate and multivariate measures. The results show that the open economy DSGE model compares well with more empirical models, and thus that the tension between rigor and fit in older generations of DSGE models is no longer present. We also critically examine the role of Bayesian model probabilities and other frequently used low-dimensional summaries, e.g., the log determinant statistic, as measures of overall forecasting performance.

• 20.
Örebro University, Department of Business, Economics, Statistics and Informatics.
Utvärdering av granskningssystem för SCB:s undersökningar Kortperiodisk Sysselsättningsstatistik och Konjunkturstatistik över Vakanser (2007). Independent thesis Basic level (degree of Bachelor), 10 credits / 15 HE credits. Student thesis

In this study, the existing editing systems of the surveys Short-term Employment Statistics (KS) and Job Vacancy Statistics (KV) at Statistics Sweden have been evaluated with respect to their efficiency. Process data were produced and analyzed. The results indicate that many of the incoming questionnaires with suspected erroneous values are not corrected but are forced through even though the editing system did not accept the values. The existing editing system is more accurate for the KS survey, but both KS and KV could be edited more efficiently.

To evaluate the existing editing system further, a score function was used. Both completely unedited and completely edited material was available to the study, and both were fed into the score function. The weighted unedited value for each object was compared with the weighted edited value and related to the corresponding estimated industry total. The scored questionnaires were then ranked. The material was then analyzed to find a suitable threshold separating the material that "really" would have needed editing from that which could have been left untouched. Setting the threshold is difficult. Here it was done arbitrarily, based on the criteria that the error introduced into the estimates by not editing all material should be kept as low as possible, and that the number of questionnaires requiring manual editing by the production group should also be kept as low as possible. Here, too, the existing editing system turned out not to be as efficient as desired. When the results from this part of the evaluation were analyzed, problems caused by the questionnaire design were discovered. If the questionnaires were revised, the error introduced by not editing all material could be reduced considerably. By reducing the introduced error, the threshold could probably be set at a new level, which would reduce the extent of the editing even further.

What, then, might a more efficient editing system look like? In this study, the choice was to test significance editing on the KS survey. A score function was used here as well: it assigns each incoming questionnaire a score, and the scores are then ranked. After ranking, a limit, a threshold, is set; questionnaires with scores above the threshold are edited and corrected by the production group, while those below keep their original values. The score function compares the incoming unedited, weighted value with a weighted "expected" value and relates the difference to the estimated industry total. The difficulty often lies in finding a good expected value, a problem that constantly arises in sample surveys. The idea of significance editing is that the extent of editing should decrease and that the editing actually performed should affect the final result.

It was not easy to find a good expected value in the time available. Two problems quickly emerged: in the KS survey, no seasonal or trend factors are computed per variable, and a very large part of the sample was replaced in quarter 2 (to which this study was limited). As a consequence, about half of the objects in the sample cannot be followed back in time, since they were not part of the sample earlier. In the study, each stratum's mean was used as the expected value. The results show that the chosen expected value should not be used in practice, but it works well for illustrating how a more efficient editing procedure could be introduced in practice.
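The significance-editing score described in the abstract (the weighted unedited value compared with a weighted expected value, relative to the estimated industry total) can be sketched as follows; all figures are invented for the illustration.

```python
import numpy as np

# Invented example: "weight" is the sampling weight used to raise
# questionnaire values to a hypothetical industry total.
weight     = np.array([10.0, 10.0, 25.0,   5.0, 40.0])
y_reported = np.array([120.0, 80.0, 30.0, 500.0, 55.0])  # incoming, unedited values
y_expected = np.array([118.0, 79.0, 33.0, 210.0, 54.0])  # e.g. stratum means

domain_total = float(np.sum(weight * y_expected))  # estimated industry total

# score: impact of the suspected error on the estimate, relative to the total
scores = np.abs(weight * (y_reported - y_expected)) / domain_total

threshold = 0.05   # arbitrary cut-off for this sketch
flag_for_manual_editing = scores > threshold
```

Only questionnaires whose suspected errors would materially move the published total are sent to manual editing; the rest keep their reported values.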

• 21.
Örebro University, Orebro University School of Business, Örebro University, Sweden.
Ett försök till att statistiskt modellera matchutfall för fotbollens division 1 för herrar i Sverige (2012). Independent thesis Basic level (degree of Bachelor), 10 credits / 15 HE credits. Student thesis
• 22.
Uppsala University, Disciplinary Domain of Humanities and Social Sciences, Faculty of Social Sciences, Department of Statistics.
Forecasting GDP Growth, or How Can Random Forests Improve Predictions in Economics? (2015). Independent thesis Basic level (degree of Bachelor), 10 credits / 15 HE credits. Student thesis

GDP is used to measure the economic state of a country, and accurate forecasts of it are therefore important. Using the Economic Tendency Survey, we investigate forecasting quarterly GDP growth with the data mining technique Random Forest. Comparisons are made with a benchmark AR(1) model and an ad hoc linear model built on the most important variables suggested by the Random Forest. Evaluation by forecasting shows that the Random Forest makes the most accurate forecasts, supporting the theory that there are benefits to using Random Forests on economic time series.
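As a rough illustration of this kind of comparison, the block below pits a hand-rolled ensemble of bagged regression stumps (a drastic simplification of a Random Forest, used here to stay dependency-free) against an AR(1) benchmark on a synthetic series whose growth depends nonlinearly on a lagged indicator. The data-generating process is invented and has nothing to do with the Economic Tendency Survey.

```python
import numpy as np

rng = np.random.default_rng(7)

def fit_stump(X, y):
    """Best single split (feature, threshold, left mean, right mean) by SSE."""
    best = None
    for f in range(X.shape[1]):
        order = np.argsort(X[:, f])
        xs, ys = X[order, f], y[order]
        csum, csq = np.cumsum(ys), np.cumsum(ys**2)
        tot, totsq, n = csum[-1], csq[-1], len(ys)
        for i in range(1, n):
            if xs[i] == xs[i - 1]:
                continue
            sse_l = csq[i - 1] - csum[i - 1]**2 / i
            sse_r = (totsq - csq[i - 1]) - (tot - csum[i - 1])**2 / (n - i)
            if best is None or sse_l + sse_r < best[0]:
                best = (sse_l + sse_r, f, (xs[i] + xs[i - 1]) / 2,
                        csum[i - 1] / i, (tot - csum[i - 1]) / (n - i))
    return best[1:]

def bagged_stumps(X, y, n_trees=100):
    """Bootstrap-aggregated stumps, the simplest cousin of a Random Forest."""
    return [fit_stump(X[idx], y[idx])
            for idx in (rng.integers(0, len(y), len(y)) for _ in range(n_trees))]

def predict(trees, X):
    out = np.zeros(len(X))
    for f, thr, left, right in trees:
        out += np.where(X[:, f] <= thr, left, right)
    return out / len(trees)

# Hypothetical DGP: growth depends nonlinearly on a lagged survey indicator x.
n = 400
x = rng.standard_normal(n)
y = np.zeros(n)
for t in range(1, n):
    y[t] = np.tanh(3 * x[t - 1]) + 0.2 * y[t - 1] + 0.05 * rng.standard_normal()

X = np.column_stack([y[:-1], x[:-1]])   # features: lagged growth, lagged indicator
target = y[1:]
ntr = 300                                # train/test split
trees = bagged_stumps(X[:ntr], target[:ntr])
rf_mse = np.mean((predict(trees, X[ntr:]) - target[ntr:])**2)

# AR(1) benchmark fitted by least squares on lagged growth only
A = np.column_stack([np.ones(ntr), X[:ntr, 0]])
beta = np.linalg.lstsq(A, target[:ntr], rcond=None)[0]
ar_mse = np.mean((beta[0] + beta[1] * X[ntr:, 0] - target[ntr:])**2)
```

Because the AR(1) model cannot use the indicator at all, the tree ensemble wins easily here; the thesis's point is that the same can happen, less dramatically, on real survey data.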

• 23.
Linnaeus University, Faculty of Engineering and Technology, Department of Mathematics.
Extremal dependency: The GARCH(1,1) model and an Agent based model (2013). Independent thesis Advanced level (degree of Master (One Year)), 20 credits / 30 HE credits. Student thesis

This thesis focuses on stochastic processes and investigates those of their properties that are needed to define two tools, the extremal index and the extremogram. Both mathematical tools measure extremal dependency within random time series. Two different models are introduced and related properties are discussed. The probability function of the Agent based model is derived explicitly and strong stationarity is proven. Data sets for both processes are simulated, and clustering of the data is investigated with two different methods. Finally, an estimate of the extremogram is used to interpret dependency of extremes within the data.
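The sample extremogram can be sketched as the empirical conditional probability of an exceedance at lag h given one at time t. Below it is applied to an iid series and to an ARCH(1)-type series (an invented stand-in with volatility clustering, not the models of the thesis), where extremal dependence should appear.

```python
import numpy as np

def extremogram(x, lag, q=0.95):
    """Sample extremogram: estimate of P(X_{t+lag} > u | X_t > u), u the q-quantile."""
    u = np.quantile(x, q)
    hits = x > u
    return np.mean(hits[lag:] & hits[:-lag]) / np.mean(hits[:-lag])

rng = np.random.default_rng(3)
n = 20000
z = rng.standard_normal(n)

iid = z                      # no extremal dependence by construction

# ARCH(1)-type recursion: volatility clustering creates extremal dependence
arch = np.zeros(n)
for t in range(1, n):
    arch[t] = np.sqrt(0.2 + 0.7 * arch[t - 1]**2) * z[t]
```

For an iid series the extremogram at any positive lag is close to the marginal exceedance probability, while clustering of extremes pushes it well above that level.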

• 24.
KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematical Statistics.
Internal Market Risk Modelling for Power Trading Companies (2015). Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE credits. Student thesis

Since the financial crisis of 2008, risk awareness has increased in the financial sector. Companies are regulated with regard to risk exposure. These regulations are driven by the Basel Committee, which formulates broad supervisory standards and guidelines and recommends statements of best practice in banking supervision. Under these regulations, companies are subject to own funds requirements for market risks.

This thesis constructs an internal model for risk management that computes the regulatory capital requirements for market risks according to the "Capital Requirements Regulation" (CRR) and the "Fundamental Review of the Trading Book" (FRTB), respectively. The capital requirements according to CRR and FRTB are compared to show how the suggested move to an expected shortfall (ES) based model in FRTB will affect the capital requirements. All computations are performed with data provided by a power trading company, to make the results fit reality. When comparing the risk capital requirements according to CRR and FRTB for a power portfolio with only linear assets, the results show that the risk capital is higher using the value-at-risk (VaR) based model. This study shows that the changes in risk capital depend mainly on the different methods of calculating the risk capital according to CRR and FRTB, and only to a minor extent on the change of risk measure.
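The two risk measures being compared can be sketched on a historical P&L sample. The block below computes empirical VaR and ES at the FRTB-style 97.5% level from an invented heavy-tailed P&L series; it is a generic illustration, not the thesis's internal model.

```python
import numpy as np

def var_es(pnl, alpha=0.975):
    """Historical value-at-risk and expected shortfall of a P&L sample.

    VaR is the loss threshold exceeded with probability 1 - alpha;
    ES is the average loss beyond that threshold.
    """
    losses = -np.asarray(pnl, dtype=float)
    var = np.quantile(losses, alpha)
    es = losses[losses >= var].mean()
    return var, es

# Invented daily P&L sample: Student-t innovations mimic fat-tailed power prices.
rng = np.random.default_rng(42)
pnl = rng.standard_t(df=4, size=10000) * 1e4
```

ES is always at least as large as VaR at the same level, and the gap widens with tail heaviness, which is exactly why the FRTB's switch of risk measure matters for heavy-tailed portfolios.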

• 25.
Totalförsvarets Forskningsinstitut, FOI, Stockholm, Sweden.
Swedish National Forensic Centre (NFC), Linköping, Sweden.
Chemometrics comes to court: evidence evaluation of chem–bio threat agent attacks (2015). In: Journal of Chemometrics, ISSN 0886-9383, E-ISSN 1099-128X, Vol. 29, no 5, 267-276 p. Article in journal (Refereed)

Forensic statistics is a well-established scientific field whose purpose is to statistically analyze evidence in order to support legal decisions. It traditionally relies on methods that assume small numbers of independent variables and multiple samples. Unfortunately, such methods are less applicable when dealing with highly correlated multivariate data sets such as those generated by emerging high throughput analytical technologies. Chemometrics is a field that has a wealth of methods for the analysis of such complex data sets, so it would be desirable to combine the two fields in order to identify best practices for forensic statistics in the future. This paper provides a brief introduction to forensic statistics and describes how chemometrics could be integrated with its established methods to improve the evaluation of evidence in court.

The paper describes how statistics and chemometrics can be integrated by analyzing a previously known forensic data set composed of bacterial communities from fingerprints. The presented strategy can be applied in cases where chemical and biological threat agents have been illegally disposed of.

• 26.
Uppsala University, Disciplinary Domain of Humanities and Social Sciences, Faculty of Social Sciences, Department of Statistics.
Location-invariant and non-invariant tests for large dimensional covariance matrices under normality and non-normality (2014). Report (Other academic)

Test statistics for homogeneity, sphericity and identity of high-dimensional covariance matrices are presented under a wide variety of very general conditions when the dimension of the vector, $p$, may exceed the sample size, $n_i$, $i = 1, \ldots, g$. First, location-invariant tests are presented under the normality assumption, followed by robust versions obtained by replacing the normality assumption with a mild alternative multivariate model. The two types of tests are then presented in non-invariant form, again under normality and non-normality. The tests of homogeneity of covariance matrices are in all cases immediately supplemented by tests for sphericity and identity of the common covariance matrix under the null hypothesis. Both location-invariant and non-invariant tests are composed of estimators defined as $U$-statistics with kernels of different degrees. Hence, the asymptotic theory of $U$-statistics is employed to arrive at the limiting null and alternative distributions of the tests for all cases. These limit distributions are derived under a very mild and practically viable set of assumptions, mainly on the traces of the unknown covariance matrices. Finally, corrections and improvements of a few other tests are also presented.

• 27.
Uppsala University, Disciplinary Domain of Humanities and Social Sciences, Faculty of Social Sciences, Department of Statistics.
Tests for independence of vectors with large dimension. Manuscript (preprint) (Other academic)

Given a random sample of $n$ iid vectors, each of dimension $p$ and partitioned into $b$ sub-vectors of sizes $p_i$, $i = 1, \ldots, b$, location-invariant and non-invariant tests for independence of the sub-vectors are presented when $p_i$ may exceed $n$ and the distribution need not be normal. The tests are composed of unbiased and consistent estimators of the Frobenius norm of the difference between the null and alternative hypotheses. Asymptotic distributions of the tests are provided for $n, p_i \rightarrow \infty$, and their finite-sample performance is demonstrated through simulations. Some related and subsequent tests are briefly described. Relations of the proposed tests to certain multivariate measures, which are of interest in their own right, are also discussed.

• 28.
Uppsala University, Disciplinary Domain of Humanities and Social Sciences, Faculty of Social Sciences, Department of Statistics.
Testing homogeneity of several covariance matrices and multi-sample sphericity for high-dimensional data under non-normality2017In: Communications in Statistics - Theory and Methods, ISSN 0361-0926, E-ISSN 1532-415X, Vol. 46, no 8, 3738-3753 p.Article in journal (Refereed)

A test for homogeneity of g ≥ 2 covariance matrices is presented when the dimension, p, may exceed the sample size, n_i, i = 1, ..., g, and the populations may not be normal. Under some mild assumptions on the covariance matrices, the asymptotic distribution of the test is shown to be normal when n_i, p → ∞. Under the null hypothesis, the test is extended to assess whether the common covariance matrix has a specified structure, including sphericity. The theory of U-statistics is employed in constructing the tests and deriving their limits. Simulations are used to show the accuracy of the tests.

• 29.
Linköping University, Department of Mathematics, Mathematical Statistics . Linköping University, The Institute of Technology.
Linköping University, Department of Mathematics, Mathematical Statistics . Linköping University, The Institute of Technology. Linköping University, Department of Mathematics, Mathematical Statistics . Linköping University, The Institute of Technology.
A U-statistics Based Approach to Mean Testing for High Dimensional Multivariate Data Under Non-normality2011Report (Other academic)

A test statistic is considered for testing a hypothesis about the mean vector of multivariate data, when the dimension of the vector, p, may exceed the number of vectors, n, and the underlying distribution need not be normal. With n, p large, and under mild assumptions, the statistic is shown to asymptotically follow a normal distribution. A by-product of the paper is the approximate distribution of a quadratic form, based on a reformulation of the well-known Box's approximation, under a high-dimensional set-up.

• 30.
Linköping University, Department of Mathematics, Mathematical Statistics . Linköping University, The Institute of Technology.
Linköping University, Department of Mathematics, Mathematical Statistics . Linköping University, The Institute of Technology. Department of Energy and Technology, Swedish University of Agricultural Sciences, SE-750 07 Uppsala, Sweden.
Some Tests of Covariance Matrices for High Dimensional Multivariate Data2011Report (Other academic)

Test statistics for sphericity and identity of the covariance matrix are presented, when the data are multivariate normal and the dimension, p, can exceed the sample size, n. Using the asymptotic theory of U-statistics, the test statistics are shown to follow an approximate normal distribution for large p, also when p >> n. The statistics are derived under very general conditions, particularly avoiding any strict assumptions on the traces of the unknown covariance matrix; nor is any relationship between n and p assumed. The accuracy of the statistics is shown through simulation results, particularly emphasizing the case when p can be much larger than n. The validity of the assumptions commonly used in high-dimensional set-ups is also briefly discussed.

• 31.
Uppsala University, Disciplinary Domain of Humanities and Social Sciences, Faculty of Social Sciences, Department of Statistics.
Tests for high-dimensional covariance matrices using the theory of U-statistics2015In: Journal of Statistical Computation and Simulation, ISSN 0094-9655, E-ISSN 1563-5163, Vol. 85, no 13, 2619-2631 p.Article in journal (Refereed)

Test statistics for sphericity and identity of the covariance matrix are presented, when the data are multivariate normal and the dimension, p, can exceed the sample size, n. Under certain mild conditions mainly on the traces of the unknown covariance matrix, and using the asymptotic theory of U-statistics, the test statistics are shown to follow an approximate normal distribution for large p, also when p >> n. The accuracy of the statistics is shown through simulation results, particularly emphasizing the case when p can be much larger than n. A real data set is used to illustrate the application of the proposed test statistics.
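The trace-based construction behind such tests can be sketched as follows. This is an illustrative numpy toy, not the paper's exact statistics: it computes the standard (Srivastava-type) bias-corrected, unbiased estimators of tr(Σ) and tr(Σ²) under normality, whose ratio measures sphericity and remains usable when p >> n.

```python
import numpy as np

def trace_estimators(X):
    """Estimate tr(Sigma) and tr(Sigma^2) from an n x p data matrix X.

    tr(S) is unbiased for tr(Sigma); the second estimator applies the
    standard bias correction so it stays unbiased for tr(Sigma^2) under
    normality, even when p greatly exceeds n.
    """
    n, p = X.shape
    S = np.cov(X, rowvar=False)          # sample covariance, (n-1) denominator
    t1 = np.trace(S)
    t2 = (n - 1)**2 / ((n - 2) * (n + 1)) * (
        np.trace(S @ S) - np.trace(S)**2 / (n - 1))
    return t1, t2

rng = np.random.default_rng(1)
n, p = 50, 400                           # p >> n is allowed
X = rng.standard_normal((n, p))          # Sigma = I: tr(Sigma) = tr(Sigma^2) = p
t1, t2 = trace_estimators(X)
sphericity = (t2 / p) / (t1 / p)**2      # equals 1 iff Sigma is spherical
```

Under the spherical null used in the simulation, both `t1 / p` and the `sphericity` ratio should be close to 1.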

• 32.
Uppsala University, Disciplinary Domain of Humanities and Social Sciences, Faculty of Social Sciences, Department of Statistics.
Tests of Covariance Matrices for High Dimensional Multivariate Data Under Non Normality2015In: Communications in Statistics - Theory and Methods, ISSN 0361-0926, E-ISSN 1532-415X, Vol. 44, no 7, 1387-1398 p.Article in journal (Refereed)

Ahmad et al. (in press) presented test statistics for sphericity and identity of the covariance matrix of a multivariate normal distribution when the dimension, p, exceeds the sample size, n. In this note, we show that their statistics are robust to the normality assumption, when normality is replaced with certain mild assumptions on the traces of the covariance matrix. Under such assumptions, the test statistics are shown to follow the same asymptotic normal distribution as under normality for large p, also when p >> n. The asymptotic normality is proved using the theory of U-statistics, and is based on very general conditions, particularly avoiding any relationship between n and p.

• 33.
Swedish University of Agricultural Sciences, Uppsala, Sweden and Department of Statistics, Uppsala University, Sweden.
Linköping University, Department of Mathematics, Mathematical Statistics . Linköping University, The Institute of Technology. Linköping University, Department of Mathematics, Mathematical Statistics . Linköping University, The Institute of Technology.
A note on mean testing for high dimensional multivariate data under non-normality2013In: Statistica neerlandica (Print), ISSN 0039-0402, E-ISSN 1467-9574, Vol. 67, no 1, 81-99 p.Article in journal (Refereed)

A test statistic is considered for testing a hypothesis about the mean vector of multivariate data, when the dimension of the vector, p, may exceed the number of vectors, n, and the underlying distribution need not be normal. With n, p → ∞, under mild assumptions, and without assuming any relationship between n and p, the statistic is shown to asymptotically follow a chi-square distribution. A by-product of the paper is the approximate distribution of a quadratic form, based on a reformulation of the well-known Box's approximation, under a high-dimensional set-up. Using a classical limit theorem, the approximation is further extended to an asymptotic normal limit under the same high-dimensional set-up. Simulation results, generated under different parameter settings, are used to show the accuracy of the approximation for moderate n and large p.
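The Box-type approximation of a quadratic form mentioned in the abstract can be illustrated generically (a sketch of the classical two-moment match, not the paper's exact statistic): a quadratic form Q = x'Ax with x ~ N_p(0, Σ) is approximated by θ·χ²_d, with θ and d chosen so the first two moments agree.

```python
import numpy as np

def box_params(A, Sigma):
    """Two-moment (Box / Welch-Satterthwaite type) match for Q = x'Ax,
    x ~ N(0, Sigma): approximate Q by theta * chi2_d with
      theta = tr((A Sigma)^2) / tr(A Sigma),
      d     = tr(A Sigma)^2 / tr((A Sigma)^2),
    so that E[theta chi2_d] = tr(A Sigma) and Var = 2 tr((A Sigma)^2)."""
    M = A @ Sigma
    t1 = np.trace(M)
    t2 = np.trace(M @ M)
    return t2 / t1, t1**2 / t2

# Monte-Carlo check with A = I and Sigma = diag(1, ..., 10)
rng = np.random.default_rng(0)
p = 10
Sigma = np.diag(np.arange(1.0, p + 1.0))
A = np.eye(p)
theta, d = box_params(A, Sigma)
x = rng.multivariate_normal(np.zeros(p), Sigma, size=20000)
Q = np.einsum('ij,jk,ik->i', x, A, x)    # one quadratic form per sample
```

The simulated mean and variance of Q should match θ·d and 2·θ²·d up to Monte-Carlo error.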

• 34.
Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Mathematics, Mathematical Statistics.
Brownian Motions and Scaling Limits of Random Trees2011Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE creditsStudent thesis
• 35.
KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematical Statistics.
Importance Sampling for Least-Square Monte Carlo Methods2016Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE creditsStudent thesis

Pricing American-style options is challenging due to early exercise opportunities. The conditional expectation in the Snell envelope, known as the continuation value, is approximated by basis functions in the Least-Square Monte Carlo algorithm, giving a robust estimate of the option price. By a change of measure in the underlying geometric Brownian motion using Importance Sampling, the variance of the price estimate can be reduced by up to a factor of nine. Finding the estimator with minimal variance requires careful choice of the reference price, so as not to introduce bias in the estimator. A stochastic algorithm is used to find the optimal drift that minimizes the second moment in the expression for the variance after the change of measure. Importance Sampling shows significant variance reduction in comparison with standard Least-Square Monte Carlo, and may be an even more attractive alternative for more complex instruments with early exercise opportunities.
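The baseline algorithm can be sketched in a few lines. This is a minimal Longstaff-Schwartz illustration without the importance-sampling layer, with made-up parameter values and a simple quadratic polynomial basis; it is not the thesis's implementation.

```python
import numpy as np

def lsmc_american_put(S0, K, r, sigma, T, steps, n_paths, seed=0):
    """Least-Square Monte Carlo (Longstaff-Schwartz) for an American put:
    simulate GBM paths, then work backwards, regressing the discounted
    continuation value on a polynomial basis of the spot price."""
    rng = np.random.default_rng(seed)
    dt = T / steps
    z = rng.standard_normal((n_paths, steps))
    increments = (r - 0.5 * sigma**2) * dt + sigma * np.sqrt(dt) * z
    S = S0 * np.exp(np.hstack([np.zeros((n_paths, 1)),
                               np.cumsum(increments, axis=1)]))
    cash = np.maximum(K - S[:, -1], 0.0)       # exercise value at maturity
    for t in range(steps - 1, 0, -1):
        cash *= np.exp(-r * dt)                # discount back one step
        itm = (K - S[:, t]) > 0.0              # regress on in-the-money paths only
        if itm.sum() > 3:
            coef = np.polyfit(S[itm, t], cash[itm], 2)   # quadratic basis
            continuation = np.polyval(coef, S[itm, t])
            exercise_now = (K - S[itm, t]) > continuation
            idx = np.flatnonzero(itm)[exercise_now]
            cash[idx] = K - S[idx, t]
        # out-of-the-money paths simply carry their discounted cashflow
    return np.exp(-r * dt) * cash.mean()

price = lsmc_american_put(S0=100, K=100, r=0.05, sigma=0.2, T=1.0,
                          steps=50, n_paths=20000)
```

For these parameters the American put is worth roughly 6.1 (the European value is about 5.57), so the LSMC estimate should land in that neighbourhood. Importance sampling would enter by drifting the Gaussian increments and reweighting each path with the corresponding likelihood ratio.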

• 36. Ahmed, S. Ejaz
Stockholm University, Faculty of Social Sciences, Department of Statistics.
Estimation of Several Intraclass Correlation Coefficients2015In: Communications in statistics. Simulation and computation, ISSN 0361-0918, E-ISSN 1532-4141, Vol. 44, no 9, 2315-2328 p.Article in journal (Refereed)

An intraclass correlation coefficient observed in several populations is estimated. The basis is a variance-stabilizing transformation, and it is shown that the intraclass correlation coefficient from any elliptical distribution should be transformed in the same way. Four estimators are compared: an estimator where the components of the vector of transformed intraclass correlation coefficients are estimated separately; an estimator based on a weighted average of these components; a pretest estimator, where the equality of the components is tested and the outcome of the test is used in the estimation procedure; and a James-Stein estimator which shrinks toward the mean.

• 37.
Blekinge Institute of Technology, School of Engineering.
Blekinge Institute of Technology, School of Engineering.
Optimal Solutions Of Fuzzy Relation Equations2010Independent thesis Advanced level (degree of Master (Two Years))Student thesis

Fuzzy relation equations are important for investigating optimal solutions of inverse problems, even though such solutions exist only under restrictive conditions. We discuss methods for finding the optimal (maximum and minimum) solutions of the inverse problem for fuzzy relation equations of the form $R \circ Q = T$, where R and Q are in turn treated as the unknown, using different operators (e.g. alpha, sigma). The aim of this study is to identify the best project among a set of candidates, depending on factors such as capital cost and risk management, in the field of civil engineering. To this end, two linguistic variables are introduced to deal with the uncertainty that arises in civil engineering problems. Alpha-composition is used to compute the solution of the fuzzy relation equation, and the projects are then evaluated by defuzzifying the obtained results. The usefulness of this approach in civil engineering is demonstrated with an example.
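The inverse problem above can be sketched with max-min composition. This is a small numpy illustration of the classical result that, when $R \circ Q = T$ is solvable for Q, its greatest solution is obtained with the Gödel alpha-operator; the matrices are made-up examples.

```python
import numpy as np

def maxmin_compose(R, Q):
    """Max-min composition: (R o Q)[i, k] = max_j min(R[i, j], Q[j, k])."""
    return np.max(np.minimum(R[:, :, None], Q[None, :, :]), axis=1)

def goedel_alpha(a, b):
    """Goedel alpha-operator: (a alpha b) = 1 if a <= b, else b."""
    return np.where(a <= b, 1.0, b)

def greatest_solution(R, T):
    """Greatest Q with R o Q = T, if any solution exists:
    Q[j, k] = min_i (R[i, j] alpha T[i, k])."""
    return np.min(goedel_alpha(R[:, :, None], T[:, None, :]), axis=0)

R = np.array([[0.8, 0.3],
              [0.5, 0.9]])
T = np.array([[0.6],
              [0.7]])
Q = greatest_solution(R, T)   # this instance is solvable, so R o Q recovers T
```

Composing R with the recovered Q reproduces T exactly, which is the check one performs to confirm that a solution exists at all.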

• 38.
Blekinge Institute of Technology, School of Engineering, Department of Telecommunication Systems.
Mobile Satellite Communications: Channel Characterization and Simulation2007Independent thesis Advanced level (degree of Master (One Year))Student thesis

• 39.
Blekinge Institute of Technology, School of Computing.
Blekinge Institute of Technology, School of Computing.
Evaluation of AODV and DSR Routing Protocols of Wireless Sensor Networks for Monitoring Applications2009Independent thesis Advanced level (degree of Master (Two Years))Student thesis

Sensor networks are increasingly deployed, either manually or randomly, to monitor physical environments in applications such as military, agriculture, medical, transport and industry settings. Among these, the most important application of wireless sensor networks is the monitoring of critical conditions, where information must be sensed and relayed during an emergency in the environment where the network is deployed. To respond within a fraction of a second to critical conditions such as explosions, fire or leaking toxic gases, the system must be fast enough; a major challenge for sensor networks is providing a fast, reliable and fault-tolerant channel to the sink (base station) that receives the events. The main focus of this thesis is to discuss and evaluate the performance of two routing protocols, Ad hoc On-Demand Distance Vector (AODV) and Dynamic Source Routing (DSR), for the monitoring of critical conditions, using the metrics throughput and end-to-end delay in different scenarios. On the basis of the simulation results, conclusions are drawn comparing the two routing protocols in terms of end-to-end delay and throughput.

• 40.
Halmstad University, School of Information Science, Computer and Electrical Engineering (IDE), Halmstad Embedded and Intelligent Systems Research (EIS), MPE-lab.
Halmstad University, School of Information Science, Computer and Electrical Engineering (IDE), Halmstad Embedded and Intelligent Systems Research (EIS), MPE-lab.
Detection of the Change Point and Optimal Stopping Time by Using Control Charts on Energy Derivatives2011Independent thesis Advanced level (degree of Master (One Year)), 40 credits / 60 HE creditsStudent thesis
• 41.
Umeå University, Faculty of Science and Technology, Department of Mathematics and Mathematical Statistics.
A Comparison of Tests for Ordered Alternatives With Application in Medicine1997Independent thesis Basic level (degree of Bachelor), 10 credits / 15 HE creditsStudent thesis

A situation frequently encountered in medical studies is the comparison of several treatments with a control. The problem is to determine whether or not a test drug has a desirable medical effect and/or to identify the minimum effective dose. In this Bachelor’s thesis, some of the methods used for testing hypotheses of ordered alternatives are reviewed and compared with respect to the power of the tests. Examples of multiple comparison procedures, maximum likelihood procedures, rank tests and different types of contrasts are presented and the properties of the methods are explored.

Depending on the degree of knowledge about the dose-responses, the aim of the study, and whether the test is parametric or non-parametric and distribution-free or not, different recommendations are given as to which of the tests should be used. Thus, there is no single test which can be applied in all experimental situations for testing all different alternative hypotheses.

• 42.
Örebro University, Swedish Business School at Örebro University.
Likelihood prediction for generalized linear mixed models under covariate uncertainty2010Manuscript (preprint) (Other academic)

This paper presents techniques of likelihood prediction for generalized linear mixed models. The method of likelihood prediction is explained through a series of examples, from a classical one to more complicated ones. The examples show that, in simple cases, likelihood prediction (LP) coincides with established best frequentist practice, such as the best linear unbiased predictor. The paper outlines a way to deal with covariate uncertainty while producing predictive inference. Using a Poisson errors-in-variables generalized linear model, it is shown that in complicated cases LP produces better results than existing methods.

• 43.
Örebro University, Swedish Business School at Örebro University.
Industry shocks and empirical evidences on defaults comovementsManuscript (preprint) (Other academic)

It is commonly agreed that credit defaults are correlated. However, the structure and magnitude of such dependence is not yet fully understood. This paper contributes to the current understanding of default comovement in the following way. Assuming that industries provide the basis of default comovement, it provides empirical evidence as to how such comovements can be modeled using correlated industry shocks. A generalized linear mixed model (GLMM) with correlated random effects is used to model the default comovement. It is also demonstrated how a GLMM with a complex correlation structure can be estimated in a very simple way. Empirical evidence is drawn by analyzing quarterly borrower-level credit history data obtained from two major Swedish banks over the period 1994-2000. The results show that, conditional on borrower-level accounting data and macro business cycle variables, defaults are correlated both within and between industries but not over time (quarters). It is also discussed how a GLMM for default correlation can be interpreted.

• 44.
Örebro University, Swedish Business School at Örebro University.
Feasible estimation of generalized linear mixed models (GLMM) with weak dependency between groups2010Manuscript (preprint) (Other academic)

This paper presents a two-step pseudo likelihood estimation technique for generalized linear mixed models with random effects that are correlated between groups. The core idea is to deal with the intractable integrals in the likelihood function by a multivariate Taylor approximation. The accuracy of the estimation technique is assessed in a Monte-Carlo study, and an application with a binary response variable is presented using a real data set on credit defaults from two Swedish banks. Thanks to the two-step estimation technique, the proposed algorithm outperforms conventional likelihood algorithms in terms of computational time.

• 45.
Örebro University, Swedish Business School at Örebro University.
Computation and application of likelihood prediction with generalized linear and mixed modelsManuscript (preprint) (Other academic)

This paper presents the computation of likelihood prediction for generalized linear and mixed models. The method of likelihood prediction is briefly discussed, and approximate formulae are provided to ease the computation of likelihood prediction with generalized linear models. For complicated prediction problems, simulation methods are suggested. An accompanying R add-in package carries out the computation of predictive inference for generalized linear and mixed models. Likelihood prediction is applied to the prediction of credit defaults using a real data set. The results show that the predictive likelihood can be a useful tool for predicting portfolio credit risk.

• 46.
Örebro University, Swedish Business School at Örebro University.
Feasible computation of generalized linear mixed models with application to credit risk modelling2010Doctoral thesis, comprehensive summary (Other academic)

This thesis deals with developing and testing feasible computational procedures to facilitate estimation of, and prediction with, the generalized linear mixed model (GLMM), with a view to applying them to large data sets. The work is motivated by an issue arising in credit risk modelling. We have access to a huge data set, consisting of about one million observations, on credit history obtained from two major Swedish banks. The principal research interest is to model the probability of credit defaults by incorporating the systematic dependencies among the default events. In order to model the dependent credit defaults we adopt the framework of the GLMM, which is a popular approach to modelling correlated binary data. However, existing computational procedures for the GLMM did not offer us the flexibility to incorporate the desired correlation structure of the default events. For feasible estimation of the GLMM we propose two estimation techniques: the fixed effects (FE) approach and the two-step pseudo likelihood approach (2PL). The preciseness of the estimation techniques and their computational advantages are studied by Monte-Carlo simulations and by applying them to credit risk modelling. Regarding prediction, we show how to apply the likelihood principle to carry out prediction with the GLMM. We also provide an R add-in package to facilitate predictive inference for the GLMM.

• 47.
Örebro University, Swedish Business School at Örebro University.
Dalarna University, SE 781 88 Borlange, Sweden.
Computationally feasible estimation of the covariance structure in generalized linear mixed models 2008In: Journal of Statistical Computation and Simulation, ISSN 0094-9655, E-ISSN 1563-5163, Vol. 78, no 12, 1229-1239 p.Article in journal (Refereed)

In this paper, we discuss how a regression model with a non-continuous response variable, which allows for dependency between observations, should be estimated when observations are clustered and measurements on the subjects are repeated. The cluster sizes are assumed to be large. We find that the conventional estimation technique suggested by the literature on generalized linear mixed models (GLMM) is slow and sometimes fails due to non-convergence and lack of memory on standard PCs. We suggest estimating the random effects as fixed effects by a generalized linear model and deriving the covariance matrix from these estimates. A simulation study shows that our proposal is feasible in terms of mean-square error and computation time. We recommend that our proposal be implemented in GLMM software, so that the estimation procedure can switch between the conventional technique and our proposal depending on the size of the clusters.
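The fixed-effects idea can be sketched in a small simulation. This is an illustrative numpy toy under assumed parameter values, not the authors' implementation: the cluster-specific random intercepts of a logistic mixed model are fitted as fixed-effect dummy variables with plain IRLS, and the variance component is then read off from the spread of those estimates.

```python
import numpy as np

def irls_logistic(X, y, iters=25):
    """Fit a logistic GLM by iteratively reweighted least squares (Newton)."""
    beta = np.zeros(X.shape[1])
    for _ in range(iters):
        eta = X @ beta
        mu = 1.0 / (1.0 + np.exp(-eta))
        w = mu * (1.0 - mu)                    # GLM working weights
        z = eta + (y - mu) / w                 # working response
        beta = np.linalg.solve(X.T @ (w[:, None] * X), X.T @ (w * z))
    return beta

rng = np.random.default_rng(2)
g, m = 40, 200                     # 40 clusters, 200 observations each (large clusters)
u = rng.normal(0.0, 1.0, g)        # true random intercepts, variance 1
x = rng.normal(size=(g, m))
prob = 1.0 / (1.0 + np.exp(-(0.5 * x + u[:, None])))
y = (rng.random((g, m)) < prob).astype(float)

# FE approach: replace the random intercept by one dummy column per cluster
X = np.hstack([x.reshape(-1, 1), np.kron(np.eye(g), np.ones((m, 1)))])
beta = irls_logistic(X, y.ravel())
u_hat = beta[1:]                   # estimated cluster effects
var_component = u_hat.var(ddof=1)  # crude estimate of the random-effect variance
```

With large clusters the dummy estimates are close to the true intercepts, so their sample variance approximates the variance component, which is the paper's motivation for the large-cluster assumption.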

• 48.
Dalarna University, School of Technology and Business Studies, Statistics.
Feasible computation of the generalized linear mixed models with application to credit risk modelling2010Doctoral thesis, monograph (Other academic)

This thesis deals with developing and testing feasible computational procedures to facilitate estimation of, and prediction with, the generalized linear mixed model (GLMM), with a view to applying them to large data sets. The work is motivated by an issue arising in credit risk modelling. We have access to a huge data set, consisting of about one million observations, on credit history obtained from two major Swedish banks. The principal research interest is to model the probability of credit defaults by incorporating the systematic dependencies among the default events. In order to model the dependent credit defaults we adopt the framework of the GLMM, which is a popular approach to modelling correlated binary data. However, existing computational procedures for the GLMM did not offer us the flexibility to incorporate the desired correlation structure of the default events. For feasible estimation of the GLMM we propose two estimation techniques: the fixed effects (FE) approach and the two-step pseudo likelihood approach (2PL). The preciseness of the estimation techniques and their computational advantages are studied by Monte-Carlo simulations and by applying them to credit risk modelling. Regarding prediction, we show how to apply the likelihood principle to carry out prediction with the GLMM. We also provide an R add-in package to facilitate predictive inference for the GLMM.

• 49.
Dalarna University, School of Technology and Business Studies, Statistics.
An efficient algorithm for the pseudo likelihood estimation of the generalized linear mixed models (GLMM) with correlated random effects2009Report (Other academic)

This paper presents a two-step pseudo likelihood estimation technique for generalized linear mixed models with correlated random effects. The proposed technique does not require reparameterisation of the model. A multivariate Taylor approximation is used to approximate the intractable integrals in the likelihood function of the GLMM. Based on the analytical expression for the estimator of the covariance matrix of the random effects, a condition is given for when such a covariance matrix can be estimated through the estimates of the random effects. An application with a binary response variable is presented using a real data set on credit defaults from two Swedish banks. Due to the two-step estimation technique, the proposed algorithm outperforms conventional pseudo likelihood algorithms in terms of computational time.

• 50.
Dalarna University, School of Technology and Business Studies, Statistics.
Industry shocks and empirical evidences on defaults comovement2009Report (Other academic)

It is commonly agreed that credit defaults are correlated. However, the mechanism of such dependence is not yet fully understood. This paper contributes to the current understanding of default comovement in the following way. Assuming that industries provide the basis of default comovement, it provides empirical evidence as to how such comovements can be modeled using correlated industry shocks. A generalized linear mixed model (GLMM) with correlated random effects is used to model the default comovement. Empirical evidence is drawn by analyzing borrower-level credit history data obtained from two major Swedish banks over the period 1994-2000. The results show that defaults are correlated both within and between industries but not over time (quarters). It is also discussed how a GLMM for default correlation can be interpreted.
