Portfolio managers have a strong interest in detecting high-performing stocks early. Identifying outperforming stocks has long been of interest from both a research and a financial point of view. Quantitative methods for predicting stock movements have been widely studied in diverse contexts, some with promising results. The algorithms underlying such prediction models include, to name a few, support vector machines, tree-based methods, and regression models, each of which can carry different predictive power. Most previous research focuses on indices such as the S&P 500 or on large-cap stocks, while small- and micro-cap stocks have been examined to a lesser extent. Stocks of this kind also tend to be highly volatile, with prospects that can be difficult to assess. This study examines to what extent widely studied quantitative methods such as random forests, support vector machines, and logistic regression can produce accurate predictions of stock price direction on a quarterly and a yearly basis. The problem is modeled as a binary classification task: predicting whether a stock achieves a return above or below a benchmark index. The focus lies on Asian small- and micro-cap stocks. The study concludes that the random forest yields the highest accuracy for the yearly binary prediction, 69.64%, and that all three models produced higher accuracy on the yearly task than on the quarterly one. Although the statistical power of the models can be deemed adequate, more extensive studies are desirable to examine whether other models or variables can increase prediction accuracy for small- and micro-cap stocks.
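The labelling step described above, turning returns into a binary target for the classifiers, can be sketched as follows. The function names and the simple-return definition are our own illustrative assumptions, not the thesis's exact pipeline.

```python
import numpy as np

def simple_returns(prices):
    """Per-period returns from a price series."""
    prices = np.asarray(prices, dtype=float)
    return prices[1:] / prices[:-1] - 1.0

def outperformance_labels(stock_prices, index_prices):
    """1 if the stock beat the benchmark index that period, else 0."""
    return (simple_returns(stock_prices) > simple_returns(index_prices)).astype(int)

stock = [100, 110, 105, 126]      # toy quarterly closes
index = [1000, 1050, 1060, 1100]  # toy benchmark closes
print(outperformance_labels(stock, index).tolist())  # -> [1, 0, 1]
```

The resulting 0/1 vector is what a random forest, SVM, or logistic regression would be trained to predict from whatever features are chosen.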
Special partial matchings (SPMs) are a generalisation of Brenti's special matchings. Call a poset a pircon if every non-trivial principal order ideal is finite and admits an SPM; pircons thus generalise Marietti's zircons. We prove that every open interval in a pircon is a PL ball or a PL sphere. It is then demonstrated that Bruhat orders on certain twisted identities and quasiparabolic W-sets constitute pircons. Together, these results extend a result of Can, Cherniavsky, and Twelbeck, prove a conjecture of Hultman, and confirm a claim of Rains and Vazirani.
The schedule for the jobs in a real-time system can have a huge impact on how the system behaves. Since real-time systems are common in safety-critical applications, it is important that the scheduling is done in a valid way. Furthermore, the performance of the applications can be enhanced by minimizing data latency and jitter. A challenge is that jobs in real-time systems usually have complex constraints, making it too time-consuming to minimize data latency and jitter to optimality. The purpose of this report is to investigate the possibility of creating high-quality schedules using heuristics, with the goal of keeping the computational time under one minute. This is done by comparing three different algorithms on real scheduling instances provided by the company Arcticus. The first algorithm is a greedy heuristic, the second a local search, and the third a metaheuristic, simulated annealing. The results indicate that data latency can be reduced while keeping the computational time below one minute.
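A minimal simulated-annealing sketch in the spirit of the third algorithm compared in the report. The swap neighbourhood and the toy ordering objective are illustrative stand-ins for the real Arcticus instances and their latency/jitter objective.

```python
import math
import random

def anneal(cost, order, t0=10.0, cooling=0.995, steps=5000, seed=1):
    """Simulated annealing over job orderings with a swap neighbourhood."""
    rng = random.Random(seed)
    cur = list(order)
    best = list(cur)
    t = t0
    for _ in range(steps):
        cand = list(cur)
        i, j = rng.randrange(len(cand)), rng.randrange(len(cand))
        cand[i], cand[j] = cand[j], cand[i]           # propose a swap
        delta = cost(cand) - cost(cur)
        # always accept improvements; accept uphill moves with prob e^(-delta/t)
        if delta < 0 or rng.random() < math.exp(-delta / max(t, 1e-9)):
            cur = cand
            if cost(cur) < cost(best):
                best = list(cur)
        t *= cooling                                   # geometric cooling
    return best

# toy cost: jobs should run in index order (0, 1, 2, ...)
cost = lambda seq: sum(abs(pos - job) for pos, job in enumerate(seq))
schedule = anneal(cost, [4, 2, 0, 3, 1])
```

With a real instance, `cost` would evaluate data latency and jitter of the candidate schedule under the jobs' timing constraints.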
Estimating the risk of a loss occurring for policyholders is a difficult task in the insurance industry. It is an even more difficult task to price that risk for reinsurance companies, which insure the primary insurers. Insurance bought by an insurance company, the cedent, from another insurance company, the reinsurer, is called treaty reinsurance, and it is the main focus of this thesis. A very common risk to insure is the risk of fire in municipal and commercial properties, and this is the risk priced here. The thesis evaluates Länsförsäkringar AB's current pricing model, which calculates the risk premium for Risk XL contracts, with the goal of finding areas of improvement for tail-risk pricing. The risk premium is commonly calculated with one of three types of pricing model: experience rating, exposure rating, and frequency-severity rating. This thesis focuses on frequency-severity pricing, a model that assumes independence between the frequency and the severity of losses and therefore splits the two into separate models; it is very commonly used when pricing Risk XL contracts. The risk premium is calculated using loss data from two insurance companies, one Norwegian and one Finnish. The main approach is extreme value theory: the method of moments is used to model the frequency of losses, and a peaks-over-threshold model to model their severity. For the frequency model, two distributions are compared, the Poisson and the negative binomial. Several distributions can be used to model the severity of losses; to evaluate which fits best, two goodness-of-fit tests are applied, the Kolmogorov-Smirnov and the Anderson-Darling test.
The peaks-over-threshold model is used together with the generalized Pareto distribution. With the help of the Hill estimator we calculate a threshold $u$, which governs where the Pareto tail begins. The remaining parameters of the generalized Pareto distribution are estimated by maximum likelihood and by least squares. Lastly, the bootstrap is used to estimate the uncertainty in the price calculated from the estimated parameters. From this, empirical percentiles are computed and set as guidelines for the interval in which the risk premium should lie in order for both data sets to be considered fairly priced.
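Two of the ingredients mentioned above, the Hill estimator of the tail index and the exceedances over a threshold $u$, can be sketched as below. The function names and the toy Pareto sample are our own; the thesis works with real Norwegian and Finnish fire-loss data.

```python
import numpy as np

def hill_estimator(losses, k):
    """Hill estimate of the tail index, based on the k largest losses."""
    x = np.sort(np.asarray(losses, dtype=float))[::-1]   # descending order
    return np.mean(np.log(x[:k])) - np.log(x[k])

def exceedances(losses, u):
    """Peaks over the threshold u, shifted to start at zero."""
    losses = np.asarray(losses, dtype=float)
    return losses[losses > u] - u

rng = np.random.default_rng(0)
# heavy-tailed toy losses: exact Pareto samples with alpha = 2 (tail index 0.5)
losses = (1.0 / rng.uniform(size=2000)) ** 0.5
gamma = hill_estimator(losses, k=200)   # should be near 1/alpha = 0.5
peaks = exceedances(losses, u=5.0)      # data for fitting the GPD tail
```

In the peaks-over-threshold approach, `peaks` is the sample to which the generalized Pareto distribution is then fitted, e.g. by maximum likelihood.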
In this thesis a known heuristic for decreasing a node's centrality scores while maintaining its influence, called ROAM, is compared with a modified version specifically designed to decrease eigenvector centrality. The performance of these heuristics is also tested against the Shapley values of a cooperative game played over the considered network, where the game is such that influential nodes receive higher Shapley values. The modified heuristic performed at least as well as the original ROAM, and in some instances better (especially on the terrorist network behind the World Trade Center attacks). Both heuristics increased the influence score of a given target node when applied consecutively to the WTC network, and consequently the Shapley values increased as well. The Shapley value of the game considered in this thesis therefore seems well suited for discovering individuals who are assumed to be actively trying to evade social network analysis.
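The quantity the modified heuristic targets, eigenvector centrality, can be computed by power iteration on the adjacency matrix. The 5-node graph and the removed edge below are our own toy example of a ROAM-style move (disconnecting the target node from a neighbour), not the WTC network analysed in the thesis.

```python
import numpy as np

def eigenvector_centrality(adj, iters=500):
    """Power iteration on the adjacency matrix; returns the unit eigenvector."""
    a = np.asarray(adj, dtype=float)
    v = np.ones(len(a))
    for _ in range(iters):
        v = a @ v
        v = v / np.linalg.norm(v)   # renormalise each step
    return v

def graph(edges, n=5):
    a = np.zeros((n, n))
    for i, j in edges:
        a[i, j] = a[j, i] = 1.0
    return a

edges = [(0, 1), (0, 2), (0, 3), (1, 2), (2, 3), (3, 4)]
before = eigenvector_centrality(graph(edges))
# ROAM-style move: disconnect target node 0 from one of its neighbours
after = eigenvector_centrality(graph([e for e in edges if e != (0, 1)]))
# node 0's centrality drops after the removal, while it keeps two contacts
```

A full ROAM step would also add edges among the target's remaining neighbours to compensate for the lost influence; the sketch only shows the centrality-lowering half.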
Within demand forecasting, and specifically within e-commerce, the available data often contain erratic behaviour that is difficult to explain. This contradicts the common assumptions of classical approaches to time series analysis, yet classical and naive approaches are still commonly used. Machine learning could alleviate such problems. This thesis evaluates four models together with the Swedish fintech company QLIRO AB: an MLR (multiple linear regression) model, a classic Box-Jenkins model (SARIMAX), an XGBoost model, and an LSTM (long short-term memory) network. The data consist of aggregated total daily reservations by e-merchants within the Nordic market from 2014. Some pre-processing was required, and a smoothed version of the data set was created for comparison. Each model was constructed according to its specific requirements but with similar feature engineering. Evaluation was made on a monthly level with a forecast horizon of 30 days during 2021. The results show that the MLR and XGBoost models provide the most consistent results, with the added benefit of being easy to use. After these two, the LSTM network showed the best results for November and December on the original data set but the worst overall; it performed well on the smoothed data set, where it was comparable to the first two. The SARIMAX model performed worst of all the models considered and was not as easy to implement.
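The simplest of the four models, a multiple linear regression on lagged values, can be sketched as follows. The synthetic weekly-seasonal series and the seven-lag design are our own assumptions standing in for the confidential reservation data and the thesis's actual feature engineering.

```python
import numpy as np

def lag_matrix(y, lags):
    """Design matrix with rows [1, y(t-1), ..., y(t-lags)] and matching targets."""
    y = np.asarray(y, dtype=float)
    rows = [np.r_[1.0, y[t - lags:t][::-1]] for t in range(lags, len(y))]
    return np.array(rows), y[lags:]

t = np.arange(400)
y = 100 + 10 * np.sin(2 * np.pi * t / 7)      # noise-free weekly pattern
X, target = lag_matrix(y, lags=7)
beta, *_ = np.linalg.lstsq(X, target, rcond=None)  # ordinary least squares
pred = X @ beta
```

Because the toy series is a pure sinusoid plus a constant, seven lags plus an intercept reproduce it exactly; on real, erratic e-commerce data the residual is of course far from zero, which is the motivation for trying SARIMAX, XGBoost, and LSTM models.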
The purpose of our study is to contribute knowledge about how special-needs teachers work with language development for students who have special educational needs in mathematics (SUM) and at the same time have a language disorder. Previous research shows that a language disorder has a large impact on students' knowledge acquisition in mathematics when combined with SUM. The study was carried out with semi-structured in-depth interviews, and the results were analysed against previous research. We found that all the special-needs teachers we interviewed worked with language development in their mathematics teaching. Although the teachers had different levels of knowledge about language disorders, they all worked in similar ways with language development in mathematics.
The purpose of this qualitative study was to use semi-structured interviews with five mathematics teachers to investigate how, why, and when teachers choose to use digital tools in mathematical problem solving. This is relevant because of increasing demands to use digital tools in schools, and because research has pointed out the necessity of using digital tools in the right way in teaching. The theoretical framework TPACK (technological, pedagogical, and content knowledge) was used to analyse the teachers' interview answers, in order to make visible to what degree the teachers possess technological, pedagogical, and content knowledge, and what these kinds of knowledge mean for teaching that involves digital tools in connection with problem solving. The results of this study show that digital tools support the teacher in the classroom, since they can increase students' motivation for the subject. With digital tools the teacher can also more easily individualise the teaching, both for students who struggle with mathematics and for those who need more challenging tasks. However, the teachers must acquire the required knowledge of the digital tools on their own in order to teach mathematics as well as possible, which they experience as problematic. It is clear that the teachers' lack of training in digital tools is a problem, as they have to educate themselves whenever the need arises.
This essay presents some of Euclid's discoveries in mathematics, focusing on number theory and in particular on prime numbers. These discoveries have been of great importance for modern mathematics, but are sometimes taken for granted and seen as self-evident. We take a closer look at some of Euclid's discoveries and discuss what they looked like then and how they look today, with a focus on the mathematical theory.
This thesis primarily studies estimation-error problems and related issues arising in portfolio optimization. Given a set of available assets, a portfolio optimizer distributes a fixed amount of capital among them so as to optimize some cost function. In the Markowitz portfolio-selection framework, the variance of the portfolio return is taken as the measure of portfolio risk, and the aim is to find an allocation that minimizes this risk subject to a target mean (expected) return. If the mean return vector and the covariance matrix of returns of the underlying assets are known, the Markowitz problem has a closed-form solution.
In practice, however, the unknown expected returns and the covariance matrix of returns must be estimated from historical data, and this introduces estimation errors that render naive Markowitz theory impracticable in real-life portfolio applications. Estimators designed to remedy these problems are presented to show how such issues can be tackled.
In the concept-demonstration sections, the analysis starts with the price data of 40 stocks and the S&P index. The efficient frontier is introduced and used to show the effect of the estimators.
Finally, the concepts are implemented in the R programming language, with conclusions presented at the end.
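The closed-form solution mentioned above can be sketched as solving the KKT linear system of the equality-constrained Markowitz problem: minimise $w^{T}\Sigma w$ subject to $\mu^{T}w = m$ and $\mathbf{1}^{T}w = 1$. The three-asset numbers below are invented for illustration (the thesis itself uses R and real price data for 40 stocks).

```python
import numpy as np

def markowitz_weights(mu, sigma, target):
    """Minimum-variance weights for a target mean, via the KKT system."""
    mu = np.asarray(mu, dtype=float)
    n = len(mu)
    ones = np.ones(n)
    # KKT matrix: [[2*Sigma, mu, 1], [mu^T, 0, 0], [1^T, 0, 0]]
    kkt = np.zeros((n + 2, n + 2))
    kkt[:n, :n] = 2.0 * np.asarray(sigma, dtype=float)
    kkt[:n, n], kkt[n, :n] = mu, mu
    kkt[:n, n + 1], kkt[n + 1, :n] = ones, ones
    rhs = np.concatenate([np.zeros(n), [target, 1.0]])
    return np.linalg.solve(kkt, rhs)[:n]   # drop the Lagrange multipliers

mu = [0.05, 0.07, 0.10]
sigma = np.diag([0.04, 0.09, 0.16])        # toy diagonal covariance
w = markowitz_weights(mu, sigma, target=0.08)
```

Sweeping `target` over a grid of mean returns and plotting the resulting portfolio standard deviations traces out the efficient frontier; plugging estimated $\mu$ and $\Sigma$ into the same formula is exactly where the estimation-error problems discussed above enter.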
The starting point of this work has been to gain knowledge about how teachers work with students in a mathematics workshop. I also wanted to investigate what students think about mathematics, both in the workshop and in traditional teaching.
To achieve this I chose to interview the teacher in charge of a mathematics workshop, and six students in the 5th grade. I also observed a lesson, with the purpose of seeing how well the teacher's tutoring agrees with the results of the interview.
In this study I found that the lessons in the mathematics workshop are planned around what the students are working with in the textbook. The teacher opens the lesson with the whole class and then splits it into groups. In the interview the respondent conveys the importance of discussions among the students, in which they can explain their thinking; especially the weaker students are, thanks to the discussions, able to show more of their knowledge. This is, however, not something I saw during my lesson observation, and the students also seemed to miss it when I spoke to them. They describe that they cannot see the connection between what they learn in the workshop and the textbook.
This thesis is devoted to the study of Hardy and spectral inequalities for the Heisenberg and the Grushin operators. It consists of five chapters. In chapter 1 we present basic notions and summarize the main results of the thesis. In chapters 2-4 we deal with different types of Hardy inequalities for Laplace and Grushin operators with magnetic and non-magnetic fields. It was shown in an article by Laptev and Weidl that for some magnetic forms in two dimensions, the Hardy inequality holds in its classical form. More precisely, by considering the Aharonov-Bohm magnetic potential, we can improve the constant in the respective Hardy inequality. In chapter 2 we establish an Lp Hardy inequality related to Laplacians with magnetic fields with Aharonov-Bohm vector potentials. In chapter 3 we introduce a suitable notion of a vector field for the Grushin sub-elliptic operator G and obtain an improvement of the Hardy inequality previously obtained in the paper of N. Garofalo and E. Lanconelli. In chapter 4 we find an Lp version of the Hardy inequality obtained in chapter 2. Finally, in chapter 5 we aim to find CLR and Lieb-Thirring inequalities for harmonic Grushin-type operators. As the Grushin operator is non-elliptic, these inequalities will not take their classical form.
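For orientation, the classical form of the Hardy inequality referred to above can be stated as follows (a standard statement, not quoted from the thesis):

```latex
% Classical Hardy inequality, valid in dimensions d >= 3:
\int_{\mathbb{R}^d} |\nabla u(x)|^2 \, dx
\;\ge\;
\left(\frac{d-2}{2}\right)^{2}
\int_{\mathbb{R}^d} \frac{|u(x)|^2}{|x|^2}\, dx,
\qquad u \in C_0^{\infty}(\mathbb{R}^d).
```

In two dimensions the constant $((d-2)/2)^2$ vanishes, which is why adding an Aharonov-Bohm magnetic potential is needed to restore a positive constant, as in the Laptev-Weidl result mentioned above.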
Recent years have been characterized by tremendous advances in quantum information and communication, both theoretically and experimentally. In addition, mathematical methods of quantum information and quantum probability have begun spreading to other areas of research, beyond physics. One exciting new possibility involves applying these methods to information science and computer science (without direct relation to the problem of building quantum computers).
The aim of this Special Volume is to encourage scientists, especially the new generation (master's and PhD students), working in computer science and related mathematical fields to explore novel possibilities based on the mathematical formalisms of quantum information and probability. The contributing authors, who hail from various countries, combine extensive quantum methods expertise with real-world experience in applying these methods to computer science. The problems considered chiefly concern quantum information-probability based modeling in the following areas: information foraging; interactive quantum information access; deep convolutional neural networks; decision making; quantum dynamics; open quantum systems; and the theory of contextual probability.
The book offers young scientists (students, PhD, postdocs) an essential introduction to applying the mathematical apparatus of quantum theory to computer science, information retrieval, and information processes.
This work proposes a complete design cycle for several auxetic materials, consisting of three steps: (i) the design of the micro-architecture, (ii) the manufacturing of the material, and (iii) the testing of the material. We use topology optimization via a level-set method and asymptotic homogenization to obtain periodic micro-architectured materials with a prescribed effective elasticity tensor and Poisson's ratio. The space of admissible micro-architectural shapes carrying orthotropic material symmetry makes it possible to attain shapes with an effective Poisson's ratio below −1. The specimens were manufactured using a commercial stereolithography Ember printer and mechanically tested. The displacement and strain fields observed by digital image correlation during tensile testing match the predictions of the finite element simulations and demonstrate the efficiency of the design cycle.
Risk measures are a fundamental concept in finance and in the insurance industry, where they are used to adjust life insurance rates. In this article we study dynamic risk measures by means of backward stochastic Volterra integral equations (BSVIEs) with jumps. We prove a comparison theorem for this type of equation. Since the solution of a BSVIE is in general not a semimartingale, we also discuss some particular semimartingale issues.
We are interested in Pontryagin's stochastic maximum principle for controlled McKean-Vlasov stochastic differential equations. We allow the law to be anticipating, in the sense that the coefficients (the drift and the diffusion coefficients) depend not only on the solution at the current time $t$, but also on the law $P_{X(t+\delta)}$ of the future values of the solution, for a given positive constant $\delta$. We emphasise that being anticipating with respect to the law of the solution process does not mean being anticipative in the sense of anticipating the driving Brownian motion. As an adjoint equation, a new type of delayed backward stochastic differential equation (BSDE) with implicit terminal condition is obtained. Using the fact that the expectation of any random variable is a function of its law, our BSDE can be written in a simple form. We then prove existence and uniqueness of the solution of the delayed BSDE with implicit terminal value, i.e. with terminal value being a function of the law of the solution itself.
The purpose of this paper is to study the following topics and the relation between them: (i) Optimal singular control of mean-field stochastic differential equations with memory; (ii) reflected advanced mean-field backward stochastic differential equations; and (iii) optimal stopping of mean-field stochastic differential equations. More specifically, we do the following: (1) We prove the existence and uniqueness of the solutions of some reflected advanced memory backward stochastic differential equations; (2) we give sufficient and necessary conditions for an optimal singular control of a memory mean-field stochastic differential equation (MMSDE) with partial information; and (3) we deduce a relation between the optimal singular control of an MMSDE and the optimal stopping of such processes.
We study methods for solving stochastic control problems for systems of forward-backward mean-field equations with delay, in finite and infinite time horizon. Necessary and sufficient maximum principles under partial information are given. The results are applied to solve a mean-field recursive utility optimal control problem.
We prove a maximum principle of optimal control of stochastic delay equations on infinite horizon. We establish first and second sufficient stochastic maximum principles as well as necessary conditions for that problem. We illustrate our results with an application to the optimal consumption rate from an economic quantity.
In this paper we study linear mean-field backward stochastic differential equations (mean-field BSDEs) of the form
$$dY(t) = -\Big[\alpha_1(t)Y(t) + \beta_1(t)Z(t) + \int_{\mathbb{R}_0}\eta_1(t,\zeta)K(t,\zeta)\,\nu(d\zeta) + \alpha_2(t)E[Y(t)] + \beta_2(t)E[Z(t)] + \int_{\mathbb{R}_0}\eta_2(t,\zeta)E[K(t,\zeta)]\,\nu(d\zeta) + \gamma(t)\Big]dt + Z(t)\,dB(t) + \int_{\mathbb{R}_0}K(t,\zeta)\,\tilde{N}(dt,d\zeta), \quad t\in[0,T],$$
$$Y(T) = \xi,$$
where $(Y,Z,K)$ is the unknown solution triplet, $B$ is a Brownian motion, and $\tilde{N}$ is a compensated Poisson random measure independent of $B$. We prove the existence and uniqueness of the solution triplet $(Y,Z,K)$ of such systems. We then give an explicit formula for the first component $Y(t)$ by using partial Malliavin derivatives. To illustrate these results, we apply them to study a mean-field recursive utility optimization problem in finance.
The purpose of these lectures is threefold. We first give a short survey of the Hida white noise calculus, and in this context we introduce the Hida-Malliavin derivative as a stochastic gradient with values in the Hida stochastic distribution space $(\mathcal{S})^{*}$. We show that this Hida-Malliavin derivative, defined on $L^2(\mathcal{F}_T,P)$, is a natural extension of the classical Malliavin derivative defined on the subspace $\mathbb{D}_{1,2}$ of $L^2(P)$. The Hida-Malliavin calculus allows us to prove new results under weaker assumptions than the classical theory permits. In particular, we prove the following: (i) a general integration-by-parts formula and duality theorem for Skorohod integrals, (ii) a generalised fundamental theorem of stochastic calculus, and (iii) a general Clark-Ocone theorem, valid for all $F \in L^2(\mathcal{F}_T,P)$. As applications of the above theory we prove the following: a general representation theorem for backward stochastic differential equations with jumps, in terms of Hida-Malliavin derivatives; a general stochastic maximum principle for optimal control; results on backward stochastic Volterra integral equations; and optimal control of stochastic Volterra integral equations and other stochastic systems.
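For reference, the classical Clark-Ocone formula that result (iii) generalises reads, in its standard form for Malliavin-differentiable functionals:

```latex
% Classical Clark-Ocone representation on the Wiener space:
F \;=\; \mathbb{E}[F] \;+\; \int_0^{T} \mathbb{E}\big[\, D_t F \mid \mathcal{F}_t \,\big]\, dB(t),
\qquad F \in \mathbb{D}_{1,2}.
```

The point of the Hida-Malliavin extension is that the same representation makes sense for every $F \in L^2(\mathcal{F}_T,P)$, with the derivative and conditional expectation interpreted in the Hida distribution sense.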
The classical maximum principle for optimal stochastic control states that if a control û is optimal, then the corresponding Hamiltonian has a maximum at u = û. The first proofs of this result assumed that the control did not enter the diffusion coefficient, and that there were no jumps in the system. Subsequently, it was discovered by Shige Peng (still assuming no jumps) that one could also allow the diffusion coefficient to depend on the control, provided that the corresponding adjoint backward stochastic differential equation (BSDE) for the first-order derivative was extended to include an extra BSDE for the second-order derivatives. In this paper, we present an alternative approach based on Hida-Malliavin calculus and white noise theory. This enables us to handle the general case with jumps, allowing both the diffusion coefficient and the jump coefficient to depend on the control, without needing the extra BSDE for the second-order derivatives. The result is illustrated by an example of a constrained linear-quadratic optimal control problem.
We consider a problem of optimal control of an infinite horizon system governed by forward–backward stochastic differential equations with delay. Sufficient and necessary maximum principles for optimal control under partial information in infinite horizon are derived. We illustrate our results by an application to a problem of optimal consumption with respect to recursive utility from a cash flow with delay.
Solutions of stochastic Volterra (integral) equations are not Markov processes, and therefore, classical methods, such as dynamic programming, cannot be used to study optimal control problems for such equations. However, we show that using Malliavin calculus, it is possible to formulate modified functional types of maximum principle suitable for such systems. This principle also applies to situations where the controller has only partial information available to base her decisions upon. We present both a Mangasarian sufficient condition and a Pontryagin-type maximum principle of this type, and then, we use the results to study some specific examples. In particular, we solve an optimal portfolio problem in a financial market model with memory.
By a memory mean-field process we mean the solution $X(\cdot)$ of a stochastic mean-field equation involving not just the current state $X(t)$ and its law $L(X(t))$ at time $t$, but also the state values $X(s)$ and their laws $L(X(s))$ at previous times $s<t$. Our purpose is to study stochastic control problems for memory mean-field processes. We consider the space $\mathcal{M}$ of measures on $\mathbb{R}$ with the norm $\|\cdot\|_{\mathcal{M}}$ introduced by Agram and Øksendal (Model uncertainty stochastic mean-field control, arXiv:1611.01385v5), and prove the existence and uniqueness of solutions of memory mean-field stochastic functional differential equations. We prove two stochastic maximum principles, one sufficient (a verification theorem) and one necessary, both under partial information. The corresponding equations for the adjoint variables are a pair of time-advanced backward stochastic differential equations (ABSDEs), one of them with values in the space of bounded linear functionals on path-segment spaces. As an application of our methods, we solve a memory mean-variance problem as well as a linear-quadratic problem for a memory process.
We study optimal control of stochastic Volterra integral equations (SVIEs) with jumps by using Hida-Malliavin calculus.
• We give conditions under which there exist unique solutions of such equations.
• Then we prove both a sufficient maximum principle (a verification theorem) and a necessary maximum principle via Hida-Malliavin calculus.
• As an application we solve a problem of optimal consumption from a cash flow modelled by an SVIE.
The purpose of this study is to understand teachers' reasoning about the use of manipulatives in arithmetic teaching in grades 1-3. The study also aims to investigate how teachers reason about what matters for their arithmetic teaching with respect to manipulatives. To this end, semi-structured interviews were conducted with seven teachers. The collected data were analysed thematically and interpreted with sociocultural theory as the theoretical framework. The results show that the teachers use manipulatives for different purposes, the most prominent being to help students move from concrete to abstract understanding. What matters most in the teachers' arithmetic teaching with manipulatives turns out to be influences from other teachers and from the internet. The conclusions are that teachers can find it difficult to know how to move on from the manipulatives with students who struggle with arithmetic, and that the teachers would like more training in how to use manipulatives with their students.
The purpose of this work is to find out to what extent students are given the opportunity to develop the various abilities described in the Swedish mathematics curriculum, LGR11, when teaching is based on a commonly used textbook series, Pixel. The method is qualitative: a text analysis of the geometry sections of the textbook. The abilities the students can attain are using different mathematical concepts, and formulating and solving problems. The students have good opportunities to choose different ways of solving tasks themselves, and they also gain the knowledge to understand and use different forms of expression. However, this textbook does not encourage students to conduct their own mathematical discussions.
The purpose of this study is to build an understanding of primary school students' knowledge of measuring time. The study is based on a written quantitative clock test taken by 13 students in grade 2 and 5 students in grade 3, and on qualitative interviews with four students in each grade. The clock tests were analysed numerically and the interviews thematically. The theoretical starting point is sociocultural theory, with elements of solution strategies. The results indicate that primary school students have better knowledge of the analogue clock than of the digital clock. When solving tasks on reading and measuring time, the students mainly used the counting-up strategy. The conclusion is that primary school students do have knowledge of the clock, but that there are knowledge gaps between the two clock types and between students. All students could describe at least one solution strategy used in the clock tasks, but the strategies worked variably well.
This systematic literature review focuses on mathematically gifted students, problem solving, and motivation, and the connections between them. The purpose of the study is to map how mathematically gifted students can stand out, to investigate how problem solving can challenge and develop these students, and to examine the connections between mathematical giftedness, problem solving, and motivation. To answer the aim and research questions, the study draws on a number of scientific publications, analysed from three theoretical perspectives. The analysis shows several distinguishing traits of mathematically gifted students, some more common than others. It also describes characteristics of problem-solving tasks, the phases through which such tasks can be solved, and how work with problem solving can challenge and further develop mathematically gifted students. Finally, the analysis addresses the connection between mathematical giftedness, problem solving, and motivation: the study shows that the motivation of mathematically gifted students increases when they work with problem solving.
The purpose of this degree project is to use questionnaire interviews and observations to gain more knowledge about how preschools work with sorting and classification with the youngest children. We interviewed work teams that had recently received professional development in mathematics. In the background we explain, from a theoretical perspective, sorting and classification, the role of the teacher, and the pedagogical environment, and how these factors can affect children's mathematical ability. The results show that a great deal of sorting and classification takes place in everyday mathematics at preschools. The teachers in the study say that they want more knowledge of mathematical concepts in order to better give the children knowledge of mathematics. The study shows how important the teacher is, a role that teachers out in the field are not always aware of.
The purpose of this work is to investigate how adult education (komvux) students describe the causes of their perceived difficulties in mathematics, and how they value their teachers' strategies for helping them. The study used semi-structured interviews to examine the mathematics difficulties of five komvux students who struggled with the subject during compulsory or upper secondary school. A common denominator turned out to be a lack of motivation for mathematics during compulsory school, which most of the students felt had had a negative impact on their mathematics learning. The lack of motivation led them to miss essential parts of basic mathematics, despite the help and support of their mathematics teachers in compulsory and upper secondary school. All the students said that their insufficient basic knowledge of mathematics contributed to their difficulties today; gaps from earlier grades make it hard for students to take in more advanced material. All also found the language of mathematics difficult, by which they meant the concepts, words, and symbols that matter for understanding mathematics. The informants said that more variation in teaching could have affected their motivation and had a positive impact on their interest in mathematics. All the interviewed students said they were dissatisfied with the help and support they received from their komvux teachers, which meant that they now had more difficulties in mathematics than before.
The aim of my study is to find out how teacher-led play affects mathematics teaching, and to identify the possibilities and limitations of teacher-led play as a method. To this end I interviewed a number of teachers about this way of working: how they plan teaching based on teacher-led play, what they mean by teacher-led play, which possibilities and limitations they see, and whether they notice that teacher-led play stimulates pupils in mathematics teaching. The main result of my study is that all the teachers consider teacher-led play to have a positive effect on learning when it is used in teaching. They point to some limiting factors, such as lack of time, high workload, and problems with pupils' social interaction. Despite these challenges, all the teachers judge that teacher-led play in mathematics teaching increases pupils' motivation to learn and makes the subject fun and interesting.
The purpose of this degree project is to deepen our knowledge and understanding of formative assessment in mathematics. We also want to investigate whether formative assessment can improve the pupils' knowledge development in mathematics. We conducted a case study in a class that works formatively, in order to see what this can look like in practice. The collected material consists of observations and an interview, used to answer our two research questions: What can formative assessment look like in practice? What can the connection between teachers' and pupils' perceptions of the pedagogical activity look like? We conclude that formative assessment can improve the pupils' knowledge development in mathematics. In the formative assessment we have seen the importance of the meeting between teacher and pupil. Working formatively is time-consuming.
This project is based on the idea of developing teaching by integrating mathematics with other subjects. It was carried out by designing and evaluating lessons together with teachers of crafts, technology, and home economics. The study was conducted in grades 7–9, at one school in Västmanland and one school in Östergötland.
The aim is for lower-secondary pupils to be able to apply their mathematical knowledge in real-life situations. The lessons contribute to developing pupils' thinking and their ability to solve a variety of problems.
We observed lessons carried out by the participating teachers and noted elements that could be used in our project. We then interviewed them, asking them to assess their knowledge of the pedagogical model and the opportunities they have to integrate mathematics with other subjects.
A questionnaire survey that we carried out showed the participants' interest in and commitment to this method, as well as their motivation to continue and further develop their teaching according to this model.
Let $u_\varepsilon$ be a solution to the system $\operatorname{div}(A_\varepsilon(x)\nabla u_\varepsilon(x)) = 0$ in $D$, $u_\varepsilon(x) = g(x, x/\varepsilon)$ on $\partial D$, where $D \subset \mathbb{R}^d$ ($d \geq 2$) is a smooth uniformly convex domain, $g$ is 1-periodic in its second variable, and both $A_\varepsilon$ and $g$ are sufficiently smooth. Our results in this paper are twofold. First we prove $L^p$ convergence results for solutions of the above system and for the non-oscillating operator $A_\varepsilon(x) = A(x)$, with the following convergence rate for all $1 \leq p < \infty$: $$\| u_\varepsilon - u_0 \|_{L^p(D)} \leq C_p \begin{cases} \varepsilon^{1/2p}, & d = 2, \\ (\varepsilon |\ln \varepsilon|)^{1/p}, & d = 3, \\ \varepsilon^{1/p}, & d \geq 4, \end{cases}$$ which we prove is (generically) sharp for $d \geq 4$. Here $u_0$ is the solution to the averaging problem. Second, combining our method with the recent results due to Kenig, Lin and Shen (Commun. Pure Appl. Math. 67(8): 1219–1262, 2014), we prove (for a certain class of operators and when $d \geq 3$) $$\| u_\varepsilon - u_0 \|_{L^p(D)} \leq C_p \left[ \varepsilon (\ln(1/\varepsilon))^2 \right]^{1/p}$$ for both the oscillating operator and boundary data. For this case, we take $A_\varepsilon = A(x/\varepsilon)$, where $A$ is 1-periodic as well. Some further applications of the method to the homogenization of the Neumann problem with oscillating boundary data are also considered.
The aim of the study was to observe how teachers work with mathematics in grades F–3, focusing on how aesthetic tools are used in mathematics teaching. The study is based on classroom observations in two classes in the early school years (F–3), together with pupil and teacher interviews, and aimed to highlight aesthetic learning processes and how they affect pupils' experiences of subject integration in mathematics teaching. The results show that aesthetic tools are used more with younger pupils and less as the pupils get older. The teachers in this study pointed out that a cross-curricular approach, and engaging several senses, promote pupils' learning. The majority of the interviewed teachers emphasized the importance of finding a balance between different methods in order to create varied teaching.
The aim of this study is to describe the process that occurs when pupils work together on a problem-solving task they had previously been unable to solve individually. The focus has been on changed use of Sfard's (2008) key concepts: subject-specific words, visual mediators, routines, and narratives, and on which discourse changes became visible. The study is qualitative, and the empirical material was collected through observations of the problem-solving work, audio recordings of the pair work, and images of the pupils' written material. In total, ten observations of joint work were carried out and transcribed, and five of these transcripts were analyzed in depth using flowcharts and Sfard's (2008) key concepts. The results show that the pupils in the study used more key concepts during joint problem solving and thereby took part in discourse changes that helped move the problem solving forward. Visual mediators and the use of time turned out to be important aspects for the use of more key concepts and for changing discourses.
The global solution of a fuzzy linear system contains the crisp vector solution of the corresponding real linear system. We therefore discuss the global solution of a fuzzy linear system with a fuzzy number vector on the right-hand side and a crisp coefficient matrix. The contribution of the paper is a new algorithm that finds the solution of such a system via a global solution based on the concept of convex fuzzy numbers. First, the existence and uniqueness of the solution are established; then the related theorems and properties of the solution are proved in detail. Finally, the method is illustrated by solving some numerical examples.
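The paper's algorithm is not reproduced here. As a hedged illustration of the general setting only, the sketch below solves a crisp system with a triangular fuzzy right-hand side by mapping α-cut endpoints through the inverse matrix, under the simplifying assumption that the inverse has nonnegative entries (so the endpoint mapping is monotone); the matrix and fuzzy numbers are invented for the example and this is not the paper's method.

```python
# Toy sketch: crisp 2x2 system A x = b, where b has triangular fuzzy
# components (l, m, u).  Assumes inv(A) >= 0 entrywise, so alpha-cut
# endpoints map monotonically: x_low = inv(A) b_low, x_high = inv(A) b_high.

def inv2(A):
    """Inverse of a 2x2 matrix."""
    (a, b), (c, d) = A
    det = a * d - b * c
    return [[d / det, -b / det], [-c / det, a / det]]

def matvec(M, v):
    return [sum(M[i][j] * v[j] for j in range(2)) for i in range(2)]

def alpha_cut(tri, alpha):
    """[lower, upper] endpoints of a triangular fuzzy number at level alpha."""
    l, m, u = tri
    return l + alpha * (m - l), u - alpha * (u - m)

def solve_fuzzy(A, b_fuzzy, alpha):
    Ainv = inv2(A)
    assert all(x >= 0 for row in Ainv for x in row), "sketch needs inv(A) >= 0"
    lows = [alpha_cut(t, alpha)[0] for t in b_fuzzy]
    highs = [alpha_cut(t, alpha)[1] for t in b_fuzzy]
    return matvec(Ainv, lows), matvec(Ainv, highs)

A = [[2.0, -1.0], [-1.0, 2.0]]          # inv(A) = [[2/3, 1/3], [1/3, 2/3]] >= 0
b = [(2.0, 3.0, 4.0), (2.0, 3.0, 4.0)]  # triangular fuzzy right-hand side

lo0, hi0 = solve_fuzzy(A, b, 0.0)  # support-level bounds, approx [2, 2] and [4, 4]
lo1, hi1 = solve_fuzzy(A, b, 1.0)  # core collapses to the crisp solution, approx [3, 3]
print(lo0, hi0, lo1, hi1)
```

At α = 1 the interval collapses to the crisp vector solution mentioned in the abstract; lower α-levels give nested enclosing intervals, as convexity of the fuzzy numbers requires.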
To achieve a sustainable future, fossil-based electricity is being replaced with renewables, leading to higher uncertainty in electricity production. This has created an incentive for consumers to produce, sell, and store their own electricity, thereby becoming prosumers. Austerland Skags is a Swedish project that explores the possibility of converting a small community into a prosumption system. The system includes solar and wind power as electricity producers and hydrogen-fueled vehicles for commodity transport. To make the most of the produced electricity, the project wants to store any excess. This master's thesis uses Austerland Skags as a case study to develop a stochastic linear optimization model that determines the optimal energy storage solution for an energy prosumption system with both electricity and hydrogen demand.
The method used in this thesis was the sample average approximation (SAA) algorithm. The results from the SAA were compared with the expected result of using the expected value problem solution (EEV) to show the difference between the stochastic and deterministic solutions. The SAA turned out to consistently outperform the EEV on the generated samples.
Since hydrogen demand could only be met in-house, the model was forced to include an electrolyzer and a hydrogen tank. The final SAA result showed that the optimal solution also used a battery and a fuel cell in addition to the electrolyzer and hydrogen tank. All capacities stayed within reasonable levels, indicating that a cost-effective prosumption system can be realized.
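The thesis' full storage model is not reproduced here. As a minimal hedged sketch of the SAA-versus-EEV comparison, the toy problem below sizes a single storage capacity against a random electricity surplus, minimizing the empirical mean cost over sampled scenarios, and then evaluates the expected-value decision (plan for the mean scenario) on the same sample to obtain the EEV; all numbers and the cost structure are illustrative assumptions.

```python
# Toy SAA sketch: choose storage capacity c to trade off capital cost
# against the value of storing a random electricity surplus S.
# cost(c, S) = CAPEX * c - VALUE * min(S, c)   (illustrative numbers)
import random

CAPEX, VALUE = 1.0, 2.0

def cost(c, surplus):
    return CAPEX * c - VALUE * min(surplus, c)

def sample_average(c, scenarios):
    return sum(cost(c, s) for s in scenarios) / len(scenarios)

def saa_solve(scenarios, candidates):
    """Sample average approximation: minimise the empirical mean cost."""
    return min(candidates, key=lambda c: sample_average(c, scenarios))

random.seed(0)
scenarios = [random.expovariate(1 / 5.0) for _ in range(1000)]  # skewed surplus, mean 5
candidates = [0.1 * k for k in range(101)]                      # capacity grid 0..10

c_saa = saa_solve(scenarios, candidates)

# Expected value problem: plan for the mean scenario only, then evaluate
# that deterministic decision on the full sample (this is the EEV).
mean_s = sum(scenarios) / len(scenarios)
c_ev = min(candidates, key=lambda c: cost(c, mean_s))
eev = sample_average(c_ev, scenarios)

print(c_saa, sample_average(c_saa, scenarios), c_ev, eev)
```

On the sampled scenarios the SAA decision can only match or beat the EEV, since the SAA minimizes over a candidate set that includes the expected-value decision; with a skewed surplus distribution the gap is typically strict, mirroring the comparison reported in the thesis.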
A benchmark problem on atmospheric sound propagation over irregular terrain has been solved using a stable fourth-order accurate finite difference approximation of a high-fidelity acoustic model. A comparison with the parabolic equation method and ray tracing methods is made. The results show that ray tracing methods can potentially be unreliable in the presence of irregular terrain.
The dynamic phasor model of a time-periodic system is used to derive a stability test involving a harmonic Lyapunov function. This reveals a new interpretation of the harmonic Lyapunov function with an appealing time-domain representation. Most importantly, it indicates that the ideas behind the harmonic Lyapunov equation can be generalized to include cyclic switching systems that have different pulse form in each period.