  • 151.
    Achichi, Manel
    et al.
    LIRMM/University of Montpellier, France.
    Cheatham, Michelle
    Wright State University, USA.
    Dragisic, Zlatan
    Linköping University, Department of Computer and Information Science, Database and information techniques. Linköping University, The Institute of Technology.
    Euzenat, Jerome
    INRIA & Univ. Grenoble Alpes, Grenoble, France.
    Faria, Daniel
    Instituto Gulbenkian de Ciencia, Lisbon, Portugal.
    Ferrara, Alfio
    Universita degli studi di Milano, Italy.
    Flouris, Giorgos
    Institute of Computer Science-FORTH, Heraklion, Greece.
    Fundulaki, Irini
    Institute of Computer Science-FORTH, Heraklion, Greece.
    Harrow, Ian
    Pistoia Alliance Inc., USA.
    Ivanova, Valentina
    Linköping University, Department of Computer and Information Science, Database and information techniques. Linköping University, The Institute of Technology.
    Jiménez-Ruiz, Ernesto
    University of Oslo, Norway and University of Oxford, UK.
    Kuss, Elena
    University of Mannheim, Germany.
    Lambrix, Patrick
    Linköping University, Department of Computer and Information Science, Database and information techniques. Linköping University, The Institute of Technology.
    Leopold, Henrik
    Vrije Universiteit Amsterdam, The Netherlands.
    Li, Huanyu
    Linköping University, Department of Computer and Information Science, Database and information techniques. Linköping University, Faculty of Science & Engineering.
    Meilicke, Christian
    University of Mannheim, Germany.
    Montanelli, Stefano
    Universita degli studi di Milano, Italy.
    Pesquita, Catia
    Universidade de Lisboa, Portugal.
    Saveta, Tzanina
    Institute of Computer Science-FORTH, Heraklion, Greece.
    Shvaiko, Pavel
    TasLab, Informatica Trentina, Trento, Italy.
    Splendiani, Andrea
    Novartis Institutes for Biomedical Research, Basel, Switzerland.
    Stuckenschmidt, Heiner
    University of Mannheim, Germany.
    Todorov, Konstantin
    LIRMM/University of Montpellier, France.
    Trojahn, Cassia
    IRIT & Université Toulouse II, Toulouse, France.
    Zamazal, Ondřej
    University of Economics, Prague, Czech Republic.
    Results of the Ontology Alignment Evaluation Initiative 2016 (2016). In: Proceedings of the 11th International Workshop on Ontology Matching, Aachen, Germany: CEUR Workshop Proceedings, 2016, pp. 73-129. Conference paper (Refereed)
  • 152.
    Acin, Medya
    et al.
    KTH, School of Computer Science and Communication (CSC).
    Stansvik, Elvis
    KTH, School of Computer Science and Communication (CSC).
    Improving Player Engagement in Tetris Through EDR Monitoring (2013). Independent thesis Basic level (degree of Bachelor), 10 credits / 15 HE credits. Student thesis
    Abstract [en]

    When designing computer games, one is often interested in evoking feelings of engagement, enjoyment and challenge in the player. One way of doing so is dynamically adjusting the difficulty of the game. Traditionally, this adjustment has been based on the performance of the player. However, in recent years there has been an increased interest in dynamically adjusting the difficulty level of a game based on physiological signals from the player. In this Bachelor's project, we have studied the effect of using an electrodermal activity (EDA) wristband sensor as the source signal for the difficulty adjustment algorithm and compared it to the traditional approach of using the performance of the player. We developed two Tetris games, one EDA controlled and one performance controlled, and let participants play them both. Each game session was followed by a questionnaire. Our results show that, although participants reported an increased sense of engagement and challenge when playing the EDA version, further research is necessary before the usefulness of EDA in this setting can be established.

  • 153.
    Ackland, Patrik
    KTH, School of Computer Science and Communication (CSC).
    Fast and Scalable Static Analysis using Deterministic Concurrency (2017). Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE credits. Student thesis
    Abstract [en]

    This thesis presents an algorithm for solving a subset of static analysis data flow problems known as Interprocedural Finite Distributive Subset (IFDS) problems. The algorithm, called IFDS-RA, is an implementation of the IFDS algorithm for solving such problems. IFDS-RA is implemented using Reactive Async, a deterministic concurrent programming model. The scalability of IFDS-RA is compared to the state-of-the-art Heros implementation of the IFDS algorithm and evaluated using three different taint analyses on one to eight processing cores. The results show that IFDS-RA performs better than Heros when using multiple cores, and that Heros does not take advantage of multiple cores even when they are available on the system.
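
    As a rough illustration of the kind of problem IFDS-RA targets, the sketch below (not from the thesis; the toy CFG, fact names and `transfer` function are hypothetical) shows the classic sequential worklist scheme for a distributive dataflow problem, where facts can be propagated one at a time precisely because the transfer functions distribute over set union:

    ```python
    from collections import deque

    def solve_distributive(cfg, entry, init_facts, transfer):
        """Minimal sequential worklist solver for a distributive dataflow
        problem; propagating fact-by-fact is sound exactly because the
        transfer functions distribute over set union (the IFDS setting)."""
        in_facts = {node: set() for node in cfg}          # facts holding at each node
        in_facts[entry] = set(init_facts)
        worklist = deque((entry, f) for f in init_facts)  # (node, fact) pairs, as in IFDS
        while worklist:
            node, fact = worklist.popleft()
            for succ in cfg[node]:                        # cfg: node -> list of successors
                for out in transfer(node, fact):          # one fact in -> set of facts out
                    if out not in in_facts[succ]:
                        in_facts[succ].add(out)
                        worklist.append((succ, out))
        return in_facts

    # Toy taint analysis: fact "x" means variable x may be tainted.
    cfg = {"n0": ["n1"], "n1": ["n2"], "n2": []}
    def transfer(node, fact):
        if node == "n1" and fact == "a":   # n1 executes b = a, so taint flows a -> b
            return {fact, "b"}
        return {fact}

    print(solve_distributive(cfg, "n0", {"a"}, transfer))  # n2 ends up with {"a", "b"}
    ```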

  • 154. Acx, A. G.
    et al.
    Berg, M
    Karlsson, M
    Lindström, M
    Pettersson, Stefan
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Information Technology and Media.
    Slanina, P
    Zander, J
    Radio Resource Management (2000). In: Third generation mobile communication systems, Boston, Mass: Artech House, 2000, pp. 386-. Chapter in book (Other academic)
  • 155.
    Adamala, Szymon
    et al.
    Blekinge Institute of Technology, School of Management.
    Cidrin, Linus
    Blekinge Institute of Technology, School of Management.
    Key Success Factors in Business Intelligence (2011). Independent thesis Advanced level (degree of Master (One Year)). Student thesis
    Abstract [en]

    Business Intelligence can bring critical capabilities to an organization, but the implementation of such capabilities is often plagued with problems and issues. Why is it that certain projects fail, while others succeed? The theoretical problem and the aim of this thesis is to identify the factors that are present in successful Business Intelligence projects and organize them into a framework of critical success factors. A survey was conducted during the spring of 2011 to collect primary data on Business Intelligence projects. It was directed to a number of different professionals operating in the Business Intelligence field in large enterprises, primarily located in Poland and primarily vendors, but given the similarity of Business Intelligence initiatives across countries and the increasing globalization of large enterprises, the conclusions from this thesis may well have relevance and be applicable to projects conducted in other countries. Findings confirm that Business Intelligence projects wrestle with both technological and non-technological problems, but the non-technological problems are found to be harder to solve as well as more time consuming than their technological counterparts. The thesis also shows that critical success factors for Business Intelligence projects are different from success factors for IS projects in general, and that Business Intelligence projects have critical success factors that are unique to the subject matter. Major differences can be found predominantly in the non-technological factors, such as the presence of a specific business need to be addressed by the project and a clear vision to guide the project. Results show that successful projects have specific factors present more frequently than non-successful ones. Such factors with great differences are the type of project funding, the business value provided by each iteration of the project and the alignment of the project to a strategic vision for Business Intelligence. Furthermore, the thesis provides a framework of critical success factors that, according to the results of the study, explains 61% of the variability of project success. Given these findings, managers responsible for introducing Business Intelligence capabilities should focus on a number of non-technological factors to increase the likelihood of project success. Areas which should be given special attention are: making sure that the Business Intelligence solution is built with end users in mind, that the Business Intelligence solution is closely tied to the company's strategic vision, and that the project is properly scoped and prioritized to concentrate on the best opportunities first. Keywords: Critical Success Factors, Business Intelligence, Enterprise Data Warehouse Projects, Success Factors Framework, Risk Management

  • 156. Adams, Liz
    et al.
    Börstler, Jürgen
    What It's Like to Participate in an ITiCSE Working Group (2011). In: ACM SIGCSE Bulletin, Vol. 43, no. 1. Article in journal (Other academic)
  • 157. Adams, Robin
    et al.
    Fincher, Sally
    Pears, Arnold
    Börstler, Jürgen
    Boustedt, Jonas
    University of Gävle, Department of Mathematics, Natural and Computer Sciences, Ämnesavdelningen för datavetenskap.
    Dalenius, Peter
    Eken, Gunilla
    Heyer, Tim
    Jacobsson, Andreas
    Lindberg, Vanja
    Molin, Bengt
    Moström, Jan-Erik
    Wiggberg, Mattias
    What is the word for 'Engineering' in Swedish: Swedish students' conceptions of their discipline (2007). Report (Other academic)
    Abstract [en]

    Engineering education in Sweden – as in the rest of the world – is experiencing a decline in student interest. There are concerns about the ways in which students think about engineering education, why they join an academic programme in engineering, and why they persist in their studies. In this context, the aim of the Nationellt ämnesdidaktiskt Centrum för Teknikutbildning i Studenternas Sammanhang project (CeTUSS) is to investigate the student experience and to identify and support a continuing network of interested researchers, as well as to build capacity for disciplinary pedagogic investigation.

    The Stepping Stones project brings together these interests in a multi-researcher, multi-institutional study that investigates how students and academic staff perceive engineering in Sweden and in Swedish education. The first results of that project are reported here. As this study is situated uniquely in Swedish education, it allows for exploration of “a Swedish perspective” on conceptions of engineering. The Stepping Stones project was based on a model of research capacity-building previously instantiated in the USA and Australia (Fincher & Tenenberg, 2006).

  • 158. Adams, Robin
    et al.
    Fincher, Sally
    Pears, Arnold
    Börstler, Jürgen
    Boustedt, Jonas
    Dalenius, Peter
    Eken, Gunilla
    Heyer, Tim
    Jacobsson, Andreas
    Lindberg, Vanja
    Molin, Bengt
    Moström, Jan-Erik
    Wiggberg, Mattias
    What is the Word for "Engineering" in Swedish: Swedish Students' Conceptions of their Discipline (2007). Report (Refereed)
  • 159.
    Adamsson, Marcus
    et al.
    KTH, School of Computer Science and Communication (CSC).
    Vorkapic, Aleksandar
    KTH, School of Computer Science and Communication (CSC).
    A comparison study of Kd-tree, Vp-tree and Octree for storing neuronal morphology data with respect to performance (2016). Independent thesis Basic level (degree of Bachelor), 10 credits / 15 HE credits. Student thesis
    Abstract [en]

    In this thesis we investigated the performance of the Kd-tree, Vp-tree and Octree for storing neuronal morphology data. Two naive list structures were implemented to compare with the space partition data structures. The performance was measured with different sizes of neuronal networks and different types of test cases. A comparison with focus on cache misses, average search time and memory usage was made. Furthermore, measurements gathered quantitative data about each data structure. The results showed significant differences in the performance of each data structure. It was concluded that the Vp-tree is more suitable for searches in smaller populations of neurons and for specific nodes in larger populations, while the Kd-tree is better for volume searches in larger populations. The Octree had the highest average search time and memory requirement.
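
    For readers unfamiliar with the trade-off being measured, here is a minimal sketch (not from the thesis) comparing a k-d tree against a naive linear scan for nearest-neighbour search, using SciPy's cKDTree on random stand-in points rather than real neuronal morphology data:

    ```python
    import numpy as np
    from scipy.spatial import cKDTree
    from time import perf_counter

    # Hypothetical stand-in for neuronal node positions: random 3-d points.
    rng = np.random.default_rng(0)
    points = rng.uniform(0, 1000, size=(100_000, 3))
    queries = rng.uniform(0, 1000, size=(1_000, 3))

    tree = cKDTree(points)                      # build the space partition once

    t0 = perf_counter()
    dists, idx = tree.query(queries, k=1)       # nearest stored point per query
    t_tree = perf_counter() - t0

    t0 = perf_counter()                         # naive list-structure baseline
    idx_naive = np.array([np.argmin(((points - q) ** 2).sum(axis=1)) for q in queries])
    t_naive = perf_counter() - t0

    d_naive = np.sqrt(((points[idx_naive] - queries) ** 2).sum(axis=1))
    assert np.allclose(dists, d_naive)          # both find equally near points
    print(f"k-d tree: {t_tree:.3f}s, naive scan: {t_naive:.3f}s")
    ```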

  • 160. Adawi, Tom
    et al.
    Berglund, Anders
    Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology.
    Ingerman, Åke
    Booth, Shirley
    On context in phenomenographic research on understanding heat and temperature (2002). In: EARLI, Bi-annual Symposium, Fribourg, Switzerland, 2002. Conference paper (Refereed)
    Abstract [en]

    Starting from an empirical study of lay adults' understanding of heat and temperature, we distinguish between different meanings of "context" in phenomenographic research. To confuse the variation in ways of experiencing the context(s) of the study with the variation in ways of experiencing the phenomenon of study is to risk losing fundamental insights. We discuss context as experienced and as interwoven with the experience of the phenomenon, and analyse its significance in two dimensions: (1) the stage of the research project: formulating the question, collecting data, analysing data and deploying results; and (2) "who is experiencing" the context: the individual, the collective, or the researcher. The arguments are illustrated from the empirical study.

  • 161.
    Adebomi, Oyekanlu Emmanuel
    et al.
    Blekinge Institute of Technology, School of Computing.
    Mwela, John Samson
    Blekinge Institute of Technology, School of Computing.
    Impact of Packet Losses on the Quality of Video Streaming (2010). Independent thesis Advanced level (degree of Master (Two Years)). Student thesis
    Abstract [en]

    In this thesis, the impact of packet losses on the quality of received videos sent across a network that exhibits normal network perturbations, such as jitter, delays and packet drops, has been examined. The dynamic behavior of a normal network has been simulated using Linux and the Network Emulator (NetEm). People's perceptions of the quality of the received video were used in rating the qualities of several videos with differing speeds. In accordance with ITU's guideline of using Mean Opinion Scores (MOS), the effects of packet drops were analyzed. Excel and Matlab were used as tools in analyzing people's opinions, which indicate the impact that different loss rates have on the transmitted videos. The statistical methods used for evaluation of the data are the mean and variance. We conclude that people converge in their opinions when losses become extremely high on videos with highly variable scene changes.

  • 162.
    Adegoke, Adekunle
    et al.
    Linnaeus University, Faculty of Technology, Department of Computer Science.
    Osimosu, Emmanuel
    Linnaeus University, Faculty of Technology, Department of Computer Science.
    Service Availability in Cloud Computing: Threats and Best Practices (2013). Independent thesis Basic level (degree of Bachelor), 10 credits / 15 HE credits. Student thesis
    Abstract [en]

    Cloud computing provides access to on-demand computing resources and storage space, whereby applications and data are hosted in data centers managed by third parties, on a pay-per-use price model. This allows organizations to focus on core business goals instead of managing in-house IT infrastructure.

    However, as more business critical applications and data are moved to the cloud, service availability is becoming a growing concern. A number of recent cloud service disruptions have questioned the reliability of cloud environments to host business critical applications and data. The impact of these disruptions varies, but, in most cases, there are financial losses and damaged reputation among consumers.        

    This thesis aims to investigate the threats to service availability in cloud computing and to provide some best practices to mitigate some of these threats. As a result, we identified eight categories of threats. They include, in no particular order: power outage, hardware failure, cyber-attack, configuration error, software bug, human error, administrative or legal dispute and network dependency. A number of systematic mitigation techniques to ensure constant availability of service by cloud providers were identified. In addition, practices that can be applied by cloud customers and users of cloud services, to improve service availability were presented.

  • 163.
    Adeyinka, Oluwaseyi
    Blekinge Institute of Technology, School of Engineering, Department of Interaction and System Design.
    Service Oriented Architecture & Web Services: Guidelines for Migrating from Legacy Systems and Financial Consideration (2008). Independent thesis Advanced level (degree of Master (One Year)). Student thesis
    Abstract [en]

    The purpose of this study is to present guidelines that can be followed when introducing Service-oriented architecture through the use of Web services. These guidelines will be especially useful for organizations migrating from their existing legacy systems, where the need also arises to consider the financial implications of such an investment and whether it is worthwhile. The proposed implementation guide aims at increasing the chances of IT departments in organizations to ensure a successful integration of SOA into their systems and to secure strong financial commitment from executive management. Service-oriented architecture is a new concept, a new way of looking at a system, which has emerged in the IT world and can be implemented by several methods, of which Web services is one platform. Since it is a developing technology, organizations need to be cautious about how they implement this technology to obtain maximum benefits. Though a well-designed, service-oriented environment can simplify and streamline many aspects of information technology and business, achieving this state is not an easy task. Traditionally, management finds it very difficult to justify the considerable cost of modernization, let alone shouldering the risk without achieving some benefits in terms of business value. The study identifies some common best practices for implementing SOA and the use of Web services, and steps to successfully migrate from legacy systems to componentized or service-enabled systems. The study also identifies how to present the financial return on investment and business benefits to management in order to secure the necessary funds. This master thesis is based on an academic literature study, professional research journals and publications, and interviews with business organizations currently working on service-oriented architecture. I present guidelines that can be of assistance in migrating from legacy systems to service-oriented architecture, based on the analysis from comparing the information sources mentioned above.

  • 164.
    Adikari, Jithra
    KTH, School of Information and Communication Technology (ICT), Computer and Systems Sciences, DSV.
    Efficient non-repudiation for techno-information environment (2006). In: 2006 International Conference on Industrial and Information Systems, Vols 1 and 2, New York: IEEE, 2006, pp. 454-458. Conference paper (Refereed)
    Abstract [en]

    Non-repudiation means that a person is unable to deny a certain action that he has done, under any circumstances. There are several mechanisms, standards and protocols to achieve non-repudiation in a techno-information environment. However, efficiency of non-repudiation within a legal framework was not considerably addressed in the context of those mechanisms. Lack of efficient non-repudiation in the legal framework for a techno-information environment raises legal issues when evidence is generated and maintained. It can be derived that the traditional non-repudiation mechanism delivers efficient non-repudiation. Efficient non-repudiation in a techno-information environment is achieved by mapping traditional non-repudiation. The evaluation methodology for the efficiency of non-repudiation mechanisms has been improved during this work. Furthermore, the most significant finding of this research is the Efficient Non-Repudiation Protocol.

  • 165.
    Adlerborn, Björn
    Umeå University, Faculty of Science and Technology, Department of Computing Science. Umeå University, Faculty of Science and Technology, High Performance Computing Center North (HPC2N).
    Parallel Algorithms and Library Software for the Generalized Eigenvalue Problem on Distributed Memory Computer Systems (2016). Licentiate thesis, comprehensive summary (Other academic)
    Abstract [en]

    We present and discuss algorithms and library software for solving the generalized non-symmetric eigenvalue problem (GNEP) on high performance computing (HPC) platforms with distributed memory. Such problems occur frequently in computational science and engineering, and our contributions make it possible to solve GNEPs fast and accurately in parallel using state-of-the-art HPC systems. A generalized eigenvalue problem corresponds to finding scalars y and vectors x such that Ax = yBx, where A and B are real square matrices. A nonzero x that satisfies the GNEP equation is called an eigenvector of the ordered pair (A,B), and the scalar y is the associated eigenvalue. Our contributions include parallel algorithms for transforming a matrix pair (A,B) to a generalized Schur form (S,T), where S is quasi upper triangular and T is upper triangular. The eigenvalues are revealed from the diagonals of S and T. Moreover, for a specified set of eigenvalues an associated pair of deflating subspaces can be computed, which is typically requested in various applications. In the first stage the matrix pair (A,B) is reduced to a Hessenberg-triangular form (H,T), where H is upper triangular with one nonzero subdiagonal and T is upper triangular, in a finite number of steps. The second stage reduces the matrix pair further to generalized Schur form (S,T) using an iterative QZ-based method. Starting from a one-stage method for the reduction from (A,B) to (H,T), a novel parallel algorithm is developed. In brief, a delayed update technique is applied to several partial steps, involving low level operations, before the associated accumulated transformations are applied in a blocked fashion, which together with a wave-front task scheduler makes the algorithm scale when running in a parallel setting. The potential presence of infinite eigenvalues makes a generalized eigenvalue problem ill-conditioned. Therefore the parallel algorithm for the second stage, reduction to (S,T) form, continuously scans for and robustly deflates infinite eigenvalues. This reduces their impact so that they do not interfere with other real eigenvalues or become misinterpreted as real eigenvalues. In addition, our parallel iterative QZ-based algorithm makes use of multiple implicit shifts and an aggressive early deflation (AED) technique, which radically speeds up the convergence. The multi-shift strategy is based on independent chains of so-called coupled bulges and computational windows, which is an important source of the algorithm's scalability. The parallel algorithms have been implemented in state-of-the-art library software. The performance is demonstrated and evaluated using up to 1600 CPU cores for problems with matrices as large as 100000 x 100000. Our library software is described in a User Guide. The software is, optionally, tunable via a set of parameters for various thresholds and buffer sizes etc. These parameters are discussed, and recommended values are specified which should result in reasonable performance on HPC systems similar to the ones we have been running on.
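
    For small dense problems the same reduction is available in standard libraries; below is a minimal sketch (not the thesis software, which targets distributed memory HPC systems) using SciPy's QZ routine, with a symmetric/positive-definite test pair chosen so that all eigenvalues are real and every diagonal block of S is 1x1:

    ```python
    import numpy as np
    from scipy.linalg import qz, eig

    rng = np.random.default_rng(1)
    A = rng.standard_normal((5, 5)); A = A + A.T                   # symmetric => real eigenvalues
    B = rng.standard_normal((5, 5)); B = B @ B.T + 5 * np.eye(5)   # SPD => regular pair

    # Generalized Schur form of the pair: A = Q @ S @ Z.T, B = Q @ T @ Z.T,
    # with S (quasi) upper triangular and T upper triangular.
    S, T, Q, Z = qz(A, B, output="real")

    # With only real eigenvalues, lambda_i = S[i, i] / T[i, i];
    # an infinite eigenvalue would show up as T[i, i] = 0.
    lam_qz = np.sort(np.diag(S) / np.diag(T))
    lam_direct = np.sort(eig(A, B, right=False).real)   # cross-check via direct solver
    print(np.allclose(lam_qz, lam_direct))              # True: both routes agree
    ```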

  • 166.
    Adlerborn, Björn
    et al.
    Umeå University, Faculty of Science and Technology, Department of Computing Science. Umeå University, Faculty of Science and Technology, High Performance Computing Center North (HPC2N).
    Kågström, Bo
    Umeå University, Faculty of Science and Technology, Department of Computing Science. Umeå University, Faculty of Science and Technology, High Performance Computing Center North (HPC2N).
    Kressner, Daniel
    A parallel QZ algorithm for distributed memory HPC systems (2014). In: SIAM Journal on Scientific Computing, ISSN 1064-8275, E-ISSN 1095-7197, Vol. 36, no. 5, pp. C480-C503. Article in journal (Refereed)
    Abstract [en]

    Appearing frequently in applications, generalized eigenvalue problems represent one of the core problems in numerical linear algebra. The QZ algorithm of Moler and Stewart is the most widely used algorithm for addressing such problems. Despite its importance, little attention has been paid to the parallelization of the QZ algorithm. The purpose of this work is to fill this gap. We propose a parallelization of the QZ algorithm that incorporates all modern ingredients of dense eigensolvers, such as multishift and aggressive early deflation techniques. To deal with (possibly many) infinite eigenvalues, a new parallel deflation strategy is developed. Numerical experiments for several random and application examples demonstrate the effectiveness of our algorithm on two different distributed memory HPC systems.

  • 167.
    Adolfsson, John
    University of Skövde, School of Humanities and Informatics.
    Pattern Parameterization with Granules in Ship Movements: Describing identifying aspects of movement patterns with varying levels of granularity (2010). Independent thesis Basic level (degree of Bachelor), 20 credits / 30 HE credits. Student thesis
    Abstract [en]

    This report aims to explore a possible transparent alternative to the black-box approach of machine learning in identifying a ship's type from simple movement data, consisting of a set of coordinates with timestamps. This is achieved by an application that converts the set of coordinates to vectors and assigns them various traits, such as turn radius, speed and distance traveled, and then identifies the correlation between collections of different values of these traits, called granules, and different ship types. The results show a definite connection between certain kinds of granules and certain ship types, and lay the foundation for building a more well-defined syntax for ship identification.
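
    A minimal sketch of the trait-extraction step described above (the track data and function names are hypothetical, and the report's actual granule definitions may differ), deriving per-segment distance, speed and turn from timestamped coordinates:

    ```python
    import math

    def movement_traits(track):
        """Derive simple per-segment traits (distance, speed, heading change)
        from a list of (t, x, y) samples; granules would then be formed by
        binning these trait values at chosen levels of granularity."""
        traits = []
        prev_heading = None
        for (t0, x0, y0), (t1, x1, y1) in zip(track, track[1:]):
            dx, dy = x1 - x0, y1 - y0
            dist = math.hypot(dx, dy)
            speed = dist / (t1 - t0)
            heading = math.atan2(dy, dx)
            # Signed heading change, wrapped into [-pi, pi)
            turn = 0.0 if prev_heading is None else \
                (heading - prev_heading + math.pi) % (2 * math.pi) - math.pi
            prev_heading = heading
            traits.append({"distance": dist, "speed": speed, "turn": turn})
        return traits

    # Hypothetical track: (timestamp in s, x in m, y in m)
    track = [(0, 0, 0), (60, 300, 0), (120, 600, 50), (180, 850, 200)]
    for segment in movement_traits(track):
        print(segment)
    ```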

  • 168.
    Adolfsson, Victor
    Blekinge Institute of Technology, Department of Business Administration and Social Science.
    Säkerhetskapital: En del av det Intellektuella Kapitalet [Security Capital: A Part of the Intellectual Capital] (2002). Independent thesis Basic level (degree of Bachelor). Student thesis
    Abstract [sv]

    Methods for measuring information security within companies are lacking, and company assets have shifted from a focus on machines and raw materials to knowledge (intellectual capital). The report explores whether there are parts of a company's intellectual capital that protect the company's assets and processes. This capital is called security capital. How could a company's information security be made visible through its intellectual capital, and how can concepts from information security and company valuation be connected? The purpose of the thesis is to increase the understanding of how information security relates to intellectual capital. The report builds on literature studies of intellectual capital and information security. Data was collected partly from the annual reports of listed companies and partly from press releases and stock market information. This information was then analyzed both quantitatively and qualitatively, and the concept of security capital emerged. Theories on company valuation, intellectual capital, risk management and information security are presented and form the frame of reference in which the concept of security capital is placed in context. The concept of security capital is presented in the form of models and situations in which different perspectives on security capital are analyzed and evaluated. The conclusions are mainly in the form of models and descriptions of how security capital can be viewed in relation to intellectual capital and other concepts. The area is complex, but parts of the results (which are at a high level of abstraction) can be used to value other types of intangible assets.

  • 169.
    Adolfsson, Victor
    Blekinge Institute of Technology, Department of Software Engineering and Computer Science.
    The State of the Art in Distributed Mobile Robotics (2001). Independent thesis Advanced level (degree of Master (One Year)). Student thesis
    Abstract [en]

    Distributed Mobile Robotics (DMR) is a multidisciplinary research area with many open research questions. This is a survey of the state of the art in Distributed Mobile Robotics research. DMR is sometimes referred to as cooperative robotics or multi-robotic systems. DMR is about how multiple robots can cooperate to achieve goals and complete tasks better than single robot systems. It covers architectures, communication, learning, exploration and many other areas presented in this master thesis.

  • 170.
    Adorf, Julius
    KTH, School of Computer Science and Communication (CSC).
    Motion Segmentation of RGB-D Videos via Trajectory Clustering (2014). Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE credits. Student thesis
    Abstract [en]

    Motion segmentation of RGB-D videos can be a first step towards object reconstruction in dynamic scenes. The objective in this thesis is to find an efficient motion segmentation method that can deal with a moving camera. To this end, we adopt a feature-based approach where keypoints in the images are tracked over time. The variation in the observed pairwise 3-d distances is used to determine which of the points move similarly. We then employ spectral clustering to group trajectories into clusters with similar motion, thereby obtaining a sparse segmentation of the dynamic objects in the scene. The results on twenty scenes from real-world datasets and simulations show that while the method needs more sophistication to segment all of them, several dynamic scenes have been successfully segmented at a processing speed of multiple frames per second.
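
    A small sketch of the core idea (not the thesis implementation; the data and parameters are hypothetical): points on the same rigid object keep near-constant pairwise 3-d distances, so the variance of those distances over time can serve as an affinity for spectral clustering:

    ```python
    import numpy as np
    from sklearn.cluster import SpectralClustering

    def segment_trajectories(tracks, n_motions=2, sigma=0.05):
        """tracks: array (N, F, 3) of N keypoint trajectories over F frames.
        Low variance of the pairwise distance over time signals similar motion."""
        n = len(tracks)
        affinity = np.zeros((n, n))
        for i in range(n):
            for j in range(n):
                d = np.linalg.norm(tracks[i] - tracks[j], axis=1)  # distance per frame
                affinity[i, j] = np.exp(-d.var() / sigma**2)
        return SpectralClustering(n_clusters=n_motions, affinity="precomputed",
                                  random_state=0).fit_predict(affinity)

    # Two hypothetical rigid groups: one static, one translating along x.
    np.random.seed(0)
    F = 20
    t = np.linspace(0, 1, F)[:, None]
    static = np.stack([np.tile(p, (F, 1)) for p in np.random.rand(5, 3)])
    moving = np.stack([np.tile(p, (F, 1)) + t * [1.0, 0.0, 0.0]
                       for p in np.random.rand(5, 3)])
    print(segment_trajectories(np.concatenate([static, moving]), n_motions=2))
    ```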

  • 171. Adrian, K.
    et al.
    Chocron, P.
    Confalonieri, R.
    Ferrer, X.
    Giraldez-Cru, J.
    KTH, School of Computer Science and Communication (CSC), Theoretical Computer Science, TCS.
    Link prediction in evolutionary graphs: the case study of the CCIA network (2016). In: 19th International Conference of the Catalan Association for Artificial Intelligence, CCIA 2016, IOS Press, 2016, pp. 187-196. Conference paper (Refereed)
    Abstract [en]

    Studying the prediction of new links in evolutionary networks is a captivating question that has received the interest of different disciplines. Link prediction allows extracting missing information and evaluating network dynamics. Some algorithms that tackle this problem with good performance are based on the sociability index, a measure of node interactions over time. In this paper, we present a case study of this predictor in the evolutionary graph that represents the CCIA co-authorship network from 2005 to 2015. Moreover, we present a generalized version of this sociability index that takes into account the time at which such interactions occur. We show that this new index outperforms existing predictors. Finally, we use it to predict new co-authorships for CCIA 2016.
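
    A toy sketch of a sociability-style predictor (the edge data is hypothetical, and the paper's exact index and time weighting may differ): candidate pairs are ranked by the product of the endpoints' time-decayed interaction counts:

    ```python
    from collections import defaultdict

    # Hypothetical co-authorship history: (year, author_a, author_b) edges.
    edges = [(2005, "A", "B"), (2007, "A", "C"), (2007, "B", "C"),
             (2012, "A", "D"), (2014, "A", "B"), (2015, "C", "D")]

    def sociability(edges, current_year, decay=None):
        """Per-node interaction count, optionally weighted so that recent
        interactions count more (one reading of the time-aware generalization)."""
        score = defaultdict(float)
        for year, a, b in edges:
            w = 1.0 if decay is None else decay ** (current_year - year)
            score[a] += w
            score[b] += w
        return dict(score)

    def predict_links(edges, current_year, decay=0.9):
        """Rank not-yet-connected pairs by the product of sociability scores."""
        s = sociability(edges, current_year, decay)
        seen = {frozenset((a, b)) for _, a, b in edges}
        pairs = [(u, v) for u in s for v in s
                 if u < v and frozenset((u, v)) not in seen]
        return sorted(pairs, key=lambda p: s[p[0]] * s[p[1]], reverse=True)

    print(predict_links(edges, current_year=2016))   # ranks the unseen pair(s)
    ```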

  • 172.
    Adzemovic, Haris
    et al.
    KTH, School of Computer Science and Communication (CSC).
    Sandor, Alexander
    KTH, School of Computer Science and Communication (CSC).
    Comparison of user and item-based collaborative filtering on sparse data (2017). Independent thesis Basic level (degree of Bachelor), 10 credits / 15 HE credits. Student thesis
    Abstract [en]

    Recommender systems are used extensively today in many areas to help users and consumers with making decisions. Amazon recommends books based on what you have previously viewed and purchased, Netflix presents you with shows and movies you might enjoy based on your interactions with the platform, and Facebook serves personalized ads to every user based on gathered browsing information. These systems are based on shared similarities and there are several ways to develop and model them. This study compares two methods, user and item-based filtering, in k-nearest-neighbours systems. The methods are compared on how much they deviate from the true answer when predicting user ratings of movies based on sparse data. The study showed that neither of the methods could be considered objectively better than the other and that the choice of system should be based on the data set.
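
    A minimal sketch of the user-based variant (not the thesis code; the rating matrix is hypothetical and kept dense, with 0 marking a missing rating, for simplicity): predict a rating from the k most similar users by cosine similarity over co-rated items:

    ```python
    import numpy as np

    def predict_user_based(R, user, item, k=2):
        """Predict R[user, item] from the k most similar users who rated the item."""
        raters = [u for u in range(R.shape[0]) if u != user and R[u, item] > 0]

        def cosine(u, v):
            mask = (R[u] > 0) & (R[v] > 0)      # compare on co-rated items only
            if not mask.any():
                return 0.0
            a, b = R[u, mask], R[v, mask]
            return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

        sims = sorted(((cosine(user, u), u) for u in raters), reverse=True)[:k]
        num = sum(s * R[u, item] for s, u in sims)   # similarity-weighted ratings
        den = sum(abs(s) for s, _ in sims)
        return num / den if den else 0.0

    R = np.array([[5, 3, 0, 1],     # hypothetical user x movie ratings, 0 = missing
                  [4, 0, 0, 1],
                  [1, 1, 0, 5],
                  [1, 0, 5, 4]])
    print(predict_user_based(R, user=1, item=1))   # estimate user 1's rating of item 1
    ```

    The item-based variant is symmetric: compute similarities between item columns instead of user rows, then average the user's own ratings of the most similar items.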

  • 173.
    Aerts, Arend
    et al.
    Eindhoven University of Technology, Eindhoven, The Netherlands.
    Reniers, Michel A.
    Eindhoven University of Technology, Eindhoven, The Netherlands.
    Mousavi, Mohammad Reza
    Halmstad University, School of Information Technology, Halmstad Embedded and Intelligent Systems Research (EIS), Centre for Research on Embedded Systems (CERES).
    Model-Based Testing of Cyber-Physical Systems (2016). In: Cyber-Physical Systems: Foundations, Principles and Applications / [ed] H. Song, D.B. Rawat, S. Jeschke, and Ch. Brecher, Saint Louis: Elsevier, 2016, pp. 287-304. Chapter in book (Refereed)
    Abstract [en]

    Cyber-physical systems (CPSs) are the result of the integration of connected computer systems with the physical world. They feature complex interactions that go beyond traditional communication schemes and protocols in computer systems. One distinguishing feature of such complex interactions is the tight coupling between discrete and continuous interactions, captured by hybrid system models.

    Due to the complexity of CPSs, providing rigorous and model-based analysis methods and tools for verifying correctness of such systems is of the utmost importance. Model-based testing (MBT) is one such verification technique that can be used for checking the conformance of an implementation of a system to its specification (model).

    In this chapter, we first review the main concepts and techniques in MBT. Subsequently, we review the most common modeling formalisms for CPSs, with a focus on hybrid system models. Finally, we provide a brief overview of conformance relations and conformance testing techniques for CPSs. © 2017 Elsevier Inc. All rights reserved.

  • 174.
    af Sandeberg, Jonas
    KTH, School of Computer Science and Communication (CSC).
    Riksdagsval via Internet – Ett system för säkra val via Internet i Sverige [Parliamentary Elections via the Internet – A System for Secure Internet Elections in Sweden] (2012). Independent thesis Advanced level (professional degree), 10 credits / 15 HE credits. Student thesis
    Abstract [en]

    In this essay, a system for voting via the Internet in Sweden is designed. To do this, the current Swedish election system is examined. Research is also done on what technologies can be used to build such a system. Lastly, systems already used for Internet voting in other countries are examined. Based on the results of the research, a system for voting via the Internet in Sweden is designed. The system is designed to follow all safety regulations demanded by a democratic election. The essay shows that it is possible to design a system for voting via the Internet in Sweden, and also that such a system would likely increase the turnout in elections.

  • 175.
    Afrim, Cerimi
    et al.
    Halmstad University, School of Information Science, Computer and Electrical Engineering (IDE).
    Norén, Joakim
    Halmstad University, School of Information Science, Computer and Electrical Engineering (IDE).
    Motåtgärder vid IT-forensisk liveanalys [Countermeasures Against Live Forensic Analysis] (2011). Independent thesis Basic level (degree of Bachelor), 10 credits / 15 HE credits. Student thesis
    Abstract [en]

    Live analysis is a concept that in this paper means analyzing a computer system while it is running. This can be done for several reasons, such as when there is a risk that the system uses encryption that is activated when the system shuts down. It is also common when one wants to examine network connections, active processes or other phenomena that can be volatile, i.e. disappear when the system shuts down. This work focuses on countermeasures to live forensic analysis and describes different methods and strategies that can be used as such countermeasures. For example, we wrote a program that automatically shuts down the system when a USB memory stick or any other medium is inserted; these are usually the media on which forensic programs are stored when a live analysis is performed. Other important elements of the work are the use of encryption, timestamps and malicious code for challenging live analysis. Our analysis of the topic shows that it is relatively easy to prevent a live analysis from being performed in a reliable way.

  • 176.
    Afroze, Tonima
    et al.
    KTH, School of Technology and Health (STH).
    Rosén Gardell, Moa
    KTH, School of Technology and Health (STH).
    Algorithm Construction for Efficient Scheduling of Advanced Health Care at Home (2015). Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE credits. Student thesis
    Abstract [en]

    Providing advanced health care at home rather than in a hospital creates a greater quality of life for patients and their families. It also lowers the risk of hospital-acquired infections and accelerates recovery. The overall cost of care per patient is decreased. Manual scheduling of patient visits by health care professionals (HCPs) has become a bottleneck for increased patient capacity at SABH, a ward providing advanced pediatric health care at home ("Sjukhusansluten Avancerad Barnsjukvård i Hemmet" in Swedish), since many parameters need to be taken into account during scheduling. This thesis aims to increase the efficiency of SABH's daily scheduling of personnel and resources by designing an automated scheduler that constructs a daily schedule and incorporates changes in it when needed, in order to remove scheduling as a limitation for increased patient capacity. Requirements on a feasible schedule are identified in cooperation with SABH, and the literature on similar areas where the scheduling process has been automated is investigated. The scheduling is formulated as a computational problem and investigated from the perspective of theoretical computer science. We show that the scheduling problem is NP-hard and can therefore not be expected to be solved optimally. The algorithm for scheduling the visits minimizes violations of time windows and travel times, and maximizes person continuity and workload balancing. The algorithm constructs an initial solution that fulfills time constraints using a greedy approach and then uses local search, simulated annealing, and tabu search to iteratively improve the solution. We present an exact rescheduling algorithm that incorporates additional visits after the original schedule has been set. The scheduling algorithm was implemented and tested on real data from SABH. Although we found the algorithm to be efficient, automatic transfer of data from the patient journal system is imperative for the scheduler to be adopted.
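
    A generic sketch of the improvement loop described above (not the thesis implementation; the cost and neighbourhood here are reduced to pure workload balancing on a toy instance, whereas the thesis also penalizes time-window violations and travel time):

    ```python
    import math, random

    def anneal(schedule, cost, neighbor, t0=10.0, cooling=0.999, iters=20000):
        """Generic simulated-annealing loop layered on top of an initial solution."""
        best = cur = schedule
        best_c = cur_c = cost(cur)
        t = t0
        for _ in range(iters):
            cand = neighbor(cur)
            c = cost(cand)
            # Accept improvements always; accept worsenings with a probability
            # that shrinks as the temperature cools.
            if c < cur_c or random.random() < math.exp((cur_c - c) / t):
                cur, cur_c = cand, c
                if c < best_c:
                    best, best_c = cand, c
            t *= cooling
        return best, best_c

    # Toy instance: balance ten visit durations (minutes) over three HCPs.
    random.seed(0)
    visits = [30, 60, 45, 20, 90, 15, 40, 60, 25, 50]

    def cost(assign):
        loads = [sum(d for d, a in zip(visits, assign) if a == h) for h in range(3)]
        return max(loads) - min(loads)          # workload imbalance only

    def neighbor(assign):
        new = list(assign)
        new[random.randrange(len(new))] = random.randrange(3)  # move one visit
        return new

    print(anneal([i % 3 for i in range(len(visits))], cost, neighbor))
    ```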

  • 177.
    Afshar, Sara
    Mälardalen University, School of Innovation, Design and Engineering, Embedded Systems.
    Lock-Based Resource Sharing in Real-Time Multiprocessor Platforms (2014). Licentiate thesis, comprehensive summary (Other academic)
    Abstract [en]

    Embedded systems are typically resource constrained, i.e., resources such as processors, I/O devices, shared buffers or shared memory can be limited for tasks in the system. Therefore, techniques that enable an efficient usage of such resources are of great importance.

    In industry, large and complex software systems are typically divided into smaller parts (applications), where each part is developed independently. Migration towards multiprocessor platforms has become inevitable from an industrial perspective. Due to such migration, and to use system resources efficiently, these applications may eventually be integrated on a shared multiprocessor platform. In order to facilitate the integration phase of the applications on a shared platform, the timing and resource requirements of each application can be provided in an interface when the application is developed. The system integrator can benefit from the information provided in the interface of each application to ease the integration process. In this thesis, we provide the resource and timing requirements in the interface of each application, for applications that may need to be allocated on several processors.

    Although many scheduling techniques have been studied for multiprocessor systems, these techniques are usually based on the assumption that tasks are independent, i.e. do not share resources other than the processors. This assumption is typically not true. In this thesis, we provide an extension to such systems to handle sharing of resources other than the processors among tasks. Two traditional approaches exist for scheduling tasks on the processors of multiprocessor systems. A recent scheduling approach combines the two traditional approaches into a hybrid that is more efficient than the two previous ones. Due to the complex nature of this scheduling approach, the conventional approaches for resource sharing could not be used straightforwardly. In this thesis, we have modified resource sharing approaches such that they can be used in such hybrid scheduling systems. A second concern is that enabling resource sharing in these systems can cause unpredictable delays and variations in the response time of tasks, which can degrade system performance. Therefore, it is of great significance to improve resource handling techniques to reduce the effect of the delays imposed by resource sharing in a multiprocessor platform. In this thesis, we propose alternative techniques for resource handling that can improve system performance for special setups.

  • 178.
    Afshar, Sara
    et al.
    Mälardalen University, School of Innovation, Design and Engineering, Embedded Systems.
    Behnam, Moris
    Mälardalen University, School of Innovation, Design and Engineering, Embedded Systems.
    J. Bril, Reinder
    Mälardalen University, School of Innovation, Design and Engineering, Embedded Systems.
    Nolte, Thomas
    Mälardalen University, School of Innovation, Design and Engineering, Embedded Systems.
    Per Processor Spin-Lock Priority for Partitioned Multiprocessor Real-Time Systems. In: Leibniz Transactions on Embedded Systems, ISSN 2199-2002. Article in journal (Other academic)
    Abstract [en]

    Two traditional approaches exist for a task that is blocked on a global resource: the task either performs a non-preemptive busy wait, i.e., spins, or suspends and releases the processor. Previously, we have shown that both approaches can be viewed as spinning either at the highest priority (HP) or at the lowest priority (LP) on the processor, respectively. Based on this view, we have previously generalized a task's blocking behavioral model to spinning at any arbitrary priority level. In this paper, we focus on a particular class of spin-lock protocols from the introduced flexible spin-lock model, where spinning is performed at a priority equal to or higher than the highest local ceiling of the global resources accessed on a processor, referred to as the CP spin-lock approach. We assume that all tasks of a specific processor spin at the same priority level. Given this class and assumption, we show that there exists a spin-lock protocol in this range that dominates the classic spin-lock protocol in which tasks spin at the highest priority level (HP). However, we show that this new approach is incomparable with the CP spin-lock approach. Moreover, we show that there may exist an intermediate spin-lock approach, between the priority used by the CP spin-lock approach and the newly introduced approach, that can make a task set schedulable when those two cannot. We provide extensive evaluation results comparing the HP, CP and newly proposed approaches.

  • 179.
    Afshar, Sara
    et al.
    Mälardalen University, School of Innovation, Design and Engineering, Embedded Systems.
    Behnam, Moris
    Mälardalen University, School of Innovation, Design and Engineering, Embedded Systems.
    J. Bril, Reinder
    Mälardalen University, School of Innovation, Design and Engineering, Embedded Systems.
    Nolte, Thomas
    Mälardalen University, School of Innovation, Design and Engineering, Embedded Systems.
    Per Processor Spin-Lock Priority for Partitioned Multiprocessor Real-Time Systems (2014). Report (Other academic)
    Abstract [en]

    Two traditional approaches exist for a task that is blocked on a global resource: the task either performs a non-preemptive busy wait, i.e., spins, or suspends and releases the processor. Previously, we have shown that both approaches can be viewed as spinning either at the highest priority (HP) or at the lowest priority (LP) on the processor, respectively. Based on this view, we have previously generalized a task's blocking behavioral model to spinning at any arbitrary priority level. In this paper, we focus on a particular class of spin-lock protocols from the introduced flexible spin-lock model, where spinning is performed at a priority equal to or higher than the highest local ceiling of the global resources accessed on a processor, referred to as the CP spin-lock approach. We assume that all tasks of a specific processor spin at the same priority level. Given this class and assumption, we show that there exists a spin-lock protocol in this range that dominates the classic spin-lock protocol in which tasks spin at the highest priority level (HP). However, we show that this new approach is incomparable with the CP spin-lock approach. Moreover, we show that there may exist an intermediate spin-lock approach, between the priority used by the CP spin-lock approach and the newly introduced approach, that can make a task set schedulable when those two cannot. We provide extensive evaluation results comparing the HP, CP and newly proposed approaches.

  • 180.
    Aftab, Obaid
    et al.
    Uppsala University, Disciplinary Domain of Medicine and Pharmacy, Faculty of Medicine, Department of Medical Sciences, Cancer Pharmacology and Computational Medicine.
    Fryknäs, Mårten
    Uppsala University, Disciplinary Domain of Medicine and Pharmacy, Faculty of Medicine, Department of Medical Sciences, Cancer Pharmacology and Computational Medicine.
    Hassan, Saadia
    Uppsala University, Disciplinary Domain of Medicine and Pharmacy, Faculty of Medicine, Department of Medical Sciences, Cancer Pharmacology and Computational Medicine.
    Nygren, Peter
    Uppsala University, Disciplinary Domain of Medicine and Pharmacy, Faculty of Medicine, Department of Radiology, Oncology and Radiation Science, Oncology.
    Larsson, Rolf
    Uppsala University, Disciplinary Domain of Medicine and Pharmacy, Faculty of Medicine, Department of Medical Sciences, Cancer Pharmacology and Computational Medicine.
    Hammerling, Ulf
    Uppsala University, Disciplinary Domain of Medicine and Pharmacy, Faculty of Medicine, Department of Medical Sciences, Cancer Pharmacology and Computational Medicine.
    Gustafsson, Mats
    Uppsala University, Disciplinary Domain of Medicine and Pharmacy, Faculty of Medicine, Department of Medical Sciences, Cancer Pharmacology and Computational Medicine.
    Label free quantification of time evolving morphologies using time-lapse video microscopy enables identity control of cell lines and discovery of chemically induced differential activity in iso-genic cell line pairs (2015). In: Chemometrics and Intelligent Laboratory Systems, ISSN 0169-7439, E-ISSN 1873-3239, Vol. 141, pp. 24-32. Article in journal (Refereed)
    Abstract [en]

    Label free time-lapse video microscopy based monitoring of time evolving cell population morphology has potential to offer a simple and cost-effective method for identity control of cell lines. Such morphology monitoring also has potential to offer discovery of chemically induced differential changes between pairs of cell lines of interest, for example where one in a pair of cell lines is normal/sensitive and the other malignant/resistant. A new simple algorithm, pixel histogram hierarchy comparison (PHHC), for comparison of time evolving morphologies (TEM) in phase contrast time-lapse microscopy movies was applied to a set of 10 different cell lines and three different iso-genic colon cancer cell line pairs, each pair being genetically identical except for a single mutation. PHHC quantifies differences in morphology by comparing pixel histogram intensities at six different resolutions. Unsupervised clustering and machine learning-based classification methods were found to accurately identify cell lines, including their respective iso-genic variants, through time-evolving morphology. Using this experimental setting, drugs with differential activity in iso-genic cell line pairs were likewise identified. Thus, this is a cost-effective and expedient alternative to conventional molecular profiling techniques and might be useful as part of the quality control in research incorporating cell line models, e.g. in any cell/tumor biology or toxicology project involving drug/agent differential activity in pairs of cell line models.
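
    A rough sketch of a pixel-histogram-hierarchy style comparison (the frames are hypothetical, and the paper's exact resolutions, binning and weighting are not reproduced here): intensity histograms of two frames are compared at several spatial resolutions and the per-level L1 differences are summed:

    ```python
    import numpy as np

    def phhc_distance(img_a, img_b, levels=6):
        """Compare intensity histograms at `levels` spatial resolutions and
        sum the per-level L1 differences (a simplified PHHC-like measure)."""
        total = 0.0
        for level in range(levels):
            f = 2 ** level                      # downsampling factor per level
            ha, _ = np.histogram(img_a[::f, ::f], bins=32, range=(0, 255), density=True)
            hb, _ = np.histogram(img_b[::f, ::f], bins=32, range=(0, 255), density=True)
            total += np.abs(ha - hb).sum()
        return total

    rng = np.random.default_rng(0)
    frame1 = rng.integers(0, 256, (256, 256))   # hypothetical phase-contrast frames
    frame2 = np.clip(frame1 + rng.integers(-10, 10, frame1.shape), 0, 255)
    print(phhc_distance(frame1, frame2))        # small value: morphologies are close
    ```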

  • 181.
    Aftarczuk, Kamila
    Blekinge Institute of Technology, School of Engineering, Department of Systems and Software Engineering.
    Evaluation of selected data mining algorithms implemented in Medical Decision Support Systems (2007). Independent thesis Advanced level (degree of Master (One Year)). Student thesis
    Abstract [en]

    The goal of this master's thesis is to identify and evaluate data mining algorithms which are commonly implemented in modern Medical Decision Support Systems (MDSS). They are used in various healthcare units all over the world. These institutions store large amounts of medical data. This data may contain relevant medical information hidden in various patterns buried among the records. Within the research, several popular MDSSs are analyzed in order to determine the most common data mining algorithms utilized by them. Three algorithms have been identified: Naïve Bayes, Multilayer Perceptron and C4.5. Prior to the analyses, the algorithms are calibrated: several testing configurations are evaluated in order to determine the best settings for the algorithms. Afterwards, a final comparison of the algorithms orders them with respect to their performance. The evaluation is based on a set of performance metrics. The analyses are conducted in WEKA on five UCI medical datasets: breast cancer, hepatitis, heart disease, dermatology disease, and diabetes. The analyses have shown that it is very difficult to name a single data mining algorithm as the most suitable for medical data. The results gained for the algorithms were very similar. However, the final evaluation of the outcomes allowed singling out Naïve Bayes as the best classifier for the given domain. It was followed by the Multilayer Perceptron and C4.5.
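
    The thesis runs its comparison in WEKA; a rough scikit-learn analogue (GaussianNB for Naïve Bayes, MLPClassifier for the Multilayer Perceptron, and a CART decision tree standing in for C4.5, evaluated on a stand-in dataset rather than the five UCI medical sets) might look like this:

    ```python
    from sklearn.datasets import load_breast_cancer
    from sklearn.model_selection import cross_val_score
    from sklearn.naive_bayes import GaussianNB
    from sklearn.neural_network import MLPClassifier
    from sklearn.tree import DecisionTreeClassifier
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler

    X, y = load_breast_cancer(return_X_y=True)   # stand-in for the UCI medical sets
    models = {
        "Naive Bayes": GaussianNB(),
        "Multilayer Perceptron": make_pipeline(
            StandardScaler(), MLPClassifier(max_iter=1000, random_state=0)),
        "C4.5-like tree (CART)": DecisionTreeClassifier(random_state=0),
    }
    for name, model in models.items():
        scores = cross_val_score(model, X, y, cv=10)   # 10-fold CV accuracy
        print(f"{name}: {scores.mean():.3f} +/- {scores.std():.3f}")
    ```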

  • 182. Afzal, Wasif
    Search-based approaches to software fault prediction and software testing (2009). Licentiate thesis, comprehensive summary (Other academic)
    Abstract [en]

    Software verification and validation activities are essential for software quality but also constitute a large part of software development costs. Therefore, efficient and cost-effective software verification and validation activities are both a priority and a necessity, considering the pressure to decrease time-to-market and the intense competition faced by many, if not all, companies today. It is then perhaps not unexpected that decisions related to software quality, when to stop testing, testing schedules and testing resource allocation need to be as accurate as possible. This thesis investigates the application of search-based techniques within two activities of software verification and validation: software fault prediction and software testing for non-functional system properties. Software fault prediction modeling can provide support for making important decisions as outlined above. In this thesis we empirically evaluate symbolic regression using genetic programming (a search-based technique) as a potential method for software fault prediction. Using data sets from both industrial and open-source software, the strengths and weaknesses of applying symbolic regression in genetic programming are evaluated against competitive techniques. In addition to software fault prediction, this thesis also consolidates available research into predictive modeling of other attributes by applying symbolic regression in genetic programming, thus presenting a broader perspective. As an extension to the application of search-based techniques within software verification and validation, this thesis further investigates the extent of application of search-based techniques for testing non-functional system properties. Based on the research findings in this thesis, it can be concluded that applying symbolic regression in genetic programming may be a viable technique for software fault prediction. We additionally seek literature evidence where other search-based techniques are applied for testing of non-functional system properties, hence contributing towards the growing application of search-based techniques in diverse activities within software verification and validation.

  • 183. Afzal, Wasif
    Search-Based Prediction of Software Quality: Evaluations and Comparisons (2011). Doctoral thesis, comprehensive summary (Other academic)
    Abstract [en]

    Software verification and validation (V&V) activities are critical for achieving software quality; however, these activities also constitute a large part of the costs when developing software. Therefore, efficient and effective software V&V activities are both a priority and a necessity, considering the pressure to decrease time-to-market and the intense competition faced by many, if not all, companies today. It is then perhaps not unexpected that decisions that affect software quality, e.g., how to allocate testing resources, how to develop testing schedules and when to stop testing, need to be as stable and accurate as possible. The objective of this thesis is to investigate how search-based techniques can support decision-making and help control variation in software V&V activities, thereby indirectly improving software quality. Several themes in providing this support are investigated: predicting reliability of future software versions based on fault history; fault prediction to improve test phase efficiency; assignment of resources to fixing faults; and distinguishing fault-prone software modules from non-faulty ones. A common element in these investigations is the use of search-based techniques, often also called metaheuristic techniques, for supporting the V&V decision-making processes. Search-based techniques are promising since, like many real-world problems, software V&V can be formulated as optimization problems where near-optimal solutions are often good enough. Moreover, these techniques are general optimization solutions that can potentially be applied across a larger variety of decision-making situations than other existing alternatives. Apart from presenting the current state of the art, in the form of a systematic literature review, and doing comparative evaluations of a variety of metaheuristic techniques on large-scale projects (both industrial and open-source), this thesis also presents methodological investigations using search-based techniques that are relevant to the task of software quality measurement and prediction. The results of applying search-based techniques in large-scale projects, while investigating a variety of research themes, show that they consistently give competitive results in comparison with existing techniques. Based on the research findings, we conclude that search-based techniques are viable to use in supporting the decision-making processes within software V&V activities. The accuracy and consistency of these techniques make them important tools when developing future decision support for effective management of software V&V activities.

  • 184. Afzal, Wasif
    Using faults-slip-through metric as a predictor of fault-proneness2010Conference paper (Refereed)
    Abstract [en]

    The majority of software faults are present in a small number of modules; accurate prediction of fault-prone modules therefore helps improve software quality by focusing testing efforts on that subset of modules. This paper evaluates the use of the faults-slip-through (FST) metric as a potential predictor of fault-prone modules. Rather than predicting the fault-prone modules for the complete test phase, the prediction is done at the specific test levels of integration and system test. We applied eight classification techniques to the task of identifying fault-prone modules, representing a variety of approaches, including a standard statistical technique for classification (logistic regression), tree-structured classifiers (C4.5 and random forests), a Bayesian technique (Naïve Bayes), machine-learning techniques (support vector machines and back-propagation artificial neural networks) and search-based techniques (genetic programming and artificial immune recognition systems), on FST data collected from two large industrial projects from the telecommunication domain. Results: Using the area under the receiver operating characteristic (ROC) curve and the location of (PF, PD) pairs in the ROC space, GP showed impressive results in comparison with other techniques for predicting fault-prone modules at both integration and system test levels. The use of the faults-slip-through metric in general provided good prediction results at the two test levels. (i) The accuracy of GP is statistically significant in comparison with the majority of the techniques for predicting fault-prone modules at integration and system test levels. (ii) The faults-slip-through metric has the potential to be a generally useful predictor of fault-proneness at integration and system test levels.
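
    For illustration, the evaluation measures named above can be made concrete in a few lines of Python. The sketch below computes a (PF, PD) pair for a thresholded classifier and the area under the ROC curve for a scored one; the module labels and classifier scores are invented for the example and are not data from the paper.

    # (PF, PD) and AUC for a binary fault-proneness classifier (toy data).
    def pf_pd(actual, predicted):
        """PF = FP / (FP + TN), PD = TP / (TP + FN) for 0/1 labels."""
        tp = sum(a == 1 and p == 1 for a, p in zip(actual, predicted))
        fn = sum(a == 1 and p == 0 for a, p in zip(actual, predicted))
        fp = sum(a == 0 and p == 1 for a, p in zip(actual, predicted))
        tn = sum(a == 0 and p == 0 for a, p in zip(actual, predicted))
        return fp / (fp + tn), tp / (tp + fn)

    def auc(actual, scores):
        """AUC as the probability that a random fault-prone module is
        scored above a random non-fault-prone one (rank-sum form)."""
        pos = [s for a, s in zip(actual, scores) if a == 1]
        neg = [s for a, s in zip(actual, scores) if a == 0]
        wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
                   for p in pos for n in neg)
        return wins / (len(pos) * len(neg))

    labels = [1, 1, 0, 0, 1, 0, 0, 1]            # 1 = fault-prone module
    scores = [0.9, 0.7, 0.4, 0.2, 0.6, 0.55, 0.1, 0.8]
    preds = [1 if s >= 0.5 else 0 for s in scores]
    print("(PF, PD) =", pf_pd(labels, preds))
    print("AUC =", auc(labels, scores))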

  • 185. Afzal, Wasif
    et al.
    Torkar, Richard
    Feldt, Robert
    A systematic review of search-based testing for non-functional system properties2009In: Information and Software Technology, ISSN 0950-5849, E-ISSN 1873-6025, Vol. 51, no 6, 957-976 p.Article in journal (Refereed)
    Abstract [en]

    Search-based software testing is the application of metaheuristic search techniques to generate software tests. The test adequacy criterion is transformed into a fitness function, and candidate solutions in the search space are evaluated with respect to this fitness function using a metaheuristic search technique. The application of metaheuristic search techniques for testing is promising because exhaustive testing is infeasible considering the size and complexity of software under test. Search-based software testing has been applied across the spectrum of test case design methods; this includes white-box (structural), black-box (functional) and grey-box (combination of structural and functional) testing. In addition, metaheuristic search techniques have also been applied to test non-functional properties. The overall objective of undertaking this systematic review is to examine existing work on non-functional search-based software testing (NFSBST). We are interested in the types of non-functional testing targeted using metaheuristic search techniques, the different fitness functions used in different types of search-based non-functional testing, and the challenges in the application of these techniques. The systematic review is based on a comprehensive set of 35 articles, obtained after a multi-stage selection process and published in the time span 1996-2007. The results of the review show that metaheuristic search techniques have been applied for non-functional testing of execution time, quality of service, security, usability and safety. A variety of metaheuristic search techniques are found to be applicable for non-functional testing, including simulated annealing, tabu search, genetic algorithms, ant colony methods, grammatical evolution, genetic programming (and its variants, including linear genetic programming) and swarm intelligence methods. The review reports on the different fitness functions used to guide the search for each of the categories of execution time, safety, usability, quality of service and security, along with a discussion of possible challenges in the application of metaheuristic search techniques.
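
    As a hedged illustration of how a test adequacy criterion becomes a fitness function, the Python sketch below targets execution time, one of the non-functional properties covered by the review: the fitness of an input is its measured running time, and a random-restart hill climber searches for slow inputs. The program under test is an invented stand-in, and the metaheuristic is deliberately the simplest one possible.

    import random, time

    def program_under_test(x):
        # Invented stand-in: running time grows with (x * x) % 5000.
        total = 0
        for i in range(1 + (x * x) % 5000):
            total += i
        return total

    def fitness(x):
        """Fitness = measured execution time; higher is fitter here."""
        start = time.perf_counter()
        program_under_test(x)
        return time.perf_counter() - start

    def hill_climb(restarts=5, steps=200, lo=0, hi=10_000):
        best_x, best_f = lo, -1.0
        for _ in range(restarts):
            x = random.randint(lo, hi)
            f = fitness(x)
            for _ in range(steps):
                nx = min(hi, max(lo, x + random.randint(-50, 50)))
                nf = fitness(nx)
                if nf > f:                  # keep the slower input
                    x, f = nx, nf
            if f > best_f:
                best_x, best_f = x, f
        return best_x, best_f

    x, f = hill_climb()
    print(f"slowest input found: {x} ({f * 1e6:.1f} microseconds)")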

  • 186. Afzal, Wasif
    et al.
    Torkar, Richard
    Feldt, Robert
    Search-based prediction of fault count data2009Conference paper (Refereed)
    Abstract [en]

    Symbolic regression, an application domain of genetic programming (GP), aims to find a function whose output has some desired property, like matching target values of a particular data set. While typical regression involves finding the coefficients of a pre-defined function, symbolic regression finds a general function, with coefficients, fitting the given set of data points. The concepts of symbolic regression using genetic programming can be used to evolve a model for fault count predictions. Such a model has the advantages that the evolution is not dependent on a particular structure of the model and is also independent of any assumptions, which are common in traditional time-domain parametric software reliability growth models. This research applies genetic programming experiments to fault count prediction and compares the results with traditional approaches to assess potential efficiency gains.
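
    The following toy Python sketch illustrates the idea of symbolic regression by genetic programming described above: candidate models are expression trees over a time variable t, fitness is the error against a data set, and the population evolves by selection and mutation. Everything here (the representation, the operator set, the mutation-only evolution and the fabricated fault-count data) is a simplification for illustration, not the experimental setup of the paper.

    import operator, random

    OPS = [(operator.add, "+"), (operator.sub, "-"), (operator.mul, "*")]

    def rand_tree(depth=3):
        if depth == 0 or random.random() < 0.3:
            return random.choice(["t", random.uniform(-2.0, 2.0)])
        return (random.choice(OPS), rand_tree(depth - 1), rand_tree(depth - 1))

    def evaluate(tree, t):
        if tree == "t":
            return t
        if isinstance(tree, float):
            return tree
        op, left, right = tree
        return op[0](evaluate(left, t), evaluate(right, t))

    def error(tree, data):
        return sum(abs(evaluate(tree, t) - y) for t, y in data)

    def mutate(tree):
        if not isinstance(tree, tuple) or random.random() < 0.2:
            return rand_tree(2)             # replace a whole subtree
        op, left, right = tree
        if random.random() < 0.5:
            return (op, mutate(left), right)
        return (op, left, mutate(right))

    def evolve(data, pop_size=200, generations=50):
        pop = [rand_tree() for _ in range(pop_size)]
        for _ in range(generations):
            pop.sort(key=lambda ind: error(ind, data))
            elite = pop[: pop_size // 4]    # truncation selection
            pop = elite + [mutate(random.choice(elite))
                           for _ in range(pop_size - len(elite))]
        return min(pop, key=lambda ind: error(ind, data))

    # Weekly cumulative fault counts -- invented numbers, linear on purpose.
    data = [(t, 3 * t + 2) for t in range(1, 10)]
    best = evolve(data)
    print("error of best evolved model:", error(best, data))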

  • 187. Afzal, Wasif
    et al.
    Torkar, Richard
    Feldt, Robert
    Gorschek, Tony
    Genetic programming for cross-release fault count predictions in large and complex software projects2010In: Evolutionary Computation and Optimization Algorithms in Software Engineering: Applications and Techniques / [ed] Chis, Monica, Hershey, USA: IGI Global, 2010Chapter in book (Refereed)
    Abstract [en]

    Software fault prediction can play an important role in ensuring software quality through efficient resource allocation. This could, in turn, reduce the potentially high consequential costs due to faults. Predicting faults might be even more important with the emergence of short-timed and multiple software releases aimed at quick delivery of functionality. Previous research in software fault prediction has indicated that there is a need i) to improve the validity of results by having comparisons among a number of data sets from a variety of software, ii) to use appropriate model evaluation measures and iii) to use statistical testing procedures. Moreover, cross-release prediction of faults has not yet received sufficient attention in the literature. In an attempt to address these concerns, this paper compares the quantitative and qualitative attributes of 7 traditional and machine-learning techniques for modeling the cross-release prediction of fault count data. The comparison is done using extensive data sets gathered from a total of 7 multi-release open-source and industrial software projects. These software projects together have several years of development and are from diverse application areas, ranging from a web browser to robotic controller software. Our quantitative analysis suggests that genetic programming (GP) tends to have better consistency in terms of goodness of fit and accuracy across the majority of data sets. It also has comparatively less model bias. Qualitatively, ease of configuration and complexity are weaker points for GP, even though it shows generality and gives transparent models. Artificial neural networks did not perform as well as expected, while linear regression gave average predictions in terms of goodness of fit and accuracy. Support vector machine regression and traditional software reliability growth models performed below average on most of the quantitative evaluation criteria, while remaining average on most of the qualitative measures.
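
    To make the cross-release protocol concrete, here is a minimal Python sketch under stated assumptions: a model (ordinary least squares here, standing in for the seven techniques compared in the chapter) is fitted on fault counts from one release and judged on the next using the mean magnitude of relative error; both releases' data points are fabricated.

    def fit_linear(data):
        """Least-squares line through (t, y) points."""
        n = len(data)
        sx = sum(t for t, _ in data); sy = sum(y for _, y in data)
        sxx = sum(t * t for t, _ in data); sxy = sum(t * y for t, y in data)
        b = (n * sxy - sx * sy) / (n * sxx - sx * sx)
        a = (sy - b * sx) / n
        return lambda t: a + b * t

    def mmre(model, data):
        """Mean magnitude of relative error of `model` on (t, y) data."""
        return sum(abs(model(t) - y) / y for t, y in data) / len(data)

    release_1 = [(t, 5 * t + 3) for t in range(1, 13)]   # training release
    release_2 = [(t, 6 * t + 1) for t in range(1, 13)]   # next release

    model = fit_linear(release_1)
    print(f"cross-release MMRE: {mmre(model, release_2):.3f}")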

  • 188. Afzal, Wasif
    et al.
    Torkar, Richard
    Feldt, Robert
    Wikstrand, Greger
    Search-based prediction of fault-slip-through in large software projects2010Conference paper (Refereed)
    Abstract [en]

    A large percentage of the cost of rework can be avoided by finding more faults earlier in the software testing process. Therefore, determining which software testing phases to focus improvement work on is of considerable industrial interest. This paper evaluates the use of five different techniques, namely particle swarm optimization based artificial neural networks (PSO-ANN), artificial immune recognition systems (AIRS), gene expression programming (GEP), genetic programming (GP) and multiple regression (MR), for predicting the number of faults slipping through the unit, function, integration and system testing phases. The objective is to quantify the improvement potential in different testing phases by striving towards finding the right faults in the right phase. We have conducted an empirical study of two large projects from a telecommunication company developing mobile platforms and wireless semiconductors. The results are compared using simple residuals, goodness of fit and absolute relative error measures. They indicate that the four search-based techniques (PSO-ANN, AIRS, GEP, GP) perform better than multiple regression for predicting the fault-slip-through for each of the four testing phases. At the unit and function testing phases, AIRS and PSO-ANN performed better, while GP performed better at the integration and system testing phases. The study concludes that a variety of search-based techniques are applicable for predicting the improvement potential in different testing phases, with GP showing more consistent performance across two of the four test phases.
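
    For readers new to the fault-slip-through metric, the small Python example below computes, per phase, the share of faults that slipped past the phase where they could have been found most cheaply; the matrix of counts is hypothetical and only illustrates the bookkeeping behind the metric.

    PHASES = ["unit", "function", "integration", "system"]

    # faults[i][j]: faults cheapest to find in phase i, actually found in
    # phase j (hypothetical counts; j < i cannot occur).
    faults = [
        [40, 10, 6, 4],    # belong to unit test
        [0, 25, 5, 5],     # function test
        [0, 0, 18, 2],     # integration test
        [0, 0, 0, 12],     # system test
    ]

    for i, phase in enumerate(PHASES):
        belonging = sum(faults[i])
        slipped = sum(faults[i][i + 1:])
        print(f"{phase:12s} {slipped}/{belonging} "
              f"({100 * slipped / belonging:.0f}%) slipped through")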

  • 189.
    Afzal, Zeeshan
    Karlstad University, Faculty of Health, Science and Technology (starting 2013), Department of Mathematics and Computer Science.
    Towards Secure Multipath TCP Communication2017Licentiate thesis, comprehensive summary (Other academic)
    Abstract [en]

    The evolution in networking, coupled with an increasing demand to improve the user experience, has led to different proposals to extend the standard TCP. Multipath TCP (MPTCP) is one such extension that has the potential to overcome a few inherent limitations in the standard TCP. While MPTCP's design and deployment progress, most of the focus has been on its compatibility. The security aspect has been confined to making sure that the MPTCP protocol itself offers the same security level as the standard TCP.

    The topic of this thesis is to investigate the unexpected security implications of using MPTCP in the traditional networking environment. The Internet of today has security middle-boxes that perform traffic analysis to detect intrusions and attacks. Such middle-boxes make use of different assumptions about the traffic, e.g., that traffic from a single connection always arrives along the same path. This assumption, along with many others, may no longer hold with the advent of MPTCP, as traffic can be fragmented and sent over multiple paths simultaneously.

    We investigate how practical it is to evade a security middle-box by fragmenting and sending traffic across multiple paths using MPTCP. Realistic attack traffic is used to evaluate such attacks against the Snort IDS and to show that these attacks are feasible. We then go on to propose possible solutions to detect such attacks and implement them in an MPTCP proxy. The proxy aims to extend the MPTCP performance advantages to servers that only support standard TCP, while ensuring that intrusions can be detected as before. Finally, we investigate the potential MPTCP scenario where security middle-boxes only have access to some of the traffic. We propose and implement an algorithm to perform intrusion detection in such situations and achieve nearly 90% detection accuracy. Another contribution of this work is a tool that converts IDS rules into equivalent attack traffic to automate the evaluation of a middle-box.

  • 190.
    Afzal, Zeeshan
    et al.
    Karlstad University, Faculty of Health, Science and Technology (starting 2013), Department of Mathematics and Computer Science.
    Garcia, Johan
    Karlstad University, Faculty of Health, Science and Technology (starting 2013), Department of Mathematics and Computer Science.
    Lindskog, Stefan
    Karlstad University, Faculty of Health, Science and Technology (starting 2013), Department of Mathematics and Computer Science.
    Partial Signature Matching in an MPTCP World using Insert-only Levenshtein DistanceManuscript (preprint) (Other academic)
    Abstract [en]

    This paper proposes a methodology consisting of a constrained version of the Levenshtein distance that can be used to detect signatures from partial traffic. The proposed algorithm is formally presented, implemented, and tested using the latest available version of the Snort ruleset. The results show that the algorithm can successfully detect all partial signatures with nearly 90% accuracy.
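
    The constraint can be pictured with the following Python sketch: only insertions into the observed (partial) byte string are allowed when aligning it against a full signature, so the distance is finite exactly when the observation is a subsequence of the signature. This is one plausible reading of an insert-only Levenshtein distance; the exact formulation and the matching thresholds used in the paper may differ.

    INF = float("inf")

    def insert_only_distance(observed, signature):
        """Minimum number of insertions turning `observed` into
        `signature`, or infinity if that is impossible (i.e. `observed`
        is not a subsequence of `signature`)."""
        m, n = len(observed), len(signature)
        d = [[INF] * (n + 1) for _ in range(m + 1)]
        d[0] = list(range(n + 1))          # insert the whole prefix
        for i in range(1, m + 1):
            for j in range(1, n + 1):
                best = d[i][j - 1] + 1     # insert signature[j-1]
                if observed[i - 1] == signature[j - 1]:
                    best = min(best, d[i - 1][j - 1])
                d[i][j] = best
        return d[m][n]

    sig = b"/etc/passwd"
    print(insert_only_distance(b"/etcpasswd", sig))    # 1: one byte missing
    print(insert_only_distance(b"/etc/shadow", sig))   # inf: not a partial match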

  • 191.
    Afzal, Zeeshan
    et al.
    Karlstad University, Faculty of Health, Science and Technology (starting 2013), Department of Mathematics and Computer Science.
    Lindskog, Stefan
    Karlstad University, Faculty of Health, Science and Technology (starting 2013), Department of Mathematics and Computer Science.
    Automated Testing of IDS Rules2015In: Software Testing, Verification and Validation Workshops (ICSTW), 2015 IEEE Eighth International Conference on, IEEE conference proceedings, 2015Conference paper (Refereed)
    Abstract [en]

    As technology becomes ubiquitous, new vulnerabilities are being discovered at a rapid rate. Security experts continuously find ways to detect attempts to exploit those vulnerabilities. The outcome is an extremely large and complex rule set used by Intrusion Detection Systems (IDSs) to detect and prevent such exploit attempts. The rule sets have become so large that it seems infeasible to verify their precision or to identify overlapping rules. This work proposes a methodology consisting of a set of tools that will make rule management easier.

  • 192.
    Afzal, Zeeshan
    et al.
    Karlstad University, Faculty of Health, Science and Technology (starting 2013), Department of Mathematics and Computer Science.
    Lindskog, Stefan
    Karlstad University, Faculty of Health, Science and Technology (starting 2013), Department of Mathematics and Computer Science.
    IDS rule management made easy2016In: Electronics, Computers and Artificial Intelligence (ECAI), 2016 8th International Conference on, IEEE conference proceedings, 2016Conference paper (Refereed)
    Abstract [en]

    Signature-based intrusion detection systems (IDSs) are commonly utilized in enterprise networks to detect and possibly block a wide variety of attacks. Their application in industrial control systems (ICSs) is also growing rapidly, as modern ICSs increasingly use open standard protocols instead of proprietary ones. Due to an ever-changing threat landscape, the rulesets used by these IDSs have grown large and there is no way to verify their precision or accuracy. Such broad and non-optimized rulesets lead to false positives and an unnecessary burden on the IDS, resulting in possible degradation of security. This work proposes a methodology consisting of a set of tools to help optimize the IDS rulesets and make rule management easier. The work also provides attack traffic data that is expected to benefit the task of IDS assessment.
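
    As a toy illustration of the kind of ruleset check such tools can perform, the Python sketch below flags Snort rules that share both a header and a content match; the rules are invented and the parsing is greatly simplified compared to the real Snort rule grammar.

    import re
    from collections import defaultdict

    rules = [
        'alert tcp any any -> any 80 (msg:"exploit A"; content:"/etc/passwd"; sid:1;)',
        'alert tcp any any -> any 80 (msg:"exploit B"; content:"/etc/passwd"; sid:2;)',
        'alert tcp any any -> any 21 (msg:"ftp probe"; content:"USER root"; sid:3;)',
    ]

    by_content = defaultdict(list)
    for rule in rules:
        header = rule.split("(", 1)[0].strip()          # part before options
        for content in re.findall(r'content:"([^"]*)"', rule):
            by_content[(header, content)].append(rule)

    for (header, content), group in by_content.items():
        if len(group) > 1:
            print(f'{len(group)} rules share header "{header}" '
                  f'and content "{content}"')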

  • 193.
    Afzal, Zeeshan
    et al.
    Karlstad University, Faculty of Health, Science and Technology (starting 2013), Department of Mathematics and Computer Science.
    Lindskog, Stefan
    Karlstad University, Faculty of Economic Sciences, Communication and IT, Department of Computer Science. Karlstad University, Faculty of Health, Science and Technology (starting 2013).
    Multipath TCP IDS Evasion and Mitigation2015In: Information Security: 18th International Conference, ISC 2015, Trondheim, Norway, September 9-11, 2015, Proceedings, Springer, 2015, Vol. 9290, 265-282 p.Conference paper (Refereed)
    Abstract [en]

    The existing network security infrastructure is not ready for future protocols such as Multipath TCP (MPTCP). The outcome is that middleboxes are configured to block such protocols. This paper studies the security risk that arises if future protocols are used over unaware infrastructures. In particular, the practicality and severity of cross-path fragmentation attacks utilizing MPTCP against the signature-matching capability of the Snort intrusion detection system (IDS) is investigated. Results reveal that the attack is realistic and opens the possibility to evade any signature-based IDS. To mitigate the attack, a solution is also proposed in the form of the MPTCP Linker tool. The work outlines the importance of MPTCP support in future network security middleboxes.
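
    The core of the attack can be pictured with a few lines of Python (payloads and offsets invented): each subflow alone never carries the full signature, so per-subflow signature matching fails, while matching on the stream reassembled by data-level sequence number succeeds.

    SIGNATURE = b"/bin/sh"

    # The attacker splits the payload over two paths, tagging each
    # fragment with its offset in the MPTCP data-level sequence space.
    path_a = [(0, b"/bi"), (5, b"s")]
    path_b = [(3, b"n/"), (6, b"h")]

    print("per-subflow hits:",
          any(SIGNATURE in b"".join(f for _, f in path)
              for path in (path_a, path_b)))            # False

    # An MPTCP-aware monitor reassembles by data sequence number first.
    stream = b"".join(frag for _, frag in sorted(path_a + path_b))
    print("reassembled hit:", SIGNATURE in stream)      # True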

  • 194.
    Afzal, Zeeshan
    et al.
    Karlstad University, Faculty of Health, Science and Technology (starting 2013), Department of Mathematics and Computer Science.
    Lindskog, Stefan
    Karlstad University, Faculty of Health, Science and Technology (starting 2013), Department of Mathematics and Computer Science.
    Brunström, Anna
    Karlstad University, Faculty of Economic Sciences, Communication and IT, Department of Computer Science. Karlstad University, Faculty of Economic Sciences, Communication and IT, Centre for HumanIT.
    Lidén, Anders
    Towards Multipath TCP Aware Security Technologies2016In: New Technologies, Mobility and Security (NTMS), 2016 8th IFIP International Conference on, IEEE conference proceedings, 2016Conference paper (Refereed)
    Abstract [en]

    Multipath TCP (MPTCP) is a proposed extension to TCP that enables a number of performance advantages that have not been offered before. While the protocol specification is close to being finalized, there still remain some unaddressed challenges regarding the deployment and security implications of the protocol. This work attempts to tackle some of these concerns by proposing and implementing MPTCP-aware security services and deploying them inside a proof-of-concept MPTCP proxy. The aim is to enable hosts, even those without native MPTCP support, to securely benefit from the MPTCP performance advantages. Our evaluations show that the implemented security services enable proper intrusion detection and prevention to thwart potential attacks, as well as threshold rules to prevent denial-of-service (DoS) attacks.
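
    A threshold rule of the kind mentioned above can be sketched in a few lines of Python; the sliding-window mechanism, the event source and all parameters below are invented for illustration, not taken from the proxy implementation.

    from collections import defaultdict, deque

    class ThresholdRule:
        """Allow at most `limit` events per source within `window` seconds."""
        def __init__(self, limit, window):
            self.limit, self.window = limit, window
            self.events = defaultdict(deque)       # source -> timestamps

        def allow(self, src, now):
            q = self.events[src]
            q.append(now)
            while q and now - q[0] > self.window:  # drop stale events
                q.popleft()
            return len(q) <= self.limit

    rule = ThresholdRule(limit=3, window=1.0)
    for t in [0.0, 0.1, 0.2, 0.3, 2.0]:
        print(t, rule.allow("10.0.0.1", t))        # the 4th burst event is denied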

  • 195.
    Afzal, Zeeshan
    et al.
    Karlstad University, Faculty of Health, Science and Technology (starting 2013), Department of Mathematics and Computer Science.
    Lindskog, Stefan
    Karlstad University, Faculty of Economic Sciences, Communication and IT, Department of Computer Science. Karlstad University, Faculty of Health, Science and Technology (starting 2013).
    Lidén, Anders
    A Multipath TCP Proxy2015Conference paper (Refereed)
    Abstract [en]

    Multipath TCP (MPTCP) is an extension to traditional TCP that enables a number of performance advantages, which were not offered before. While the protocol specification is close to being finalized, there still remain some concerns regarding deployability and security. This paper describes the ongoing work to develop a solution that will facilitate the deployment of MPTCP. The solution will not only allow non-MPTCP capable end-hosts to benefit from MPTCP performance gains, but also help ease the network security concerns that many middleboxes face due to the possibility of the data stream being fragmented across multiple subflows.

  • 196.
    Afzal, Zeeshan
    et al.
    Karlstad University, Faculty of Health, Science and Technology (starting 2013), Department of Mathematics and Computer Science.
    Rossebø, Judith
    Integrated Operations, ABB AS, Norway.
    Chowdhury, Mohammad
    Talha, Batool
    ABB Corporate Research, ABB AS, Norway.
    A Wireless Intrusion Detection System for 802.11 networks2016In: PROCEEDINGS OF THE 2016 IEEE INTERNATIONAL CONFERENCE ON WIRELESS COMMUNICATIONS, SIGNAL PROCESSING AND NETWORKING (WISPNET), IEEE conference proceedings, 2016, 828-834 p.Conference paper (Refereed)
    Abstract [en]

    The deployment of wireless local area networks (WLANs) is increasing rapidly. At the same time, WLANs have become an attractive target for many potential attackers. In spite of that, the de facto standard used to implement most WLANs (IEEE 802.11) has what appear to be residual vulnerabilities related to identity spoofing. In this paper, a pragmatic study of two common attacks on the standard is conducted. These attacks are then implemented on test beds to learn attack behavior. Finally, novel attack signatures and techniques to detect these attacks are devised and implemented in a proof-of-concept Wireless Intrusion Detection System (WIDS).
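
    One classic 802.11 identity-spoofing attack, deauthentication flooding, lends itself to a simple rate-based signature. The Python sketch below applies such a signature to pre-captured (timestamp, frame type, source) tuples; the threshold, window and frame trace are invented, and a real WIDS would of course operate on live frames rather than a list.

    def detect_deauth_flood(frames, threshold=10, window=1.0):
        """Flag sources sending > threshold deauth frames per window."""
        suspects = set()
        recent = []                    # sliding window of deauth frames
        for ts, ftype, src in sorted(frames):
            if ftype != "deauth":
                continue
            recent = [(t, s) for t, s in recent if ts - t <= window]
            recent.append((ts, src))
            if sum(1 for _, s in recent if s == src) > threshold:
                suspects.add(src)
        return suspects

    frames = [(i * 0.05, "deauth", "aa:bb:cc:dd:ee:ff") for i in range(40)]
    print(detect_deauth_flood(frames))   # the spoofed source is flagged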

  • 197.
    Agardh, Johannes
    et al.
    Blekinge Institute of Technology, Department of Human Work Science and Media Technology.
    Johansson, Martin
    Blekinge Institute of Technology, Department of Human Work Science and Media Technology.
    Pettersson, Mårten
    Blekinge Institute of Technology, Department of Human Work Science and Media Technology.
    Designing Future Interaction with Today's Technology1999Independent thesis Advanced level (degree of Master (One Year))Student thesis
    Abstract [en]

    Information technology plays an increasing part in our lives. In this thesis we discuss how technology can relate to humans and human activity. Our starting point is concepts like Calm Technology and Tacit Interaction, and we examine how these visions and concepts can be used in the process of designing an artifact for a real work practice. We have done workplace studies of truck drivers and traffic leaders regarding how they find their way to the right addresses, and we design a truck navigation system that aims to suit the truck drivers' work practice.

  • 198.
    Agelis, Sacki
    et al.
    Halmstad University, School of Information Science, Computer and Electrical Engineering (IDE), Halmstad Embedded and Intelligent Systems Research (EIS).
    Jonsson, Magnus
    Halmstad University, School of Information Science, Computer and Electrical Engineering (IDE), Halmstad Embedded and Intelligent Systems Research (EIS).
    Optoelectronic router with a reconfigurable shuffle network based on micro-optoelectromechanical systems2004In: Journal of Optical Networking, ISSN 1536-5379, Vol. 4, no 1, 1-10 p.Article in journal (Refereed)
    Abstract [en]

    An optoelectronic router with a shuffle exchange network is presented and enhanced by the addition of micro-optoelectromechanical systems (MOEMS) in the network, adding the ability to reconfigure the shuffle network. The MOEMS described here are fully connected any-to-any crossbar switches. The added reconfigurability provides the opportunity to adapt the system to different common application characteristics. Two representative application models are described: the first has symmetric properties, and the second has asymmetric properties. The router system is simulated with the specified applications and an analysis of the results is carried out. By using MOEMS in the optical network, and thus reconfigurability, a throughput increase of more than 50% and a decreased average packet delay are obtained for the given application. Network congestion is avoided throughout the system if reconfigurability is used.
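
    The perfect-shuffle interconnection pattern underlying a shuffle exchange network is simply a left rotation of the node-index bits, as the short Python sketch below shows for an eight-node network (the network size is chosen only for illustration).

    def shuffle(i, n_bits):
        """Left-rotate an n-bit index: the perfect shuffle permutation."""
        return ((i << 1) | (i >> (n_bits - 1))) & ((1 << n_bits) - 1)

    N_BITS = 3                        # 2^3 = 8-node network
    for i in range(1 << N_BITS):
        print(f"input {i:03b} -> output {shuffle(i, N_BITS):03b}")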

  • 199.
    Agerblad, Josefin
    et al.
    KTH, School of Computer Science and Communication (CSC).
    Andersen, Martin
    KTH, School of Computer Science and Communication (CSC).
    Provably Secure Pseudo-Random Generators2013Independent thesis Basic level (degree of Bachelor), 10 credits / 15 HE creditsStudent thesis
    Abstract [en]

    This report is a literary study on provably secure pseudo-random generators. In the report we explain what provably secure pseudo-random generators are and what they are most commonly used for. We also discuss one-way functions, which are closely related to our subject. Furthermore, two well-known generators are described and compared: one generator by Blum and Micali, and one by Blum, Blum and Shub. What we have concluded is that the x² mod N generator by Blum, Blum and Shub seems to be the better one concerning speed, security and application areas. You will also be able to read about how the Blum-Blum-Shub generator can be implemented and why we believe that implementation is suitable.
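
    For concreteness, here is the x² mod N generator in toy Python form; the primes below are far too small for any real use, which requires very large primes p ≡ q ≡ 3 (mod 4) and a seed coprime to N.

    def bbs(seed, count, p=499, q=547):
        # p and q are Blum primes (both congruent to 3 mod 4); these
        # tiny values are for demonstration only.
        n = p * q
        x = seed * seed % n          # square the seed to enter the cycle
        bits = []
        for _ in range(count):
            x = x * x % n            # x_{i+1} = x_i^2 mod N
            bits.append(x & 1)       # emit the least significant bit
        return bits

    print(bbs(seed=12345, count=16))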

  • 200.
    Aggestam, Lena
    University of Skövde, School of Humanities and Informatics. University of Skövde, The Informatics Research Centre.
    IT-supported knowledge repositories: Increasing their Usefulness by Supporting Knowledge Capture2008Doctoral thesis, monograph (Other academic)
    Abstract [en]

    Organizations use various resources to achieve business objectives, and for financial gain. In modern business, knowledge is a critical resource, and organizations cannot afford not to manage it. Knowledge Management (KM) aims to support learning and to create value for the organization. Based on three levels of inquiry (why, what, how), work presented in this thesis includes a synthesized view of the existing body of knowledge concerning KM and hence a holistic characterization of KM. This characterization reveals a strong dependency between KM and Learning Organization (LO). Neither of them can be successful without the other. We show that a KM project resulting in an IT-supported knowledge repository is a suitable way to start when the intention is to initiate KM work. Thus, our research focuses on IT-supported knowledge repositories.

    Large numbers of KM projects fail, and organizations lack support for their KM undertakings. These are the main problems that our research addresses. In order for an IT-supported knowledge repository to be successful, it must be used. Thus, the content of the repository is critical for success. Our work reveals that the process of capturing new knowledge is critical if the knowledge repository is to include relevant and updated knowledge. With the purpose of supporting the capture process, this thesis provides a detailed characterization of the capture process as well as guidance aiming to facilitate the implementation of the capture process in such a way that knowledge is continuously captured, also after the KM implementation project is completed. We argue that the continuous capture of new knowledge which can potentially be stored in the knowledge repository will, in the long-term perspective, have a positive influence on the usefulness of the repository. This will most likely increase the number of users of the repository and accordingly increase the number of successful KM projects.

    All the work presented in this thesis is the result of a qualitative research process comprising a literature review and an empirical study that were carried out in parallel. The empirical study is a case study inspired by action research, which involved participation in the project Efficient Knowledge Management and Learning in Knowledge Intensive Organizations (EKLär).
