Results 401-450 of 48,159
  • 401.
    Afshar, Sara
    Mälardalen University, School of Innovation, Design and Engineering, Embedded Systems.
    Lock-Based Resource Sharing in Real-Time Multiprocessor Platforms, 2014. Licentiate thesis, comprehensive summary (Other academic)
    Abstract [en]

    Embedded systems are typically resource constrained, i.e., resources such as processors, I/O devices, shared buffers or shared memory can be limited for tasks in the system. Therefore, techniques that enable an efficient usage of such resources are of great importance.

    In industry, large and complex software systems are typically divided into smaller parts (applications), each of which is developed independently. From an industrial perspective, migration towards multiprocessor platforms has become inevitable. As a consequence of this migration, and in order to use system resources efficiently, these applications may eventually be integrated on a shared multiprocessor platform. To facilitate the integration phase, the timing and resource requirements of each application can be provided in an interface when the application is developed. The system integrator can then use the information in each application's interface to ease the integration process. In this thesis, we provide the resource and timing requirements in the interface of each application, for applications that may need to be allocated on several processors.

    Although many scheduling techniques have been studied for multiprocessor systems, these techniques are usually based on the assumption that tasks are independent, i.e., that they do not share resources other than the processors. This assumption is typically not true. In this thesis, we provide an extension to such systems to handle the sharing of resources other than the processors among tasks. Two traditional approaches exist for scheduling tasks on the processors of a multiprocessor system. A recent scheduling approach has combined the two traditional approaches into a hybrid that is more efficient than either of the previous ones. Due to the complex nature of this scheduling approach, the conventional approaches for resource sharing could not be used straightforwardly. In this thesis, we have modified resource sharing approaches so that they can be used in such hybrid scheduling systems. A second concern is that enabling resource sharing can cause unpredictable delays and variations in the response times of tasks, which can degrade system performance. It is therefore important to improve resource handling techniques so as to reduce the effect of the delays imposed by resource sharing on a multiprocessor platform. In this thesis, we propose alternative techniques for resource handling that can improve system performance for special setups.

  • 402.
    Afshar, Sara
    et al.
    Mälardalen University, School of Innovation, Design and Engineering, Embedded Systems.
    Behnam, Moris
    Mälardalen University, School of Innovation, Design and Engineering, Embedded Systems.
    Bril, R. J.
    Technische Universiteit Eindhoven, Eindhoven, Netherlands.
    Nolte, Thomas
    Mälardalen University, School of Innovation, Design and Engineering, Embedded Systems.
    Flexible spin-lock model for resource sharing in multiprocessor real-time systems, 2014. In: Proc. IEEE Int. Symp. Ind. Embedded Syst., SIES, 2014, pp. 41-51. Conference paper (Refereed)
    Abstract [en]

    Various approaches can be utilized upon resource locking for mutually exclusive resource access in multiprocessor platforms. So far, two conventional approaches exist for dealing with tasks that are blocked on a global resource in a multiprocessor platform: either the blocked task performs a busy wait, i.e., spins, at the highest priority level until the resource is released, or it is suspended. Although both approaches provide mutually exclusive access to resources, they can introduce long blocking delays to tasks, which may be unacceptable for many industrial applications. In this paper, we propose a general spin-based model for resource sharing in multiprocessor platforms in which the priority of blocked tasks during spinning can be selected arbitrarily. Moreover, we provide the analysis for two selected spin-lock priorities, and we show by means of a general comparison as well as specific examples that these solutions may provide better performance for higher priority tasks.

  • 403.
    Afshar, Sara
    et al.
    Mälardalen University, School of Innovation, Design and Engineering, Embedded Systems.
    Behnam, Moris
    Mälardalen University, School of Innovation, Design and Engineering, Embedded Systems.
    Bril, Reinder J.
    Mälardalen University, School of Innovation, Design and Engineering, Embedded Systems. Technische Universiteit Eindhoven, Eindhoven, Netherlands.
    Nolte, Thomas
    Mälardalen University, School of Innovation, Design and Engineering, Embedded Systems.
    Resource Sharing Under Global Scheduling with Partial Processor Bandwidth, 2015. In: 2015 10th IEEE International Symposium on Industrial Embedded Systems, SIES 2015 - Proceedings, 2015, pp. 195-206. Conference paper (Refereed)
    Abstract [en]

    Resource efficient approaches are of great importance for resource constrained embedded systems. In this paper, we present an approach targeting systems where the tasks of a critical application are partitioned on a multi-core platform and, by using resource reservation techniques, the remaining bandwidth capacity on each core is utilized for one or a set of non-critical applications. To provide a resource efficient solution and to exploit the potential parallelism of the extra applications on the multi-core processor, global scheduling is used to schedule the tasks of the non-critical applications. Recently, a specific instantiation of such a system has been studied in which tasks do not share resources other than the processor. In this paper, we enable semaphore-based resource sharing among tasks within critical and non-critical applications using a suspension-based synchronization protocol. Tasks of non-critical applications have partial access to the processor bandwidth. The paper provides the schedulability analysis of the system, in which blocking due to resource sharing is bounded. Further, we perform experimental evaluations under balanced and unbalanced allocation of the tasks of a critical application to cores.

  • 404.
    Afshar, Sara
    et al.
    Mälardalen University, School of Innovation, Design and Engineering, Embedded Systems.
    Behnam, Moris
    Mälardalen University, School of Innovation, Design and Engineering, Embedded Systems.
    Bril, Reinder J.
    Mälardalen University, School of Innovation, Design and Engineering, Embedded Systems.
    Nolte, Thomas
    Mälardalen University, School of Innovation, Design and Engineering, Embedded Systems.
    Per Processor Spin-Lock Priority for Partitioned Multiprocessor Real-Time Systems, 2014. Report (Other academic)
    Abstract [en]

    Two traditional approaches exist for a task that is blocked on a global resource: the task either performs a non-preemptive busy wait, i.e., spins, or suspends and releases the processor. Previously, we have shown that both approaches can be viewed as spinning either at the highest priority (HP) or at the lowest priority (LP) on the processor, respectively. Based on this view, we have generalized a task's blocking behavioral model to spinning at any arbitrary priority level. In this paper, we focus on a particular class of spin-lock protocols from the introduced flexible spin-lock model in which spinning is performed at a priority equal to or higher than the highest local ceiling of the global resources accessed on a processor, referred to as the CP spin-lock approach. We assume that all tasks of a specific processor spin at the same priority level. Given this class and assumption, we show that there exists a spin-lock protocol in this range that dominates the classic spin-lock protocol in which tasks spin at the highest priority level (HP). However, we show that this new approach is incomparable with the CP spin-lock approach. Moreover, we show that there may exist an intermediate spin-lock approach, between the priority used by the CP approach and the newly introduced approach, that can make a task set schedulable when those two cannot. We provide extensive evaluation results comparing the HP, CP and the newly proposed approach.

  • 405.
    Afshar, Sara
    et al.
    Mälardalen University, School of Innovation, Design and Engineering, Embedded Systems.
    Behnam, Moris
    Mälardalen University, School of Innovation, Design and Engineering, Embedded Systems.
    Bril, Reinder J.
    Mälardalen University, School of Innovation, Design and Engineering, Embedded Systems.
    Nolte, Thomas
    Mälardalen University, School of Innovation, Design and Engineering, Embedded Systems.
    Per Processor Spin-Lock Priority for Partitioned Multiprocessor Real-Time Systems. In: Leibniz Transactions on Embedded Systems, ISSN 2199-2002. Journal article (Other academic)
    Abstract [en]

    Two traditional approaches exist for a task that is blocked on a global resource: the task either performs a non-preemptive busy wait, i.e., spins, or suspends and releases the processor. Previously, we have shown that both approaches can be viewed as spinning either at the highest priority (HP) or at the lowest priority (LP) on the processor, respectively. Based on this view, we have generalized a task's blocking behavioral model to spinning at any arbitrary priority level. In this paper, we focus on a particular class of spin-lock protocols from the introduced flexible spin-lock model in which spinning is performed at a priority equal to or higher than the highest local ceiling of the global resources accessed on a processor, referred to as the CP spin-lock approach. We assume that all tasks of a specific processor spin at the same priority level. Given this class and assumption, we show that there exists a spin-lock protocol in this range that dominates the classic spin-lock protocol in which tasks spin at the highest priority level (HP). However, we show that this new approach is incomparable with the CP spin-lock approach. Moreover, we show that there may exist an intermediate spin-lock approach, between the priority used by the CP approach and the newly introduced approach, that can make a task set schedulable when those two cannot. We provide extensive evaluation results comparing the HP, CP and the newly proposed approach.

  • 406.
    Aftab, Obaid
    et al.
    Uppsala University, Disciplinary Domain of Medicine and Pharmacy, Faculty of Medicine, Department of Medical Sciences, Cancer Pharmacology and Computational Medicine.
    Fryknäs, Mårten
    Uppsala University, Disciplinary Domain of Medicine and Pharmacy, Faculty of Medicine, Department of Medical Sciences, Cancer Pharmacology and Computational Medicine.
    Hammerling, Ulf
    Uppsala University, Disciplinary Domain of Medicine and Pharmacy, Faculty of Medicine, Department of Medical Sciences, Cancer Pharmacology and Computational Medicine.
    Larsson, Rolf
    Uppsala University, Disciplinary Domain of Medicine and Pharmacy, Faculty of Medicine, Department of Medical Sciences, Cancer Pharmacology and Computational Medicine.
    Gustafsson, Mats
    Uppsala University, Disciplinary Domain of Medicine and Pharmacy, Faculty of Medicine, Department of Medical Sciences, Cancer Pharmacology and Computational Medicine.
    Detection of cell aggregation and altered cell viability by automated label-free video microscopy: A promising alternative to endpoint viability assays in high throughput screening, 2015. In: Journal of Biomolecular Screening, ISSN 1087-0571, E-ISSN 1552-454X, Vol. 20, no. 3, pp. 372-381. Journal article (Refereed)
    Abstract [en]

    Automated phase-contrast video microscopy now makes it feasible to monitor a high-throughput (HT) screening experiment in a 384-well microtiter plate format by collecting one time-lapse video per well. Being a very cost-effective and label-free monitoring method, its potential as an alternative to cell viability assays was evaluated. Three simple morphology feature extraction and comparison algorithms were developed and implemented for analysis of differentially time-evolving morphologies (DTEMs) monitored in phase-contrast microscopy videos. The most promising layout, pixel histogram hierarchy comparison (PHHC), was able to detect several compounds that did not induce any significant change in cell viability, but made the cell population appear as spheroidal cell aggregates. According to recent reports, all these compounds seem to be involved in inhibition of platelet-derived growth factor receptor (PDGFR) signaling. Thus, automated quantification of DTEM (AQDTEM) holds strong promise as an alternative or complement to viability assays in HT in vitro screening of chemical compounds.
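The abstract describes the pixel histogram hierarchy comparison (PHHC) only at a high level: pixel-intensity histograms compared at several resolutions. The following is a minimal sketch of that idea, not the authors' implementation; the function names, bin count, 2x2 downsampling and L1 distance are all assumptions made for illustration:

```python
import numpy as np

def pixel_histograms(img, levels=6, bins=16):
    """Normalised intensity histograms of an image at successively coarser
    resolutions; each level halves width and height by 2x2 block averaging.
    Assumes intensities in [0, 1] and dimensions divisible by 2**(levels-1)."""
    hists = []
    for _ in range(levels):
        h, _ = np.histogram(img, bins=bins, range=(0.0, 1.0))
        hists.append(h / h.sum())  # normalise so image size does not matter
        img = (img[0::2, 0::2] + img[1::2, 0::2]
               + img[0::2, 1::2] + img[1::2, 1::2]) / 4.0
    return hists

def phhc_distance(img_a, img_b, levels=6, bins=16):
    """Sum of L1 distances between the two histogram hierarchies."""
    ha = pixel_histograms(img_a, levels, bins)
    hb = pixel_histograms(img_b, levels, bins)
    return float(sum(np.abs(a - b).sum() for a, b in zip(ha, hb)))
```

Applied frame-by-frame to two time-lapse videos, accumulating such distances would give one way to compare time-evolving morphologies; the actual screening pipeline in the paper is more involved.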

  • 407.
    Aftab, Obaid
    et al.
    Uppsala University, Disciplinary Domain of Medicine and Pharmacy, Faculty of Medicine, Department of Medical Sciences, Cancer Pharmacology and Computational Medicine.
    Fryknäs, Mårten
    Uppsala University, Disciplinary Domain of Medicine and Pharmacy, Faculty of Medicine, Department of Medical Sciences, Cancer Pharmacology and Computational Medicine.
    Hassan, Saadia
    Uppsala University, Disciplinary Domain of Medicine and Pharmacy, Faculty of Medicine, Department of Medical Sciences, Cancer Pharmacology and Computational Medicine.
    Nygren, Peter
    Uppsala University, Disciplinary Domain of Medicine and Pharmacy, Faculty of Medicine, Department of Radiology, Oncology and Radiation Science, Section of Oncology.
    Larsson, Rolf
    Uppsala University, Disciplinary Domain of Medicine and Pharmacy, Faculty of Medicine, Department of Medical Sciences, Cancer Pharmacology and Computational Medicine.
    Hammerling, Ulf
    Uppsala University, Disciplinary Domain of Medicine and Pharmacy, Faculty of Medicine, Department of Medical Sciences, Cancer Pharmacology and Computational Medicine.
    Gustafsson, Mats
    Uppsala University, Disciplinary Domain of Medicine and Pharmacy, Faculty of Medicine, Department of Medical Sciences, Cancer Pharmacology and Computational Medicine.
    Label free quantification of time evolving morphologies using time-lapse video microscopy enables identity control of cell lines and discovery of chemically induced differential activity in iso-genic cell line pairs, 2015. In: Chemometrics and Intelligent Laboratory Systems, ISSN 0169-7439, E-ISSN 1873-3239, Vol. 141, pp. 24-32. Journal article (Refereed)
    Abstract [en]

    Label free time-lapse video microscopy based monitoring of time evolving cell population morphology has potential to offer a simple and cost effective method for identity control of cell lines. Such morphology monitoring also has potential to offer discovery of chemically induced differential changes between pairs of cell lines of interest, for example where one in a pair of cell lines is normal/sensitive and the other malignant/resistant. A new simple algorithm, pixel histogram hierarchy comparison (PHHC), for comparison of time evolving morphologies (TEM) in phase contrast time-lapse microscopy movies was applied to a set of 10 different cell lines and three different iso-genic colon cancer cell line pairs, each pair being genetically identical except for a single mutation. PHHC quantifies differences in morphology by comparing pixel histogram intensities at six different resolutions. Unsupervised clustering and machine learning based classification methods were found to accurately identify cell lines, including their respective iso-genic variants, through time-evolving morphology. Using this experimental setting, drugs with differential activity in iso-genic cell line pairs were likewise identified. Thus, this is a cost effective and expedient alternative to conventional molecular profiling techniques and might be useful as part of the quality control in research incorporating cell line models, e.g. in any cell/tumor biology or toxicology project involving drug/agent differential activity in pairs of cell line models.

  • 408.
    Aftarczuk, Kamila
    Blekinge Institute of Technology, School of Engineering, Department of Software Systems.
    Evaluation of selected data mining algorithms implemented in Medical Decision Support Systems, 2007. Independent thesis, advanced level (degree of Master, one year). Student thesis (Degree project)
    Abstract [en]

    The goal of this master's thesis is to identify and evaluate data mining algorithms that are commonly implemented in modern Medical Decision Support Systems (MDSS). Such systems are used in various healthcare units all over the world, and these institutions store large amounts of medical data that may contain relevant medical information hidden in patterns buried among the records. Within the research, several popular MDSSs are analyzed in order to determine the data mining algorithms they most commonly utilize. Three algorithms were identified: Naïve Bayes, Multilayer Perceptron and C4.5. Prior to the analyses, the algorithms are calibrated: several configurations are tested in order to determine the best settings. Afterwards, a final comparison orders the algorithms with respect to their performance, based on a set of performance metrics. The analyses are conducted in WEKA on five UCI medical datasets: breast cancer, hepatitis, heart disease, dermatology and diabetes. The analyses have shown that it is very difficult to name a single data mining algorithm as the most suitable for medical data, since the results obtained for the algorithms were very similar. However, the final evaluation of the outcomes allowed singling out Naïve Bayes as the best classifier for the given domain, followed by the Multilayer Perceptron and C4.5.
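The comparison in the thesis was run in WEKA; as a toy illustration of the Naïve Bayes classifier that came out on top, a minimal Gaussian variant (a sketch written for this listing, not WEKA's implementation) fits in a few lines:

```python
import numpy as np

class GaussianNB:
    """Minimal Gaussian Naive Bayes: fit per-class feature means and
    variances, then predict the class with the highest log-posterior."""

    def fit(self, X, y):
        X, y = np.asarray(X, dtype=float), np.asarray(y)
        self.classes_ = np.unique(y)
        self.theta_ = np.array([X[y == c].mean(axis=0) for c in self.classes_])
        # small variance floor avoids division by zero for constant features
        self.var_ = np.array([X[y == c].var(axis=0) for c in self.classes_]) + 1e-9
        self.prior_ = np.array([(y == c).mean() for c in self.classes_])
        return self

    def predict(self, X):
        X = np.asarray(X, dtype=float)[:, None, :]          # shape (n, 1, d)
        log_lik = -0.5 * (np.log(2 * np.pi * self.var_)
                          + (X - self.theta_) ** 2 / self.var_).sum(axis=-1)
        return self.classes_[np.argmax(log_lik + np.log(self.prior_), axis=1)]
```

The "naive" part is the `.sum(axis=-1)`: per-feature log-likelihoods are simply added, i.e. features are assumed conditionally independent given the class.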

  • 409.
    Afyounian, Ebrahim
    Linnaeus University, Faculty of Technology (FTK), Department of Informatics (IK).
    Information and Communication Technologies in Support of Remembering: A Postphenomenological Study, 2014. Independent thesis, advanced level (degree of Master, two years), 20 credits / 30 HE credits. Student thesis (Degree project)
    Abstract [en]

    This thesis aimed to study the everyday use of ICT-enabled memory aids in order to understand and to describe the technological mediations that are brought by them (i.e. how they shape/mediate experiences and actions of their users). To do this, a post-phenomenological approach was appropriated. Postphenomenology is a modified, hybrid phenomenology that tries to overcome the limitations of phenomenology. As for theoretical framework, ‘Technological Mediation’ was adopted to conduct the study. Technological Mediation as a theory provides concepts suitable for explorations of the phenomenon of human-technology relation.

    It was believed that this specific choice of approach and theoretical framework would provide a new way of exploring the use of concrete technologies in everyday life of human beings and the implications that this use might have on humans’ lives. The study was conducted in the city of Växjö, Sweden. Data was collected by conducting twelve face-to-face semi-structured interviews. Collected data was, then, analyzed by applying the concepts within the theoretical framework – Technological Mediation - to them.

    The results of this study provided a list of ICT-enabled devices and services that participants were using in their everyday life in order to support their memory such as: calendars, alarms, notes, bookmarks, etc. Furthermore, this study resulted in a detailed description of how these devices and services shaped/mediated the experiences and the actions of their users. 

  • 410.
    Afzal, Muhammad
    Jönköping University, School of Engineering (JTH), Computer and Electrical Engineering.
    Modelling temporal aspects of healthcare processes with Ontologies, 2010. Independent thesis, advanced level (degree of Master, two years), 20 credits / 30 HE credits. Student thesis (Degree project)
    Abstract [en]

    This thesis presents an ontological model of the temporal aspects of a healthcare organization. It provides information about activities that take place at different intervals of time at Ryhov Hospital. These activities are series of actions that may happen in a predefined sequence and at predefined times, or at any time, in a general ward or in the emergency ward of the hospital.

    To achieve this objective, the supervisor conducted a workshop at the start of the thesis, in which domain experts explained the main idea of ward activities; from this workshop the author gained considerable knowledge about the activities and their temporal aspects. The author then carried out a literature review to acquire further knowledge about ward activities, temporal aspects, and the methodological steps essential for building an ontological model. After the ontological model of the temporal aspects had been developed, the supervisor conducted a second workshop, in which the author presented the model for evaluation.

  • 411. Afzal, Wasif
    Lessons from applying experimentation in software engineering prediction systems, 2008. Conference paper (Refereed)
    Abstract [en]

    Within software engineering prediction systems, experiments are undertaken primarily to investigate relationships and to measure and compare the accuracy of models. This paper discusses our experience and presents useful lessons and guidelines for experimenting with software engineering prediction systems. For this purpose, we use a typical software engineering experimentation process as a baseline. We found that this typical experimentation process supports the development of prediction systems, and we have highlighted the issues most central to the domain of software engineering prediction systems.

  • 412.
    Afzal, Wasif
    Blekinge Institute of Technology, School of Engineering, Department of Software Systems.
    Metrics in Software Test Planning and Test Design Processes, 2007. Independent thesis, advanced level (degree of Master, one year). Student thesis (Degree project)
    Abstract [en]

    Software metrics play an important role in measuring attributes that are critical to the success of a software project. Measurement of these attributes helps make the characteristics of, and relationships between, the attributes clearer, which in turn supports informed decision making. The field of software engineering is affected by infrequent, incomplete and inconsistent measurements. Software testing is an integral part of software development, providing opportunities for measurement of process attributes, and measuring these attributes gives management better insight into the software testing process. The aim of this thesis is to investigate the metric support for the software test planning and test design processes. The study comprises an extensive literature review and follows a methodical approach consisting of two steps. The first step analyzes the key phases in the software testing life cycle, the inputs required for starting the software test planning and design processes, and the metrics indicating the end of these processes. After establishing a basic understanding of the related concepts, the second step identifies the attributes of the software test planning and test design processes, including metric support for each identified attribute. The results of the literature survey showed that there are a number of different measurable attributes for the software test planning and test design processes. The study partitions these attributes into multiple categories, and for each attribute, different existing measurements are studied. A consolidation of these measurements is presented in this thesis, intended to provide an opportunity for management to consider improvements to these processes.

  • 413. Afzal, Wasif
    Search-based approaches to software fault prediction and software testing, 2009. Licentiate thesis, comprehensive summary (Other academic)
    Abstract [en]

    Software verification and validation activities are essential for software quality but also constitute a large part of software development costs. Efficient and cost-effective verification and validation activities are therefore both a priority and a necessity, considering the pressure to decrease time-to-market and the intense competition faced by many, if not all, companies today. It is then perhaps not unexpected that decisions related to software quality, such as when to stop testing, the testing schedule and the allocation of testing resources, need to be as accurate as possible. This thesis investigates the application of search-based techniques within two activities of software verification and validation: software fault prediction and software testing for non-functional system properties. Software fault prediction modeling can provide support for making important decisions as outlined above. In this thesis we empirically evaluate symbolic regression using genetic programming (a search-based technique) as a potential method for software fault prediction. Using data sets from both industrial and open-source software, the strengths and weaknesses of applying symbolic regression in genetic programming are evaluated against competitive techniques. In addition to software fault prediction, this thesis also consolidates available research into predictive modeling of other attributes by applying symbolic regression in genetic programming, thus presenting a broader perspective. As an extension of the application of search-based techniques within software verification and validation, this thesis further investigates the extent to which search-based techniques have been applied for testing non-functional system properties. Based on the research findings in this thesis, it can be concluded that applying symbolic regression in genetic programming may be a viable technique for software fault prediction. We additionally collect literature evidence on where other search-based techniques are applied for testing non-functional system properties, hence contributing towards the growing application of search-based techniques in diverse activities within software verification and validation.

  • 414. Afzal, Wasif
    Search-Based Prediction of Software Quality: Evaluations and Comparisons, 2011. Doctoral thesis, comprehensive summary (Other academic)
    Abstract [en]

    Software verification and validation (V&V) activities are critical for achieving software quality; however, these activities also constitute a large part of the costs when developing software. Efficient and effective software V&V activities are therefore both a priority and a necessity, considering the pressure to decrease time-to-market and the intense competition faced by many, if not all, companies today. It is then perhaps not unexpected that decisions that affect software quality, e.g., how to allocate testing resources, develop testing schedules and decide when to stop testing, need to be as stable and accurate as possible. The objective of this thesis is to investigate how search-based techniques can support decision-making and help control variation in software V&V activities, thereby indirectly improving software quality. Several themes in providing this support are investigated: predicting the reliability of future software versions based on fault history; fault prediction to improve test phase efficiency; assignment of resources to fixing faults; and distinguishing fault-prone software modules from non-faulty ones. A common element in these investigations is the use of search-based techniques, often also called metaheuristic techniques, for supporting the V&V decision-making processes. Search-based techniques are promising since, as with many real-world problems, software V&V can be formulated as optimization problems where near-optimal solutions are often good enough. Moreover, these techniques are general optimization solutions that can potentially be applied across a larger variety of decision-making situations than other existing alternatives.
    Apart from presenting the current state of the art, in the form of a systematic literature review, and performing comparative evaluations of a variety of metaheuristic techniques on large-scale projects (both industrial and open-source), this thesis also presents methodological investigations using search-based techniques that are relevant to the task of software quality measurement and prediction. The results of applying search-based techniques in large-scale projects, across a variety of research themes, show that they consistently give competitive results in comparison with existing techniques. Based on the research findings, we conclude that search-based techniques are viable for supporting the decision-making processes within software V&V activities. The accuracy and consistency of these techniques make them important tools when developing future decision support for effective management of software V&V activities.

  • 415. Afzal, Wasif
    Using faults-slip-through metric as a predictor of fault-proneness, 2010. Conference paper (Refereed)
    Abstract [en]

    The majority of software faults are present in a small number of modules, so accurate prediction of fault-prone modules helps improve software quality by focusing testing efforts on a subset of modules. This paper evaluates the use of the faults-slip-through (FST) metric as a potential predictor of fault-prone modules. Rather than predicting the fault-prone modules for the complete test phase, the prediction is done at the specific test levels of integration and system test. We applied eight classification techniques to the task of identifying fault-prone modules, representing a variety of approaches: a standard statistical technique for classification (logistic regression), tree-structured classifiers (C4.5 and random forests), a Bayesian technique (Naïve Bayes), machine-learning techniques (support vector machines and back-propagation artificial neural networks) and search-based techniques (genetic programming and artificial immune recognition systems), on FST data collected from two large industrial projects in the telecommunication domain. Results: Using the area under the receiver operating characteristic (ROC) curve and the location of (PF, PD) pairs in the ROC space, genetic programming (GP) showed impressive results in comparison with the other techniques for predicting fault-prone modules at both the integration and system test levels, and the use of the FST metric in general provided good prediction results at the two test levels. The accuracy of GP is statistically significant in comparison with the majority of the techniques, and the FST metric has the potential to be a generally useful predictor of fault-proneness at the integration and system test levels.
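The evaluation above ranks classifiers by the area under the ROC curve. As a generic illustration of that metric (not the paper's tooling), AUC can be computed directly from module scores as the probability that a randomly chosen fault-prone module outranks a randomly chosen fault-free one, with ties counting one half:

```python
import numpy as np

def roc_auc(labels, scores):
    """Area under the ROC curve via the Mann-Whitney U statistic:
    P(score of a random positive > score of a random negative),
    counting ties as 1/2."""
    labels = np.asarray(labels, dtype=bool)
    scores = np.asarray(scores, dtype=float)
    pos, neg = scores[labels], scores[~labels]
    # explicit pairwise comparison: O(n_pos * n_neg), fine for illustration
    gt = (pos[:, None] > neg[None, :]).sum()
    eq = (pos[:, None] == neg[None, :]).sum()
    return (gt + 0.5 * eq) / (len(pos) * len(neg))
```

An AUC of 1.0 means every fault-prone module is ranked above every fault-free one; 0.5 corresponds to random guessing, which is why AUC is a common yardstick when comparing fault-proneness predictors.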

  • 416. Afzal, Wasif
    et al.
    Ghazi, Ahmad Nauman
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för programvaruteknik.
    Itkonen, Juha
    Torkar, Richard
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för programvaruteknik.
    Andrews, Anneliese
    Bhatti, Khurram
    An experiment on the effectiveness and efficiency of exploratory testing2015Ingår i: Empirical Software Engineering, ISSN 1382-3256, Vol. 20, nr 3, s. 844-878Artikel i tidskrift (Refereegranskat)
    Abstract [en]

    The exploratory testing (ET) approach is commonly applied in industry, but lacks scientific research. The scientific community needs quantitative results on the performance of ET taken from realistic experimental settings. The objective of this paper is to quantify the effectiveness and efficiency of ET vs. testing with documented test cases (test case based testing, TCT). We performed four controlled experiments where a total of 24 practitioners and 46 students performed manual functional testing using ET and TCT. We measured the number of identified defects in the 90-minute testing sessions, the detection difficulty, severity and types of the detected defects, and the number of false defect reports. The results show that ET found a significantly greater number of defects. ET also found significantly more defects of varying levels of difficulty, types and severity levels. However, the two testing approaches did not differ significantly in terms of the number of false defect reports submitted. We conclude that ET was more efficient than TCT in our experiment. ET was also more effective than TCT when detection difficulty, type of defects and severity levels are considered. The two approaches are comparable when it comes to the number of false defect reports submitted.

  • 417.
    Afzal, Wasif
    et al.
    Mälardalens högskola, Akademin för innovation, design och teknik, Inbyggda system.
    Ghazi, Nauman
    Blekinge Institute of Technology.
    Itkonen, Juha
    Aalto University, Espoo, Finland.
    Torkar, Richard
    Chalmers University of Technology.
    Andrews, Anneliese
    University of Denver, USA.
    Bhatti, Khurram
    Blekinge Institute of Technology.
    An experiment on the effectiveness and efficiency of exploratory testing2015Ingår i: Journal of Empirical Software Engineering, ISSN 1382-3256, E-ISSN 1573-7616, Vol. 20, nr 3, s. 844-878Artikel i tidskrift (Refereegranskat)
    Abstract [en]

    The exploratory testing (ET) approach is commonly applied in industry, but lacks scientific research. The scientific community needs quantitative results on the performance of ET taken from realistic experimental settings. The objective of this paper is to quantify the effectiveness and efficiency of ET vs. testing with documented test cases (test case based testing, TCT). We performed four controlled experiments where a total of 24 practitioners and 46 students performed manual functional testing using ET and TCT. We measured the number of identified defects in the 90-minute testing sessions, the detection difficulty, severity and types of the detected defects, and the number of false defect reports. The results show that ET found a significantly greater number of defects. ET also found significantly more defects of varying levels of difficulty, types and severity levels. However, the two testing approaches did not differ significantly in terms of the number of false defect reports submitted. We conclude that ET was more efficient than TCT in our experiment. ET was also more effective than TCT when detection difficulty, type of defects and severity levels are considered. The two approaches are comparable when it comes to the number of false defect reports submitted.

  • 418. Afzal, Wasif
    et al.
    Torkar, Richard
    A Comparative Evaluation of Using Genetic Programming for Predicting Fault Count Data2008Konferensbidrag (Refereegranskat)
    Abstract [en]

    There have been a number of software reliability growth models (SRGMs) proposed in the literature. Due to several reasons, such as violation of the models' assumptions and the complexity of the models, practitioners face difficulties in knowing which models to apply in practice. This paper presents a comparative evaluation of traditional models and the use of genetic programming (GP) for modeling software reliability growth, based on weekly fault count data of three different industrial projects. The motivation for using a GP approach is its ability to evolve a model based entirely on prior data without the need to make underlying assumptions. The results show the strengths of using GP for predicting fault count data.

  • 419.
    Afzal, Wasif
    et al.
    Blekinge Tekniska Högskola, Sektionen för teknik, Avdelningen för programvarusystem.
    Torkar, Richard
    Blekinge Tekniska Högskola, Sektionen för teknik, Avdelningen för programvarusystem.
    Incorporating Metrics in an Organizational Test Strategy2008Konferensbidrag (Refereegranskat)
    Abstract [en]

    An organizational level test strategy needs to incorporate metrics to make the testing activities visible and available to process improvements. The majority of testing measurements that are done are based on faults found in the test execution phase. In contrast, this paper investigates metrics to support software test planning and test design processes. We have assembled metrics in these two process types to support management in carrying out evidence-based test process improvements and to incorporate suitable metrics as part of an organization level test strategy. The study is composed of two steps. The first step creates a relevant context by analyzing key phases in the software testing lifecycle, while the second step identifies the attributes of software test planning and test design processes along with metric(s) support for each of the identified attributes.

  • 420. Afzal, Wasif
    et al.
    Torkar, Richard
    On the application of genetic programming for software engineering predictive modeling: A systematic review2011Ingår i: Expert Systems with Applications, ISSN 0957-4174 , Vol. 38, nr 9, s. 11984-11997Artikel, forskningsöversikt (Refereegranskat)
    Abstract [en]

    The objective of this paper is to investigate the evidence for symbolic regression using genetic programming (GP) being an effective method for prediction and estimation in software engineering, when compared with regression/machine learning models and other comparison groups (including comparisons with different improvements over the standard GP algorithm). We performed a systematic review of literature that compared genetic programming models with comparative techniques based on different independent project variables. A total of 23 primary studies were obtained after searching different information sources in the time span 1995-2008. The results of the review show that symbolic regression using genetic programming has been applied in three domains within software engineering predictive modeling: (i) software quality classification (eight primary studies); (ii) software cost/effort/size estimation (seven primary studies); (iii) software fault prediction/software reliability growth modeling (eight primary studies). While there is evidence in support of using genetic programming for software quality classification, software fault prediction and software reliability growth modeling, the results are inconclusive for software cost/effort/size estimation.

  • 421. Afzal, Wasif
    et al.
    Torkar, Richard
    Suitability of Genetic Programming for Software Reliability Growth Modeling2008Konferensbidrag (Refereegranskat)
    Abstract [en]

    Genetic programming (GP) has been found to be effective in finding a model that fits the given data points without making any assumptions about the model structure. This makes GP a reasonable choice for software reliability growth modeling. This paper discusses the suitability of using GP for software reliability growth modeling and highlights the mechanisms that enable GP to progressively search for fitter solutions.

  • 422. Afzal, Wasif
    et al.
    Torkar, Richard
    Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, Institutionen för programvaruteknik.
    Towards benchmarking feature subset selection methods for software fault prediction2016Ingår i: Studies in Computational Intelligence, Springer, 2016, 617, Vol. 617, s. 33-58Kapitel i bok, del av antologi (Refereegranskat)
    Abstract [en]

    Despite the general acceptance that software engineering datasets often contain noisy, irrelevant or redundant variables, very few benchmark studies of feature subset selection (FSS) methods on real-life data from software projects have been conducted. This paper provides an empirical comparison of state-of-the-art FSS methods: information gain attribute ranking (IG); Relief (RLF); principal component analysis (PCA); correlation-based feature selection (CFS); consistency-based subset evaluation (CNS); wrapper subset evaluation (WRP); and an evolutionary computation method, genetic programming (GP), on five fault prediction datasets from the PROMISE data repository. For all the datasets, the area under the receiver operating characteristic curve—the AUC value averaged over 10-fold cross-validation runs—was calculated for each FSS method-dataset combination before and after FSS. Two diverse learning algorithms, C4.5 and naïve Bayes (NB), are used to test the attribute sets given by each FSS method. The results show that although there are no statistically significant differences between the AUC values for the different FSS methods for both C4.5 and NB, a smaller set of FSS methods (IG, RLF, GP) consistently select fewer attributes without degrading classification accuracy. We conclude that in general, FSS is beneficial as it helps improve classification accuracy of NB and C4.5. There is no single best FSS method for all datasets but IG, RLF and GP consistently select fewer attributes without degrading classification accuracy within statistically significant boundaries. © Springer International Publishing Switzerland 2016.
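The AUC measure used to compare the FSS methods can be computed without plotting the ROC curve, via its Mann-Whitney formulation: the probability that a randomly chosen faulty module receives a higher score than a fault-free one. A minimal sketch with made-up scores (illustrative only, not the chapter's code):

```python
def auc(labels, scores):
    """Area under the ROC curve via the Mann-Whitney statistic.

    labels: 1 for faulty modules, 0 for fault-free.
    scores: classifier scores, higher = more likely faulty.
    """
    pos = [s for l, s in zip(labels, scores) if l == 1]
    neg = [s for l, s in zip(labels, scores) if l == 0]
    # Count (faulty, fault-free) pairs ranked correctly; ties count half.
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Hypothetical scores for six modules; in the study's setup this value
# would be averaged over the 10 cross-validation folds, once before and
# once after feature subset selection.
labels = [1, 1, 0, 0, 1, 0]
scores = [0.9, 0.5, 0.4, 0.6, 0.8, 0.1]
value = auc(labels, scores)   # 8 of 9 (faulty, fault-free) pairs ranked correctly
```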

  • 423.
    Afzal, Wasif
    et al.
    Mälardalens högskola, Akademin för innovation, design och teknik, Inbyggda system. Bahria University, Islamabad, Pakistan .
    Torkar, Richard
    Blekinge Institute of Technology, Karlskrona, Sweden; Chalmers University of Technology, Sweden.
    Towards benchmarking feature subset selection methods for software fault prediction2016Ingår i: Computational Intelligence and Quantitative Software Engineering / [ed] Witold Pedrycz, Giancarlo Succi and Alberto Sillitti, Springer-Verlag , 2016, s. 33-58Kapitel i bok, del av antologi (Övrigt vetenskapligt)
    Abstract [en]

    Despite the general acceptance that software engineering datasets often contain noisy, irrelevant or redundant variables, very few benchmark studies of feature subset selection (FSS) methods on real-life data from software projects have been conducted. This paper provides an empirical comparison of state-of-the-art FSS methods: information gain attribute ranking (IG); Relief (RLF); principal component analysis (PCA); correlation-based feature selection (CFS); consistency-based subset evaluation (CNS); wrapper subset evaluation (WRP); and an evolutionary computation method, genetic programming (GP), on five fault prediction datasets from the PROMISE data repository. For all the datasets, the area under the receiver operating characteristic curve—the AUC value averaged over 10-fold cross-validation runs—was calculated for each FSS method-dataset combination before and after FSS. Two diverse learning algorithms, C4.5 and naïve Bayes (NB), are used to test the attribute sets given by each FSS method. The results show that although there are no statistically significant differences between the AUC values for the different FSS methods for both C4.5 and NB, a smaller set of FSS methods (IG, RLF, GP) consistently select fewer attributes without degrading classification accuracy. We conclude that in general, FSS is beneficial as it helps improve classification accuracy of NB and C4.5. There is no single best FSS method for all datasets but IG, RLF and GP consistently select fewer attributes without degrading classification accuracy within statistically significant boundaries.

  • 424. Afzal, Wasif
    et al.
    Torkar, Richard
    Feldt, Robert
    A Systematic Mapping Study on Non-Functional Search-Based Software Testing2008Konferensbidrag (Refereegranskat)
  • 425. Afzal, Wasif
    et al.
    Torkar, Richard
    Feldt, Robert
    A systematic review of search-based testing for non-functional system properties2009Ingår i: Information and Software Technology, ISSN 0950-5849, E-ISSN 1873-6025, Vol. 51, nr 6, s. 957-976Artikel i tidskrift (Refereegranskat)
    Abstract [en]

    Search-based software testing is the application of metaheuristic search techniques to generate software tests. The test adequacy criterion is transformed into a fitness function and a set of solutions in the search space is evaluated with respect to the fitness function using a metaheuristic search technique. The application of metaheuristic search techniques for testing is promising due to the fact that exhaustive testing is infeasible considering the size and complexity of software under test. Search-based software testing has been applied across the spectrum of test case design methods; this includes white-box (structural), black-box (functional) and grey-box (combination of structural and functional) testing. In addition, metaheuristic search techniques have also been applied to test non-functional properties. The overall objective of undertaking this systematic review is to examine existing work on non-functional search-based software testing (NFSBST). We are interested in the types of non-functional testing targeted using metaheuristic search techniques, the different fitness functions used in different types of search-based non-functional testing, and the challenges in the application of these techniques. The systematic review is based on a comprehensive set of 35 articles, obtained after a multi-stage selection process, that were published in the time span 1996-2007. The results of the review show that metaheuristic search techniques have been applied for non-functional testing of execution time, quality of service, security, usability and safety. A variety of metaheuristic search techniques are found to be applicable for non-functional testing including simulated annealing, tabu search, genetic algorithms, ant colony methods, grammatical evolution, genetic programming (and its variants including linear genetic programming) and swarm intelligence methods. The review reports on different fitness functions used to guide the search for each of the categories of execution time, safety, usability, quality of service and security; along with a discussion of possible challenges in the application of metaheuristic search techniques.

  • 426. Afzal, Wasif
    et al.
    Torkar, Richard
    Feldt, Robert
    Prediction of fault count data using genetic programming2008Konferensbidrag (Refereegranskat)
    Abstract [en]

    Software reliability growth modeling helps in deciding project release time and managing project resources. A large number of such models have been presented in the past. Due to the existence of many models, the models' inherent complexity, and their accompanying assumptions; the selection of suitable models becomes a challenging task. This paper presents empirical results of using genetic programming (GP) for modeling software reliability growth based on weekly fault count data of three different industrial projects. The goodness of fit (adaptability) and predictive accuracy of the evolved model is measured using five different measures in an attempt to present a fair evaluation. The results show that the GP evolved model has statistically significant goodness of fit and predictive accuracy.

  • 427.
    Afzal, Wasif
    et al.
    Blekinge Tekniska Högskola, Sektionen för datavetenskap och kommunikation.
    Torkar, Richard
    Blekinge Tekniska Högskola, Sektionen för datavetenskap och kommunikation.
    Feldt, Robert
    Blekinge Tekniska Högskola, Sektionen för datavetenskap och kommunikation.
    Resampling Methods in Software Quality Classification2012Ingår i: International Journal of Software Engineering and Knowledge Engineering, ISSN 0218-1940, Vol. 22, nr 2, s. 203-223Artikel i tidskrift (Refereegranskat)
    Abstract [en]

    In the presence of a number of algorithms for classification and prediction in software engineering, there is a need to have a systematic way of assessing their performances. The performance assessment is typically done by some form of partitioning or resampling of the original data to alleviate biased estimation. For predictive and classification studies in software engineering, there is a lack of definitive advice on the most appropriate resampling method to use. This is seen as one of the contributing factors for not being able to draw general conclusions on what modeling technique or set of predictor variables are the most appropriate. Furthermore, the use of a variety of resampling methods makes it impossible to perform any formal meta-analysis of the primary study results. Therefore, it is desirable to examine the influence of various resampling methods and to quantify possible differences. Objective and method: This study empirically compares five common resampling methods (hold-out validation, repeated random sub-sampling, 10-fold cross-validation, leave-one-out cross-validation and non-parametric bootstrapping) using 8 publicly available data sets with genetic programming (GP) and multiple linear regression (MLR) as software quality classification approaches. Location of (PF, PD) pairs in the ROC (receiver operating characteristics) space and area under an ROC curve (AUC) are used as accuracy indicators. Results: The results show that in terms of the location of (PF, PD) pairs in the ROC space, bootstrapping results are in the preferred region for 3 of the 8 data sets for GP and for 4 of the 8 data sets for MLR. Based on the AUC measure, there are no significant differences between the different resampling methods using GP and MLR. Conclusion: There can be certain data set properties responsible for insignificant differences between the resampling methods based on AUC. These include imbalanced data sets, insignificant predictor variables and high-dimensional data sets. With the current selection of data sets and classification techniques, bootstrapping is a preferred method based on the location of (PF, PD) pair data in the ROC space. Hold-out validation is not a good choice for comparatively smaller data sets, where leave-one-out cross-validation (LOOCV) performs better. For comparatively larger data sets, 10-fold cross-validation performs better than LOOCV.
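The resampling methods compared in the paper differ only in how they partition the n data points into training and test indices. A rough sketch of four of them, illustrative rather than the paper's implementation:

```python
import random

def holdout(n, test_frac=1 / 3, seed=0):
    """Hold-out validation: one random (train, test) split."""
    idx = list(range(n))
    random.Random(seed).shuffle(idx)
    cut = int(n * test_frac)
    return idx[cut:], idx[:cut]

def k_fold(n, k=10):
    """k-fold cross-validation: k disjoint test sets covering all points."""
    folds = [list(range(i, n, k)) for i in range(k)]
    for test in folds:
        train = [i for i in range(n) if i not in test]
        yield train, test

def loocv(n):
    """Leave-one-out cross-validation: each point is a test set once."""
    return k_fold(n, k=n)

def bootstrap(n, seed=0):
    """Non-parametric bootstrap: sample n points with replacement for
    training; the out-of-bag points form the test set."""
    rng = random.Random(seed)
    train = [rng.randrange(n) for _ in range(n)]
    test = [i for i in range(n) if i not in set(train)]
    return train, test
```

Repeated random sub-sampling, the fifth method, is simply `holdout` run several times with different seeds.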

  • 428. Afzal, Wasif
    et al.
    Torkar, Richard
    Feldt, Robert
    Search-based prediction of fault count data2009Konferensbidrag (Refereegranskat)
    Abstract [en]

    Symbolic regression, an application domain of genetic programming (GP), aims to find a function whose output has some desired property, like matching target values of a particular data set. While typical regression involves finding the coefficients of a pre-defined function, symbolic regression finds a general function, with coefficients, fitting the given set of data points. The concepts of symbolic regression using genetic programming can be used to evolve a model for fault count predictions. Such a model has the advantages that the evolution is not dependent on a particular structure of the model and is also independent of any assumptions, which are common in traditional time-domain parametric software reliability growth models. This research applies experiments targeting fault count predictions using genetic programming and compares the results with traditional approaches to assess efficiency gains.
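The symbolic-regression idea described above can be illustrated with a deliberately tiny sketch: random search over small expression trees fitted to hypothetical cumulative weekly fault counts. A real GP run would evolve a population with crossover and mutation; none of this is the paper's setup, the point is only that the model structure itself is searched for rather than assumed.

```python
import random

OPS = [('+', lambda a, b: a + b), ('*', lambda a, b: a * b)]

def random_expr(depth, rng):
    """A random expression tree over the variable t and small constants."""
    if depth == 0 or rng.random() < 0.3:
        return rng.choice(['t', rng.uniform(-2, 2)])
    name, _ = rng.choice(OPS)
    return (name, random_expr(depth - 1, rng), random_expr(depth - 1, rng))

def evaluate(expr, t):
    if expr == 't':
        return t
    if isinstance(expr, float):
        return expr
    name, left, right = expr
    return dict(OPS)[name](evaluate(left, t), evaluate(right, t))

def fit(weeks, faults, tries=2000, seed=1):
    """Keep the randomly generated expression with the lowest squared error."""
    rng = random.Random(seed)
    best, best_err = None, float('inf')
    for _ in range(tries):
        expr = random_expr(3, rng)
        err = sum((evaluate(expr, t) - y) ** 2 for t, y in zip(weeks, faults))
        if err < best_err:
            best, best_err = expr, err
    return best, best_err

weeks = list(range(1, 9))
faults = [2, 5, 9, 14, 20, 27, 35, 44]   # hypothetical cumulative fault counts
model, err = fit(weeks, faults)
```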

  • 429. Afzal, Wasif
    et al.
    Torkar, Richard
    Feldt, Robert
    Gorschek, Tony
    Genetic programming for cross-release fault count predictions in large and complex software projects2010Ingår i: Evolutionary Computation and Optimization Algorithms in Software Engineering: Applications and Techniques / [ed] Chis, Monica, Hershey: IGI Global, Hershey, USA , 2010Kapitel i bok, del av antologi (Refereegranskat)
    Abstract [en]

    Software fault prediction can play an important role in ensuring software quality through efficient resource allocation. This could, in turn, reduce the potentially high consequential costs due to faults. Predicting faults might be even more important with the emergence of short-timed and multiple software releases aimed at quick delivery of functionality. Previous research in software fault prediction has indicated that there is a need i) to improve the validity of results by having comparisons among number of data sets from a variety of software, ii) to use appropriate model evaluation measures and iii) to use statistical testing procedures. Moreover, cross-release prediction of faults has not yet achieved sufficient attention in the literature. In an attempt to address these concerns, this paper compares the quantitative and qualitative attributes of 7 traditional and machine-learning techniques for modeling the cross-release prediction of fault count data. The comparison is done using extensive data sets gathered from a total of 7 multi-release open-source and industrial software projects. These software projects together have several years of development and are from diverse application areas, ranging from a web browser to a robotic controller software. Our quantitative analysis suggests that genetic programming (GP) tends to have better consistency in terms of goodness of fit and accuracy across majority of data sets. It also has comparatively less model bias. Qualitatively, ease of configuration and complexity are less strong points for GP even though it shows generality and gives transparent models. Artificial neural networks did not perform as well as expected while linear regression gave average predictions in terms of goodness of fit and accuracy. 
    Support vector machine regression and traditional software reliability growth models performed below average on most of the quantitative evaluation criteria, while remaining average on most of the qualitative measures.

  • 430. Afzal, Wasif
    et al.
    Torkar, Richard
    Blekinge Tekniska Högskola, Sektionen för datavetenskap och kommunikation.
    Feldt, Robert
    Blekinge Tekniska Högskola, Sektionen för datavetenskap och kommunikation.
    Gorschek, Tony
    Blekinge Tekniska Högskola, Sektionen för datavetenskap och kommunikation.
    Prediction of faults-slip-through in large software projects: an empirical evaluation2014Ingår i: Software quality journal, ISSN 0963-9314, E-ISSN 1573-1367, Vol. 22, nr 1, s. 51-86Artikel i tidskrift (Refereegranskat)
    Abstract [en]

    A large percentage of the cost of rework can be avoided by finding more faults earlier in a software test process. Therefore, determination of which software test phases to focus improvement work on has considerable industrial interest. We evaluate a number of prediction techniques for predicting the number of faults slipping through to unit, function, integration, and system test phases of a large industrial project. The objective is to quantify improvement potential in different test phases by striving toward finding the faults in the right phase. The results show that a range of techniques are found to be useful in predicting the number of faults slipping through to the four test phases; however, the group of search-based techniques (genetic programming, gene expression programming, artificial immune recognition system, and particle swarm optimization-based artificial neural network) consistently give better predictions, having a representation at all of the test phases. Human predictions are consistently better at two of the four test phases. We conclude that the human predictions regarding the number of faults slipping through to various test phases can be well supported by the use of search-based techniques. A combination of human and an automated search mechanism (such as any of the search-based techniques) has the potential to provide improved prediction results.

  • 431. Afzal, Wasif
    et al.
    Torkar, Richard
    Feldt, Robert
    Wikstrand, Greger
    Search-based prediction of fault-slip-through in large software projects2010Konferensbidrag (Refereegranskat)
    Abstract [en]

    A large percentage of the cost of rework can be avoided by finding more faults earlier in a software testing process. Therefore, determining which software testing phases to focus improvement work on has considerable industrial interest. This paper evaluates the use of five different techniques, namely particle swarm optimization based artificial neural networks (PSO-ANN), artificial immune recognition systems (AIRS), gene expression programming (GEP), genetic programming (GP) and multiple regression (MR), for predicting the number of faults slipping through unit, function, integration and system testing phases. The objective is to quantify improvement potential in different testing phases by striving towards finding the right faults in the right phase. We have conducted an empirical study of two large projects from a telecommunication company developing mobile platforms and wireless semiconductors. The results are compared using simple residuals, goodness of fit and absolute relative error measures. They indicate that the four search-based techniques (PSO-ANN, AIRS, GEP, GP) perform better than multiple regression for predicting the fault-slip-through for each of the four testing phases. At the unit and function testing phases, AIRS and PSO-ANN performed better, while GP performed better at the integration and system testing phases. The study concludes that a variety of search-based techniques are applicable for predicting the improvement potential in different testing phases, with GP showing more consistent performance across two of the four test phases.

  • 432.
    Afzal, Zeeshan
    Karlstads universitet, Fakulteten för hälsa, natur- och teknikvetenskap (from 2013), Institutionen för matematik och datavetenskap.
    Towards Secure Multipath TCP Communication2017Licentiatavhandling, sammanläggning (Övrigt vetenskapligt)
    Abstract [en]

    The evolution in networking coupled with an increasing demand to improve user experience has led to different proposals to extend the standard TCP. Multipath TCP (MPTCP) is one such extension that has the potential to overcome few inherent limitations in the standard TCP. While MPTCP's design and deployment progresses, most of the focus has been on its compatibility. The security aspect is confined to making sure that the MPTCP protocol itself offers the same security level as the standard TCP.

    The topic of this thesis is to investigate the unexpected security implications raised by using MPTCP in the traditional networking environment. The Internet of today has security middle-boxes that perform traffic analysis to detect intrusions and attacks. Such middle-boxes rely on different assumptions about the traffic, e.g., that traffic from a single connection always arrives along the same path. This assumption, along with many others, may no longer hold with the advent of MPTCP, as traffic can be fragmented and sent over multiple paths simultaneously.

    We investigate how practical it is to evade a security middle-box by fragmenting and sending traffic across multiple paths using MPTCP. Realistic attack traffic is used to evaluate such attacks against the Snort IDS to show that these attacks are feasible. We then go on to propose possible solutions to detect such attacks and implement them in an MPTCP proxy. The proxy aims to extend the MPTCP performance advantages to servers that only support standard TCP, while ensuring that intrusions can be detected as before. Finally, we investigate the potential MPTCP scenario where security middle-boxes only have access to some of the traffic. We propose and implement an algorithm to perform intrusion detection in such situations and achieve nearly 90% detection accuracy. Another contribution of this work is a tool that converts IDS rules into equivalent attack traffic to automate the evaluation of a middle-box.

  • 433.
    Afzal, Zeeshan
    et al.
    Karlstads universitet, Fakulteten för hälsa, natur- och teknikvetenskap (from 2013), Institutionen för matematik och datavetenskap.
    Garcia, Johan
    Karlstads universitet, Fakulteten för hälsa, natur- och teknikvetenskap (from 2013), Institutionen för matematik och datavetenskap.
    Lindskog, Stefan
    Karlstads universitet, Fakulteten för hälsa, natur- och teknikvetenskap (from 2013), Institutionen för matematik och datavetenskap.
    Partial Signature Matching in an MPTCP World using Insert-only Levenshtein DistanceManuskript (preprint) (Övrigt vetenskapligt)
    Abstract [en]

    This paper proposes a methodology consisting of a constrained version of the Levenshtein distance that can be used to detect signatures from partial traffic. The proposed algorithm is formally presented, implemented, and tested using the latest available version of the Snort ruleset. The results show that the algorithm can successfully detect all partial signatures with nearly 90% accuracy.
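One way to read the insert-only constraint (an interpretation for illustration, not the paper's implementation): only insertions are allowed when transforming the observed partial payload into the full signature. The distance is then finite exactly when the observed bytes appear in the signature in order, i.e., form a subsequence, and equals the number of missing bytes:

```python
def insert_only_distance(observed, signature):
    """Insertions needed to turn `observed` into `signature`, or None
    when substitutions or deletions would be required (no match)."""
    it = iter(signature)
    if all(ch in it for ch in observed):   # ordered-subsequence check
        return len(signature) - len(observed)
    return None

sig = b"/bin/sh -c"                        # hypothetical signature content
print(insert_only_distance(b"/bnsh", sig))   # 5: observed is a subsequence
print(insert_only_distance(b"/xsh", sig))    # None: 'x' never occurs in sig
```

A small finite distance relative to the signature length would then indicate a likely partial match worth flagging.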

  • 434.
    Afzal, Zeeshan
    et al.
    Karlstads universitet, Fakulteten för hälsa, natur- och teknikvetenskap (from 2013), Institutionen för matematik och datavetenskap.
    Lindskog, Stefan
    Karlstads universitet, Fakulteten för hälsa, natur- och teknikvetenskap (from 2013), Institutionen för matematik och datavetenskap.
    Automated Testing of IDS Rules2015Ingår i: Software Testing, Verification and Validation Workshops (ICSTW), 2015 IEEE Eighth International Conference on, IEEE conference proceedings, 2015Konferensbidrag (Refereegranskat)
    Abstract [en]

    As technology becomes ubiquitous, new vulnerabilities are being discovered at a rapid rate. Security experts continuously find ways to detect attempts to exploit those vulnerabilities. The outcome is an extremely large and complex rule set used by Intrusion Detection Systems (IDSs) to detect and prevent the vulnerabilities. The rule sets have become so large that it seems infeasible to verify their precision or identify overlapping rules. This work proposes a methodology consisting of a set of tools that will make rule management easier.

  • 435.
    Afzal, Zeeshan
    et al.
    Karlstads universitet, Fakulteten för hälsa, natur- och teknikvetenskap (from 2013), Institutionen för matematik och datavetenskap.
    Lindskog, Stefan
    Karlstads universitet, Fakulteten för hälsa, natur- och teknikvetenskap (from 2013), Institutionen för matematik och datavetenskap.
    IDS rule management made easy2016Ingår i: Electronics, Computers and Artificial Intelligence (ECAI), 2016 8th International Conference on, IEEE conference proceedings, 2016Konferensbidrag (Refereegranskat)
    Abstract [en]

    Signature-based intrusion detection systems (IDSs) are commonly utilized in enterprise networks to detect and possibly block a wide variety of attacks. Their application in industrial control systems (ICSs) is also growing rapidly as modern ICSs increasingly use open standard protocols instead of proprietary ones. Due to an ever changing threat landscape, the rulesets used by these IDSs have grown large and there is no way to verify their precision or accuracy. Such broad and non-optimized rulesets lead to false positives and an unnecessary burden on the IDS, resulting in possible degradation of security. This work proposes a methodology consisting of a set of tools to help optimize the IDS rulesets and make rule management easier. The work also provides attack traffic data that is expected to benefit the task of IDS assessment.

  • 436.
    Afzal, Zeeshan
    et al.
    Karlstads universitet, Fakulteten för hälsa, natur- och teknikvetenskap (from 2013), Institutionen för matematik och datavetenskap.
    Lindskog, Stefan
    Karlstads universitet, Fakulteten för ekonomi, kommunikation och IT, Avdelningen för datavetenskap. Karlstads universitet, Fakulteten för hälsa, natur- och teknikvetenskap (from 2013).
    Multipath TCP IDS Evasion and Mitigation2015Ingår i: Information Security: 18th International Conference, ISC 2015, Trondheim, Norway, September 9-11, 2015, Proceedings, Springer, 2015, Vol. 9290, s. 265-282Konferensbidrag (Refereegranskat)
    Abstract [en]

    The existing network security infrastructure is not ready for future protocols such as Multipath TCP (MPTCP). The outcome is that middleboxes are configured to block such protocols. This paper studies the security risk that arises if future protocols are used over unaware infrastructures. In particular, the practicality and severity of cross-path fragmentation attacks utilizing MPTCP against the signature-matching capability of the Snort intrusion detection system (IDS) is investigated. Results reveal that the attack is realistic and opens the possibility to evade any signature-based IDS. To mitigate the attack, a solution is also proposed in the form of the MPTCP Linker tool. The work outlines the importance of MPTCP support in future network security middleboxes.
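    The cross-path fragmentation idea can be illustrated with a toy model. The payload, signature, and round-robin segmentation scheme below are invented for illustration; real MPTCP segmentation and reassembly are handled by the kernel's data sequence mapping:

    ```python
    # Deal a byte stream's segments across subflows. A signature matcher
    # inspecting each path in isolation misses a pattern that only
    # appears in the reassembled stream.

    SIGNATURE = b"cmd.exe"

    def fragment(payload, seg_len, n_paths):
        """Cut payload into seg_len-byte segments and round-robin them."""
        segments = [payload[i:i + seg_len] for i in range(0, len(payload), seg_len)]
        paths = [b"".join(segments[p::n_paths]) for p in range(n_paths)]
        return segments, paths

    payload = b"GET /cmd.exe HTTP/1.1"
    segments, paths = fragment(payload, seg_len=3, n_paths=2)

    print(all(SIGNATURE not in p for p in paths))  # True: per-path matching fails
    print(SIGNATURE in b"".join(segments))         # True: reassembly reveals it
    ```

    This is why the proposed MPTCP Linker mitigation reassembles subflows before signature matching: only the in-order stream contains the full pattern.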

  • 437.
    Afzal, Zeeshan
    et al.
    Karlstads universitet, Fakulteten för hälsa, natur- och teknikvetenskap (from 2013), Institutionen för matematik och datavetenskap.
    Lindskog, Stefan
    Karlstads universitet, Fakulteten för hälsa, natur- och teknikvetenskap (from 2013), Institutionen för matematik och datavetenskap.
    Brunström, Anna
    Karlstads universitet, Fakulteten för ekonomi, kommunikation och IT, Avdelningen för datavetenskap. Karlstads universitet, Fakulteten för ekonomi, kommunikation och IT, Centrum för HumanIT.
    Lidén, Anders
    Towards Multipath TCP Aware Security Technologies2016Ingår i: New Technologies, Mobility and Security (NTMS), 2016 8th IFIP International Conference on, IEEE conference proceedings, 2016Konferensbidrag (Refereegranskat)
    Abstract [en]

    Multipath TCP (MPTCP) is a proposed extension to TCP that enables a number of performance advantages that have not been offered before. While the protocol specification is close to being finalized, there still remain some unaddressed challenges regarding the deployment and security implications of the protocol. This work attempts to tackle some of these concerns by proposing and implementing MPTCP aware security services and deploying them inside a proof of concept MPTCP proxy. The aim is to enable hosts, even those without native MPTCP support, to securely benefit from the MPTCP performance advantages. Our evaluations show that the security services that are implemented enable proper intrusion detection and prevention to thwart potential attacks as well as threshold rules to prevent denial of service (DoS) attacks.

  • 438.
    Afzal, Zeeshan
    et al.
    Karlstads universitet, Fakulteten för hälsa, natur- och teknikvetenskap (from 2013), Institutionen för matematik och datavetenskap.
    Lindskog, Stefan
    Karlstads universitet, Fakulteten för ekonomi, kommunikation och IT, Avdelningen för datavetenskap. Karlstads universitet, Fakulteten för hälsa, natur- och teknikvetenskap (from 2013).
    Lidén, Anders
    A Multipath TCP Proxy2015Konferensbidrag (Refereegranskat)
    Abstract [en]

    Multipath TCP (MPTCP) is an extension to traditional TCP that enables a number of performance advantages which were not offered before. While the protocol specification is close to being finalized, there still remain some concerns regarding deployability and security. This paper describes the ongoing work to develop a solution that will facilitate the deployment of MPTCP. The solution will not only allow non-MPTCP-capable end-hosts to benefit from MPTCP performance gains, but also help ease the network security concerns that many middleboxes face due to the possibility of the data stream being fragmented across multiple subflows.

  • 439.
    Afzal, Zeeshan
    et al.
    Karlstads universitet, Fakulteten för hälsa, natur- och teknikvetenskap (from 2013), Institutionen för matematik och datavetenskap.
    Rossebø, Judith
    Integrated Operations, ABB AS, Norway.
    Chowdhury, Mohammad
    Talha, Batool
    ABB Corporate Research, ABB AS, Norway.
    A Wireless Intrusion Detection System for 802.11 networks2016Ingår i: PROCEEDINGS OF THE 2016 IEEE INTERNATIONAL CONFERENCE ON WIRELESS COMMUNICATIONS, SIGNAL PROCESSING AND NETWORKING (WISPNET), IEEE conference proceedings, 2016, s. 828-834Konferensbidrag (Refereegranskat)
    Abstract [en]

    Deployment of wireless local area networks (WLANs) is increasing rapidly. At the same time, WLANs have become an attractive target for many potential attackers. In spite of that, the de facto standard used to implement most WLANs (IEEE 802.11) has what appear to be residual vulnerabilities related to identity spoofing. In this paper, a pragmatic study of two common attacks on the standard is conducted. These attacks are then implemented on test beds to learn attack behavior. Finally, novel attack signatures and techniques to detect these attacks are devised and implemented in a proof-of-concept Wireless Intrusion Detection System (WIDS).
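    A minimal sketch of one detection technique such a WIDS could use — flagging a deauthentication flood from a spoofed source — looks like the following. The threshold and window values are illustrative assumptions, not parameters from the paper:

    ```python
    from collections import deque

    THRESHOLD = 5      # max deauth frames tolerated per window (assumed value)
    WINDOW_MS = 1000   # sliding window length in milliseconds (assumed value)

    def detect_deauth_flood(frames):
        """frames: iterable of (timestamp_ms, src_mac, frame_type) tuples.

        Returns the set of source MACs that sent more than THRESHOLD
        deauth frames within any WINDOW_MS sliding window.
        """
        recent, alerts = {}, set()
        for ts, src, ftype in frames:
            if ftype != "deauth":
                continue
            q = recent.setdefault(src, deque())
            q.append(ts)
            while q and ts - q[0] > WINDOW_MS:  # drop frames older than the window
                q.popleft()
            if len(q) > THRESHOLD:
                alerts.add(src)
        return alerts

    flood = [(i * 100, "aa:bb:cc:dd:ee:ff", "deauth") for i in range(10)]
    print(detect_deauth_flood(flood))  # {'aa:bb:cc:dd:ee:ff'}
    ```

    Because 802.11 management frames are unauthenticated in the base standard, rate-based heuristics like this are a common first line of defense against deauth spoofing.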

  • 440.
    AGALOMBA, CHRISTINE AFANDI
    Örebro universitet, Handelshögskolan vid Örebro Universitet.
    Factors contributing to failure of egovernment projects in developing countries: a literature review2012Självständigt arbete på avancerad nivå (masterexamen), 10 poäng / 15 hpStudentuppsats (Examensarbete)
  • 441.
    Agalomba, Christine Afandi
    et al.
    Örebro universitet, Handelshögskolan vid Örebro universitet.
    Bakibinga, Stella
    Örebro universitet, Handelshögskolan vid Örebro universitet.
    A Review of Telecentre Literature: Sustainability, Impact and Best practices2010Självständigt arbete på avancerad nivå (magisterexamen), 10 poäng / 15 hpStudentuppsats (Examensarbete)
  • 442.
    Aganovic, Deni
    et al.
    Linnéuniversitetet, Fakultetsnämnden för naturvetenskap och teknik, Institutionen för datavetenskap, fysik och matematik, DFM.
    Åhrberg, Cecilia
    Linnéuniversitetet, Fakultetsnämnden för naturvetenskap och teknik, Institutionen för datavetenskap, fysik och matematik, DFM.
    Att fånga en oskuld: En undersökning kring ungdomars intresse för ekonomisk information online2010Självständigt arbete på grundnivå (kandidatexamen), 10 poäng / 15 hpStudentuppsats (Examensarbete)
    Abstract [sv]

    Being forced to take responsibility for one's own actions at the age of 18 is a big step for many, not least because part of that responsibility is personal finance. Young people often turn to their parents with questions, but where should they turn when the parents do not have the answer? Banks have begun aiming their marketing at ever younger ages in order to capture customers before they come of age, but there are still many question marks around the information young people need.

    The assignment was to investigate what information young people need on a bank website aimed at them. The study combined a questionnaire with 100 respondents and three focus groups, and showed that young people feel banks are far too distant. Reducing that distance would make it easier for young people to turn to the bank with questions about their finances, which in turn would increase their interest in personal financial matters.

    The design proposal presented focuses largely on a text-based personal service with co-browsing features that allow a bank employee to guide the young customer through the website. The aim is for young people to feel an affinity with the bank and more easily get all kinds of questions answered, since they were unsure what information they needed. If such a service is introduced, usability tests and evaluations of the design proposal should be carried out to check whether the distance between bank and youth decreases.

  • 443.
    Agardh, Johannes
    et al.
    Blekinge Tekniska Högskola, Institutionen för arbetsvetenskap och medieteknik.
    Johansson, Martin
    Blekinge Tekniska Högskola, Institutionen för arbetsvetenskap och medieteknik.
    Pettersson, Mårten
    Blekinge Tekniska Högskola, Institutionen för arbetsvetenskap och medieteknik.
    Designing Future Interaction with Today's Technology1999Självständigt arbete på avancerad nivå (magisterexamen)Studentuppsats (Examensarbete)
    Abstract [sv]

    During our master's project we accompanied and observed three truck drivers on a number of occasions. The purpose of the study was, among other things, to understand how they find their way to the right address and thereby see whether they could be helped by a navigation aid. We produced a design proposal inspired by our analysis of the field-study material and by design ideas such as Calm Technology and Tacit Interaction. In the thesis we describe our design proposal and discuss, among other things, how the design paradigms Calm Technology and Tacit Interaction can be used in the design of IT artefacts. We conclude that the new design concepts Calm Technology and Tacit Interaction concern the relationship between technology, people and human action. Keywords: Human-Computer Interaction (HCI), Work Practice, IT design, Calm Technology, Tacit Interaction, interaction design

  • 444.
    Agarwal, Prasoon
    Uppsala universitet, Medicinska och farmaceutiska vetenskapsområdet, Medicinska fakulteten, Institutionen för immunologi, genetik och patologi, Hematologi och immunologi.
    Regulation of Gene Expression in Multiple Myeloma Cells and Normal Fibroblasts: Integrative Bioinformatic and Experimental Approaches2014Doktorsavhandling, sammanläggning (Övrigt vetenskapligt)
    Abstract [en]

    The work presented in this thesis applies integrative genomic and experimental approaches to investigate mechanisms involved in regulation of gene expression in the context of disease and normal cell biology.

    In papers I and II, we have explored the role of epigenetic regulation of gene expression in multiple myeloma (MM). By using a bioinformatic approach we identified the Polycomb repressive complex 2 (PRC2) to be a common denominator for the underexpressed gene signature in MM. By using inhibitors of the PRC2 we showed an activation of the genes silenced by H3K27me3, a reduction in the tumor load and increased overall survival in the in vivo 5TMM model. Using ChIP-sequencing we defined the distribution of H3K27me3 and H3K4me3 marks in MM patient cells. In an integrated bioinformatic approach, the H3K27me3-associated genes significantly correlated with under-expression in patients with less favorable survival. Thus, our data indicates the presence of a common under-expressed gene profile and provides a rationale for implementing new therapies focusing on epigenetic alterations in MM.

    In paper III we address the existence of a small cell population in MM presenting with differential tumorigenic properties in the 5T33MM murine model. We report that the predominant population of CD138+ cells had higher engraftment potential and higher clonogenic growth, whereas the CD138- MM cells presented with a less mature phenotype and higher drug resistance. Our findings suggest that treatment regimes for MM must target both cell populations.

    In paper IV we have studied the general mechanism of differential gene expression regulation by CGGBP1 in response to growth signals in normal human fibroblasts. We found that CGGBP1 binding affects global gene expression by RNA Polymerase II. This is mediated by Alu RNA-dependent inhibition of RNA Polymerase II. In the presence of growth signals, CGGBP1 is retained in the nuclei and exhibits enhanced Alu binding, thus inhibiting RNA Polymerase III binding on Alus. Hence we suggest a mechanism by which CGGBP1 orchestrates Alu RNA-mediated regulation of RNA Polymerase II. This thesis provides new insights for using integrative bioinformatic approaches to decipher gene expression regulation mechanisms in MM and in normal cells.

  • 445.
    Agbamuche, Joy
    Mälardalens högskola, Akademin för hållbar samhälls- och teknikutveckling.
    How does the alignment of IT to business strategy affect the organisation of the IT function?2008Studentuppsats
    Abstract [en]

    Date: 2008-06-04

    Purpose: The primary goal of this research is to describe the IT function and examine how its alignment to an organisation's strategy affects the way it is organised.

    Method: The chosen method was a purely theoretical examination, with the case study of Wyndham International as the primary resource, supplemented by secondary resources such as books and literature reviews.

    Research Questions: How does the alignment of IT to business strategy affect the organisation of the IT function?

    Conclusion: One of the findings was that a few researchers seem to suggest that the centralized mode of organising IT was symbolic of the past, while outsourcing and decentralization are the modern approach to organising IT. Wyndham International shows the opposite: after the introduction of the CIO in 2002, centralization was the chosen mode of organisation because it best fit the organisation's new strategic approach. Insourcing rather than outsourcing proved to be a winning formula.

  • 446. Agbesi, Collinson Colin Mawunyo
    Promoting Accountable Governance Through Electronic Government2016Självständigt arbete på avancerad nivå (magisterexamen), 10 poäng / 15 hpStudentuppsats (Examensarbete)
    Abstract [en]

    Electronic government (e-Government) is a purposeful system of organized delegation of power, control, management and resource allocation, in a harmonized centralized or decentralized way via networks, assuring efficiency, effectiveness and transparency of processes and transactions. This phenomenon is changing how governments all over the world do business and deliver services. The drive to serve citizens and other groups better, and to manage scarce resources efficiently, has led governments to seek alternative ways of rendering services and managing processes. Analog and mechanical processes of governing and management have proved inefficient and unproductive in recent times, and the search for alternatives has shown that digital and electronic governance is more beneficial than mechanical processes of governing. The internet and information and communication technology (ICT/IT) have brought significant change to governments. Research in electronic government has also increased, but the field still lacks a sound theoretical framework, which is necessary for a better understanding of the factors influencing the adoption of electronic government systems and the integration of various electronic government applications.

    The efficient and effective allocation and distribution of scarce resources has also become an issue, and there has been a concerted global effort to improve the use and management of scarce resources over the last decade. The purpose of this research is to gain an in-depth understanding of how electronic government can be used to provide accountability, security and transparency in government decision-making processes for the allocation and distribution of resources in the educational sector of Ghana. Research questions were developed to help achieve this aim. The study also provides a detailed literature review, which helped answer the research questions and guided the data collection. A combined quantitative and qualitative research method was chosen to collect vital information and better understand the issues in the study area. Both self-administered questionnaires and interviews were used to collect data relevant to the study, and a thorough analysis of related work was conducted.

    Finally, the research concludes by addressing the research questions, discussing the results and providing some vital recommendations. It was found that electronic government is a fast, reliable, accountable and transparent means of communication and interaction between governments, public institutions and citizens, and is thus crucial in transforming the educational sector of Ghana towards better management of resources. It was also noted that information and communication technology (ICT) is the enabling force that lets electronic government communicate with citizens, supports e-government operation, and provides efficiency, effectiveness and better services within the educational sector of Ghana.

  • 447.
    Agelfors, Eva
    et al.
    KTH, Tidigare Institutioner, Tal, musik och hörsel.
    Beskow, Jonas
    KTH, Tidigare Institutioner, Tal, musik och hörsel.
    Dahlquist, Martin
    KTH, Tidigare Institutioner, Tal, musik och hörsel.
    Granström, Björn
    KTH, Tidigare Institutioner, Tal, musik och hörsel.
    Lundeberg, Magnus
    Salvi, Giampiero
    KTH, Tidigare Institutioner, Tal, musik och hörsel.
    Spens, Karl-Erik
    KTH, Tidigare Institutioner, Tal, musik och hörsel.
    Öhman, Tobias
    A synthetic face as a lip-reading support for hearing impaired telephone users - problems and positive results1999Ingår i: European audiology in 1999: proceeding of the 4th European Conference in Audiology, Oulu, Finland, June 6-10, 1999, 1999Konferensbidrag (Refereegranskat)
  • 448.
    Agelfors, Eva
    et al.
    KTH, Tidigare Institutioner, Tal, musik och hörsel.
    Beskow, Jonas
    KTH, Tidigare Institutioner, Tal, musik och hörsel.
    Granström, Björn
    KTH, Tidigare Institutioner, Tal, musik och hörsel.
    Lundeberg, Magnus
    KTH, Tidigare Institutioner, Tal, musik och hörsel.
    Salvi, Giampiero
    KTH, Tidigare Institutioner, Tal, musik och hörsel.
    Spens, Karl-Erik
    KTH, Tidigare Institutioner, Tal, musik och hörsel.
    Öhman, Tobias
    KTH, Tidigare Institutioner, Tal, musik och hörsel.
    Synthetic visual speech driven from auditory speech1999Ingår i: Proceedings of Audio-Visual Speech Processing (AVSP'99)), 1999Konferensbidrag (Refereegranskat)
    Abstract [en]

    We have developed two different methods for using auditory, telephone speech to drive the movements of a synthetic face. In the first method, Hidden Markov Models (HMMs) were trained on a phonetically transcribed telephone speech database. The output of the HMMs was then fed into a rule-based visual speech synthesizer as a string of phonemes together with time labels. In the second method, Artificial Neural Networks (ANNs) were trained on the same database to map acoustic parameters directly to facial control parameters. These target parameter trajectories were generated by using phoneme strings from a database as input to the visual speech synthesizer. The two methods were evaluated through audiovisual intelligibility tests with ten hearing impaired persons, and compared to "ideal" articulations (where no recognition was involved), a natural face, and to the intelligibility of the audio alone. It was found that the HMM method performs considerably better than the audio-alone condition (54% and 34% keywords correct, respectively), but not as well as the "ideal" articulating artificial face (64%). The intelligibility for the ANN method was 34% keywords correct.
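    The first (HMM-based) pipeline stage — a recognized phoneme string with time labels driving a rule-based visual synthesizer — can be sketched roughly as follows. The viseme table values and parameter names are invented for illustration, not taken from the paper:

    ```python
    # Turn (phoneme, start_ms, end_ms) segments from a recognizer into
    # facial-control keyframes via a viseme lookup table.

    VISEME_TABLE = {          # phoneme -> (jaw_open, lip_round) targets, 0..1
        "a": (0.8, 0.1),
        "o": (0.6, 0.9),
        "m": (0.0, 0.3),
    }

    def phonemes_to_keyframes(segments):
        """segments: list of (phoneme, start_ms, end_ms) tuples."""
        keyframes = []
        for phone, start, end in segments:
            jaw, lips = VISEME_TABLE.get(phone, (0.2, 0.2))  # neutral fallback
            keyframes.append({"t": (start + end) / 2, "jaw": jaw, "lips": lips})
        return keyframes

    print(phonemes_to_keyframes([("m", 0, 80), ("a", 80, 240)]))
    ```

    A real synthesizer would interpolate smoothly between keyframes and model coarticulation; the ANN method described in the abstract skips the phoneme string entirely and regresses facial parameters directly from acoustic features.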

  • 449.
    Agelfors, Eva
    et al.
    KTH, Skolan för datavetenskap och kommunikation (CSC), Tal, musik och hörsel, TMH.
    Beskow, Jonas
    KTH, Skolan för datavetenskap och kommunikation (CSC), Tal, musik och hörsel, TMH.
    Karlsson, Inger
    KTH, Skolan för datavetenskap och kommunikation (CSC), Tal, musik och hörsel, TMH.
    Kewley, Jo
    Salvi, Giampiero
    KTH, Skolan för datavetenskap och kommunikation (CSC), Tal, musik och hörsel, TMH.
    Thomas, Neil
    User evaluation of the SYNFACE talking head telephone2006Ingår i: Computers Helping People With Special Needs, Proceedings / [ed] Miesenberger, K; Klaus, J; Zagler, W; Karshmer, A, 2006, Vol. 4061, s. 579-586Konferensbidrag (Refereegranskat)
    Abstract [en]

    The talking-head telephone, Synface, is a lip-reading support for people with hearing impairment. It has been tested by 49 users with varying degrees of hearing impairment in the UK and Sweden, in lab and home environments. Synface was found to give support to the users, especially in perceiving numbers and addresses, and to be an enjoyable way to communicate. A majority deemed Synface to be a useful product.

  • 450.
    Agelis, Sacki
    et al.
    Högskolan i Halmstad, Akademin för informationsteknologi, Halmstad Embedded and Intelligent Systems Research (EIS).
    Jacobsson, Sofia
    Högskolan i Halmstad, Akademin för informationsteknologi, Halmstad Embedded and Intelligent Systems Research (EIS).
    Jonsson, Magnus
    Högskolan i Halmstad, Akademin för informationsteknologi, Halmstad Embedded and Intelligent Systems Research (EIS).
    Alping, Arne
    Ericsson Microwave Systems, Mölndal, Sweden.
    Ligander, Per
    Ericsson Microwave Systems, Mölndal, Sweden.
    Modular interconnection system for optical PCB and backplane communication2002Ingår i: Parallel and Distributed Processing Symposium., Proceedings International, IPDPS 2002, Abstracts and CD-ROM, Los Alamitos, Calif.: IEEE Press, 2002, s. 245-250Konferensbidrag (Refereegranskat)
    Abstract [en]

    This paper presents a way of building modular systems with a powerful optical interconnection network. Each module, placed on a Printed Circuit Board (PCB), has a generic optical communication interface with a simple electronic router. Together with optical switching using micro-electromechanical system (MEMS) technology, packet switching over reconfigurable topologies is possible. The interconnection system gives the possibility to integrate electronics with optics without changing existing PCB technology. Great interest from industry is therefore expected and the cost advantages are several: reuse of module designs, module upgrades without changing the PCB, low-cost conventional PCB technology, etc. In the version described in this paper, the interconnection system has 48 bidirectional optical channels for intra-PCB communication on each board. For inter-PCB communication, a backplane with 192 bidirectional optical channels supports communication between twelve PCBs. With 2.5 Gbit/s per optical channel in each direction, the aggregated intra-PCB bit rate is 120 Gbit/s full duplex (on each PCB) while the aggregated inter-PCB bit rate is 480 Gbit/s full duplex. A case study shows the feasibility of the interconnection system in a parallel processing system for radar signal processing.
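    The aggregate figures quoted in the abstract follow directly from the channel counts and the per-channel rate:

    ```python
    # Aggregate bit rates for the interconnection system described above:
    # 48 bidirectional intra-PCB channels per board and 192 bidirectional
    # backplane channels, each at 2.5 Gbit/s per direction.

    CHANNEL_GBPS = 2.5

    intra_pcb = 48 * CHANNEL_GBPS    # per board
    inter_pcb = 192 * CHANNEL_GBPS   # backplane, shared by twelve PCBs

    print(intra_pcb, inter_pcb)  # 120.0 480.0 (Gbit/s full duplex)
    ```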
