Digitala Vetenskapliga Arkivet

1 - 14 of 14
  • 1.
    Ahmadi Mehri, Vida
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Towards Automated Context-aware Vulnerability Risk Management (2023). Doctoral thesis, comprehensive summary (Other academic)
    Abstract [en]

    The information security landscape continually evolves with a growing number of publicly known vulnerabilities (e.g., 25,064 new vulnerabilities in 2022). Vulnerabilities play a prominent role in all types of security-related attacks, including ransomware and data breaches. Vulnerability Risk Management (VRM) is an essential cyber defense mechanism to eliminate or reduce attack surfaces in information technology. VRM is a continuous procedure of identification, classification, evaluation, and remediation of vulnerabilities. The traditional VRM procedure is time-consuming, as classification, evaluation, and remediation require skills and knowledge of specific computer systems, software, networks, and security policies. Activities requiring human input slow down the VRM process, increasing the risk that a vulnerability is exploited.

    The thesis introduces the Automated Context-aware Vulnerability Risk Management (ACVRM) methodology to improve the VRM procedure by automating the entire VRM cycle, reducing both the procedure time and the need for expert intervention. ACVRM focuses on the challenging stages of VRM (i.e., classification, evaluation, and remediation) to support security experts in promptly prioritizing and patching vulnerabilities.

    The ACVRM concept is designed and implemented in a test environment as a proof of concept. The efficiency of patch prioritization by ACVRM is compared against a commercial vulnerability management tool (i.e., Rudder). ACVRM prioritizes vulnerabilities based on the patch score (i.e., a numeric representation of the vulnerability's characteristics and risk), historical data, and dependencies. The experiments indicate that ACVRM can rank vulnerabilities in the organization's context by weighting the criteria used in the patch score calculation. Automated patch deployment is implemented in three use cases to investigate how learning from historical events and dependencies affects the patch success rate and the need for human intervention. Our findings show that ACVRM reduced the need for human actions, increased the ratio of successfully patched vulnerabilities, and decreased the cycle time of the VRM process.
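
    The abstract describes VRM as a cycle of identification, classification, evaluation, and remediation that ACVRM automates end to end. Below is a minimal illustrative sketch of such a cycle in Python; the function bodies, data fields, and the severity-times-criticality scoring are assumptions made for the example, not the thesis implementation.

        def identify():
            # Stand-in for querying scanners / vulnerability databases.
            return [{"id": "CVE-2022-0001", "severity": 9.8, "asset": "web-01"},
                    {"id": "CVE-2022-0002", "severity": 5.4, "asset": "build-02"}]

        def classify(vulns):
            # Attach organisation context, e.g. how critical the affected asset is.
            criticality = {"web-01": 0.9, "build-02": 0.3}
            return [dict(v, criticality=criticality.get(v["asset"], 0.5)) for v in vulns]

        def evaluate(vulns):
            # Turn severity and context into a patch score and rank by it.
            return sorted(vulns, key=lambda v: v["severity"] * v["criticality"], reverse=True)

        def remediate(vulns):
            for v in vulns:
                print(f"patching {v['id']} on {v['asset']}")

        def vrm_cycle():
            remediate(evaluate(classify(identify())))

        vrm_cycle()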

  • 2.
    Ahmadi Mehri, Vida
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Towards Secure Collaborative AI Service Chains (2019). Licentiate thesis, comprehensive summary (Other academic)
    Abstract [en]

    At present, Artificial Intelligence (AI) systems have been adopted in many different domains, such as healthcare, robotics, automotive, telecommunication systems, security, and finance, to integrate intelligence into their services and applications. Intelligent personal assistants such as Siri and Alexa are examples of AI systems making an impact on our daily lives. Since many AI systems are data-driven, their development process requires large volumes of data for training and validation, advanced algorithms, computing power, and storage. Collaboration in the AI development process (the AI engineering process) reduces the cost and time of bringing AI applications to market. However, collaboration introduces concerns about privacy and about piracy of intellectual property, which can be caused by the actors who collaborate in the engineering process. This work investigates the non-functional requirements, such as privacy and security, for enabling collaboration in AI service chains. It proposes an architectural design approach for collaborative AI engineering and explores the concept of the pipeline (service chain) for chaining AI functions. In order to enable controlled collaboration between AI artefacts in a pipeline, this work makes use of virtualisation technology to define and implement Virtual Premises (VPs), which act as protection wrappers for AI pipelines. A VP is a virtual policy enforcement point for a pipeline and requires access permission and authenticity for each element in a pipeline before the pipeline can be used. Furthermore, the proposed architecture is evaluated with a use-case approach that enables quick detection of design flaws during the initial stage of implementation. To evaluate the security level and compliance with security requirements, threat modeling was used to identify potential threats and vulnerabilities of the system and analyse their possible effects. The output of the threat modeling was used to define countermeasures to threats related to unauthorised access and execution of AI artefacts.
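
    The central mechanism in this abstract is the Virtual Premise (VP): a policy enforcement point that requires access permission and authenticity for each pipeline element before the pipeline can be used. The sketch below only illustrates that idea; the hash-based authenticity check, the permission table, and all names are assumptions for the example, not the thesis design.

        import hashlib

        def digest(code: bytes) -> str:
            return hashlib.sha256(code).hexdigest()

        # One pipeline element: its source bytes (for the authenticity check) and a callable.
        PRE_CODE = b"def preprocess(x): return x.strip().lower()"
        TRUSTED_DIGESTS = {"preprocess": digest(PRE_CODE)}   # registered by the artefact owner
        PERMISSIONS = {("alice", "preprocess")}              # (user, element) pairs that may run

        def vp_gate(user, elements):
            # Refuse the whole pipeline unless every element is authentic and permitted.
            for name, code, _ in elements:
                if digest(code) != TRUSTED_DIGESTS.get(name):
                    raise PermissionError(f"element '{name}' failed the authenticity check")
                if (user, name) not in PERMISSIONS:
                    raise PermissionError(f"user '{user}' is not authorised for '{name}'")

        def run_pipeline(user, elements, data):
            vp_gate(user, elements)
            for _, _, fn in elements:
                data = fn(data)
            return data

        elements = [("preprocess", PRE_CODE, lambda x: x.strip().lower())]
        print(run_pipeline("alice", elements, "  Example Input "))   # -> "example input"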

  • 3.
    Ahmadi Mehri, Vida
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Arlos, Patrik
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Casalicchio, Emiliano
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Automated Context-Aware Vulnerability Risk Management for Patch Prioritization (2022). In: Electronics, E-ISSN 2079-9292, Vol. 11, no 21, article id 3580. Article in journal (Refereed)
    Abstract [en]

    The information-security landscape continuously evolves as new vulnerabilities and sophisticated exploit tools are discovered daily. Vulnerability risk management (VRM) is the most crucial cyber defense for eliminating attack surfaces in IT environments. VRM is a cyclical practice of identifying, classifying, evaluating, and remediating vulnerabilities. The evaluation stage of VRM is neither automated nor cost-effective, as it demands great manual administrative effort to prioritize patches. Therefore, there is an urgent need to improve the VRM procedure by automating the entire VRM cycle in the context of a given organization. The authors propose automated context-aware VRM (ACVRM) to address the above challenges. This study defines the criteria to consider in the evaluation stage of ACVRM to prioritize patching. Moreover, patch prioritization is customized to an organization’s context by allowing the organization to select the vulnerability management mode and weight the selected criteria. Specifically, this study considers four vulnerability evaluation cases: (i) evaluation criteria are weighted homogeneously; (ii) attack complexity and availability are not considered important criteria; (iii) the security score is the only important criterion considered; and (iv) criteria are weighted based on the organization’s risk appetite. The results verify the proposed solution’s efficiency compared with the Rudder vulnerability management tool (CVE plugin). While Rudder produces a ranking independent of the scenario, ACVRM can sort vulnerabilities according to the organization’s criteria and context. Moreover, while Rudder randomly sorts vulnerabilities with the same patch score, ACVRM sorts them according to their age, giving a higher security score to older publicly known vulnerabilities. © 2022 by the authors.
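
    As a rough illustration of the evaluation stage described above — weighting selected criteria and breaking ties between equal patch scores by vulnerability age — here is a hedged Python sketch. The criterion names, weights, and data are assumptions for the example and are not taken from the published ACVRM implementation.

        SCENARIOS = {
            "homogeneous": {"security_score": 1/3, "attack_complexity": 1/3, "availability": 1/3},
            "score_only":  {"security_score": 1.0, "attack_complexity": 0.0, "availability": 0.0},
        }

        def patch_score(vuln, weights):
            # Weighted sum of normalised criteria in [0, 1].
            return sum(w * vuln[c] for c, w in weights.items())

        def prioritise(vulns, weights):
            # Higher patch score first; equal scores broken by age, older first.
            return sorted(vulns, key=lambda v: (-patch_score(v, weights), -v["age_days"]))

        vulns = [
            {"id": "CVE-OLD", "security_score": 0.9, "attack_complexity": 0.5, "availability": 0.7, "age_days": 900},
            {"id": "CVE-NEW", "security_score": 0.9, "attack_complexity": 0.5, "availability": 0.7, "age_days": 30},
        ]
        print([v["id"] for v in prioritise(vulns, SCENARIOS["homogeneous"])])   # CVE-OLD ranked first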

  • 4.
    Ahmadi Mehri, Vida
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Arlos, Patrik
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Casalicchio, Emiliano
    Sapienza University of Rome, Italy.
    Automated Patch Management: An Empirical Evaluation Study (2023). In: Proceedings of the 2023 IEEE International Conference on Cyber Security and Resilience, CSR 2023, IEEE, 2023, p. 321-328. Conference paper (Refereed)
    Abstract [en]

    Vulnerability patch management is one of the most complex issues facing IT organizations due to the increasing number of publicly known vulnerabilities and explicit patch deadlines for compliance. Patch management requires human involvement in testing, deploying, and verifying a patch and its potential side effects. Hence, there is a need to automate the patch management procedure in order to meet patch deadlines with a limited number of available experts. This study proposes and implements an automated patch management procedure to address these challenges. The method also includes logic to automatically handle errors that might occur during patch deployment and verification. Moreover, the authors added an automated review step before patch management to adjust the patch prioritization list if multiple cumulative patches or dependencies are detected. The results indicate that our method reduced the need for human intervention, increased the ratio of successfully patched vulnerabilities, and decreased the execution time of vulnerability risk management.
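
    The abstract outlines two ideas: an automated review step that adjusts the prioritization list for cumulative patches and dependencies, and automated handling of deployment errors. The sketch below is a minimal, assumption-laden illustration of that control flow; the field names, retry policy, and report format are invented for the example, not taken from the paper.

        def review(patches):
            # Drop patches superseded by a cumulative patch; deploy dependencies first.
            superseded = {pid for p in patches for pid in p.get("supersedes", [])}
            kept = [p for p in patches if p["id"] not in superseded]
            return sorted(kept, key=lambda p: 0 if p.get("is_dependency") else 1)

        def deploy_and_verify(patch):
            # Stand-in for the real deployment and verification step.
            if patch.get("fails"):
                raise RuntimeError("verification failed")

        def patch_run(patches, max_retries=1):
            report = {}
            for p in review(patches):
                for _ in range(max_retries + 1):
                    try:
                        deploy_and_verify(p)
                        report[p["id"]] = "patched"
                        break
                    except RuntimeError:
                        report[p["id"]] = "needs manual review"
            return report

        print(patch_run([
            {"id": "KB-1", "supersedes": ["KB-0"]},   # cumulative patch
            {"id": "KB-0"},                           # superseded, dropped in review
            {"id": "LIB-DEP", "is_dependency": True}, # moved to the front
            {"id": "KB-2", "fails": True},            # retried, then flagged
        ]))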

  • 5.
    Ahmadi Mehri, Vida
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Arlos, Patrik
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Casalicchio, Emiliano
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Normalization Framework for Vulnerability Risk Management in Cloud (2021). In: Proceedings - 2021 International Conference on Future Internet of Things and Cloud, FiCloud 2021, IEEE, 2021, p. 99-106. Conference paper (Refereed)
    Abstract [en]

    Vulnerability Risk Management (VRM) is a critical element in cloud security that directly impacts cloud providers’ security assurance levels. Today, VRM is a challenging process because of the dramatic increase in known vulnerabilities (+26% in the last five years) and because it depends even more strongly on the organization’s context. Moreover, a vulnerability’s severity score depends on the Vulnerability Database (VD) selected as the reference in VRM. All these factors introduce a new challenge for security specialists in evaluating and patching vulnerabilities. This study provides a framework to improve the classification and evaluation phases of vulnerability risk management while using multiple vulnerability databases as references. Our solution normalizes the severity score of each vulnerability based on the selected security assurance level. The results of our study highlight the role of the vulnerability databases in patch prioritization, showing the advantage of using multiple VDs.
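
    To illustrate the kind of normalisation the framework performs — putting severity scores from several vulnerability databases on one scale and letting a selected security assurance level influence the result — here is a hedged sketch. The per-database scales, the averaging, and the assurance factors are example assumptions, not the framework's actual parameters.

        VD_SCALE_MAX = {"nvd": 10.0, "vendor_a": 5.0}        # each VD reports on its own scale
        ASSURANCE_FACTOR = {"basic": 0.8, "substantial": 1.0, "high": 1.2}

        def normalised_severity(scores, assurance="substantial"):
            # Average the per-VD scores after rescaling to [0, 1], then apply the level factor.
            rescaled = [min(s / VD_SCALE_MAX[vd], 1.0) for vd, s in scores.items()]
            value = sum(rescaled) / len(rescaled) * ASSURANCE_FACTOR[assurance]
            return min(value, 1.0)

        print(normalised_severity({"nvd": 9.8, "vendor_a": 4.0}, assurance="high"))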

  • 6.
    Ahmadi Mehri, Vida
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science. City Network International AB, Sweden.
    Arlos, Patrik
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Casalicchio, Emiliano
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science. Sapienza University of Rome, ITA.
    Normalization of Severity Rating for Automated Context-aware Vulnerability Risk Management (2020). In: Proceedings - 2020 IEEE International Conference on Autonomic Computing and Self-Organizing Systems Companion, ACSOS-C 2020, Institute of Electrical and Electronics Engineers (IEEE), 2020, p. 200-205, article id 9196350. Conference paper (Refereed)
    Abstract [en]

    In the last three years, the unprecedented increase in discovered vulnerabilities ranked with critical and high severity has raised new challenges in Vulnerability Risk Management (VRM). Indeed, identifying, analyzing, and remediating this high rate of vulnerabilities is labour-intensive, especially for enterprises dealing with complex computing infrastructures such as Infrastructure-as-a-Service providers. Hence, there is a demand for new criteria to prioritize vulnerability remediation and for new automated/autonomic approaches to VRM.

    In this paper, we address the above challenge by proposing an Automated Context-aware Vulnerability Risk Management (ACVRM) methodology that aims to reduce the labour-intensive tasks of security experts and to prioritize vulnerability remediation on the basis of the organization's context rather than risk severity only. The proposed solution considers multiple vulnerability databases to obtain broad coverage of known vulnerabilities and to determine the vulnerability rank. After describing the new VRM methodology, we focus on the problem of obtaining a single vulnerability score through normalization and fusion of the ranks obtained from multiple vulnerability databases. Our solution is a parametric normalization that accounts for organization needs and specifications.
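
    The second paragraph concerns fusing ranks from multiple vulnerability databases into a single score via a parametric normalisation. The following is only an illustrative sketch of rank fusion, with per-database weights standing in for the organisation-specific parameters; it is not the method defined in the paper.

        def fuse_ranks(ranks_per_vd, vd_weights):
            # ranks_per_vd: {vd: {cve: rank}} with rank 1 = most severe.
            # Returns CVE ids ordered by the weighted sum of reciprocal ranks.
            fused = {}
            for vd, ranks in ranks_per_vd.items():
                w = vd_weights.get(vd, 1.0)
                for cve, r in ranks.items():
                    fused[cve] = fused.get(cve, 0.0) + w / r
            return sorted(fused, key=fused.get, reverse=True)

        ranks = {"nvd":      {"CVE-1": 1, "CVE-2": 2},
                 "vendor_a": {"CVE-1": 3, "CVE-2": 1}}
        print(fuse_ranks(ranks, {"nvd": 0.7, "vendor_a": 0.3}))   # ['CVE-1', 'CVE-2']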

  • 7.
    Ahmadi Mehri, Vida
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science and Engineering.
    Ilie, Dragos
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science and Engineering.
    Tutschku, Kurt
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science and Engineering.
    Designing a Secure IoT System Architecture from a Virtual Premise for a Collaborative AI Lab (2019). Conference paper (Refereed)
    Abstract [en]

    IoT systems are increasingly composed of flexible, programmable, virtualised, and arbitrarily chained IoT elements and services using portable code. Moreover, they might be sliced, i.e. allow multiple logical IoT systems (network + application) to run on top of a shared physical network and compute infrastructure. However, designing and implementing security mechanisms for such IoT systems is particularly challenging since a) promising technologies are still maturing, and b) the relationships among the many requirements, technologies, and components are difficult to model a priori.

    The aim of the paper is to define design cues for the security architecture and mechanisms of future, virtualised, arbitrarily chained, and eventually sliced IoT systems. Our focus is on the authorisation and authentication of users and hosts and on code integrity in these virtualised systems. The design cues are derived from the design and implementation of a secure virtual environment for distributed and collaborative AI system engineering using so-called AI pipelines. The pipelines apply chained virtual elements and services and facilitate the slicing of the system. The virtual environment is denoted for short as the virtual premise (VP). The use case of the VP for AI design provides insight into the complex interactions in the architecture, leading us to believe that the VP concept can be generalised to the IoT systems mentioned above. In addition, the use case permits us to derive, implement, and test solutions. This paper describes the flexible architecture of the VP and the design and implementation of access and execution control in virtual and containerised environments.

  • 8.
    Ahmadi Mehri, Vida
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science and Engineering.
    Ilie, Dragos
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science and Engineering.
    Tutschku, Kurt
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science and Engineering.
    Privacy and DRM Requirements for Collaborative Development of AI Application (2019). In: ACM International Conference Proceeding Series, Association for Computing Machinery (ACM), 2019, article id 3233268. Conference paper (Refereed)
    Abstract [en]

    The use of data is essential for the capabilities of data-driven Artificial Intelligence (AI), Deep Learning, and Big Data analysis techniques. This data usage, however, intrinsically raises concerns about data privacy. In addition, supporting collaborative development of AI applications across organisations has become a major need in AI system design. Digital Rights Management (DRM) is required to protect intellectual property in such collaboration. As a consequence of DRM, privacy threats and privacy-enforcing mechanisms will interact with each other.

    This paper describes the privacy and DRM requirements in collaborative AI system design using AI pipelines. It describes the relationships between DRM and privacy and outlines the threats against these non-functional features. Finally, the paper provides a first security architecture to protect against the threats to DRM and privacy in collaborative AI design using AI pipelines.

  • 9.
    Ahmadi Mehri, Vida
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science and Engineering.
    Ilie, Dragos
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science and Engineering.
    Tutschku, Kurt
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science and Engineering.
    Towards Privacy Requirements for Collaborative Development of AI Applications (2018). In: 14th Swedish National Computer Networking Workshop (SNCNW), 2018. Conference paper (Refereed)
    Abstract [en]

    The use of data is essential for the capabilities of data-driven Artificial Intelligence (AI), Deep Learning, and Big Data analysis techniques. The use of data, however, intrinsically raises concerns about data privacy, in particular for the individuals who provide the data. Hence, data privacy is considered one of the main non-functional features of the Next Generation Internet. This paper describes the privacy challenges and requirements for collaborative AI application development. We investigate the constraints of using digital rights management for supporting collaboration to address the privacy requirements in the regulation.

  • 10.
    Ahmadi Mehri, Vida
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science and Engineering.
    Tutschku, Kurt
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science and Engineering.
    Flexible Privacy and High Trust in the Next Generation Internet: The Use Case of a Cloud-based Marketplace for AI (2017). Conference paper (Refereed)
    Abstract [en]

    Cloudified architectures facilitate resource access and sharing that is independent of physical locations. They permit high availability of resources at low operational costs. These advantages, however, do not come for free. End users might fear that they lose control over the location of their data and, thus, over their autonomy in deciding to whom the data is communicated. Thus, strong privacy and trust concerns arise for end users. In this work we review and investigate privacy and trust requirements for Cloud systems in general and for a cloud-based marketplace (CMP) for AI in particular. We investigate whether and how the current privacy and trust dimensions can be applied to Clouds and to the design of a CMP. We also propose the concept of a "virtual premise" for enabling "Privacy-by-Design" [1] in Clouds. The idea of a "virtual premise" might not be a universal solution for every privacy requirement. However, we expect that it provides flexibility in designing privacy in Clouds, thus leading to higher trust.

  • 11.
    Ahmadi Mehri, Vida
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science and Engineering.
    Tutschku, Kurt
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science and Engineering.
    Privacy and trust in cloud-based marketplaces for AI and data resources (2017). In: IFIP Advances in Information and Communication Technology, Springer New York LLC, 2017, Vol. 505, p. 223-225. Conference paper (Refereed)
    Abstract [en]

    The processing of the huge amounts of information from the Internet of Things (IoT) has become challenging. Artificial Intelligence (AI) techniques have been developed to handle this task efficiently. However, they require annotated data sets for training, while manual preprocessing of the data sets is costly. The H2020 project “Bonseyes” has suggested a “Market Place for AI” (MP), where stakeholders can engage trustfully in business around AI resources and data sets. For the sake of generality, the MP permits trading of resources that have high privacy requirements (e.g. data sets containing patient medical information) as well as ones with low requirements (e.g. fuel consumption of cars). In this abstract we review trust and privacy definitions and provide a first requirements analysis for them with regard to Cloud-based Market Places (CMPs). The comparison of definitions and requirements allows for the identification of the research gap that will be addressed by the main author's PhD project. © IFIP International Federation for Information Processing 2017.

  • 12.
    Koyyada, Sai Pranav
    et al.
    Blekinge Institute of Technology (student).
    Deshmukh, Denim
    Blekinge Institute of Technology (student).
    Badampudi, Deepika
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Ahmadi Mehri, Vida
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science. City Network International AB.
    Usman, Muhammad
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Towards automated open source assessment - An empirical study (2022). In: CEUR Workshop Proceedings / [ed] Lee S., Anwar S., Technical University of Aachen, 2022. Conference paper (Refereed)
    Abstract [en]

    Open source software (OSS) assessment has become important given the increased adoption of OSS in commercial product development. Researchers have proposed many OSS assessment models. However, little is known about the industrial relevance of the models. In this study, we propose an automated tool based on the OSS assessment attributes identified together with a European cloud provider company. We analyzed 51 repositories to observe patterns in maintenance activities over their lifetime (from inception to the latest release). Based on the analysis, we propose a novel approach for evaluating the maturity of an OSS project. Finally, we assessed the usefulness of our automated solution in a pilot study. © 2022 Copyright for this paper by its authors. Use permitted under Creative Commons License Attribution 4.0 International (CC BY 4.0).
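
    One kind of maintenance-activity pattern mentioned above can be illustrated with a small sketch: bucketing a repository's commit dates per year to see how activity evolves from inception to the latest release. The input data and the yearly bucketing are assumptions for the example; the study's actual attributes and tooling are not reproduced here.

        from collections import Counter
        from datetime import date

        def activity_profile(commit_dates):
            # Commits per calendar year, oldest year first.
            per_year = Counter(d.year for d in commit_dates)
            return sorted(per_year.items())

        commits = [date(2019, 3, 1), date(2019, 7, 9), date(2021, 1, 5), date(2022, 11, 30)]
        print(activity_profile(commits))   # [(2019, 2), (2021, 1), (2022, 1)]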

  • 13.
    Tutschku, Kurt
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Communication Systems.
    Ahmadi Mehri, Vida
    Blekinge Institute of Technology, Faculty of Computing, Department of Communication Systems.
    Carlsson, Anders
    Blekinge Institute of Technology, Faculty of Computing, Department of Communication Systems.
    Chivukula, Krishna Varaynya
    CityNetwork Webbhotell AB.
    Christenson, Johan
    CityNetwork Webbhotell AB.
    On Resource Description Capabilities of On-Board Tools for Resource Management in Cloud Networking and NFV Infrastructures (2016). In: 2016 IEEE International Conference on Communications Workshops (ICC), Institute of Electrical and Electronics Engineers (IEEE), 2016, p. 442-447. Conference paper (Refereed)
    Abstract [en]

    The rapid adoption of networks that are based on "cloudification" and Network Function Virtualisation (NFV) comes from the anticipated high cost savings of up to 70% in their build and operation. The high savings are founded in the use of general standard servers, instead of single-purpose hardware, and in efficient resource sharing through virtualisation concepts. In this paper, we discuss the resource-description capabilities of "on-board" tools, i.e. standard Linux commands, to enable OPEX savings. We put a focus on monitoring resources on small time scales and on the variation observed on such scales. We introduce a QoE-based comparative concept that relates guest and host views on "utilisation" and "load" for the analysis of the variations. We describe the order of the variations in "utilisation" and "load" by measurement and by graphical analysis of the measurements. We do these evaluations for different host operating systems and monitoring tools.
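
    As a concrete illustration of what "on-board" resource description with standard Linux interfaces can look like on small time scales, here is a hedged sketch that samples /proc/stat and /proc/loadavg. The sampling interval and the utilisation formula are generic assumptions; the paper's QoE-based comparison of guest and host views is not reproduced here.

        import time

        def cpu_times():
            # First line of /proc/stat: aggregate CPU jiffies; field 4 is idle time.
            with open("/proc/stat") as f:
                fields = [float(x) for x in f.readline().split()[1:]]
            return fields[3], sum(fields)

        def sample(interval=0.5):
            # Utilisation over one short interval plus the 1-minute load average.
            idle0, total0 = cpu_times()
            time.sleep(interval)
            idle1, total1 = cpu_times()
            util = 1.0 - (idle1 - idle0) / (total1 - total0)
            with open("/proc/loadavg") as f:
                load1 = float(f.read().split()[0])
            return util, load1

        print("cpu utilisation %.2f, 1-min load %.2f" % sample())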

  • 14.
    Tutschku, Kurt
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Communication Systems.
    Ahmadi Mehri, Vida
    Blekinge Institute of Technology, Faculty of Computing, Department of Communication Systems.
    Carlsson, Anders
    Blekinge Institute of Technology, Faculty of Computing, Department of Communication Systems.
    Towards Multi-layer Resource Management in Cloud Networking and NFV Infrastructures (2016). Conference paper (Refereed)
    Abstract [en]

    Cloud Networking (CN) and related concepts offer appealing novelties to Cloud Computing (CC) customers. They can do one-stop shopping for network-enhanced cloud services. In addition, the costs of such services might be low due to multiple customers sharing the infrastructures. Moreover, telecommunication network operators are adopting CN in their Network Functions Virtualisation (NFV) framework for reducing costs and increasing the flexibility of their networks. The technical appeal of CN comes from the tight integration of CC and smart networks. The economical attractiveness results from avoiding dedicated hardware, sharing of resources, and simplified resource management (RM) as seen by the users and the applications, respectively. The vision of cheap and integrated CN services is obviously attractive, but it is also evident that it will require more complex RM procedures for efficiently balancing the usage of all resources. In this contribution, we suggest an initial architecture for integrated and practical RM in CN and NFV systems. The RM concept aims at locating and analysing performance bottlenecks and efficiency problems, and eventually discovering unused resources. The suggested architecture is based on a layered view of the system. Moreover, we detail difficulties in practical resource usage monitoring which, in turn, define requirements for an RM architecture. The requirement analysis is based on measurements in a CN infrastructure.
