Digitala Vetenskapliga Arkivet

Privacy-awareness in the era of Big Data and machine learning
Umeå University, Faculty of Science and Technology, Department of Computing Science (Database and Data Mining Group)
ORCID iD: 0000-0001-8820-2405
2019 (English). Licentiate thesis, comprehensive summary (Other academic)
Alternative title
Integritetsmedvetenhet i eran av Big Data och maskininlärning (Swedish)
Abstract [en]

Social Network Sites (SNS) such as Facebook and Twitter play a major role in our lives. On the one hand, they connect people who would otherwise never have been connected. Many recent breakthroughs in AI, such as facial recognition [49], were achieved thanks to the sheer amount of data available on the Internet via SNS (hereafter Big Data). On the other hand, out of privacy concerns, many people try to avoid SNS altogether. Much like the early Internet protocols, which were not designed with security in mind, Machine Learning (ML), the core of AI, was not designed with privacy in mind. For instance, Support Vector Machines (SVMs) solve a quadratic optimization problem whose solution designates certain training instances as support vectors; the data of the people involved in training is therefore published as part of the SVM model itself. Privacy guarantees must thus hold even for worst-case outliers, while data utility is preserved.
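
To make the SVM example concrete, the short sketch below (written for this summary, not taken from the thesis) uses scikit-learn on synthetic data to show that a fitted SVM stores verbatim copies of some training records as support vectors, so releasing the model releases those individuals' data.

    # Minimal sketch (not from the thesis): a fitted SVM keeps raw training
    # rows as support vectors, so publishing the model publishes those rows.
    import numpy as np
    from sklearn.svm import SVC

    rng = np.random.default_rng(0)
    X = rng.normal(size=(100, 2))                 # hypothetical user records
    y = (X[:, 0] + X[:, 1] > 0).astype(int)

    model = SVC(kernel="linear").fit(X, y)

    # The support vectors are exact copies of selected training records.
    print(model.support_vectors_.shape)
    print(np.allclose(model.support_vectors_, X[model.support_]))  # True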

For the above reasons, this thesis studies: (1) how to construct a data federation infrastructure with privacy guarantees in the Big Data era; and (2) how to protect privacy while learning ML models with a good trade-off between data utility and privacy. For (1), we propose frameworks empowered by privacy-aware algorithms that satisfy differential privacy, the state-of-the-art formal privacy guarantee. For (2), we propose neural network architectures that capture the sensitivity of user data, from which the algorithm itself decides how much to learn from the data in order to protect users' privacy while still achieving good performance on a downstream task. The current outcomes of the thesis are: (1) a privacy-guaranteed data federation infrastructure for analysis of sensitive data; (2) privacy-guaranteed algorithms for data sharing; and (3) privacy-concern analysis of social network data. The research methods used in this thesis include experiments on real-life social network datasets to evaluate aspects of the proposed approaches.
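
As a concrete reference point for what satisfying differential privacy means, the sketch below shows the standard Laplace mechanism, the textbook building block in which noise is calibrated to a query's sensitivity and the privacy budget epsilon; it is a generic illustration, not the thesis's implementation.

    # Generic Laplace mechanism (illustration only, not the thesis code):
    # noise scale = sensitivity / epsilon, so a smaller budget means more noise.
    import numpy as np

    def laplace_mechanism(true_value, sensitivity, epsilon, rng=np.random.default_rng()):
        scale = sensitivity / epsilon
        return true_value + rng.laplace(loc=0.0, scale=scale)

    # Example: a counting query (sensitivity 1) answered with epsilon = 0.5.
    private_count = laplace_mechanism(true_value=1234, sensitivity=1.0, epsilon=0.5)
    print(private_count)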

Insights and outcomes from this thesis can be used by both academia and industry to guarantee privacy in the analysis and sharing of personal data. They also have the potential to facilitate further research on privacy-aware representation learning and related evaluation methods.

Place, publisher, year, edition, pages
Umeå: Department of Computing Science, Umeå University, 2019, p. 42
Series
Report / UMINF, ISSN 0348-0542 ; 19.06
Keywords [en]
Differential Privacy, Machine Learning, Deep Learning, Big Data
National Category
Computer Sciences
Research subject
Computer Science
Identifiers
URN: urn:nbn:se:umu:diva-162182
ISBN: 9789178551101 (print)
OAI: oai:DiVA.org:umu-162182
DiVA, id: diva2:1343260
Presentation
2019-09-09, 23:40 (English)
Supervisors
Available from: 2019-08-22. Created: 2019-08-15. Last updated: 2021-03-18. Bibliographically approved.
List of papers
1. Personality-based Knowledge Extraction for Privacy-preserving Data Analysis
2017 (English). In: K-CAP 2017: Proceedings of the Knowledge Capture Conference, Austin, TX, USA: ACM Digital Library, 2017, article id 45. Conference paper, Published paper (Refereed).
Abstract [en]

In this paper, we present a differentially private approach that extracts personality-based knowledge to support privacy-guaranteed analysis of sensitive personal data. Based on this approach, we implement an end-to-end privacy-guaranteed system, KaPPA, that provides researchers with iterative data analysis on sensitive data. The key challenge in differential privacy is determining a reasonable privacy budget that balances privacy preservation and data utility. Most previous work applies a uniform privacy budget to all individual data, which leads to insufficient privacy protection for some individuals while over-protecting others. In KaPPA, the proposed personality-based approach automatically calculates a privacy budget for each individual. Our experimental evaluations show a favorable trade-off between sufficient privacy protection and data utility.
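
A minimal sketch of the idea of personality-based budgets is shown below; the mapping function and all names are illustrative assumptions made for this summary, not the actual KaPPA implementation.

    # Illustrative only (not the KaPPA code): each user gets an individual
    # privacy budget derived from a personality-based privacy-concern score.
    import numpy as np

    def personal_epsilon(concern_score, eps_min=0.1, eps_max=2.0):
        # Higher concern -> smaller epsilon -> more noise for that user.
        return eps_max - concern_score * (eps_max - eps_min)

    rng = np.random.default_rng(0)
    concern = {"user_a": 0.9, "user_b": 0.2}      # hypothetical scores in [0, 1]
    noisy_answers = {
        user: 1.0 + rng.laplace(scale=1.0 / personal_epsilon(score))  # sensitivity-1 query
        for user, score in concern.items()
    }
    print(noisy_answers)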

Place, publisher, year, edition, pages
Austin, TX, USA: ACM Digital Library, 2017
Keywords
Differential Privacy, Privacy-preserving Data Analysis
National Category
Natural Language Processing
Research subject
Computer Science
Identifiers
urn:nbn:se:umu:diva-143228 (URN)
10.1145/3148011.3154479 (DOI)
2-s2.0-85040625465 (Scopus ID)
978-1-4503-5553-7 (ISBN)
Conference
K-CAP 2017: The 9th International Conference on Knowledge Capture, Austin, Texas, December 4-6, 2017
Projects
Privacy-aware data federation
Available from: 2017-12-19. Created: 2017-12-19. Last updated: 2025-02-07. Bibliographically approved.
2. Graph-based Interactive Data Federation System for Heterogeneous Data Retrieval and Analytics
2019 (English). In: Proceedings of The World Wide Web Conference WWW 2019, New York, NY, USA: ACM Digital Library, 2019, p. 3595-3599. Conference paper, Published paper (Refereed).
Abstract [en]

Given the increasing volume of heterogeneous data stored in relational databases, file systems, and cloud environments, such data needs to be easily accessible and semantically connected for further analytics. The potential of data federation is largely untapped; this paper presents an interactive data federation system (https://vimeo.com/319473546) that applies large-scale techniques, including heterogeneous data federation, natural language processing, association rules, and Semantic Web technologies, to perform data retrieval and analytics on social network data. The system first creates a Virtual Database (VDB) to virtually integrate data from multiple data sources. Next, an RDF generator is built to unify the data, which, together with SPARQL queries, supports semantic search over text processed with natural language processing (NLP). Association rule analysis is used to discover patterns and recognize the most important co-occurrences of variables across the data sources. The system demonstrates how interactive data analytics can be facilitated for different application scenarios (e.g., sentiment analysis, privacy-concern analysis, community detection).
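
To illustrate the RDF-plus-SPARQL step described above, the sketch below (using rdflib; the graph contents and property names are assumptions made for this summary, not the system's schema) unifies a few facts as triples and runs a semantic query over them.

    # Illustration of RDF + SPARQL querying (not the system's code); the
    # namespace and properties are invented for this example.
    from rdflib import Graph, Literal, Namespace

    EX = Namespace("http://example.org/")
    g = Graph()
    g.add((EX.user1, EX.postedAbout, Literal("privacy")))
    g.add((EX.user1, EX.sentiment, Literal("negative")))
    g.add((EX.user2, EX.postedAbout, Literal("privacy")))

    # Semantic search: users who posted about privacy with negative sentiment.
    results = g.query("""
        PREFIX ex: <http://example.org/>
        SELECT ?user WHERE {
            ?user ex:postedAbout "privacy" ;
                  ex:sentiment "negative" .
        }
    """)
    for row in results:
        print(row.user)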

Place, publisher, year, edition, pages
New York, NY, USA: ACM Digital Library, 2019
Keywords
heterogeneous data federation, RDF, interactive data analysis
National Category
Natural Language Processing
Identifiers
urn:nbn:se:umu:diva-160892 (URN)
10.1145/3308558.3314138 (DOI)
000483508403101 ()
2-s2.0-85066893934 (Scopus ID)
978-1-4503-6674-8 (ISBN)
Conference
WWW '19, The World Wide Web Conference, San Francisco, CA, USA, May 13–17, 2019
Available from: 2019-06-25. Created: 2019-06-25. Last updated: 2025-02-07. Bibliographically approved.
3. Self-adaptive privacy concern detection for user-generated content
2023 (English). In: Computational linguistics and intelligent text processing: 19th International Conference, CICLing 2018, Hanoi, Vietnam, March 18-24, 2018, revised selected papers, part I / [ed] Alexander Gelbukh, Springer Science+Business Media B.V., 2023, p. 153-167. Conference paper, Published paper (Refereed).
Abstract [en]

To protect user privacy in data analysis, a state-of-the-art strategy is differential privacy, in which calibrated noise is injected into the true analysis output. The noise masks the sensitive information of individuals contained in the dataset. However, determining the amount of noise is a key challenge: too much noise destroys data utility, while too little noise increases privacy risk. Although previous research has designed mechanisms to protect data privacy in different scenarios, most existing studies assume uniform privacy concerns for all individuals. Consequently, applying an equal amount of noise to all individuals leads to insufficient privacy protection for some users, while over-protecting others. To address this issue, we propose a self-adaptive approach for privacy-concern detection based on user personality. Our experimental studies demonstrate the effectiveness of the approach in providing suitable personalized privacy protection for cold-start users (i.e., users without privacy-concern information in the training data).
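
A hypothetical sketch of such a self-adaptive detector is shown below: a small multi-layer perceptron maps personality features (e.g., Big Five traits) to a privacy-concern score, so cold-start users still receive a personalized estimate. The data, architecture, and coefficients are assumptions for this summary, not the paper's model.

    # Hypothetical sketch (not the paper's model): an MLP predicting a
    # privacy-concern score from personality features for cold-start users.
    import numpy as np
    from sklearn.neural_network import MLPRegressor

    rng = np.random.default_rng(0)
    X_train = rng.uniform(size=(500, 5))                      # Big Five traits of known users
    y_train = X_train @ np.array([0.1, 0.4, -0.2, 0.5, 0.2])  # synthetic concern scores

    mlp = MLPRegressor(hidden_layer_sizes=(16, 8), max_iter=2000, random_state=0)
    mlp.fit(X_train, y_train)

    cold_start_user = rng.uniform(size=(1, 5))                # user without a concern label
    predicted_concern = mlp.predict(cold_start_user)[0]
    print(predicted_concern)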

Place, publisher, year, edition, pages
Springer Science+Business Media B.V., 2023
Series
Lecture Notes in Computer Science, ISSN 0302-9743, E-ISSN 1611-3349 ; 13396
Keywords
privacy-guaranteed data analysis, deep learning, multi-layer perceptron
National Category
Natural Language Processing
Identifiers
urn:nbn:se:umu:diva-146470 (URN)
10.1007/978-3-031-23793-5_14 (DOI)
2-s2.0-85149699287 (Scopus ID)
978-3-031-23792-8 (ISBN)
Conference
19th International Conference on Computational Linguistics and Intelligent Text Processing, Hanoi, Vietnam, March 18-24, 2018.
Projects
Privacy-aware Data Federation
Note

Preprint published 2018 at arXiv.org.

Available from: 2018-04-10. Created: 2018-04-10. Last updated: 2025-02-07. Bibliographically approved.
4. dpUGC: learn differentially private representation for user generated contents
2023 (English). In: Computational linguistics and intelligent text processing: 20th International Conference, CICLing 2019, La Rochelle, France, April 7-13, 2019, revised selected papers, part I / [ed] Alexander Gelbukh, Springer, 2023, Vol. 13451, p. 316-331. Conference paper, Published paper (Refereed).
Abstract [en]

This paper first proposes a simple yet efficient generalized approach to apply differential privacy to text representations (i.e., word embeddings). Based on it, we propose a user-level approach to learning personalized, differentially private word embedding models on user-generated content (UGC). To the best of our knowledge, this is the first work to learn user-level differentially private word embedding models from text for sharing. The proposed approaches protect individuals from re-identification and, in particular, provide a better trade-off between privacy and data utility on UGC data intended for sharing. The experimental results show that the trained embedding models remain applicable to classic text analysis tasks (e.g., regression). Moreover, the proposed approaches to learning differentially private embedding models are both framework- and data-independent, which facilitates deployment and sharing. The source code is available at https://github.com/sonvx/dpText.
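
The sketch below gives a simplified picture of releasing an embedding matrix under a privacy budget by perturbing it before sharing; it is an assumption-laden illustration (the sensitivity bound and epsilon are made up here), not the dpUGC training procedure, which is available at the linked repository.

    # Simplified illustration (not the dpUGC procedure): perturb an embedding
    # matrix with Laplace noise before sharing; the sensitivity bound and
    # epsilon are assumptions for this example.
    import numpy as np

    def privatize_embeddings(emb, epsilon, sensitivity=1.0, rng=np.random.default_rng()):
        scale = sensitivity / epsilon
        return emb + rng.laplace(scale=scale, size=emb.shape)

    embeddings = np.random.default_rng(0).normal(size=(10_000, 100))  # vocab x dim
    shared_embeddings = privatize_embeddings(embeddings, epsilon=0.5)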

Place, publisher, year, edition, pages
Springer, 2023
Series
Lecture Notes in Computer Science, ISSN 0302-9743, E-ISSN 1611-3349 ; 13451
Keywords
Private word embedding, Differential privacy, UGC
National Category
Natural Language Processing
Identifiers
urn:nbn:se:umu:diva-160887 (URN)
10.1007/978-3-031-24337-0_23 (DOI)
2-s2.0-85149907226 (Scopus ID)
978-3-031-24336-3 (ISBN)
978-3-031-24337-0 (ISBN)
Conference
20th International Conference on Computational Linguistics and Intelligent Text Processing, La Rochelle, France, April 7-13, 2019.
Note

Originally included in the thesis in manuscript form.

Available from: 2019-06-25. Created: 2019-06-25. Last updated: 2025-02-07. Bibliographically approved.
5. Generic Multilayer Network Data Analysis with the Fusion of Content and Structure
2019 (English). In: Proceedings of the 20th International Conference on Computational Linguistics and Intelligent Text Processing (CICLing), 2019, Cornell University Library, arXiv.org, 2019. Conference paper, Published paper (Refereed).
Abstract [en]

Multi-feature data analysis (e.g., on Facebook or LinkedIn data) is challenging, especially if one wants to perform it efficiently while retaining the flexibility to choose features of interest. Features (e.g., age, gender, relationship status, political view) can be given explicitly in the datasets, but can also be derived from content (e.g., political view inferred from Facebook posts). Analysis from multiple perspectives is needed to understand the datasets (or subsets of them) and to infer meaningful knowledge. For example, the influence of age, location, and marital status on political views may need to be inferred separately or in combination. In this paper, we adapt multilayer network (MLN) analysis, a nontraditional approach, to model the Facebook datasets, integrate content analysis, and conduct analysis driven by a list of desired application-based queries. Our experimental analysis shows the flexibility and efficiency of the proposed approach when modeling and analyzing datasets with multiple features.
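
As a small illustration of the multilayer-network modelling described above (written for this summary; the layers and edges are invented), each feature of interest becomes its own layer over a shared node set, so per-layer and combined analyses can be run.

    # Illustrative multilayer network (not the paper's implementation): one
    # layer per feature, all layers sharing the same node set.
    import networkx as nx

    nodes = ["u1", "u2", "u3"]
    layers = {
        "friendship": nx.Graph([("u1", "u2"), ("u2", "u3")]),
        "political_view": nx.Graph([("u1", "u3")]),   # e.g., derived from post content
    }
    for layer in layers.values():
        layer.add_nodes_from(nodes)

    # Per-layer analysis: degree of u2 in each layer.
    print({name: layer.degree("u2") for name, layer in layers.items()})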

Place, publisher, year, edition, pages
Cornell University Library, arXiv.org, 2019
Keywords
Social network analysis, Multilayer networks, Content analysis
National Category
Natural Language Processing
Identifiers
urn:nbn:se:umu:diva-162572 (URN)
Conference
20th International Conference on Computational Linguistics and Intelligent Text Processing, La Rochelle, France, April 7-13, 2019
Available from: 2019-08-22. Created: 2019-08-22. Last updated: 2025-02-07. Bibliographically approved.

Open Access in DiVA

fulltext (2203 kB), 2690 downloads
File information
File name: FULLTEXT02.pdf
File size: 2203 kB
Checksum (SHA-512): aea67d9d9669867cdcb7ec4654d3072abf295d1d8a54821bde8ae8151d89039b161f4fc85e132f93d883bedf1cd92c298f323ef285182610322befc15b798ad0
Type: fulltext
Mimetype: application/pdf

By author/editor
Vu, Xuan-Son
By organisation
Department of Computing Science
Computer Sciences
