Publications

Additional references: Google Scholar and DBLP.

2022

  • Johannes Lohmöller, Jan Pennekamp, Roman Matzutt, and Klaus Wehrle. On the Need for Strong Sovereignty in Data Ecosystems. In Proceedings of the 1st International Workshop on Data Ecosystems (DEco ’22), 09 2022.
    [BibTeX] [Abstract] [PDF]
    Data ecosystems are the foundation of emerging data-driven business models as they (i) enable an automated exchange between their participants and (ii) provide them with access to huge and heterogeneous data sources. However, the corresponding benefits come with unforeseen risks as also sensitive information is potentially exposed. Consequently, data security is of utmost importance and, thus, a central requirement for the successful implementation of these ecosystems. Current initiatives, such as IDS and GAIA-X, hence foster sovereign participation via a federated infrastructure where participants retain local control. However, these designs place significant trust in remote infrastructure by mostly implementing organizational security measures such as certification processes prior to admission of a participant. At the same time, due to the sensitive nature of involved data, participants are incentivized to bypass security measures to maximize their own benefit: In practice, this issue significantly weakens sovereignty guarantees. In this paper, we hence claim that data ecosystems must be extended with technical means to reestablish such guarantees. To underpin our position, we analyze promising building blocks and identify three core research directions toward stronger data sovereignty, namely trusted remote policy enforcement, verifiable data tracking, and integration of resource-constrained participants. We conclude that these directions are critical to securely implement data ecosystems in data-sensitive contexts.
    @inproceedings{LPMW22,
    author = {Lohm{\"o}ller, Johannes and Pennekamp, Jan and Matzutt, Roman and Wehrle, Klaus},
    title = {{On the Need for Strong Sovereignty in Data Ecosystems}},
    booktitle = {Proceedings of the 1st International Workshop on Data Ecosystems (DEco '22)},
    year = {2022},
    month = {09},
    abstract = {Data ecosystems are the foundation of emerging data-driven business models as they (i) enable an automated exchange between their participants and (ii) provide them with access to huge and heterogeneous data sources. However, the corresponding benefits come with unforeseen risks as also sensitive information is potentially exposed. Consequently, data security is of utmost importance and, thus, a central requirement for the successful implementation of these ecosystems. Current initiatives, such as IDS and GAIA-X, hence foster sovereign participation via a federated infrastructure where participants retain local control. However, these designs place significant trust in remote infrastructure by mostly implementing organizational security measures such as certification processes prior to admission of a participant. At the same time, due to the sensitive nature of involved data, participants are incentivized to bypass security measures to maximize their own benefit: In practice, this issue significantly weakens sovereignty guarantees. In this paper, we hence claim that data ecosystems must be extended with technical means to reestablish such guarantees. To underpin our position, we analyze promising building blocks and identify three core research directions toward stronger data sovereignty, namely trusted remote policy enforcement, verifiable data tracking, and integration of resource-constrained participants. We conclude that these directions are critical to securely implement data ecosystems in data-sensitive contexts.},
    meta = {},
    }
  • Markus Dahlmanns, Johannes Lohmöller, Jan Pennekamp, Jörn Bodenhausen, Klaus Wehrle, and Martin Henze. Missed Opportunities: Measuring the Untapped TLS Support in the Industrial Internet of Things. In Proceedings of the 17th ACM ASIA Conference on Computer and Communications Security (ASIACCS ’22), 06 2022.
    [BibTeX] [Abstract] [DOI] [PDF] [CODE]
    The ongoing trend to move industrial appliances from previously isolated networks to the Internet requires fundamental changes in security to uphold secure and safe operation. Consequently, to ensure end-to-end secure communication and authentication, (i) traditional industrial protocols, e.g., Modbus, are retrofitted with TLS support, and (ii) modern protocols, e.g., MQTT, are directly designed to use TLS. To understand whether these changes indeed lead to secure Industrial Internet of Things deployments, i.e., using TLS-based protocols, which are configured according to security best practices, we perform an Internet-wide security assessment of ten industrial protocols covering the complete IPv4 address space. Our results show that both, retrofitted existing protocols and newly developed secure alternatives, are barely noticeable in the wild. While we find that new protocols have a higher TLS adoption rate than traditional protocols (7.2 % vs. 0.4 %), the overall adoption of TLS is comparably low (6.5 % of hosts). Thus, most industrial deployments (934,736 hosts) are insecurely connected to the Internet. Furthermore, we identify that 42 % of hosts with TLS support (26,665 hosts) show security deficits, e.g., missing access control. Finally, we show that support in configuring systems securely, e.g., via configuration templates, is promising to strengthen security.
    @inproceedings{DLP+22,
    author = {Dahlmanns, Markus and Lohm{\"o}ller, Johannes and Pennekamp, Jan and Bodenhausen, J{\"o}rn and Wehrle, Klaus and Henze, Martin},
    title = {{Missed Opportunities: Measuring the Untapped TLS Support in the Industrial Internet of Things}},
    booktitle = {Proceedings of the 17th ACM ASIA Conference on Computer and Communications Security (ASIACCS '22)},
    year = {2022},
    month = {06},
    doi = {10.1145/3488932.3497762},
    abstract = {The ongoing trend to move industrial appliances from previously isolated networks to the Internet requires fundamental changes in security to uphold secure and safe operation. Consequently, to ensure end-to-end secure communication and authentication, (i) traditional industrial protocols, e.g., Modbus, are retrofitted with TLS support, and (ii) modern protocols, e.g., MQTT, are directly designed to use TLS. To understand whether these changes indeed lead to secure Industrial Internet of Things deployments, i.e., using TLS-based protocols, which are configured according to security best practices, we perform an Internet-wide security assessment of ten industrial protocols covering the complete IPv4 address space.
    Our results show that both, retrofitted existing protocols and newly developed secure alternatives, are barely noticeable in the wild. While we find that new protocols have a higher TLS adoption rate than traditional protocols (7.2 {\%} vs. 0.4 {\%}), the overall adoption of TLS is comparably low (6.5 {\%} of hosts). Thus, most industrial deployments (934,736 hosts) are insecurely connected to the Internet. Furthermore, we identify that 42 {\%} of hosts with TLS support (26,665 hosts) show security deficits, e.g., missing access control. Finally, we show that support in configuring systems securely, e.g., via configuration templates, is promising to strengthen security.},
    code = {https://github.com/COMSYS/zgrab2},
    meta = {},
    }
  • Dominik Kus, Eric Wagner, Jan Pennekamp, Konrad Wolsing, Ina Berenice Fink, Markus Dahlmanns, Klaus Wehrle, and Martin Henze. A False Sense of Security? Revisiting the State of Machine Learning-Based Industrial Intrusion Detection. In Proceedings of the 8th ACM Cyber-Physical System Security Workshop (CPSS ’22), 05 2022.
    [BibTeX] [Abstract] [DOI] [PDF] [CODE]
    Anomaly-based intrusion detection promises to detect novel or unknown attacks on industrial control systems by modeling expected system behavior and raising corresponding alarms for any deviations. As manually creating these behavioral models is tedious and error-prone, research focuses on machine learning to train them automatically, achieving detection rates upwards of 99 %. However, these approaches are typically trained not only on benign traffic but also on attacks and then evaluated against the same type of attack used for training. Hence, their actual, real-world performance on unknown (not trained on) attacks remains unclear. In turn, the reported near-perfect detection rates of machine learning-based intrusion detection might create a false sense of security. To assess this situation and clarify the real potential of machine learning-based industrial intrusion detection, we develop an evaluation methodology and examine multiple approaches from literature for their performance on unknown attacks (excluded from training). Our results highlight an ineffectiveness in detecting unknown attacks, with detection rates dropping to between 3.2 % and 14.7 % for some types of attacks. Moving forward, we derive recommendations for further research on machine learning-based approaches to ensure clarity on their ability to detect unknown attacks.
    @inproceedings{KWP+22,
    author = {Kus, Dominik and Wagner, Eric and Pennekamp, Jan and Wolsing, Konrad and Fink, Ina Berenice and Dahlmanns, Markus and Wehrle, Klaus and Henze, Martin},
    title = {{A False Sense of Security? Revisiting the State of Machine Learning-Based Industrial Intrusion Detection}},
    booktitle = {Proceedings of the 8th ACM Cyber-Physical System Security Workshop (CPSS '22)},
    year = {2022},
    month = {05},
    doi = {10.1145/3494107.3522773},
    abstract = {Anomaly-based intrusion detection promises to detect novel or unknown attacks on industrial control systems by modeling expected system behavior and raising corresponding alarms for any deviations. As manually creating these behavioral models is tedious and error-prone, research focuses on machine learning to train them automatically, achieving detection rates upwards of 99 {\%}. However, these approaches are typically trained not only on benign traffic but also on attacks and then evaluated against the same type of attack used for training. Hence, their actual, real-world performance on unknown (not trained on) attacks remains unclear. In turn, the reported near-perfect detection rates of machine learning-based intrusion detection might create a false sense of security. To assess this situation and clarify the real potential of machine learning-based industrial intrusion detection, we develop an evaluation methodology and examine multiple approaches from literature for their performance on unknown attacks (excluded from training). Our results highlight an ineffectiveness in detecting unknown attacks, with detection rates dropping to between 3.2 {\%} and 14.7 {\%} for some types of attacks. Moving forward, we derive recommendations for further research on machine learning-based approaches to ensure clarity on their ability to detect unknown attacks.},
    code = {https://github.com/COMSYS/ML-IIDS-generalizability},
    meta = {},
    }
  • Eric Wagner, Roman Matzutt, Jan Pennekamp, Lennart Bader, Irakli Bajelidze, Klaus Wehrle, and Martin Henze. Scalable and Privacy-Focused Company-Centric Supply Chain Management. In Proceedings of the 2022 IEEE International Conference on Blockchain and Cryptocurrency (ICBC ’22), 05 2022.
    [BibTeX] [Abstract] [DOI] [PDF]
    Blockchain technology promises to overcome trust and privacy concerns inherent to centralized information sharing. However, current decentralized supply chain management systems do either not meet privacy and scalability requirements or require a trustworthy consortium, which is challenging for increasingly dynamic supply chains with constantly changing participants. In this paper, we propose CCChain, a scalable and privacy-aware supply chain management system that stores all information locally to give companies complete sovereignty over who accesses their data. Still, tamper protection of all data through a permissionless blockchain enables on-demand tracking and tracing of products as well as reliable information sharing while affording the detection of data inconsistencies. Our evaluation confirms that CCChain offers superior scalability in comparison to alternatives while also enabling near real-time tracking and tracing for many, less complex products.
    @inproceedings{WMP+22,
    author = {Wagner, Eric and Matzutt, Roman and Pennekamp, Jan and Bader, Lennart and Bajelidze, Irakli and Wehrle, Klaus and Henze, Martin},
    title = {{Scalable and Privacy-Focused Company-Centric Supply Chain Management}},
    booktitle = {Proceedings of the 2022 IEEE International Conference on Blockchain and Cryptocurrency (ICBC '22)},
    year = {2022},
    month = {05},
    doi = {10.1109/ICBC54727.2022.9805503},
    abstract = {Blockchain technology promises to overcome trust and privacy concerns inherent to centralized information sharing. However, current decentralized supply chain management systems do either not meet privacy and scalability requirements or require a trustworthy consortium, which is challenging for increasingly dynamic supply chains with constantly changing participants. In this paper, we propose CCChain, a scalable and privacy-aware supply chain management system that stores all information locally to give companies complete sovereignty over who accesses their data. Still, tamper protection of all data through a permissionless blockchain enables on-demand tracking and tracing of products as well as reliable information sharing while affording the detection of data inconsistencies. Our evaluation confirms that CCChain offers superior scalability in comparison to alternatives while also enabling near real-time tracking and tracing for many, less complex products.},
    meta = {},
    }
  • Roman Matzutt, Vincent Ahlrichs, Jan Pennekamp, Roman Karwacik, and Klaus Wehrle. A Moderation Framework for the Swift and Transparent Removal of Illicit Blockchain Content. In Proceedings of the 2022 IEEE International Conference on Blockchain and Cryptocurrency (ICBC ’22), 05 2022.
    [BibTeX] [Abstract] [DOI] [PDF] [CODE]
    Blockchains gained tremendous attention for their capability to provide immutable and decentralized event ledgers that can facilitate interactions between mutually distrusting parties. However, precisely this immutability and the openness of permissionless blockchains raised concerns about the consequences of illicit content being irreversibly stored on them. Related work coined the notion of redactable blockchains, which allow for removing illicit content from their history without affecting the blockchain’s integrity. While honest users can safely prune identified content, current approaches either create trust issues by empowering fixed third parties to rewrite history, cannot react quickly to reported content due to using lengthy public votings, or create large per-redaction overheads. In this paper, we instead propose to outsource redactions to small and periodically exchanged juries, whose members can only jointly redact transactions using chameleon hash functions and threshold cryptography. Multiple juries are active at the same time to swiftly redact reported content. They oversee their activities via a global redaction log, which provides transparency and allows for appealing and reversing a rogue jury’s decisions. Hence, our approach establishes a framework for the swift and transparent moderation of blockchain content. Our evaluation shows that our moderation scheme can be realized with feasible per-block and per-redaction overheads, i.e., the redaction capabilities do not impede the blockchain’s normal operation.
    @inproceedings{MAPKW22,
    author = {Matzutt, Roman and Ahlrichs, Vincent and Pennekamp, Jan and Karwacik, Roman and Wehrle, Klaus},
    title = {{A Moderation Framework for the Swift and Transparent Removal of Illicit Blockchain Content}},
    booktitle = {Proceedings of the 2022 IEEE International Conference on Blockchain and Cryptocurrency (ICBC '22)},
    year = {2022},
    month = {05},
    doi = {10.1109/ICBC54727.2022.9805508},
    abstract = {Blockchains gained tremendous attention for their capability to provide immutable and decentralized event ledgers that can facilitate interactions between mutually distrusting parties. However, precisely this immutability and the openness of permissionless blockchains raised concerns about the consequences of illicit content being irreversibly stored on them. Related work coined the notion of redactable blockchains, which allow for removing illicit content from their history without affecting the blockchain's integrity. While honest users can safely prune identified content, current approaches either create trust issues by empowering fixed third parties to rewrite history, cannot react quickly to reported content due to using lengthy public votings, or create large per-redaction overheads.
    In this paper, we instead propose to outsource redactions to small and periodically exchanged juries, whose members can only jointly redact transactions using chameleon hash functions and threshold cryptography. Multiple juries are active at the same time to swiftly redact reported content. They oversee their activities via a global redaction log, which provides transparency and allows for appealing and reversing a rogue jury's decisions. Hence, our approach establishes a framework for the swift and transparent moderation of blockchain content. Our evaluation shows that our moderation scheme can be realized with feasible per-block and per-redaction overheads, i.e., the redaction capabilities do not impede the blockchain's normal operation.},
    code = {https://github.com/COMSYS/redactchain},
    meta = {},
    }
  • Philipp Brauner, Manuela Dalibor, Matthias Jarke, Ike Kunze, István Koren, Gerhard Lakemeyer, Martin Liebenberg, Judith Michael, Jan Pennekamp, Christoph Quix, Bernhard Rumpe, Wil van der Aalst, Klaus Wehrle, Andreas Wortmann, and Martina Ziefle. A Computer Science Perspective on Digital Transformation in Production. ACM Transactions on Internet of Things, 3(2), 05 2022.
    [BibTeX] [Abstract] [DOI] [PDF]
    The Industrial Internet-of-Things (IIoT) promises significant improvements for the manufacturing industry by facilitating the integration of manufacturing systems by Digital Twins. However, ecological and economic demands also require a cross-domain linkage of multiple scientific perspectives from material sciences, engineering, operations, business, and ergonomics, as optimization opportunities can be derived from any of these perspectives. To extend the IIoT to a true Internet of Production, two concepts are required: first, a complex, interrelated network of Digital Shadows which combine domain-specific models with data-driven AI methods; and second, the integration of a large number of research labs, engineering, and production sites as a World Wide Lab which offers controlled exchange of selected, innovation-relevant data even across company boundaries. In this article, we define the underlying Computer Science challenges implied by these novel concepts in four layers: Smart human interfaces provide access to information that has been generated by model-integrated AI. Given the large variety of manufacturing data, new data modeling techniques should enable efficient management of Digital Shadows, which is supported by an interconnected infrastructure. Based on a detailed analysis of these challenges, we derive a systematized research roadmap to make the vision of the Internet of Production a reality.
    @article{BDJ+22,
    author = {Brauner, Philipp and Dalibor, Manuela and Jarke, Matthias and Kunze, Ike and Koren, Istv{\'a}n and Lakemeyer, Gerhard and Liebenberg, Martin and Michael, Judith and Pennekamp, Jan and Quix, Christoph and Rumpe, Bernhard and van der Aalst, Wil and Wehrle, Klaus and Wortmann, Andreas and Ziefle, Martina},
    title = {{A Computer Science Perspective on Digital Transformation in Production}},
    journal = {ACM Transactions on Internet of Things},
    year = {2022},
    volume = {3},
    number = {2},
    publisher = {ACM},
    month = {05},
    doi = {10.1145/3502265},
    issn = {2691-1914},
    abstract = {The Industrial Internet-of-Things (IIoT) promises significant improvements for the manufacturing industry by facilitating the integration of manufacturing systems by Digital Twins. However, ecological and economic demands also require a cross-domain linkage of multiple scientific perspectives from material sciences, engineering, operations, business, and ergonomics, as optimization opportunities can be derived from any of these perspectives. To extend the IIoT to a true Internet of Production, two concepts are required: first, a complex, interrelated network of Digital Shadows which combine domain-specific models with data-driven AI methods; and second, the integration of a large number of research labs, engineering, and production sites as a World Wide Lab which offers controlled exchange of selected, innovation-relevant data even across company boundaries. In this article, we define the underlying Computer Science challenges implied by these novel concepts in four layers: Smart human interfaces provide access to information that has been generated by model-integrated AI. Given the large variety of manufacturing data, new data modeling techniques should enable efficient management of Digital Shadows, which is supported by an interconnected infrastructure. Based on a detailed analysis of these challenges, we derive a systematized research roadmap to make the vision of the Internet of Production a reality.},
    meta = {},
    }

2021

  • Jan Pennekamp, Erik Buchholz, Markus Dahlmanns, Ike Kunze, Stefan Braun, Eric Wagner, Matthias Brockmann, Klaus Wehrle, and Martin Henze. Collaboration is not Evil: A Systematic Look at Security Research for Industrial Use. In Proceedings of the Workshop on Learning from Authoritative Security Experiment Results (LASER ’20), 12 2021.
    [BibTeX] [Abstract] [DOI] [PDF]
    Following the recent Internet of Things-induced trends on digitization in general, industrial applications will further evolve as well. With a focus on the domains of manufacturing and production, the Internet of Production pursues the vision of a digitized, globally interconnected, yet secure environment by establishing a distributed knowledge base. Background. As part of our collaborative research of advancing the scope of industrial applications through cybersecurity and privacy, we identified a set of common challenges and pitfalls that surface in such applied interdisciplinary collaborations. Aim. Our goal with this paper is to support researchers in the emerging field of cybersecurity in industrial settings by formalizing our experiences as reference for other research efforts, in industry and academia alike. Method. Based on our experience, we derived a process cycle of performing such interdisciplinary research, from the initial idea to the eventual dissemination and paper writing. This presented methodology strives to successfully bootstrap further research and to encourage further work in this emerging area. Results. Apart from our newly proposed process cycle, we report on our experiences and conduct a case study applying this methodology, raising awareness for challenges in cybersecurity research for industrial applications. We further detail the interplay between our process cycle and the data lifecycle in applied research data management. Finally, we augment our discussion with an industrial as well as an academic view on this research area and highlight that both areas still have to overcome significant challenges to sustainably and securely advance industrial applications. Conclusions. With our proposed process cycle for interdisciplinary research in the intersection of cybersecurity and industrial application, we provide a foundation for further research. We look forward to promising research initiatives, projects, and directions that emerge based on our methodological work.
    @inproceedings{PBD+21,
    author = {Pennekamp, Jan and Buchholz, Erik and Dahlmanns, Markus and Kunze, Ike and Braun, Stefan and Wagner, Eric and Brockmann, Matthias and Wehrle, Klaus and Henze, Martin},
    title = {{Collaboration is not Evil: A Systematic Look at Security Research for Industrial Use}},
    booktitle = {Proceedings of the Workshop on Learning from Authoritative Security Experiment Results (LASER '20)},
    year = {2021},
    month = {12},
    doi = {10.14722/laser-acsac.2020.23088},
    abstract = {Following the recent Internet of Things-induced trends on digitization in general, industrial applications will further evolve as well. With a focus on the domains of manufacturing and production, the Internet of Production pursues the vision of a digitized, globally interconnected, yet secure environment by establishing a distributed knowledge base.
    Background. As part of our collaborative research of advancing the scope of industrial applications through cybersecurity and privacy, we identified a set of common challenges and pitfalls that surface in such applied interdisciplinary collaborations.
    Aim. Our goal with this paper is to support researchers in the emerging field of cybersecurity in industrial settings by formalizing our experiences as reference for other research efforts, in industry and academia alike.
    Method. Based on our experience, we derived a process cycle of performing such interdisciplinary research, from the initial idea to the eventual dissemination and paper writing. This presented methodology strives to successfully bootstrap further research and to encourage further work in this emerging area.
    Results. Apart from our newly proposed process cycle, we report on our experiences and conduct a case study applying this methodology, raising awareness for challenges in cybersecurity research for industrial applications. We further detail the interplay between our process cycle and the data lifecycle in applied research data management. Finally, we augment our discussion with an industrial as well as an academic view on this research area and highlight that both areas still have to overcome significant challenges to sustainably and securely advance industrial applications.
    Conclusions. With our proposed process cycle for interdisciplinary research in the intersection of cybersecurity and industrial application, we provide a foundation for further research. We look forward to promising research initiatives, projects, and directions that emerge based on our methodological work.},
    meta = {},
    }
  • Raphael Kiesel, Falk Boehm, Jan Pennekamp, and Robert H. Schmitt. Development of a Model to Evaluate the Potential of 5G Technology for Latency-Critical Applications in Production. In Proceedings of the 28th IEEE International Conference on Industrial Engineering and Engineering Management (IEEM ’21), 12 2021.
    [BibTeX] [Abstract] [DOI] [PDF]
    Latency-critical applications in production promise to be essential enablers for performance improvement in production. However, they require the right and often wireless communication system. 5G technology appears to be an effective way to achieve communication system for these applications. Its estimated economic benefit on production gross domestic product is immense ($740 billion Euro until 2030). However, 55% of production companies state that 5G technology deployment is currently not a subject matter for them and mainly state the lack of knowledge on benefits as a reason. Currently, it is missing an approach or model for a use case specific, data-based evaluation of 5G technology influence on the performance of production applications. Therefore, this paper presents a model to evaluate the potential of 5G technology for latency-critical applications in production. First, we derive requirements for the model to fulfill the decision-makers’ needs. Second, we analyze existing evaluation approaches regarding their fulfillment of the derived requirements. Third, based on outlined research gaps, we develop a model fulfilling the requirements. Fourth, we give an outlook for further research needs.
    @inproceedings{KBPS21,
    author = {Kiesel, Raphael and Boehm, Falk and Pennekamp, Jan and Schmitt, Robert H.},
    title = {{Development of a Model to Evaluate the Potential of 5G Technology for Latency-Critical Applications in Production}},
    booktitle = {Proceedings of the 28th IEEE International Conference on Industrial Engineering and Engineering Management (IEEM '21)},
    year = {2021},
    month = {12},
    doi = {10.1109/IEEM50564.2021.9673074},
    abstract = {Latency-critical applications in production promise to be essential enablers for performance improvement in production. However, they require the right and often wireless communication system. 5G technology appears to be an effective way to achieve communication system for these applications. Its estimated economic benefit on production gross domestic product is immense ({\$}740 billion Euro until 2030). However, 55{\%} of production companies state that 5G technology deployment is currently not a subject matter for them and mainly state the lack of knowledge on benefits as a reason. Currently, it is missing an approach or model for a use case specific, data-based evaluation of 5G technology influence on the performance of production applications. Therefore, this paper presents a model to evaluate the potential of 5G technology for latency-critical applications in production. First, we derive requirements for the model to fulfill the decision-makers' needs. Second, we analyze existing evaluation approaches regarding their fulfillment of the derived requirements. Third, based on outlined research gaps, we develop a model fulfilling the requirements. Fourth, we give an outlook for further research needs.},
    meta = {},
    }
  • Asya Mitseva, Jan Pennekamp, Johannes Lohmöller, Torsten Ziemann, Carl Hoerchner, Klaus Wehrle, and Andriy Panchenko. POSTER: How Dangerous is My Click? Boosting Website Fingerprinting By Considering Sequences of Webpages. In Proceedings of the 28th ACM SIGSAC Conference on Computer and Communications Security (CCS ’21), 11 2021.
    [BibTeX] [Abstract] [DOI] [PDF]
    Website fingerprinting (WFP) is a special case of traffic analysis, where a passive attacker infers information about the content of encrypted and anonymized connections by observing patterns of data flows. Although modern WFP attacks pose a serious threat to online privacy of users, including Tor users, they usually aim to detect single pages only. By ignoring the browsing behavior of users, the attacker excludes valuable information: users visit multiple pages of a single website consecutively, e.g., by following links. In this paper, we propose two novel methods that can take advantage of the consecutive visits of multiple pages to detect websites. We show that two up to three clicks within a site allow attackers to boost the accuracy by more than 20% and to dramatically increase the threat to users’ privacy. We argue that WFP defenses have to consider this new dimension of the attack surface.
    @inproceedings{MPL+21,
    author = {Mitseva, Asya and Pennekamp, Jan and Lohm{\"o}ller, Johannes and Ziemann, Torsten and Hoerchner, Carl and Wehrle, Klaus and Panchenko, Andriy},
    title = {{POSTER: How Dangerous is My Click? Boosting Website Fingerprinting By Considering Sequences of Webpages}},
    booktitle = {Proceedings of the 28th ACM SIGSAC Conference on Computer and Communications Security (CCS '21)},
    year = {2021},
    month = {11},
    doi = {10.1145/3460120.3485347},
    abstract = {Website fingerprinting (WFP) is a special case of traffic analysis, where a passive attacker infers information about the content of encrypted and anonymized connections by observing patterns of data flows. Although modern WFP attacks pose a serious threat to online privacy of users, including Tor users, they usually aim to detect single pages only. By ignoring the browsing behavior of users, the attacker excludes valuable information: users visit multiple pages of a single website consecutively, e.g., by following links. In this paper, we propose two novel methods that can take advantage of the consecutive visits of multiple pages to detect websites. We show that two up to three clicks within a site allow attackers to boost the accuracy by more than 20{\%} and to dramatically increase the threat to users' privacy. We argue that WFP defenses have to consider this new dimension of the attack surface.},
    meta = {},
    }
  • Jan Pennekamp, Frederik Fuhrmann, Markus Dahlmanns, Timo Heutmann, Alexander Kreppein, Dennis Grunert, Christoph Lange, Robert H. Schmitt, and Klaus Wehrle. Confidential Computing-Induced Privacy Benefits for the Bootstrapping of New Business Relationships. Technical Report RWTH-2021-09499, RWTH Aachen University, 11 2021. Blitz Talk at the 2021 Cloud Computing Security Workshop (CCSW ’21).
    [BibTeX] [Abstract] [DOI] [PDF]
    In addition to quality improvements and cost reductions, dynamic and flexible business relationships are expected to become more important in the future to account for specific customer change requests or small-batch production. Today, despite reservation, sensitive information must be shared upfront between buyers and sellers. However, without a trust relation, this situation is precarious for the involved companies as they fear for their competitiveness following information leaks or breaches of their privacy. To address this issue, the concepts of confidential computing and cloud computing come to mind as they promise to offer scalable approaches that preserve the privacy of participating companies. In particular, designs building on confidential computing can help to technically enforce privacy. Moreover, cloud computing constitutes an elegant design choice to scale these novel protocols to industry needs while limiting the setup and management overhead for practitioners. Thus, novel approaches in this area can advance the status quo of bootstrapping new relationships as they provide privacy-preserving alternatives that are suitable for immediate deployment.
    @techreport{PFD+21,
    author = {Pennekamp, Jan and Fuhrmann, Frederik and Dahlmanns, Markus and Heutmann, Timo and Kreppein, Alexander and Grunert, Dennis and Lange, Christoph and Schmitt, Robert H. and Wehrle, Klaus},
    title = {{Confidential Computing-Induced Privacy Benefits for the Bootstrapping of New Business Relationships}},
    institution = {RWTH Aachen University},
    year = {2021},
    number = {RWTH-2021-09499},
    month = {11},
    note = {Blitz Talk at the 2021 Cloud Computing Security Workshop (CCSW '21)},
    doi = {10.18154/RWTH-2021-09499},
    abstract = {In addition to quality improvements and cost reductions, dynamic and flexible business relationships are expected to become more important in the future to account for specific customer change requests or small-batch production. Today, despite reservation, sensitive information must be shared upfront between buyers and sellers. However, without a trust relation, this situation is precarious for the involved companies as they fear for their competitiveness following information leaks or breaches of their privacy. To address this issue, the concepts of confidential computing and cloud computing come to mind as they promise to offer scalable approaches that preserve the privacy of participating companies. In particular, designs building on confidential computing can help to technically enforce privacy. Moreover, cloud computing constitutes an elegant design choice to scale these novel protocols to industry needs while limiting the setup and management overhead for practitioners. Thus, novel approaches in this area can advance the status quo of bootstrapping new relationships as they provide privacy-preserving alternatives that are suitable for immediate deployment.},
    meta = {},
    }
  • Michael Kretschmer, Jan Pennekamp, and Klaus Wehrle. Cookie Banners and Privacy Policies: Measuring the Impact of the GDPR on the Web. ACM Transactions on the Web, 15(4), 11 2021.
    [BibTeX] [Abstract] [DOI] [PDF]
    The General Data Protection Regulation (GDPR) is in effect since May of 2018. As one of the most comprehensive pieces of legislation concerning privacy, it sparked a lot of discussion on the effect it would have on users and providers of online services in particular, due to the large amount of personal data processed in this context. Almost three years later, we are interested in revisiting this question to summarize the impact this new regulation has had on actors in the World Wide Web. Using Scopus, we obtain a vast corpus of academic work to survey studies related to changes on websites since and around the time, the GDPR went into force. Our findings show that the emphasis on privacy increased w.r.t. online services, but plenty potential for improvements remains. Although online services are on average more transparent regarding data processing practices in their public data policies, a majority of these policies still either lack information required by the GDPR (e.g., contact information for users to file privacy inquiries), or do not provide this information in a user-friendly form. Additionally, we summarize that online services more often provide means for their users to opt out of data processing, but regularly obstruct convenient access to such means through unnecessarily complex and sometimes illegitimate interface design. Our survey further details that this situation contradicts the preferences expressed by users both verbally and through their actions, and researchers have proposed multiple approaches to facilitate GDPR-conform data processing without negatively impacting the user experience. Thus, we compiled reoccurring points of criticism by privacy researchers and data protection authorities into a list of four guidelines for service providers to consider.
    @article{KPW21,
    author = {Kretschmer, Michael and Pennekamp, Jan and Wehrle, Klaus},
    title = {{Cookie Banners and Privacy Policies: Measuring the Impact of the GDPR on the Web}},
    journal = {ACM Transactions on the Web},
    year = {2021},
    volume = {15},
    number = {4},
    publisher = {ACM},
    month = {11},
    doi = {10.1145/3466722},
    issn = {1559-1131},
    abstract = {The General Data Protection Regulation (GDPR) is in effect since May of 2018. As one of the most comprehensive pieces of legislation concerning privacy, it sparked a lot of discussion on the effect it would have on users and providers of online services in particular, due to the large amount of personal data processed in this context. Almost three years later, we are interested in revisiting this question to summarize the impact this new regulation has had on actors in the World Wide Web. Using Scopus, we obtain a vast corpus of academic work to survey studies related to changes on websites since and around the time, the GDPR went into force. Our findings show that the emphasis on privacy increased w.r.t. online services, but plenty potential for improvements remains. Although online services are on average more transparent regarding data processing practices in their public data policies, a majority of these policies still either lack information required by the GDPR (e.g., contact information for users to file privacy inquiries), or do not provide this information in a user-friendly form. Additionally, we summarize that online services more often provide means for their users to opt out of data processing, but regularly obstruct convenient access to such means through unnecessarily complex and sometimes illegitimate interface design. Our survey further details that this situation contradicts the preferences expressed by users both verbally and through their actions, and researchers have proposed multiple approaches to facilitate GDPR-conform data processing without negatively impacting the user experience. Thus, we compiled reoccurring points of criticism by privacy researchers and data protection authorities into a list of four guidelines for service providers to consider.},
    meta = {},
    }
  • Sebastian Reuter, Jens Hiller, Jan Pennekamp, Andriy Panchenko, and Klaus Wehrle. Demo: Traffic Splitting for Tor — A Defense against Fingerprinting Attacks. In Proceedings of the 2021 International Conference on Networked Systems (NetSys ’21), 09 2021.
    [BibTeX] [Abstract] [DOI] [PDF] [CODE]
    Website fingerprinting (WFP) attacks on the anonymity network Tor have become ever more effective. Furthermore, research discovered that proposed defenses are insufficient or cause high overhead. In previous work, we presented a new WFP defense for Tor that incorporates multipath transmissions to repel malicious Tor nodes from conducting WFP attacks. In this demo, we showcase the operation of our traffic splitting defense by visually illustrating the underlying Tor multipath transmission using LED-equipped Raspberry Pis.
    @inproceedings{RHP+21,
    author = {Reuter, Sebastian and Hiller, Jens and Pennekamp, Jan and Panchenko, Andriy and Wehrle, Klaus},
    title = {{Demo: Traffic Splitting for Tor --- A Defense against Fingerprinting Attacks}},
    booktitle = {Proceedings of the 2021 International Conference on Networked Systems (NetSys '21)},
    year = {2021},
    month = {09},
    doi = {10.14279/tuj.eceasst.80.1151},
    abstract = {Website fingerprinting (WFP) attacks on the anonymity network Tor have become ever more effective. Furthermore, research discovered that proposed defenses are insufficient or cause high overhead. In previous work, we presented a new WFP defense for Tor that incorporates multipath transmissions to repel malicious Tor nodes from conducting WFP attacks. In this demo, we showcase the operation of our traffic splitting defense by visually illustrating the underlying Tor multipath transmission using LED-equipped Raspberry Pis.},
    code = {https://github.com/TrafficSliver/trafficsliver-net-demo},
    journal = {Electronic Communications of the EASST},
    meta = {},
    }
  • Jan Pennekamp, Roman Matzutt, Salil S. Kanhere, Jens Hiller, and Klaus Wehrle. The Road to Accountable and Dependable Manufacturing. Automation, 2(3), 09 2021.
    [BibTeX] [Abstract] [DOI] [PDF]
    The Internet of Things provides manufacturing with rich data for increased automation. Beyond company-internal data exploitation, the sharing of product and manufacturing process data along and across supply chains enables more efficient production flows and product lifecycle management. Even more, data-based automation facilitates short-lived ad hoc collaborations, realizing highly dynamic business relationships for sustainable exploitation of production resources and capacities. However, the sharing and use of business data across manufacturers and with end customers add requirements on data accountability, verifiability, and reliability and needs to consider security and privacy demands. While research has already identified blockchain technology as a key technology to address these challenges, current solutions mainly evolve around logistics or focus on established business relationships instead of automated but highly dynamic collaborations that cannot draw upon long-term trust relationships. We identify three open research areas on the road to such a truly accountable and dependable manufacturing enabled by blockchain technology: blockchain-inherent challenges, scenario-driven challenges, and socio-economic challenges. Especially tackling the scenario-driven challenges, we discuss requirements and options for realizing a blockchain-based trustworthy information store and outline its use for automation to achieve a reliable sharing of product information, efficient and dependable collaboration, and dynamic distributed markets without requiring established long-term trust.
    @article{PMK+21,
    author = {Pennekamp, Jan and Matzutt, Roman and Kanhere, Salil S. and Hiller, Jens and Wehrle, Klaus},
    title = {{The Road to Accountable and Dependable Manufacturing}},
    journal = {Automation},
    year = {2021},
    volume = {2},
    number = {3},
    publisher = {MDPI},
    month = {09},
    doi = {10.3390/automation2030013},
    issn = {2673-4052},
    abstract = {The Internet of Things provides manufacturing with rich data for increased automation. Beyond company-internal data exploitation, the sharing of product and manufacturing process data along and across supply chains enables more efficient production flows and product lifecycle management. Even more, data-based automation facilitates short-lived ad hoc collaborations, realizing highly dynamic business relationships for sustainable exploitation of production resources and capacities. However, the sharing and use of business data across manufacturers and with end customers add requirements on data accountability, verifiability, and reliability and needs to consider security and privacy demands. While research has already identified blockchain technology as a key technology to address these challenges, current solutions mainly evolve around logistics or focus on established business relationships instead of automated but highly dynamic collaborations that cannot draw upon long-term trust relationships. We identify three open research areas on the road to such a truly accountable and dependable manufacturing enabled by blockchain technology: blockchain-inherent challenges, scenario-driven challenges, and socio-economic challenges. Especially tackling the scenario-driven challenges, we discuss requirements and options for realizing a blockchain-based trustworthy information store and outline its use for automation to achieve a reliable sharing of product information, efficient and dependable collaboration, and dynamic distributed markets without requiring established long-term trust.},
    meta = {},
    }
  • Roman Matzutt, Benedikt Kalde, Jan Pennekamp, Arthur Drichel, Martin Henze, and Klaus Wehrle. CoinPrune: Shrinking Bitcoin’s Blockchain Retrospectively. IEEE Transactions on Network and Service Management, 18(3), 09 2021.
    [BibTeX] [Abstract] [DOI] [PDF] [CODE]
    Popular cryptocurrencies continue to face serious scalability issues due to their ever-growing blockchains. Thus, modern blockchain designs began to prune old blocks and rely on recent snapshots for their bootstrapping processes instead. Unfortunately, established systems are often considered incapable of adopting these improvements. In this work, we present CoinPrune, our block-pruning scheme with full Bitcoin compatibility, to revise this popular belief. CoinPrune bootstraps joining nodes via snapshots that are periodically created from Bitcoin’s set of unspent transaction outputs (UTXO set). Our scheme establishes trust in these snapshots by relying on CoinPrune-supporting miners to mutually reaffirm a snapshot’s correctness on the blockchain. This way, snapshots remain trustworthy even if adversaries attempt to tamper with them. Our scheme maintains its retrospective deployability by relying on positive feedback only, i.e., blocks containing invalid reaffirmations are not rejected, but invalid reaffirmations are outpaced by the benign ones created by an honest majority among CoinPrune-supporting miners. Already today, CoinPrune reduces the storage requirements for Bitcoin nodes by two orders of magnitude, as joining nodes need to fetch and process only 6 GiB instead of 271 GiB of data in our evaluation, reducing the synchronization time of powerful devices from currently 7 h to 51 min, with even larger potential drops for less powerful devices. CoinPrune is further aware of higher-level application data, i.e., it conserves otherwise pruned application data and allows nodes to obfuscate objectionable and potentially illegal blockchain content from their UTXO set and the snapshots they distribute.
    @article{MKP+21,
    author = {Matzutt, Roman and Kalde, Benedikt and Pennekamp, Jan and Drichel, Arthur and Henze, Martin and Wehrle, Klaus},
    title = {{CoinPrune: Shrinking Bitcoin's Blockchain Retrospectively}},
    journal = {IEEE Transactions on Network and Service Management},
    year = {2021},
    volume = {18},
    number = {3},
    publisher = {IEEE},
    month = {09},
    doi = {10.1109/TNSM.2021.3073270},
    issn = {1932-4537},
    abstract = {Popular cryptocurrencies continue to face serious scalability issues due to their ever-growing blockchains. Thus, modern blockchain designs began to prune old blocks and rely on recent snapshots for their bootstrapping processes instead. Unfortunately, established systems are often considered incapable of adopting these improvements. In this work, we present CoinPrune, our block-pruning scheme with full Bitcoin compatibility, to revise this popular belief. CoinPrune bootstraps joining nodes via snapshots that are periodically created from Bitcoin's set of unspent transaction outputs (UTXO set). Our scheme establishes trust in these snapshots by relying on CoinPrune-supporting miners to mutually reaffirm a snapshot's correctness on the blockchain. This way, snapshots remain trustworthy even if adversaries attempt to tamper with them. Our scheme maintains its retrospective deployability by relying on positive feedback only, i.e., blocks containing invalid reaffirmations are not rejected, but invalid reaffirmations are outpaced by the benign ones created by an honest majority among CoinPrune-supporting miners. Already today, CoinPrune reduces the storage requirements for Bitcoin nodes by two orders of magnitude, as joining nodes need to fetch and process only 6 GiB instead of 271 GiB of data in our evaluation, reducing the synchronization time of powerful devices from currently 7 h to 51 min, with even larger potential drops for less powerful devices. CoinPrune is further aware of higher-level application data, i.e., it conserves otherwise pruned application data and allows nodes to obfuscate objectionable and potentially illegal blockchain content from their UTXO set and the snapshots they distribute.},
    code = {https://github.com/COMSYS/coinprune},
    meta = {},
    }
  • Jan Pennekamp, Martin Henze, and Klaus Wehrle. Unlocking Secure Industrial Collaborations through Privacy-Preserving Computation. ERCIM News, 126, 07 2021.
    [BibTeX] [Abstract] [PDF]
    In industrial settings, significant process improvements can be achieved when utilising and sharing information across stakeholders. However, traditionally conservative companies impose significant confidentiality requirements for any (external) data processing. We discuss how privacy-preserving computation can unlock secure and private collaborations even in such competitive environments.
    @article{PHW21,
    author = {Pennekamp, Jan and Henze, Martin and Wehrle, Klaus},
    title = {{Unlocking Secure Industrial Collaborations through Privacy-Preserving Computation}},
    journal = {ERCIM News},
    year = {2021},
    volume = {126},
    publisher = {ERCIM EEIG},
    month = {07},
    issn = {0926-4981},
    abstract = {In industrial settings, significant process improvements can be achieved when utilising and sharing information across stakeholders. However, traditionally conservative companies impose significant confidentiality requirements for any (external) data processing. We discuss how privacy-preserving computation can unlock secure and private collaborations even in such competitive environments.},
    meta = {},
    }
  • Simon Mangel, Lars Gleim, Jan Pennekamp, Klaus Wehrle, and Stefan Decker. Data Reliability and Trustworthiness through Digital Transmission Contracts. In Proceedings of the 18th Extended Semantic Web Conference (ESWC ’21), 06 2021.
    [BibTeX] [Abstract] [DOI] [PDF] [CODE]
    As decision-making is increasingly data-driven, trustworthiness and reliability of the underlying data, e.g., maintained in knowledge graphs or on the Web, are essential requirements for their usability in the industry. However, neither traditional solutions, such as paper-based data curation processes, nor state-of-the-art approaches, such as distributed ledger technologies, adequately scale to the complex requirements and high throughput of continuously evolving industrial data. Motivated by a practical use case with high demands towards data trustworthiness and reliability, we identify the need for digitally-verifiable data immutability as a still insufficiently addressed dimension of data quality. Based on our discussion of shortcomings in related work, we thus propose ReShare, our novel concept of digital transmission contracts with bilateral signatures, to address this open issue for both RDF knowledge graphs and arbitrary data on the Web. Our quantitative evaluation of ReShare’s performance and scalability reveals only moderate computation and communication overhead, indicating significant potential for cost-reductions compared to today’s approaches. By cleverly integrating digital transmission contracts with existing Web-based information systems, ReShare provides a promising foundation for data sharing and reuse in Industry 4.0 and beyond, enabling digital accountability through easily-adoptable digitally-verifiable data immutability and non-repudiation.
    @inproceedings{MGPWD21,
    author = {Mangel, Simon and Gleim, Lars and Pennekamp, Jan and Wehrle, Klaus and Decker, Stefan},
    title = {{Data Reliability and Trustworthiness through Digital Transmission Contracts}},
    booktitle = {Proceedings of the 18th Extended Semantic Web Conference (ESWC '21)},
    year = {2021},
    month = {06},
    doi = {10.1007/978-3-030-77385-4_16},
    abstract = {As decision-making is increasingly data-driven, trustworthiness and reliability of the underlying data, e.g., maintained in knowledge graphs or on the Web, are essential requirements for their usability in the industry. However, neither traditional solutions, such as paper-based data curation processes, nor state-of-the-art approaches, such as distributed ledger technologies, adequately scale to the complex requirements and high throughput of continuously evolving industrial data. Motivated by a practical use case with high demands towards data trustworthiness and reliability, we identify the need for digitally-verifiable data immutability as a still insufficiently addressed dimension of data quality. Based on our discussion of shortcomings in related work, we thus propose ReShare, our novel concept of digital transmission contracts with bilateral signatures, to address this open issue for both RDF knowledge graphs and arbitrary data on the Web. Our quantitative evaluation of ReShare's performance and scalability reveals only moderate computation and communication overhead, indicating significant potential for cost-reductions compared to today's approaches. By cleverly integrating digital transmission contracts with existing Web-based information systems, ReShare provides a promising foundation for data sharing and reuse in Industry 4.0 and beyond, enabling digital accountability through easily-adoptable digitally-verifiable data immutability and non-repudiation.},
    code = {https://git.rwth-aachen.de/i5/factdag/factcheck.js},
    code2 = {http://i5.pages.rwth-aachen.de/factdag/reshare-ontology/0.1/},
    meta = {},
    }
  • Lars Gleim, Jan Pennekamp, Liam Tirpitz, Sascha Welten, Florian Brillowski, and Stefan Decker. FactStack: Interoperable Data Management and Preservation for the Web and Industry 4.0. In Proceedings of the 19th Symposium for Database Systems for Business, Technology and Web (BTW ’21), 05 2021.
    [BibTeX] [Abstract] [DOI] [PDF] [CODE]
    Data exchange throughout the supply chain is essential for the agile and adaptive manufacturing processes of Industry 4.0. As companies employ numerous, frequently mutually incompatible data management and preservation approaches, interorganizational data sharing and reuse regularly requires human interaction and is thus associated with high overhead costs. An interoperable system, supporting the unified management, preservation and exchange of data across organizational boundaries is missing to date. We propose FactStack, a unified approach to data management and preservation based upon a novel combination of existing Web-standards and tightly integrated with the HTTP protocol itself. Based on the FactDAG model, FactStack guides and supports the full data lifecycle in a FAIR and interoperable manner, independent of individual software solutions and backward-compatible with existing resource oriented architectures. We describe our reference implementation of the approach and evaluate its performance, showcasing scalability even to high-throughput applications. We analyze the system’s applicability to industry using a representative real-world use case in aircraft manufacturing based on principal requirements identified in prior work. We conclude that FactStack fulfills all requirements and provides a promising solution for the on-demand integration of persistence and provenance into existing resource-oriented architectures, facilitating data management and preservation for the agile and interorganizational manufacturing processes of Industry 4.0. Through its open source distribution, it is readily available for adoption by the community, paving the way for improved utility and usability of data management and preservation in digital manufacturing and supply chains.
    @inproceedings{GPT+21,
    author = {Gleim, Lars and Pennekamp, Jan and Tirpitz, Liam and Welten, Sascha and Brillowski, Florian and Decker, Stefan},
    title = {{FactStack: Interoperable Data Management and Preservation for the Web and Industry 4.0}},
    booktitle = {Proceedings of the 19th Symposium for Database Systems for Business, Technology and Web (BTW '21)},
    year = {2021},
    month = {05},
    doi = {10.18420/btw2021-20},
    abstract = {Data exchange throughout the supply chain is essential for the agile and adaptive manufacturing processes of Industry 4.0. As companies employ numerous, frequently mutually incompatible data management and preservation approaches, interorganizational data sharing and reuse regularly requires human interaction and is thus associated with high overhead costs. An interoperable system, supporting the unified management, preservation and exchange of data across organizational boundaries is missing to date. We propose FactStack, a unified approach to data management and preservation based upon a novel combination of existing Web-standards and tightly integrated with the HTTP protocol itself. Based on the FactDAG model, FactStack guides and supports the full data lifecycle in a FAIR and interoperable manner, independent of individual software solutions and backward-compatible with existing resource oriented architectures. We describe our reference implementation of the approach and evaluate its performance, showcasing scalability even to high-throughput applications. We analyze the system's applicability to industry using a representative real-world use case in aircraft manufacturing based on principal requirements identified in prior work. We conclude that FactStack fulfills all requirements and provides a promising solution for the on-demand integration of persistence and provenance into existing resource-oriented architectures, facilitating data management and preservation for the agile and interorganizational manufacturing processes of Industry 4.0. Through its open source distribution, it is readily available for adoption by the community, paving the way for improved utility and usability of data management and preservation in digital manufacturing and supply chains.},
    code = {https://git.rwth-aachen.de/i5/factdag/factlibjs},
    code2 = {https://git.rwth-aachen.de/i5/factdag/trellis},
    meta = {},
    }
  • Armin F. Buckhorst, Benjamin Montavon, Dominik Wolfschläger, Melanie Buchsbaum, Amir Shahidi, Henning Petruck, Ike Kunze, Jan Pennekamp, Christian Brecher, Mathias Hüsing, Burkhard Corves, Verena Nitsch, Klaus Wehrle, and Robert H. Schmitt. Holarchy for Line-less Mobile Assembly Systems Operation in the Context of the Internet of Production. Procedia CIRP, 99, 05 2021. Proceedings of the 14th CIRP Conference on Intelligent Computation in Manufacturing Engineering (ICME ’20).
    [BibTeX] [Abstract] [DOI] [PDF]
    Assembly systems must provide maximum flexibility qualified by organization and technology to offer cost-compliant performance features to differentiate themselves from competitors in buyers’ markets. By mobilization of multipurpose resources and dynamic planning, Line-less Mobile Assembly Systems (LMASs) offer organizational reconfigurability. By proposing a holarchy to combine LMASs with the concept of an Internet of Production (IoP), we enable LMASs to source valuable information from cross-level production networks, physical resources, software nodes, and data stores that are interconnected in an IoP. The presented holarchy provides a concept of how to address future challenges, meet the requirements of shorter lead times, and unique lifecycle support. The paper suggests an application of decision making, distributed sensor services, recommender-based data reduction, and in-network computing while considering safety and human usability alike.
    @article{BMW+21,
    author = {Buckhorst, Armin F. and Montavon, Benjamin and Wolfschl{\"a}ger, Dominik and Buchsbaum, Melanie and Shahidi, Amir and Petruck, Henning and Kunze, Ike and Pennekamp, Jan and Brecher, Christian and H{\"u}sing, Mathias and Corves, Burkhard and Nitsch, Verena and Wehrle, Klaus and Schmitt, Robert H.},
    title = {{Holarchy for Line-less Mobile Assembly Systems Operation in the Context of the Internet of Production}},
    journal = {Procedia CIRP},
    year = {2021},
    volume = {99},
    publisher = {Elsevier},
    month = {05},
    doi = {10.1016/j.procir.2021.03.064},
    issn = {2212-8271},
    note = {Proceedings of the 14th CIRP Conference on Intelligent Computation in Manufacturing Engineering (ICME '20)},
    abstract = {Assembly systems must provide maximum flexibility qualified by organization and technology to offer cost-compliant performance features to differentiate themselves from competitors in buyers' markets. By mobilization of multipurpose resources and dynamic planning, Line-less Mobile Assembly Systems (LMASs) offer organizational reconfigurability. By proposing a holarchy to combine LMASs with the concept of an Internet of Production (IoP), we enable LMASs to source valuable information from cross-level production networks, physical resources, software nodes, and data stores that are interconnected in an IoP. The presented holarchy provides a concept of how to address future challenges, meet the requirements of shorter lead times, and unique lifecycle support. The paper suggests an application of decision making, distributed sensor services, recommender-based data reduction, and in-network computing while considering safety and human usability alike.},
    meta = {},
    }
  • Lennart Bader, Jan Pennekamp, Roman Matzutt, David Hedderich, Markus Kowalski, Volker Lücken, and Klaus Wehrle. Blockchain-Based Privacy Preservation for Supply Chains Supporting Lightweight Multi-Hop Information Accountability. Information Processing & Management, 58(3), 05 2021.
    [BibTeX] [Abstract] [DOI] [PDF]
    The benefits of information sharing along supply chains are well known for improving productivity and reducing costs. However, with the shift towards more dynamic and flexible supply chains, privacy concerns severely challenge the required information retrieval. A lack of trust between the different involved stakeholders inhibits advanced, multi-hop information flows, as valuable information for tracking and tracing products and parts is either unavailable or only retained locally. Our extensive literature review of previous approaches shows that these needs for cross-company information retrieval are widely acknowledged, but related work currently only addresses them insufficiently. To overcome these concerns, we present PrivAccIChain, a secure, privacy-preserving architecture for improving the multi-hop information retrieval with stakeholder accountability along supply chains. To address use case-specific needs, we particularly introduce an adaptable configuration of transparency and data privacy within our design. Hence, we enable the benefits of information sharing as well as multi-hop tracking and tracing even in supply chains that include mutually distrusting stakeholders. We evaluate the performance of PrivAccIChain and demonstrate its real-world feasibility based on the information of a purchasable automobile, the e.GO Life. We further conduct an in-depth security analysis and propose tunable mitigations against common attacks. As such, we attest PrivAccIChain’s practicability for information management even in complex supply chains with flexible and dynamic business relationships.
    @article{BPM+21,
    author = {Bader, Lennart and Pennekamp, Jan and Matzutt, Roman and Hedderich, David and Kowalski, Markus and L{\"u}cken, Volker and Wehrle, Klaus},
    title = {{Blockchain-Based Privacy Preservation for Supply Chains Supporting Lightweight Multi-Hop Information Accountability}},
    journal = {Information Processing {\&} Management},
    year = {2021},
    volume = {58},
    number = {3},
    publisher = {Elsevier},
    month = {05},
    doi = {10.1016/j.ipm.2021.102529},
    issn = {0306-4573},
    abstract = {The benefits of information sharing along supply chains are well known for improving productivity and reducing costs. However, with the shift towards more dynamic and flexible supply chains, privacy concerns severely challenge the required information retrieval. A lack of trust between the different involved stakeholders inhibits advanced, multi-hop information flows, as valuable information for tracking and tracing products and parts is either unavailable or only retained locally. Our extensive literature review of previous approaches shows that these needs for cross-company information retrieval are widely acknowledged, but related work currently only addresses them insufficiently. To overcome these concerns, we present PrivAccIChain, a secure, privacy-preserving architecture for improving the multi-hop information retrieval with stakeholder accountability along supply chains. To address use case-specific needs, we particularly introduce an adaptable configuration of transparency and data privacy within our design. Hence, we enable the benefits of information sharing as well as multi-hop tracking and tracing even in supply chains that include mutually distrusting stakeholders. We evaluate the performance of PrivAccIChain and demonstrate its real-world feasibility based on the information of a purchasable automobile, the e.GO Life. We further conduct an in-depth security analysis and propose tunable mitigations against common attacks. As such, we attest PrivAccIChain's practicability for information management even in complex supply chains with flexible and dynamic business relationships.},
    meta = {},
    }
  • Markus Dahlmanns, Jan Pennekamp, Ina Berenice Fink, Bernd Schoolmann, Klaus Wehrle, and Martin Henze. Transparent End-to-End Security for Publish/Subscribe Communication in Cyber-Physical Systems. In Proceedings of the 1st ACM Workshop on Secure and Trustworthy Cyber-Physical Systems (SaT-CPS ’21), 04 2021.
    [BibTeX] [Abstract] [DOI] [PDF]
    The ongoing digitization of industrial manufacturing leads to a decisive change in industrial communication paradigms. Moving from traditional one-to-one to many-to-many communication, publish/subscribe systems promise a more dynamic and efficient exchange of data. However, the resulting significantly more complex communication relationships render traditional end-to-end security futile for sufficiently protecting the sensitive and safety-critical data transmitted in industrial systems. Most notably, the central message brokers inherent in publish/subscribe systems introduce a designated weak spot for security as they can access all communication messages. To address this issue, we propose ENTRUST, a novel solution for key server-based end-to-end security in publish/subscribe systems. ENTRUST transparently realizes confidentiality, integrity, and authentication for publish/subscribe systems without any modification of the underlying protocol. We exemplarily implement ENTRUST on top of MQTT, the de-facto standard for machine-to-machine communication, showing that ENTRUST can integrate seamlessly into existing publish/subscribe systems.
    @inproceedings{DPF+21,
    author = {Dahlmanns, Markus and Pennekamp, Jan and Fink, Ina Berenice and Schoolmann, Bernd and Wehrle, Klaus and Henze, Martin},
    title = {{Transparent End-to-End Security for Publish/Subscribe Communication in Cyber-Physical Systems}},
    booktitle = {Proceedings of the 1st ACM Workshop on Secure and Trustworthy Cyber-Physical Systems (SaT-CPS '21)},
    year = {2021},
    month = {04},
    doi = {10.1145/3445969.3450423},
    abstract = {The ongoing digitization of industrial manufacturing leads to a decisive change in industrial communication paradigms. Moving from traditional one-to-one to many-to-many communication, publish/subscribe systems promise a more dynamic and efficient exchange of data. However, the resulting significantly more complex communication relationships render traditional end-to-end security futile for sufficiently protecting the sensitive and safety-critical data transmitted in industrial systems. Most notably, the central message brokers inherent in publish/subscribe systems introduce a designated weak spot for security as they can access all communication messages. To address this issue, we propose ENTRUST, a novel solution for key server-based end-to-end security in publish/subscribe systems. ENTRUST transparently realizes confidentiality, integrity, and authentication for publish/subscribe systems without any modification of the underlying protocol. We exemplarily implement ENTRUST on top of MQTT, the de-facto standard for machine-to-machine communication, showing that ENTRUST can integrate seamlessly into existing publish/subscribe systems.},
    meta = {},
    }

2020

  • Jan Pennekamp, Patrick Sapel, Ina Berenice Fink, Simon Wagner, Sebastian Reuter, Christian Hopmann, Klaus Wehrle, and Martin Henze. Revisiting the Privacy Needs of Real-World Applicable Company Benchmarking. In Proceedings of the 8th Workshop on Encrypted Computing & Applied Homomorphic Cryptography (WAHC ’20), 12 2020.
    [BibTeX] [Abstract] [DOI] [PDF]
    Benchmarking the performance of companies is essential to identify improvement potentials in various industries. Due to a competitive environment, this process imposes strong privacy needs, as leaked business secrets can have devastating effects on participating companies. Consequently, related work proposes to protect sensitive input data of companies using secure multi-party computation or homomorphic encryption. However, related work so far does not consider that also the benchmarking algorithm, used in today’s applied real-world scenarios to compute all relevant statistics, itself contains significant intellectual property, and thus needs to be protected. Addressing this issue, we present PCB – a practical design for Privacy-preserving Company Benchmarking that utilizes homomorphic encryption and a privacy proxy – which is specifically tailored for realistic real-world applications in which we protect companies’ sensitive input data and the valuable algorithms used to compute underlying key performance indicators. We evaluate PCB’s performance using synthetic measurements and showcase its applicability alongside an actual company benchmarking performed in the domain of injection molding, covering 48 distinct key performance indicators calculated out of hundreds of different input values. By protecting the privacy of all participants, we enable them to fully profit from the benefits of company benchmarking.
    @inproceedings{PSF+20,
    author = {Pennekamp, Jan and Sapel, Patrick and Fink, Ina Berenice and Wagner, Simon and Reuter, Sebastian and Hopmann, Christian and Wehrle, Klaus and Henze, Martin},
    title = {{Revisiting the Privacy Needs of Real-World Applicable Company Benchmarking}},
    booktitle = {Proceedings of the 8th Workshop on Encrypted Computing {\&} Applied Homomorphic Cryptography (WAHC '20)},
    year = {2020},
    month = {12},
    doi = {10.25835/0072999},
    abstract = {Benchmarking the performance of companies is essential to identify improvement potentials in various industries. Due to a competitive environment, this process imposes strong privacy needs, as leaked business secrets can have devastating effects on participating companies. Consequently, related work proposes to protect sensitive input data of companies using secure multi-party computation or homomorphic encryption. However, related work so far does not consider that also the benchmarking algorithm, used in today's applied real-world scenarios to compute all relevant statistics, itself contains significant intellectual property, and thus needs to be protected. Addressing this issue, we present PCB --- a practical design for Privacy-preserving Company Benchmarking that utilizes homomorphic encryption and a privacy proxy --- which is specifically tailored for realistic real-world applications in which we protect companies' sensitive input data and the valuable algorithms used to compute underlying key performance indicators. We evaluate PCB's performance using synthetic measurements and showcase its applicability alongside an actual company benchmarking performed in the domain of injection molding, covering 48 distinct key performance indicators calculated out of hundreds of different input values. By protecting the privacy of all participants, we enable them to fully profit from the benefits of company benchmarking.},
    meta = {},
    }
  • Jan Pennekamp, Erik Buchholz, Yannik Lockner, Markus Dahlmanns, Tiandong Xi, Marcel Fey, Christian Brecher, Christian Hopmann, and Klaus Wehrle. Privacy-Preserving Production Process Parameter Exchange. In Proceedings of the 36th Annual Computer Security Applications Conference (ACSAC ’20), 12 2020.
    [BibTeX] [Abstract] [DOI] [PDF] [CODE]
    Nowadays, collaborations between industrial companies always go hand in hand with trust issues, i.e., exchanging valuable production data entails the risk of improper use of potentially sensitive information. Therefore, companies hesitate to offer their production data, e.g., process parameters that would allow other companies to establish new production lines faster, against a quid pro quo. Nevertheless, the expected benefits of industrial collaboration, data exchanges, and the utilization of external knowledge are significant. In this paper, we introduce our Bloom filter-based Parameter Exchange (BPE), which enables companies to exchange process parameters privacy-preservingly. We demonstrate the applicability of our platform based on two distinct real-world use cases: injection molding and machine tools. We show that BPE is both scalable and deployable for different needs to foster industrial collaborations. Thereby, we reward data-providing companies with payments while preserving their valuable data and reducing the risks of data leakage.
    @inproceedings{PBL+20,
    author = {Pennekamp, Jan and Buchholz, Erik and Lockner, Yannik and Dahlmanns, Markus and Xi, Tiandong and Fey, Marcel and Brecher, Christian and Hopmann, Christian and Wehrle, Klaus},
    title = {{Privacy-Preserving Production Process Parameter Exchange}},
    booktitle = {Proceedings of the 36th Annual Computer Security Applications Conference (ACSAC '20)},
    year = {2020},
    month = {12},
    doi = {10.1145/3427228.3427248},
    abstract = {Nowadays, collaborations between industrial companies always go hand in hand with trust issues, i.e., exchanging valuable production data entails the risk of improper use of potentially sensitive information. Therefore, companies hesitate to offer their production data, e.g., process parameters that would allow other companies to establish new production lines faster, against a quid pro quo. Nevertheless, the expected benefits of industrial collaboration, data exchanges, and the utilization of external knowledge are significant.
    In this paper, we introduce our Bloom filter-based Parameter Exchange (BPE), which enables companies to exchange process parameters privacy-preservingly. We demonstrate the applicability of our platform based on two distinct real-world use cases: injection molding and machine tools. We show that BPE is both scalable and deployable for different needs to foster industrial collaborations. Thereby, we reward data-providing companies with payments while preserving their valuable data and reducing the risks of data leakage.},
    code = {https://github.com/COMSYS/parameter-exchange},
    meta = {},
    }
  • Lars Gleim, Liam Tirpitz, Jan Pennekamp, and Stefan Decker. Expressing FactDAG Provenance with PROV-O. In Proceedings of the 6th Workshop on Managing the Evolution and Preservation of the Data Web (MEPDaW ’20), 11 2020.
    [BibTeX] [Abstract] [PDF]
    To foster data sharing and reuse across organizational boundaries, provenance tracking is of vital importance for the establishment of trust and accountability, especially in industrial applications, but often neglected due to associated overhead. The abstract FactDAG data interoperability model strives to address this challenge by simplifying the creation of provenance-linked knowledge graphs of revisioned (and thus immutable) resources. However, to date, it lacks a practical provenance implementation. In this work, we present a concrete alignment of all roles and relations in the FactDAG model to the W3C PROV provenance standard, allowing future software implementations to directly produce standard-compliant provenance information. Maintaining compatibility with existing PROV tooling, an implementation of this mapping will pave the way for practical FactDAG implementations and deployments, improving trust and accountability for Open Data through simplified provenance management.
    @inproceedings{GTPD20,
    author = {Gleim, Lars and Tirpitz, Liam and Pennekamp, Jan and Decker, Stefan},
    title = {{Expressing FactDAG Provenance with PROV-O}},
    booktitle = {Proceedings of the 6th Workshop on Managing the Evolution and Preservation of the Data Web (MEPDaW '20)},
    year = {2020},
    month = {11},
    abstract = {To foster data sharing and reuse across organizational boundaries, provenance tracking is of vital importance for the establishment of trust and accountability, especially in industrial applications, but often neglected due to associated overhead. The abstract FactDAG data interoperability model strives to address this challenge by simplifying the creation of provenance-linked knowledge graphs of revisioned (and thus immutable) resources. However, to date, it lacks a practical provenance implementation.
    In this work, we present a concrete alignment of all roles and relations in the FactDAG model to the W3C PROV provenance standard, allowing future software implementations to directly produce standard-compliant provenance information. Maintaining compatibility with existing PROV tooling, an implementation of this mapping will pave the way for practical FactDAG implementations and deployments, improving trust and accountability for Open Data through simplified provenance management.},
    meta = {},
    }
  • Wladimir De la Cadena, Asya Mitseva, Jens Hiller, Jan Pennekamp, Sebastian Reuter, Julian Filter, Klaus Wehrle, Thomas Engel, and Andriy Panchenko. TrafficSliver: Fighting Website Fingerprinting Attacks with Traffic Splitting. In Proceedings of the 27th ACM SIGSAC Conference on Computer and Communications Security (CCS ’20), 11 2020.
    [BibTeX] [Abstract] [DOI] [PDF] [CODE]
    Website fingerprinting (WFP) aims to infer information about the content of encrypted and anonymized connections by observing patterns of data flows based on the size and direction of packets. By collecting traffic traces at a malicious Tor entry node – one of the weakest adversaries in the attacker model of Tor – a passive eavesdropper can leverage the captured meta-data to reveal the websites visited by a Tor user. As recently shown, WFP is significantly more effective and realistic than assumed. Concurrently, former WFP defenses are either infeasible for deployment in real-world settings or defend against specific WFP attacks only. To limit the exposure of Tor users to WFP, we propose novel lightweight WFP defenses, TrafficSliver, which successfully counter today’s WFP classifiers with reasonable bandwidth and latency overheads and, thus, make them attractive candidates for adoption in Tor. Through user-controlled splitting of traffic over multiple Tor entry nodes, TrafficSliver limits the data a single entry node can observe and distorts repeatable traffic patterns exploited by WFP attacks. We first propose a network-layer defense, in which we apply the concept of multipathing entirely within the Tor network. We show that our network-layer defense reduces the accuracy from more than 98% to less than 16% for all state-of-the-art WFP attacks without adding any artificial delays or dummy traffic. We further suggest an elegant client-side application-layer defense, which is independent of the underlying anonymization network. By sending single HTTP requests for different web objects over distinct Tor entry nodes, our application-layer defense reduces the detection rate of WFP classifiers by almost 50 percentage points. Although it offers lower protection than our network-layer defense, it provides a security boost at the cost of a very low implementation overhead and is fully compatible with today’s Tor network.
    @inproceedings{DMH+20,
    author = {De la Cadena, Wladimir and Mitseva, Asya and Hiller, Jens and Pennekamp, Jan and Reuter, Sebastian and Filter, Julian and Wehrle, Klaus and Engel, Thomas and Panchenko, Andriy},
    title = {{TrafficSliver: Fighting Website Fingerprinting Attacks with Traffic Splitting}},
    booktitle = {Proceedings of the 27th ACM SIGSAC Conference on Computer and Communications Security (CCS '20)},
    year = {2020},
    month = {11},
    doi = {10.1145/3372297.3423351},
    abstract = {Website fingerprinting (WFP) aims to infer information about the content of encrypted and anonymized connections by observing patterns of data flows based on the size and direction of packets. By collecting traffic traces at a malicious Tor entry node --- one of the weakest adversaries in the attacker model of Tor --- a passive eavesdropper can leverage the captured meta-data to reveal the websites visited by a Tor user. As recently shown, WFP is significantly more effective and realistic than assumed. Concurrently, former WFP defenses are either infeasible for deployment in real-world settings or defend against specific WFP attacks only.
    To limit the exposure of Tor users to WFP, we propose novel lightweight WFP defenses, TrafficSliver, which successfully counter today's WFP classifiers with reasonable bandwidth and latency overheads and, thus, make them attractive candidates for adoption in Tor. Through user-controlled splitting of traffic over multiple Tor entry nodes, TrafficSliver limits the data a single entry node can observe and distorts repeatable traffic patterns exploited by WFP attacks. We first propose a network-layer defense, in which we apply the concept of multipathing entirely within the Tor network. We show that our network-layer defense reduces the accuracy from more than 98{\%} to less than 16{\%} for all state-of-the-art WFP attacks without adding any artificial delays or dummy traffic. We further suggest an elegant client-side application-layer defense, which is independent of the underlying anonymization network. By sending single HTTP requests for different web objects over distinct Tor entry nodes, our application-layer defense reduces the detection rate of WFP classifiers by almost 50 percentage points. Although it offers lower protection than our network-layer defense, it provides a security boost at the cost of a very low implementation overhead and is fully compatible with today's Tor network.},
    code = {https://github.com/TrafficSliver},
    meta = {},
    }
  • Markus Dahlmanns, Johannes Lohmöller, Ina Berenice Fink, Jan Pennekamp, Klaus Wehrle, and Martin Henze. Easing the Conscience with OPC UA: An Internet-Wide Study on Insecure Deployments. In Proceedings of the Internet Measurement Conference (IMC ’20), 10 2020.
    [BibTeX] [Abstract] [DOI] [PDF] [CODE]
    Due to increasing digitalization, formerly isolated industrial networks, e.g., for factory and process automation, move closer and closer to the Internet, mandating secure communication. However, securely setting up OPC UA, the prime candidate for secure industrial communication, is challenging due to a large variety of insecure options. To study whether Internet-facing OPC UA appliances are configured securely, we actively scan the IPv4 address space for publicly reachable OPC UA systems and assess the security of their configurations. We observe problematic security configurations such as missing access control (on 24% of hosts), disabled security functionality (24%), or use of deprecated cryptographic primitives (25%) on in total 92% of the reachable deployments. Furthermore, we discover several hundred devices in multiple autonomous systems sharing the same security certificate, opening the door for impersonation attacks. Overall, in this paper, we highlight commonly found security misconfigurations and underline the importance of appropriate configuration for security-featuring protocols.
    @inproceedings{DLF+20,
    author = {Dahlmanns, Markus and Lohm{\"o}ller, Johannes and Fink, Ina Berenice and Pennekamp, Jan and Wehrle, Klaus and Henze, Martin},
    title = {{Easing the Conscience with OPC UA: An Internet-Wide Study on Insecure Deployments}},
    booktitle = {Proceedings of the Internet Measurement Conference (IMC '20)},
    year = {2020},
    month = {10},
    doi = {10.1145/3419394.3423666},
    abstract = {Due to increasing digitalization, formerly isolated industrial networks, e.g., for factory and process automation, move closer and closer to the Internet, mandating secure communication. However, securely setting up OPC UA, the prime candidate for secure industrial communication, is challenging due to a large variety of insecure options. To study whether Internet-facing OPC UA appliances are configured securely, we actively scan the IPv4 address space for publicly reachable OPC UA systems and assess the security of their configurations. We observe problematic security configurations such as missing access control (on 24{\%} of hosts), disabled security functionality (24{\%}), or use of deprecated cryptographic primitives (25{\%}) on in total 92{\%} of the reachable deployments. Furthermore, we discover several hundred devices in multiple autonomous systems sharing the same security certificate, opening the door for impersonation attacks. Overall, in this paper, we highlight commonly found security misconfigurations and underline the importance of appropriate configuration for security-featuring protocols.},
    code = {https://github.com/COMSYS/opcua},
    code2 = {https://github.com/COMSYS/zgrab2},
    meta = {},
    }
  • Roman Matzutt, Jan Pennekamp, Erik Buchholz, and Klaus Wehrle. Utilizing Public Blockchains for the Sybil-Resistant Bootstrapping of Distributed Anonymity Services. In Proceedings of the 15th ACM ASIA Conference on Computer and Communications Security (ASIACCS ’20), 10 2020.
    [BibTeX] [Abstract] [DOI] [PDF] [CODE]
    Distributed anonymity services, such as onion routing networks or cryptocurrency tumblers, promise privacy protection without trusted third parties. While the security of these services is often well-researched, security implications of their required bootstrapping processes are usually neglected: Users either jointly conduct the anonymization themselves or they need to rely on a set of non-colluding privacy peers. However, the typically small number of privacy peers enable single adversaries to mimic distributed services. We thus present AnonBoot, a Sybil-resistant medium to securely bootstrap distributed anonymity services via public blockchains. AnonBoot enforces that peers periodically create a small proof of work to refresh their eligibility of providing secure anonymity services. A pseudo-random, locally replicable bootstrapping process using on-chain entropy then prevents biasing the election of eligible peers. Our evaluation using Bitcoin as AnonBoot’s underlying blockchain shows its feasibility to maintain a trustworthy repository of 1000 peers with only a small storage footprint while supporting arbitrarily large user bases on top of most blockchains.
    @inproceedings{MPBW20,
    author = {Matzutt, Roman and Pennekamp, Jan and Buchholz, Erik and Wehrle, Klaus},
    title = {{Utilizing Public Blockchains for the Sybil-Resistant Bootstrapping of Distributed Anonymity Services}},
    booktitle = {Proceedings of the 15th ACM ASIA Conference on Computer and Communications Security (ASIACCS '20)},
    year = {2020},
    month = {10},
    doi = {10.1145/3320269.3384729},
    abstract = {Distributed anonymity services, such as onion routing networks or cryptocurrency tumblers, promise privacy protection without trusted third parties. While the security of these services is often well-researched, security implications of their required bootstrapping processes are usually neglected: Users either jointly conduct the anonymization themselves or they need to rely on a set of non-colluding privacy peers. However, the typically small number of privacy peers enable single adversaries to mimic distributed services. We thus present AnonBoot, a Sybil-resistant medium to securely bootstrap distributed anonymity services via public blockchains. AnonBoot enforces that peers periodically create a small proof of work to refresh their eligibility of providing secure anonymity services. A pseudo-random, locally replicable bootstrapping process using on-chain entropy then prevents biasing the election of eligible peers. Our evaluation using Bitcoin as AnonBoot's underlying blockchain shows its feasibility to maintain a trustworthy repository of 1000 peers with only a small storage footprint while supporting arbitrarily large user bases on top of most blockchains.},
    code = {https://github.com/COMSYS/anonboot},
    meta = {},
    }
  • Philipp Niemietz, Jan Pennekamp, Ike Kunze, Daniel Trauth, Klaus Wehrle, and Thomas Bergs. Stamping Process Modelling in an Internet of Production. Procedia Manufacturing, 49, 07 2020. Proceedings of the 8th International Conference on Through-Life Engineering Service (TESConf ’19).
    [BibTeX] [Abstract] [DOI] [PDF]
    Sharing data between companies throughout the supply chain is expected to be beneficial for product quality as well as for the economical savings in the manufacturing industry. To utilize the available data in the vision of an Internet of Production (IoP) a precise condition monitoring of manufacturing and production processes that facilitates the quantification of influences throughout the supply chain is inevitable. In this paper, we consider stamping processes in the context of an Internet of Production and the preliminaries for analytical models that utilize the ever-increasing available data. Three research objectives to cope with the amount of data and for a methodology to monitor, analyze and evaluate the influence of available data onto stamping processes have been identified: (i) State detection based on cyclic sensor signals, (ii) mapping of in- and output parameter variations onto process states, and (iii) models for edge and in-network computing approaches. After discussing state-of-the-art approaches to monitor stamping processes and the introduction of the fineblanking process as an exemplary stamping process, a research roadmap for an IoP enabling modeling framework is presented.
    @article{NPK+20,
    author = {Niemietz, Philipp and Pennekamp, Jan and Kunze, Ike and Trauth, Daniel and Wehrle, Klaus and Bergs, Thomas},
    title = {{Stamping Process Modelling in an Internet of Production}},
    journal = {Procedia Manufacturing},
    year = {2020},
    volume = {49},
    publisher = {Elsevier},
    month = {07},
    doi = {10.1016/j.promfg.2020.06.012},
    issn = {2351-9789},
    note = {Proceedings of the 8th International Conference on Through-Life Engineering Service (TESConf '19)},
    abstract = {Sharing data between companies throughout the supply chain is expected to be beneficial for product quality as well as for the economical savings in the manufacturing industry. To utilize the available data in the vision of an Internet of Production (IoP) a precise condition monitoring of manufacturing and production processes that facilitates the quantification of influences throughout the supply chain is inevitable. In this paper, we consider stamping processes in the context of an Internet of Production and the preliminaries for analytical models that utilize the ever-increasing available data. Three research objectives to cope with the amount of data and for a methodology to monitor, analyze and evaluate the influence of available data onto stamping processes have been identified: (i) State detection based on cyclic sensor signals, (ii) mapping of in- and output parameter variations onto process states, and (iii) models for edge and in-network computing approaches. After discussing state-of-the-art approaches to monitor stamping processes and the introduction of the fineblanking process as an exemplary stamping process, a research roadmap for an IoP enabling modeling framework is presented.},
    meta = {},
    }
  • Jan Pennekamp, Fritz Alder, Roman Matzutt, Jan Tobias Mühlberg, Frank Piessens, and Klaus Wehrle. Secure End-to-End Sensing in Supply Chains. In Proceedings of the 5th International Workshop on Cyber-Physical Systems Security (CPS-Sec ’20). IEEE, 07 2020.
    [BibTeX] [Abstract] [DOI] [PDF]
    Trust along digitalized supply chains is challenged by the aspect that monitoring equipment may not be trustworthy or unreliable as respective measurements originate from potentially untrusted parties. To allow for dynamic relationships along supply chains, we propose a blockchain-backed supply chain monitoring architecture relying on trusted hardware. Our design provides a notion of secure end-to-end sensing of interactions even when originating from untrusted surroundings. Due to attested checkpointing, we can identify misinformation early on and reliably pinpoint the origin. A blockchain enables long-term verifiability for all (now trustworthy) IoT data within our system even if issues are detected only after the fact. Our feasibility study and cost analysis further show that our design is indeed deployable in and applicable to today’s supply chain settings.
    @inproceedings{PAM+20,
    author = {Pennekamp, Jan and Alder, Fritz and Matzutt, Roman and M{\"u}hlberg, Jan Tobias and Piessens, Frank and Wehrle, Klaus},
    title = {{Secure End-to-End Sensing in Supply Chains}},
    booktitle = {Proceedings of the 5th International Workshop on Cyber-Physical Systems Security (CPS-Sec '20)},
    year = {2020},
    publisher = {IEEE},
    month = {07},
    doi = {10.1109/CNS48642.2020.9162337},
    abstract = {Trust along digitalized supply chains is challenged by the aspect that monitoring equipment may not be trustworthy or unreliable as respective measurements originate from potentially untrusted parties. To allow for dynamic relationships along supply chains, we propose a blockchain-backed supply chain monitoring architecture relying on trusted hardware. Our design provides a notion of secure end-to-end sensing of interactions even when originating from untrusted surroundings. Due to attested checkpointing, we can identify misinformation early on and reliably pinpoint the origin. A blockchain enables long-term verifiability for all (now trustworthy) IoT data within our system even if issues are detected only after the fact. Our feasibility study and cost analysis further show that our design is indeed deployable in and applicable to today's supply chain settings.},
    meta = {},
    }
  • Roman Matzutt, Benedikt Kalde, Jan Pennekamp, Arthur Drichel, Martin Henze, and Klaus Wehrle. How to Securely Prune Bitcoin’s Blockchain. In Proceedings of the 19th IFIP Networking 2020 Conference (NETWORKING ’20), 06 2020.
    [BibTeX] [Abstract] [PDF] [CODE]
    Bitcoin was the first successful decentralized cryptocurrency and remains the most popular of its kind to this day. Despite the benefits of its blockchain, Bitcoin still faces serious scalability issues, most importantly its ever-increasing blockchain size. While alternative designs introduced schemes to periodically create snapshots and thereafter prune older blocks, already-deployed systems such as Bitcoin are often considered incapable of adopting corresponding approaches. In this work, we revise this popular belief and present CoinPrune, a snapshot-based pruning scheme that is fully compatible with Bitcoin. CoinPrune can be deployed through an opt-in velvet fork, i.e., without impeding the established Bitcoin network. By requiring miners to publicly announce and jointly reaffirm recent snapshots on the blockchain, CoinPrune establishes trust into the snapshots’ correctness even in the presence of powerful adversaries. Our evaluation shows that CoinPrune reduces the storage requirements of Bitcoin already by two orders of magnitude today, with further relative savings as the blockchain grows. In our experiments, nodes only have to fetch and process 5 GiB instead of 230 GiB of data when joining the network, reducing the synchronization time on powerful devices from currently 5 h to 46 min, with even more savings for less powerful devices.
    @inproceedings{MKP+20,
    author = {Matzutt, Roman and Kalde, Benedikt and Pennekamp, Jan and Drichel, Arthur and Henze, Martin and Wehrle, Klaus},
    title = {{How to Securely Prune Bitcoin's Blockchain}},
    booktitle = {Proceedings of the 19th IFIP Networking 2020 Conference (NETWORKING '20)},
    year = {2020},
    month = {06},
    abstract = {Bitcoin was the first successful decentralized cryptocurrency and remains the most popular of its kind to this day. Despite the benefits of its blockchain, Bitcoin still faces serious scalability issues, most importantly its ever-increasing blockchain size. While alternative designs introduced schemes to periodically create snapshots and thereafter prune older blocks, already-deployed systems such as Bitcoin are often considered incapable of adopting corresponding approaches. In this work, we revise this popular belief and present CoinPrune, a snapshot-based pruning scheme that is fully compatible with Bitcoin. CoinPrune can be deployed through an opt-in velvet fork, i.e., without impeding the established Bitcoin network. By requiring miners to publicly announce and jointly reaffirm recent snapshots on the blockchain, CoinPrune establishes trust into the snapshots' correctness even in the presence of powerful adversaries. Our evaluation shows that CoinPrune reduces the storage requirements of Bitcoin already by two orders of magnitude today, with further relative savings as the blockchain grows. In our experiments, nodes only have to fetch and process 5 GiB instead of 230 GiB of data when joining the network, reducing the synchronization time on powerful devices from currently 5 h to 46 min, with even more savings for less powerful devices.},
    code = {https://github.com/COMSYS/coinprune},
    meta = {},
    }
  • Jan Pennekamp, Lennart Bader, Roman Matzutt, Philipp Niemietz, Daniel Trauth, Martin Henze, Thomas Bergs, and Klaus Wehrle. Private Multi-Hop Accountability for Supply Chains. In Proceedings of the 2020 IEEE International Conference on Communications Workshops (ICC Workshops ’20), 1st Workshop on Blockchain for IoT and Cyber-Physical Systems (BIoTCPS ’20), 06 2020.
    [BibTeX] [Abstract] [DOI] [PDF]
    Today’s supply chains are becoming increasingly flexible in nature. While adaptability is vastly increased, these more dynamic associations necessitate more extensive data sharing among different stakeholders while simultaneously overturning previously established levels of trust. Hence, manufacturers’ demand to track goods and to investigate root causes of issues across their supply chains becomes more challenging to satisfy within these now untrusted environments. Complementarily, suppliers need to keep any data irrelevant to such routine checks secret to remain competitive. To bridge the needs of contractors and suppliers in increasingly flexible supply chains, we thus propose to establish a privacy-preserving and distributed multi-hop accountability log among the involved stakeholders based on Attribute-based Encryption and backed by a blockchain. Our large-scale feasibility study is motivated by a real-world manufacturing process, i.e., a fine blanking line, and reveals only modest costs for multi-hop tracing and tracking of goods.
    @inproceedings{PBM+20,
    author = {Pennekamp, Jan and Bader, Lennart and Matzutt, Roman and Niemietz, Philipp and Trauth, Daniel and Henze, Martin and Bergs, Thomas and Wehrle, Klaus},
    title = {{Private Multi-Hop Accountability for Supply Chains}},
    booktitle = {Proceedings of the 2020 IEEE International Conference on Communications Workshops (ICC Workshops '20), 1st Workshop on Blockchain for IoT and Cyber-Physical Systems (BIoTCPS '20)},
    year = {2020},
    month = {06},
    doi = {10.1109/ICCWorkshops49005.2020.9145100},
    abstract = {Today's supply chains are becoming increasingly flexible in nature. While adaptability is vastly increased, these more dynamic associations necessitate more extensive data sharing among different stakeholders while simultaneously overturning previously established levels of trust. Hence, manufacturers' demand to track goods and to investigate root causes of issues across their supply chains becomes more challenging to satisfy within these now untrusted environments. Complementarily, suppliers need to keep any data irrelevant to such routine checks secret to remain competitive. To bridge the needs of contractors and suppliers in increasingly flexible supply chains, we thus propose to establish a privacy-preserving and distributed multi-hop accountability log among the involved stakeholders based on Attribute-based Encryption and backed by a blockchain. Our large-scale feasibility study is motivated by a real-world manufacturing process, i.e., a fine blanking line, and reveals only modest costs for multi-hop tracing and tracking of goods.},
    meta = {},
    }
  • Lars Gleim, Jan Pennekamp, Martin Liebenberg, Melanie Buchsbaum, Philipp Niemietz, Simon Knape, Alexander Epple, Simon Storms, Daniel Trauth, Thomas Bergs, Christian Brecher, Stefan Decker, Gerhard Lakemeyer, and Klaus Wehrle. FactDAG: Formalizing Data Interoperability in an Internet of Production. IEEE Internet of Things Journal, 7(4), 04 2020.
    [BibTeX] [Abstract] [DOI] [PDF]
    In the production industry, the volume, variety and velocity of data as well as the number of deployed protocols increase exponentially due to the influences of IoT advances. While hundreds of isolated solutions exist to utilize this data, e.g., optimizing processes or monitoring machine conditions, the lack of a unified data handling and exchange mechanism hinders the implementation of approaches to improve the quality of decisions and processes in such an interconnected environment. The vision of an Internet of Production promises the establishment of a Worldwide Lab, where data from every process in the network can be utilized, even interorganizational and across domains. While numerous existing approaches consider interoperability from an interface and communication system perspective, fundamental questions of data and information interoperability remain insufficiently addressed. In this paper, we identify ten key issues, derived from three distinctive real-world use cases, that hinder large-scale data interoperability for industrial processes. Based on these issues we derive a set of five key requirements for future (IoT) data layers, building upon the FAIR data principles. We propose to address them by creating FactDAG, a conceptual data layer model for maintaining a provenance-based, directed acyclic graph of facts, inspired by successful distributed version-control and collaboration systems. Eventually, such a standardization should greatly shape the future of interoperability in an interconnected production industry.
    @article{GPL+20,
    author = {Gleim, Lars and Pennekamp, Jan and Liebenberg, Martin and Buchsbaum, Melanie and Niemietz, Philipp and Knape, Simon and Epple, Alexander and Storms, Simon and Trauth, Daniel and Bergs, Thomas and Brecher, Christian and Decker, Stefan and Lakemeyer, Gerhard and Wehrle, Klaus},
    title = {{FactDAG: Formalizing Data Interoperability in an Internet of Production}},
    journal = {IEEE Internet of Things Journal},
    year = {2020},
    volume = {7},
    number = {4},
    publisher = {IEEE},
    month = {04},
    doi = {10.1109/JIOT.2020.2966402},
    issn = {2327-4662},
    abstract = {In the production industry, the volume, variety and velocity of data as well as the number of deployed protocols increase exponentially due to the influences of IoT advances. While hundreds of isolated solutions exist to utilize this data, e.g., optimizing processes or monitoring machine conditions, the lack of a unified data handling and exchange mechanism hinders the implementation of approaches to improve the quality of decisions and processes in such an interconnected environment.
    The vision of an Internet of Production promises the establishment of a Worldwide Lab, where data from every process in the network can be utilized, even interorganizational and across domains. While numerous existing approaches consider interoperability from an interface and communication system perspective, fundamental questions of data and information interoperability remain insufficiently addressed.
    In this paper, we identify ten key issues, derived from three distinctive real-world use cases, that hinder large-scale data interoperability for industrial processes. Based on these issues we derive a set of five key requirements for future (IoT) data layers, building upon the FAIR data principles. We propose to address them by creating FactDAG, a conceptual data layer model for maintaining a provenance-based, directed acyclic graph of facts, inspired by successful distributed version-control and collaboration systems. Eventually, such a standardization should greatly shape the future of interoperability in an interconnected production industry.},
    meta = {},
    }
  • Linus Roepert, Markus Dahlmanns, Ina Berenice Fink, Jan Pennekamp, and Martin Henze. Assessing the Security of OPC UA Deployments. In Proceedings of the 1st ITG Workshop on IT Security (ITSec ’20), 04 2020.
    [BibTeX] [Abstract] [DOI] [PDF] [CODE]
    To address the increasing security demands of industrial deployments, OPC UA is one of the first industrial protocols explicitly designed with security in mind. However, deploying it securely requires a thorough configuration of a wide range of options. Thus, assessing the security of OPC UA deployments and their configuration is necessary to ensure secure operation, most importantly confidentiality and integrity of industrial processes. In this work, we present extensions to the popular Metasploit Framework to ease network-based security assessments of OPC UA deployments. To this end, we discuss methods to discover OPC UA servers, test their authentication, obtain their configuration, and check for vulnerabilities. Ultimately, our work enables operators to verify the (security) configuration of their systems and identify potential attack vectors.
    @inproceedings{RDF+20,
    author = {Roepert, Linus and Dahlmanns, Markus and Fink, Ina Berenice and Pennekamp, Jan and Henze, Martin},
    title = {{Assessing the Security of OPC UA Deployments}},
    booktitle = {Proceedings of the 1st ITG Workshop on IT Security (ITSec '20)},
    year = {2020},
    month = {04},
    doi = {10.15496/publikation-41813},
    abstract = {To address the increasing security demands of industrial deployments, OPC UA is one of the first industrial protocols explicitly designed with security in mind. However, deploying it securely requires a thorough configuration of a wide range of options. Thus, assessing the security of OPC UA deployments and their configuration is necessary to ensure secure operation, most importantly confidentiality and integrity of industrial processes. In this work, we present extensions to the popular Metasploit Framework to ease network-based security assessments of OPC UA deployments. To this end, we discuss methods to discover OPC UA servers, test their authentication, obtain their configuration, and check for vulnerabilities. Ultimately, our work enables operators to verify the (security) configuration of their systems and identify potential attack vectors.},
    code = {https://github.com/COMSYS/msf-opcua},
    meta = {},
    }
  • Samuel Mann, Jan Pennekamp, Tobias Brockhoff, Anahita Farhang, Mahsa Pourbafrani, Lukas Oster, Merih Seran Uysal, Rahul Sharma, Uwe Reisgen, Klaus Wehrle, and Wil van der Aalst. Connected, digitalized welding production — Secure, ubiquitous utilization of data across process layers. Advanced Structured Materials, 125, 2020. Proceedings of the 1st International Conference on Advanced Joining Processes (AJP ’19).
    [BibTeX] [Abstract] [DOI] [PDF]
    A connected, digitalized welding production unlocks vast and dynamic potentials: from improving state of the art welding to new business models in production. For this reason, offering frameworks, which are capable of addressing multiple layers of applications on the one hand and providing means of data security and privacy for ubiquitous dataflows on the other hand, is an important step to enable the envisioned advances. In this context, welding production has been introduced from the perspective of interlaced process layers connecting information sources across various entities. Each layer has its own distinct challenges from both a process view and a data perspective. Besides, investigating each layer promises to reveal insight into (currently unknown) process interconnections. This approach has been substantiated by methods for data security and privacy to draw a line between secure handling of data and the need of trustworthy dealing with sensitive data among different parties and therefore partners. In conclusion, the welding production has to develop itself from an accumulation of local and isolated data sources towards a secure industrial collaboration in an Internet of Production.
    @article{MPB+20,
    author = {Mann, Samuel and Pennekamp, Jan and Brockhoff, Tobias and Farhang, Anahita and Pourbafrani, Mahsa and Oster, Lukas and Uysal, Merih Seran and Sharma, Rahul and Reisgen, Uwe and Wehrle, Klaus and van der Aalst, Wil},
    title = {{Connected, digitalized welding production --- Secure, ubiquitous utilization of data across process layers}},
    journal = {Advanced Structured Materials},
    year = {2020},
    volume = {125},
    publisher = {Springer},
    doi = {10.1007/978-981-15-2957-3_8},
    issn = {1869-8433},
    note = {Proceedings of the 1st International Conference on Advanced Joining Processes (AJP '19)},
    abstract = {A connected, digitalized welding production unlocks vast and dynamic potentials: from improving state of the art welding to new business models in production. For this reason, offering frameworks, which are capable of addressing multiple layers of applications on the one hand and providing means of data security and privacy for ubiquitous dataflows on the other hand, is an important step to enable the envisioned advances. In this context, welding production has been introduced from the perspective of interlaced process layers connecting information sources across various entities. Each layer has its own distinct challenges from both a process view and a data perspective. Besides, investigating each layer promises to reveal insight into (currently unknown) process interconnections. This approach has been substantiated by methods for data security and privacy to draw a line between secure handling of data and the need of trustworthy dealing with sensitive data among different parties and therefore partners. In conclusion, the welding production has to develop itself from an accumulation of local and isolated data sources towards a secure industrial collaboration in an Internet of Production.},
    meta = {},
    }
  • Roman Matzutt, Jan Pennekamp, and Klaus Wehrle. A Secure and Practical Decentralized Ecosystem for Shareable Education Material. In Proceedings of the 34th International Conference on Information Networking (ICOIN ’20), 01 2020.
    [BibTeX] [Abstract] [DOI] [PDF]
    Traditionally, the university landscape is highly federated, which hinders potentials for coordinated collaborations. While the lack of a strict hierarchy on the inter-university level is critical for ensuring free research and higher education, this concurrency limits the access to high-quality education materials. Especially regarding resources such as lecture notes or exercise tasks we observe a high susceptibility to redundant work and lacking quality assessment of material created in isolation by individual university institutes. To remedy this situation, in this paper we propose CORALIS, a decentralized marketplace for offering, acquiring, discussing, and improving education resources across university borders. Our design is based on a permissioned blockchain to (a) realize accountable access control via simple on-chain license terms, (b) trace the evolution of encrypted containers accumulating bundles of shareable education resources, and (c) record user comments and ratings for further improving the quality of offered education material.
    @inproceedings{MPW20,
    author = {Matzutt, Roman and Pennekamp, Jan and Wehrle, Klaus},
    title = {{A Secure and Practical Decentralized Ecosystem for Shareable Education Material}},
    booktitle = {Proceedings of the 34th International Conference on Information Networking (ICOIN '20)},
    year = {2020},
    month = {01},
    doi = {10.1109/ICOIN48656.2020.9016478},
    abstract = {Traditionally, the university landscape is highly federated, which hinders potentials for coordinated collaborations. While the lack of a strict hierarchy on the inter-university level is critical for ensuring free research and higher education, this concurrency limits the access to high-quality education materials. Especially regarding resources such as lecture notes or exercise tasks we observe a high susceptibility to redundant work and lacking quality assessment of material created in isolation by individual university institutes. To remedy this situation, in this paper we propose CORALIS, a decentralized marketplace for offering, acquiring, discussing, and improving education resources across university borders. Our design is based on a permissioned blockchain to (a) realize accountable access control via simple on-chain license terms, (b) trace the evolution of encrypted containers accumulating bundles of shareable education resources, and (c) record user comments and ratings for further improving the quality of offered education material.},
    meta = {},
    }

2019

  • Jan Pennekamp, Markus Dahlmanns, Lars Gleim, Stefan Decker, and Klaus Wehrle. Security Considerations for Collaborations in an Industrial IoT-based Lab of Labs. In Proceedings of the 3rd IEEE Global Conference on Internet of Things (GCIoT ’19), 12 2019.
    [BibTeX] [Abstract] [DOI] [PDF]
    The productivity and sustainability advances for (smart) manufacturing resulting from (globally) interconnected Industrial IoT devices in a lab of labs are expected to be significant. While such visions introduce opportunities for the involved parties, the associated risks must be considered as well. In particular, security aspects are crucial challenges and remain unsolved. So far, single stakeholders only had to consider their local view on security. However, for a global lab, we identify several fundamental research challenges in (dynamic) scenarios with multiple stakeholders: While information security mandates that models must be adapted wrt. confidentiality to address these new influences on business secrets, from a network perspective, the drastically increasing amount of possible attack vectors challenges today’s approaches. Finally, concepts addressing these security challenges should provide backwards compatibility to enable a smooth transition from today’s isolated landscape towards globally interconnected IIoT environments.
    @inproceedings{PDGDW19,
    author = {Pennekamp, Jan and Dahlmanns, Markus and Gleim, Lars and Decker, Stefan and Wehrle, Klaus},
    title = {{Security Considerations for Collaborations in an Industrial IoT-based Lab of Labs}},
    booktitle = {Proceedings of the 3rd IEEE Global Conference on Internet of Things (GCIoT '19)},
    year = {2019},
    month = {12},
    doi = {10.1109/GCIoT47977.2019.9058413},
    abstract = {The productivity and sustainability advances for (smart) manufacturing resulting from (globally) interconnected Industrial IoT devices in a lab of labs are expected to be significant. While such visions introduce opportunities for the involved parties, the associated risks must be considered as well. In particular, security aspects are crucial challenges and remain unsolved. So far, single stakeholders only had to consider their local view on security. However, for a global lab, we identify several fundamental research challenges in (dynamic) scenarios with multiple stakeholders: While information security mandates that models must be adapted wrt. confidentiality to address these new influences on business secrets, from a network perspective, the drastically increasing amount of possible attack vectors challenges today's approaches. Finally, concepts addressing these security challenges should provide backwards compatibility to enable a smooth transition from today's isolated landscape towards globally interconnected IIoT environments.},
    meta = {},
    }
  • Jan Pennekamp, Martin Henze, Simo Schmidt, Philipp Niemietz, Marcel Fey, Daniel Trauth, Thomas Bergs, Christian Brecher, and Klaus Wehrle. Dataflow Challenges in an Internet of Production: A Security & Privacy Perspective. In Proceedings of the 5th ACM Workshop on Cyber-Physical Systems Security and PrivaCy (CPS-SPC ’19), co-located with the 26th ACM SIGSAC Conference on Computer and Communications Security (CCS ’19), 11 2019.
    [BibTeX] [Abstract] [DOI] [PDF]
    The Internet of Production (IoP) envisions the interconnection of previously isolated CPS in the area of manufacturing across institutional boundaries to realize benefits such as increased profit margins and product quality as well as reduced product development costs and time to market. This interconnection of CPS will lead to a plethora of new dataflows, especially between (partially) distrusting entities. In this paper, we identify and illustrate these envisioned inter-organizational dataflows and the participating entities alongside two real-world use cases from the production domain: a fine blanking line and a connected job shop. Our analysis allows us to identify distinct security and privacy demands and challenges for these new dataflows. As a foundation to address the resulting requirements, we provide a survey of promising technical building blocks to secure inter-organizational dataflows in an IoP and propose next steps for future research. Consequently, we move an important step forward to overcome security and privacy concerns as an obstacle for realizing the promised potentials in an Internet of Production.
    @inproceedings{PHS+19,
    author = {Pennekamp, Jan and Henze, Martin and Schmidt, Simo and Niemietz, Philipp and Fey, Marcel and Trauth, Daniel and Bergs, Thomas and Brecher, Christian and Wehrle, Klaus},
    title = {{Dataflow Challenges in an Internet of Production: A Security {\&} Privacy Perspective}},
    booktitle = {Proceedings of the 5th ACM Workshop on Cyber-Physical Systems Security and PrivaCy (CPS-SPC '19), co-located with the 26th ACM SIGSAC Conference on Computer and Communications Security (CCS '19)},
    year = {2019},
    month = {11},
    doi = {10.1145/3338499.3357357},
    abstract = {The Internet of Production (IoP) envisions the interconnection of previously isolated CPS in the area of manufacturing across institutional boundaries to realize benefits such as increased profit margins and product quality as well as reduced product development costs and time to market. This interconnection of CPS will lead to a plethora of new dataflows, especially between (partially) distrusting entities. In this paper, we identify and illustrate these envisioned inter-organizational dataflows and the participating entities alongside two real-world use cases from the production domain: a fine blanking line and a connected job shop.
    Our analysis allows us to identify distinct security and privacy demands and challenges for these new dataflows. As a foundation to address the resulting requirements, we provide a survey of promising technical building blocks to secure inter-organizational dataflows in an IoP and propose next steps for future research. Consequently, we move an important step forward to overcome security and privacy concerns as an obstacle for realizing the promised potentials in an Internet of Production.},
    meta = {},
    }
  • Wladimir De la Cadena, Asya Mitseva, Jan Pennekamp, Jens Hiller, Fabian Lanze, Thomas Engel, Klaus Wehrle, and Andriy Panchenko. POSTER: Traffic Splitting to Counter Website Fingerprinting. In Proceedings of the 26th ACM SIGSAC Conference on Computer and Communications Security (CCS ’19), 11 2019.
    [BibTeX] [Abstract] [DOI] [PDF]
    Website fingerprinting (WFP) is a special type of traffic analysis, which aims to infer the websites visited by a user. Recent studies have shown that WFP targeting Tor users is notably more effective than previously expected. Concurrently, state-of-the-art defenses have been proven to be less effective. In response, we present a novel WFP defense that splits traffic over multiple entry nodes to limit the data a single malicious entry can use. Here, we explore several traffic-splitting strategies to distribute user traffic. We establish that our weighted random strategy dramatically reduces the accuracy from nearly 95% to less than 35% for four state-of-the-art WFP attacks without adding any artificial delays or dummy traffic.
    @inproceedings{DMP+19,
    author = {De la Cadena, Wladimir and Mitseva, Asya and Pennekamp, Jan and Hiller, Jens and Lanze, Fabian and Engel, Thomas and Wehrle, Klaus and Panchenko, Andriy},
    title = {{POSTER: Traffic Splitting to Counter Website Fingerprinting}},
    booktitle = {Proceedings of the 26th ACM SIGSAC Conference on Computer and Communications Security (CCS '19)},
    year = {2019},
    month = {11},
    doi = {10.1145/3319535.3363249},
    abstract = {Website fingerprinting (WFP) is a special type of traffic analysis, which aims to infer the websites visited by a user. Recent studies have shown that WFP targeting Tor users is notably more effective than previously expected. Concurrently, state-of-the-art defenses have been proven to be less effective. In response, we present a novel WFP defense that splits traffic over multiple entry nodes to limit the data a single malicious entry can use. Here, we explore several traffic-splitting strategies to distribute user traffic. We establish that our \emph{weighted random} strategy dramatically reduces the accuracy from nearly 95\% to less than 35\% for \emph{four} state-of-the-art WFP attacks without adding any artificial delays or dummy traffic.},
    meta = {},
    }
  • Jens Hiller, Jan Pennekamp, Markus Dahlmanns, Martin Henze, Andriy Panchenko, and Klaus Wehrle. Tailoring Onion Routing to the Internet of Things: Security and Privacy in Untrusted Environments. In Proceedings of the 27th IEEE International Conference on Network Protocols (ICNP ’19), 10 2019.
    [BibTeX] [Abstract] [DOI] [PDF] [CODE]
    An increasing number of IoT scenarios involve mobile, resource-constrained IoT devices that rely on untrusted networks for Internet connectivity. In such environments, attackers can derive sensitive private information of IoT device owners, e.g., daily routines or secret supply chain procedures, when sniffing on IoT communication and linking IoT devices and owner. Furthermore, untrusted networks do not provide IoT devices with any protection against attacks from the Internet. Anonymous communication using onion routing provides a well-proven mechanism to keep the relationship between communication partners secret and (optionally) protect against network attacks. However, the application of onion routing is challenged by protocol incompatibilities and demanding cryptographic processing on constrained IoT devices, rendering its use infeasible. To close this gap, we tailor onion routing to the IoT by bridging protocol incompatibilities and offloading expensive cryptographic processing to a router or web server of the IoT device owner. Thus, we realize resource-conserving access control and end-to-end security for IoT devices. To prove applicability, we deploy onion routing for the IoT within the well-established Tor network enabling IoT devices to leverage its resources to achieve the same grade of anonymity as readily available to traditional devices.
    @inproceedings{HPD+19,
    author = {Hiller, Jens and Pennekamp, Jan and Dahlmanns, Markus and Henze, Martin and Panchenko, Andriy and Wehrle, Klaus},
    title = {{Tailoring Onion Routing to the Internet of Things: Security and Privacy in Untrusted Environments}},
    booktitle = {Proceedings of the 27th IEEE International Conference on Network Protocols (ICNP '19)},
    year = {2019},
    month = {10},
    doi = {10.1109/ICNP.2019.8888033},
    abstract = {An increasing number of IoT scenarios involve mobile, resource-constrained IoT devices that rely on untrusted networks for Internet connectivity. In such environments, attackers can derive sensitive private information of IoT device owners, e.g., daily routines or secret supply chain procedures, when sniffing on IoT communication and linking IoT devices and owner. Furthermore, untrusted networks do not provide IoT devices with any protection against attacks from the Internet.
    Anonymous communication using onion routing provides a well-proven mechanism to keep the relationship between communication partners secret and (optionally) protect against network attacks. However, the application of onion routing is challenged by protocol incompatibilities and demanding cryptographic processing on constrained IoT devices, rendering its use infeasible.
    To close this gap, we tailor onion routing to the IoT by bridging protocol incompatibilities and offloading expensive cryptographic processing to a router or web server of the IoT device owner. Thus, we realize resource-conserving access control and end-to-end security for IoT devices. To prove applicability, we deploy onion routing for the IoT within the well-established Tor network enabling IoT devices to leverage its resources to achieve the same grade of anonymity as readily available to traditional devices.},
    code = {https://github.com/COMSYS/tor4iot-tor},
    code2 = {https://github.com/COMSYS/tor4iot-contiki},
    meta = {},
    }
  • Jan Pennekamp, Jens Hiller, Sebastian Reuter, Wladimir De la Cadena, Asya Mitseva, Martin Henze, Thomas Engel, Klaus Wehrle, and Andriy Panchenko. Multipathing Traffic to Reduce Entry Node Exposure in Onion Routing. In Proceedings of the 27th IEEE International Conference on Network Protocols (ICNP ’19), 10 2019.
    [BibTeX] [Abstract] [DOI] [PDF]
    Users of an onion routing network, such as Tor, depend on its anonymity properties. However, especially malicious entry nodes, which know the client’s identity, can also observe the whole communication on their link to the client and, thus, conduct several de-anonymization attacks. To limit this exposure and to impede corresponding attacks, we propose to multipath traffic between the client and the middle node to reduce the information an attacker can obtain at a single vantage point. To facilitate the deployment, only clients and selected middle nodes need to implement our approach, which works transparently for the remaining legacy nodes. Furthermore, we let clients control the splitting strategy to prevent any external manipulation.
    @inproceedings{PHR+19,
    author = {Pennekamp, Jan and Hiller, Jens and Reuter, Sebastian and De la Cadena, Wladimir and Mitseva, Asya and Henze, Martin and Engel, Thomas and Wehrle, Klaus and Panchenko, Andriy},
    title = {{Multipathing Traffic to Reduce Entry Node Exposure in Onion Routing}},
    booktitle = {Proceedings of the 27th IEEE International Conference on Network Protocols (ICNP '19)},
    year = {2019},
    month = {10},
    doi = {10.1109/ICNP.2019.8888029},
    abstract = {Users of an onion routing network, such as Tor, depend on its anonymity properties. However, especially malicious entry nodes, which know the client's identity, can also observe the whole communication on their link to the client and, thus, conduct several de-anonymization attacks. To limit this exposure and to impede corresponding attacks, we propose to multipath traffic between the client and the middle node to reduce the information an attacker can obtain at a single vantage point. To facilitate the deployment, only clients and selected middle nodes need to implement our approach, which works transparently for the remaining legacy nodes. Furthermore, we let clients control the splitting strategy to prevent any external manipulation.},
    meta = {},
    }
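The multipathing idea above can be illustrated with a small sketch: cells are distributed over several circuits that share a middle node, according to a client-chosen weighted strategy. The function name, parameters, and the simple weighted-random policy are illustrative assumptions, not the paper's actual splitting algorithm.

```python
import random

def split_cells(cells, num_paths, weights=None, seed=None):
    """Assign each cell to one of num_paths circuits.

    Sketch only: a real deployment splits live Tor cells inside the
    client and merges them again at the middle node; here we merely
    show a client-controlled weighted assignment.
    """
    rng = random.Random(seed)          # client controls the strategy (and its randomness)
    weights = weights or [1] * num_paths
    paths = [[] for _ in range(num_paths)]
    for cell in cells:
        # Pick a circuit index according to the client-chosen weights.
        idx = rng.choices(range(num_paths), weights=weights)[0]
        paths[idx].append(cell)
    return paths
```

Because the client alone seeds and parameterizes the split, no entry node can predict or manipulate which fraction of the traffic it observes.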
  • Markus Dahlmanns, Chris Dax, Roman Matzutt, Jan Pennekamp, Jens Hiller, and Klaus Wehrle. Privacy-Preserving Remote Knowledge System. In Proceedings of the 27th IEEE International Conference on Network Protocols (ICNP ’19), 10 2019.
    [BibTeX] [Abstract] [DOI] [PDF]
    More and more traditional services, such as malware detectors or collaboration services in industrial scenarios, move to the cloud. However, this behavior poses a risk for the privacy of clients since these services are able to generate profiles containing very sensitive information, e.g., vulnerability information or collaboration partners. Hence, a rising need for protocols that enable clients to obtain knowledge without revealing their requests exists. To address this issue, we propose a protocol that enables clients (i) to query large cloud-based knowledge systems in a privacy-preserving manner using Private Set Intersection and (ii) to subsequently obtain individual knowledge items without leaking the client’s requests via few Oblivious Transfers. With our preliminary design, we allow clients to save a significant amount of time in comparison to performing Oblivious Transfers only.
    @inproceedings{DDM+19,
    author = {Dahlmanns, Markus and Dax, Chris and Matzutt, Roman and Pennekamp, Jan and Hiller, Jens and Wehrle, Klaus},
    title = {{Privacy-Preserving Remote Knowledge System}},
    booktitle = {Proceedings of the 27th IEEE International Conference on Network Protocols (ICNP '19)},
    year = {2019},
    month = {10},
    doi = {10.1109/ICNP.2019.8888121},
    abstract = {More and more traditional services, such as malware detectors or collaboration services in industrial scenarios, move to the cloud. However, this behavior poses a risk for the privacy of clients since these services are able to generate profiles containing very sensitive information, e.g., vulnerability information or collaboration partners. Hence, a rising need for protocols that enable clients to obtain knowledge without revealing their requests exists. To address this issue, we propose a protocol that enables clients (i) to query large cloud-based knowledge systems in a privacy-preserving manner using Private Set Intersection and (ii) to subsequently obtain individual knowledge items without leaking the client's requests via few Oblivious Transfers. With our preliminary design, we allow clients to save a significant amount of time in comparison to performing Oblivious Transfers only.},
    meta = {},
    }
  • Jan Pennekamp, Martin Henze, Oliver Hohlfeld, and Andriy Panchenko. Hi Doppelgänger: Towards Detecting Manipulation in News Comments. In Companion Proceedings of the 2019 World Wide Web Conference (WWW ’19 Companion), 4th Workshop on Computational Methods in Online Misbehavior (CyberSafety ’19), 05 2019.
    [BibTeX] [Abstract] [DOI] [PDF]
    Public opinion manipulation is a serious threat to society, potentially influencing elections and the political situation even in established democracies. The prevalence of online media and the opportunity for users to express opinions in comments magnifies the problem. Governments, organizations, and companies can exploit this situation for biasing opinions. Typically, they deploy a large number of pseudonyms to create an impression of a crowd that supports specific opinions. Side channel information (such as IP addresses or identities of browsers) often allows a reliable detection of pseudonyms managed by a single person. However, while spoofing and anonymizing data that links these accounts is simple, a linking without is very challenging. In this paper, we evaluate whether stylometric features allow a detection of such doppelgängers within comment sections on news articles. To this end, we adapt a state-of-the-art doppelgängers detector to work on small texts (such as comments) and apply it on three popular news sites in two languages. Our results reveal that detecting potential doppelgängers based on linguistics is a promising approach even when no reliable side channel information is available. Preliminary results following an application in the wild shows indications for doppelgängers in real world data sets.
    @inproceedings{PHHP19,
    author = {Pennekamp, Jan and Henze, Martin and Hohlfeld, Oliver and Panchenko, Andriy},
    title = {{Hi Doppelg{\"a}nger: Towards Detecting Manipulation in News Comments}},
    booktitle = {Companion Proceedings of the 2019 World Wide Web Conference (WWW '19 Companion), 4th Workshop on Computational Methods in Online Misbehavior (CyberSafety '19)},
    year = {2019},
    month = {05},
    doi = {10.1145/3308560.3316496},
    abstract = {Public opinion manipulation is a serious threat to society, potentially influencing elections and the political situation even in established democracies. The prevalence of online media and the opportunity for users to express opinions in comments magnifies the problem. Governments, organizations, and companies can exploit this situation for biasing opinions. Typically, they deploy a large number of pseudonyms to create an impression of a crowd that supports specific opinions. Side channel information (such as IP addresses or identities of browsers) often allows a reliable detection of pseudonyms managed by a single person. However, while spoofing and anonymizing data that links these accounts is simple, a linking without is very challenging.
    In this paper, we evaluate whether stylometric features allow a detection of such doppelg{\"a}ngers within comment sections on news articles. To this end, we adapt a state-of-the-art doppelg{\"a}ngers detector to work on small texts (such as comments) and apply it on three popular news sites in two languages. Our results reveal that detecting potential doppelg{\"a}ngers based on linguistics is a promising approach even when no reliable side channel information is available. Preliminary results following an application in the wild shows indications for doppelg{\"a}ngers in real world data sets.},
    meta = {},
    }
  • Jan Pennekamp, René Glebke, Martin Henze, Tobias Meisen, Christoph Quix, Rihan Hai, Lars Gleim, Philipp Niemietz, Maximilian Rudack, Simon Knape, Alexander Epple, Daniel Trauth, Uwe Vroomen, Thomas Bergs, Christian Brecher, Andreas Bührig-Polaczek, Matthias Jarke, and Klaus Wehrle. Towards an Infrastructure Enabling the Internet of Production. In Proceedings of the 2nd IEEE International Conference on Industrial Cyber-Physical Systems (ICPS ’19), 05 2019.
    [BibTeX] [Abstract] [DOI] [PDF]
    New levels of cross-domain collaboration between manufacturing companies throughout the supply chain are anticipated to bring benefits to both suppliers and consumers of products. Enabling a fine-grained sharing and analysis of data among different stakeholders in an automated manner, such a vision of an Internet of Production (IoP) introduces demanding challenges to the communication, storage, and computation infrastructure in production environments. In this work, we present three example cases that would benefit from an IoP (a fine blanking line, a high pressure die casting process, and a connected job shop) and derive requirements that cannot be met by today’s infrastructure. In particular, we identify three orthogonal research objectives: (i) real-time control of tightly integrated production processes to offer seamless low-latency analysis and execution, (ii) storing and processing heterogeneous production data to support scalable data stream processing and storage, and (iii) secure privacy-aware collaboration in production to provide a basis for secure industrial collaboration. Based on a discussion of state-of-the-art approaches for these three objectives, we create a blueprint for an infrastructure acting as an enabler for an IoP.
    @inproceedings{PGH+19,
    author = {Pennekamp, Jan and Glebke, Ren{\'e} and Henze, Martin and Meisen, Tobias and Quix, Christoph and Hai, Rihan and Gleim, Lars and Niemietz, Philipp and Rudack, Maximilian and Knape, Simon and Epple, Alexander and Trauth, Daniel and Vroomen, Uwe and Bergs, Thomas and Brecher, Christian and B{\"u}hrig-Polaczek, Andreas and Jarke, Matthias and Wehrle, Klaus},
    title = {{Towards an Infrastructure Enabling the Internet of Production}},
    booktitle = {Proceedings of the 2nd IEEE International Conference on Industrial Cyber-Physical Systems (ICPS '19)},
    year = {2019},
    month = {05},
    doi = {10.1109/ICPHYS.2019.8780276},
    abstract = {New levels of cross-domain collaboration between manufacturing companies throughout the supply chain are anticipated to bring benefits to both suppliers and consumers of products. Enabling a fine-grained sharing and analysis of data among different stakeholders in an automated manner, such a vision of an Internet of Production (IoP) introduces demanding challenges to the communication, storage, and computation infrastructure in production environments. In this work, we present three example cases that would benefit from an IoP (a fine blanking line, a high pressure die casting process, and a connected job shop) and derive requirements that cannot be met by today's infrastructure. In particular, we identify three orthogonal research objectives: (i) real-time control of tightly integrated production processes to offer seamless low-latency analysis and execution, (ii) storing and processing heterogeneous production data to support scalable data stream processing and storage, and (iii) secure privacy-aware collaboration in production to provide a basis for secure industrial collaboration. Based on a discussion of state-of-the-art approaches for these three objectives, we create a blueprint for an infrastructure acting as an enabler for an IoP.},
    meta = {},
    }

2017

  • Jan Pennekamp, Martin Henze, and Klaus Wehrle. A Survey on the Evolution of Privacy Enforcement on Smartphones and the Road Ahead. Pervasive and Mobile Computing, 42, 12 2017.
    [BibTeX] [Abstract] [DOI] [PDF]
    With the increasing proliferation of smartphones, enforcing privacy of smartphone users becomes evermore important. Nowadays, one of the major privacy challenges is the tremendous amount of permissions requested by applications, which can significantly invade users’ privacy, often without their knowledge. In this paper, we provide a comprehensive review of approaches that can be used to report on applications’ permission usage, tune permission access, contain sensitive information, and nudge users towards more privacy-conscious behavior. We discuss key shortcomings of privacy enforcement on smartphones so far and identify suitable actions for the future.
    @article{PHW17,
    author = {Pennekamp, Jan and Henze, Martin and Wehrle, Klaus},
    title = {{A Survey on the Evolution of Privacy Enforcement on Smartphones and the Road Ahead}},
    journal = {Pervasive and Mobile Computing},
    year = {2017},
    volume = {42},
    publisher = {Elsevier},
    month = {12},
    doi = {10.1016/j.pmcj.2017.09.005},
    issn = {1574-1192},
    abstract = {With the increasing proliferation of smartphones, enforcing privacy of smartphone users becomes evermore important. Nowadays, one of the major privacy challenges is the tremendous amount of permissions requested by applications, which can significantly invade users' privacy, often without their knowledge. In this paper, we provide a comprehensive review of approaches that can be used to report on applications' permission usage, tune permission access, contain sensitive information, and nudge users towards more privacy-conscious behavior. We discuss key shortcomings of privacy enforcement on smartphones so far and identify suitable actions for the future.},
    meta = {},
    }
  • Martin Henze, Jan Pennekamp, David Hellmanns, Erik Mühmer, Jan Henrik Ziegeldorf, Arthur Drichel, and Klaus Wehrle. CloudAnalyzer: Uncovering the Cloud Usage of Mobile Apps. In Proceedings of the 14th EAI International Conference on Mobile and Ubiquitous Systems: Computing, Networking and Services (MobiQuitous), 11 2017.
    [BibTeX] [Abstract] [DOI] [PDF] [CODE]
    Developers of smartphone apps increasingly rely on cloud services for ready-made functionalities, e.g., to track app usage, to store data, or to integrate social networks. At the same time, mobile apps have access to various private information, ranging from users’ contact lists to their precise locations. As a result, app deployment models and data flows have become too complex and entangled for users to understand. We present CloudAnalyzer, a transparency technology that reveals the cloud usage of smartphone apps and hence provides users with the means to reclaim informational self-determination. We apply CloudAnalyzer to study the cloud exposure of 29 volunteers over the course of 19 days. In addition, we analyze the cloud usage of the 5000 most accessed mobile websites as well as 500 popular apps from five different countries. Our results reveal an excessive exposure to cloud services: 90 % of apps use cloud services and 36 % of apps used by volunteers solely communicate with cloud services. Given the information provided by CloudAnalyzer, users can critically review the cloud usage of their apps.
    @inproceedings{HPH+17,
    author = {Henze, Martin and Pennekamp, Jan and Hellmanns, David and M{\"u}hmer, Erik and Ziegeldorf, Jan Henrik and Drichel, Arthur and Wehrle, Klaus},
    title = {{CloudAnalyzer: Uncovering the Cloud Usage of Mobile Apps}},
    booktitle = {Proceedings of the 14th EAI International Conference on Mobile and Ubiquitous Systems: Computing, Networking and Services (MobiQuitous)},
    year = {2017},
    month = {11},
    doi = {10.1145/3144457.3144471},
    abstract = {Developers of smartphone apps increasingly rely on cloud services for ready-made functionalities, e.g., to track app usage, to store data, or to integrate social networks. At the same time, mobile apps have access to various private information, ranging from users' contact lists to their precise locations. As a result, app deployment models and data flows have become too complex and entangled for users to understand. We present CloudAnalyzer, a transparency technology that reveals the cloud usage of smartphone apps and hence provides users with the means to reclaim informational self-determination. We apply CloudAnalyzer to study the cloud exposure of 29 volunteers over the course of 19 days. In addition, we analyze the cloud usage of the 5000 most accessed mobile websites as well as 500 popular apps from five different countries. Our results reveal an excessive exposure to cloud services: 90 % of apps use cloud services and 36 % of apps used by volunteers solely communicate with cloud services. Given the information provided by CloudAnalyzer, users can critically review the cloud usage of their apps.},
    code = {https://github.com/COMSYS/CloudAnalyzer},
    meta = {},
    }
  • Jan Henrik Ziegeldorf, Jan Pennekamp, David Hellmanns, Felix Schwinger, Ike Kunze, Martin Henze, Jens Hiller, Roman Matzutt, and Klaus Wehrle. BLOOM: BLoom filter based Oblivious Outsourced Matchings. BMC Medical Genomics, 10(Suppl 2), 07 2017. Proceedings of the 5th iDASH Privacy and Security Workshop 2016.
    [BibTeX] [Abstract] [DOI] [PDF] [CODE]
    Whole genome sequencing has become fast, accurate, and cheap, paving the way towards the large-scale collection and processing of human genome data. Unfortunately, this dawning genome era does not only promise tremendous advances in biomedical research but also causes unprecedented privacy risks for the many. Handling storage and processing of large genome datasets through cloud services greatly aggravates these concerns. Current research efforts thus investigate the use of strong cryptographic methods and protocols to implement privacy-preserving genomic computations. We propose FHE-Bloom and PHE-Bloom, two efficient approaches for genetic disease testing using homomorphically encrypted Bloom filters. Both approaches allow the data owner to securely outsource storage and computation to an untrusted cloud. FHE-Bloom is fully secure in the semi-honest model while PHE-Bloom slightly relaxes security guarantees in a trade-off for highly improved performance. We implement and evaluate both approaches on a large dataset of up to 50 patient genomes each with up to 1000000 variations (single nucleotide polymorphisms). For both implementations, overheads scale linearly in the number of patients and variations, while PHE-Bloom is faster by at least three orders of magnitude. For example, testing disease susceptibility of 50 patients with 100000 variations requires only a total of 308.31 s (σ=8.73 s) with our first approach and a mere 0.07 s (σ=0.00 s) with the second. We additionally discuss security guarantees of both approaches and their limitations as well as possible extensions towards more complex query types, e.g., fuzzy or range queries. Both approaches handle practical problem sizes efficiently and are easily parallelized to scale with the elastic resources available in the cloud. The fully homomorphic scheme, FHE-Bloom, realizes a comprehensive outsourcing to the cloud, while the partially homomorphic scheme, PHE-Bloom, trades a slight relaxation of security guarantees against performance improvements by at least three orders of magnitude.
    @article{ZPH+17,
    author = {Ziegeldorf, Jan Henrik and Pennekamp, Jan and Hellmanns, David and Schwinger, Felix and Kunze, Ike and Henze, Martin and Hiller, Jens and Matzutt, Roman and Wehrle, Klaus},
    title = {{BLOOM: BLoom filter based Oblivious Outsourced Matchings}},
    journal = {BMC Medical Genomics},
    year = {2017},
    volume = {10},
    number = {Suppl 2},
    month = {07},
    doi = {10.1186/s12920-017-0277-y},
    issn = {1755-8794},
    note = {Proceedings of the 5th iDASH Privacy and Security Workshop 2016},
    abstract = {Whole genome sequencing has become fast, accurate, and cheap, paving the way towards the large-scale collection and processing of human genome data. Unfortunately, this dawning genome era does not only promise tremendous advances in biomedical research but also causes unprecedented privacy risks for the many. Handling storage and processing of large genome datasets through cloud services greatly aggravates these concerns. Current research efforts thus investigate the use of strong cryptographic methods and protocols to implement privacy-preserving genomic computations.
    We propose FHE-Bloom and PHE-Bloom, two efficient approaches for genetic disease testing using homomorphically encrypted Bloom filters. Both approaches allow the data owner to securely outsource storage and computation to an untrusted cloud. FHE-Bloom is fully secure in the semi-honest model while PHE-Bloom slightly relaxes security guarantees in a trade-off for highly improved performance.
    We implement and evaluate both approaches on a large dataset of up to 50 patient genomes each with up to 1000000 variations (single nucleotide polymorphisms). For both implementations, overheads scale linearly in the number of patients and variations, while PHE-Bloom is faster by at least three orders of magnitude. For example, testing disease susceptibility of 50 patients with 100000 variations requires only a total of 308.31 s (σ=8.73 s) with our first approach and a mere 0.07 s (σ=0.00 s) with the second. We additionally discuss security guarantees of both approaches and their limitations as well as possible extensions towards more complex query types, e.g., fuzzy or range queries.
    Both approaches handle practical problem sizes efficiently and are easily parallelized to scale with the elastic resources available in the cloud. The fully homomorphic scheme, FHE-Bloom, realizes a comprehensive outsourcing to the cloud, while the partially homomorphic scheme, PHE-Bloom, trades a slight relaxation of security guarantees against performance improvements by at least three orders of magnitude.},
    code = {https://github.com/COMSYS/bloom},
    meta = {},
    }
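The data structure underlying both schemes is a plain Bloom filter; the sketch below shows only that building block in unencrypted form. The paper's FHE-Bloom and PHE-Bloom schemes additionally encrypt the filter homomorphically, which is not shown here; class and parameter names are illustrative.

```python
import hashlib

class BloomFilter:
    """Minimal Bloom filter: a bit array plus k hash positions per item.

    Membership tests have no false negatives but may yield false
    positives, which the paper's matching protocols must account for.
    """

    def __init__(self, size=1024, num_hashes=4):
        self.size = size
        self.num_hashes = num_hashes
        self.bits = [0] * size

    def _positions(self, item):
        # Derive k positions from k independent digests of the item.
        for i in range(self.num_hashes):
            digest = hashlib.sha256(f"{i}:{item}".encode()).hexdigest()
            yield int(digest, 16) % self.size

    def add(self, item):
        for pos in self._positions(item):
            self.bits[pos] = 1

    def __contains__(self, item):
        # An item is (probably) present iff all k positions are set.
        return all(self.bits[pos] for pos in self._positions(item))
```

Encoding a patient's variations into such a filter reduces disease testing to bitwise operations, which is what makes the homomorphic evaluation in FHE-Bloom/PHE-Bloom tractable.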

2016

  • Andriy Panchenko, Fabian Lanze, Andreas Zinnen, Martin Henze, Jan Pennekamp, Klaus Wehrle, and Thomas Engel. Website Fingerprinting at Internet Scale. In Proceedings of the 23rd Annual Network and Distributed System Security Symposium (NDSS ’16), 02 2016.
    [BibTeX] [Abstract] [DOI] [PDF] [CODE]
    The website fingerprinting attack aims to identify the content (i.e., a webpage accessed by a client) of encrypted and anonymized connections by observing patterns of data flows such as packet size and direction. This attack can be performed by a local passive eavesdropper – one of the weakest adversaries in the attacker model of anonymization networks such as Tor. In this paper, we present a novel website fingerprinting attack. Based on a simple and comprehensible idea, our approach outperforms all state-of-the-art methods in terms of classification accuracy while being computationally dramatically more efficient. In order to evaluate the severity of the website fingerprinting attack in reality, we collected the most representative dataset that has ever been built, where we avoid simplified assumptions made in the related work regarding selection and type of webpages and the size of the universe. Using this data, we explore the practical limits of website fingerprinting at Internet scale. Although our novel approach is by orders of magnitude computationally more efficient and superior in terms of detection accuracy, for the first time we show that no existing method – including our own – scales when applied in realistic settings. With our analysis, we explore neglected aspects of the attack and investigate the realistic probability of success for different strategies a real-world adversary may follow.
    @inproceedings{PLZ+16,
    author = {Panchenko, Andriy and Lanze, Fabian and Zinnen, Andreas and Henze, Martin and Pennekamp, Jan and Wehrle, Klaus and Engel, Thomas},
    title = {{Website Fingerprinting at Internet Scale}},
    booktitle = {Proceedings of the 23rd Annual Network and Distributed System Security Symposium (NDSS '16)},
    year = {2016},
    month = {02},
    doi = {10.14722/ndss.2016.23477},
    abstract = {The website fingerprinting attack aims to identify the content (i.e., a webpage accessed by a client) of encrypted and anonymized connections by observing patterns of data flows such as packet size and direction. This attack can be performed by a local passive eavesdropper - one of the weakest adversaries in the attacker model of anonymization networks such as Tor.
    In this paper, we present a novel website fingerprinting attack. Based on a simple and comprehensible idea, our approach outperforms all state-of-the-art methods in terms of classification accuracy while being computationally dramatically more efficient. In order to evaluate the severity of the website fingerprinting attack in reality, we collected the most representative dataset that has ever been built, where we avoid simplified assumptions made in the related work regarding selection and type of webpages and the size of the universe. Using this data, we explore the practical limits of website fingerprinting at Internet scale. Although our novel approach is by orders of magnitude computationally more efficient and superior in terms of detection accuracy, for the first time we show that no existing method - including our own - scales when applied in realistic settings. With our analysis, we explore neglected aspects of the attack and investigate the realistic probability of success for different strategies a real-world adversary may follow.},
    code = {https://www.informatik.tu-cottbus.de/~andriy/zwiebelfreunde/},
    meta = {},
    }
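The attack's core feature representation can be sketched as follows: the signed sizes of observed packets are accumulated into a cumulative curve, which is then resampled to a fixed number of points to serve as a classifier input. This is a simplified illustration in the spirit of the paper's CUMUL classifier; the function name, the resampling details, and the exact feature layout are assumptions, not the reference implementation linked above.

```python
def cumul_features(packet_sizes, n_features=100):
    """Map a trace of signed packet sizes (positive = outgoing,
    negative = incoming) to a fixed-length feature vector by linearly
    interpolating its cumulative-sum curve at equidistant points."""
    # Running cumulative sum of the signed packet sizes.
    cumulative = []
    total = 0
    for size in packet_sizes:
        total += size
        cumulative.append(total)
    # Resample the curve at n_features equidistant points so that
    # traces of different lengths become comparable vectors.
    features = []
    last = len(cumulative) - 1
    for i in range(n_features):
        x = i * last / (n_features - 1)
        lo = int(x)
        hi = min(lo + 1, last)
        frac = x - lo
        features.append(cumulative[lo] * (1 - frac) + cumulative[hi] * frac)
    return features
```

The resulting fixed-length vectors can then be fed to a standard classifier (the paper uses an SVM) to match an observed trace against known websites.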

University papers:

  • Master Thesis: Uncovering Doppelgängers in Online Communities
    Advised by Dr. Andriy Panchenko [1] (SecanLab, University of Luxembourg) & Dr. Oliver Hohlfeld [2] (COMSYS, RWTH Aachen University)
  • Seminar: Challenges for Privacy Enforcing on Smartphones *2nd best paper* (COMSYS, RWTH Aachen University)
    [Journal-Submission]
  • Seminar: MOOCs and Authentication *Best Paper Award* (School of Science, Aalto University)
    [Proceedings]
  • Bachelor Thesis: Evaluating Website Fingerprinting Attacks in Real-World Settings
    Advised by Dr. Andriy Panchenko [1] (SecanLab, University of Luxembourg) & Martin Henze [3] (COMSYS, RWTH Aachen University)

Code collaboration (until 05/2018):

  • CloudAnalyzer (COMSYS, RWTH Aachen University)
    [APK] [Code]
    Used in, among others:
    – “CloudAnalyzer: Uncovering the Cloud Usage of Mobile Apps” Proceedings of the 14th EAI International Conference on Mobile and Ubiquitous Systems: Computing, Networking and Services (MobiQuitous 2017)
    – “Privacy-preserving Comparison of Cloud Exposure Induced by Mobile Apps” Proceedings of the 14th EAI International Conference on Mobile and Ubiquitous Systems: Computing, Networking and Services (MobiQuitous 2017)
  • MailAnalyzer (COMSYS, RWTH Aachen University)
    [Code]
    Used in “Veiled in Clouds? Assessing the Prevalence of Cloud Computing in the Email Landscape”
    Proceedings of the 2017 Network Traffic Measurement and Analysis Conference (TMA 2017)
  • Secure Genome Outsourcing (COMSYS, RWTH Aachen University)
    [Code]
    Used in “BLOOM: BLoom filter based Oblivious Outsourced Matchings”
    BMC Medical Genomics, Volume 10, Suppl 2, July 2017.
  • Website Fingerprinting Toolkit (SecanLab, University of Luxembourg)
    [Code] Only a subset, the CUMUL classifier, is publicly available.
    Used in, among others:
    – “Analysis of Fingerprinting Techniques for Tor Hidden Services” Proceedings of the 16th Workshop on Privacy in the Electronic Society (WPES 2017)
    – “POSTER: Fingerprinting Tor Hidden Services” Proceedings of the 23rd ACM Conference on Computer and Communications Security (CCS 2016)
  • TBA

1. [Now full professor for IT Security at Brandenburg University of Technology]
2. [Now full professor for Computer Networks and Communication Systems at Brandenburg University of Technology]
3. [Now junior professor for Secure Industrial Data Exchange at RWTH Aachen University and postdoctoral researcher at Cyber Analysis & Defense, Fraunhofer FKIE]