Publications
Additional references are available on Google Scholar and DBLP.
2025
- Christian van Sloun, Vincent Woeste, Konrad Wolsing, Jan Pennekamp, and Klaus Wehrle. Detecting Ransomware Despite I/O Overhead: A Practical Multi-Staged Approach. In Proceedings of the 22nd Annual Network and Distributed System Security Symposium (NDSS ’25), 02 2025.
[BibTeX]
2024
- Johannes Lohmöller, Roman Matzutt, Joscha Loos, Eduard Vlad, Jan Pennekamp, and Klaus Wehrle. Complementing Organizational Security in Data Ecosystems with Technical Guarantees. In Proceedings of the 1st Conference on Building a Secure and Empowered Cyberspace (BuildSEC ’24), 12 2024.
[BibTeX] [Abstract] [PDF]
Federated data ecosystems continue to emerge to connect previously isolated data silos across organizational boundaries over the Internet. These platforms aim to facilitate data sharing while maintaining data sovereignty, which is supposed to empower data owners to retain control over their data. However, the employed organizational security measures, such as policy-enforcing middleware besides software certification, processes, and employees, are insufficient to provide reliable guarantees against malicious insiders. This paper thus proposes a corresponding technical solution for federated platforms that builds on communication between Trusted Execution Environments (TEEs) and demonstrates the feasibility of technically enforceable data protection. Specifically, we provide dependable guarantees for data owners formulated via rich policies while maintaining usability as a general-purpose data exchange platform. Further, by evaluating a real-world use case that concerns sharing sensitive genomic data, we demonstrate its real-world suitability. Our findings emphasize the potential of TEEs in establishing trust and increasing data security for federated data scenarios far beyond a single use case.
- Johannes Lohmöller, Jannis Scheiber, Rafael Kramann, Klaus Wehrle, Sikander Hayat, and Jan Pennekamp. scE(match): Privacy-Preserving Cluster Matching of Single-Cell Data. In Proceedings of the 23rd IEEE International Conference on Trust, Security and Privacy in Computing and Communications (TrustCom ’24), International Workshop on AI-Driven Trust, Security and Privacy in Computer Networks (AI-Driven TSP ’24), 12 2024.
[BibTeX]
- Markus Dahlmanns, Jan Pennekamp, Robin Decker, and Klaus Wehrle. LUA-IoT: Let’s Usably Authenticate the IoT. In Proceedings of the 27th Annual International Conference on Information Security and Cryptology (ICISC ’24), 11 2024.
[BibTeX] [Abstract] [PDF]
Following the advent of the Internet of Things (IoT), users and their devices transmit sensitive data over the Internet. For the Web, Let’s Encrypt offers a usable foundation to safeguard such data by straightforwardly issuing certificates. However, its approach is not directly applicable to the IoT as deployments lack a (dedicated) domain or miss essentials to prove domain ownership required for Let’s Encrypt. Thus, a usable approach to secure IoT deployments by properly authenticating IoT devices is missing. To close this research gap, we propose LUA-IoT, our framework to Let’s Usably Authenticate the IoT. LUA-IoT enables autonomous certificate enrollment by orienting at the success story of Let’s Encrypt, seamlessly integrating in the setup process of modern IoT devices, and relying on process steps that users already know from other domains. In the end, LUA-IoT binds the authenticity of IoT deployments to a globally valid user identifier, e.g., an email address, that is included in certificates directly issued to the IoT deployments. We exemplarily implement LUA-IoT to show that it is realizable on commodity IoT hardware and conduct a small user study indicating that LUA-IoT indeed nudges users to safeguard their devices and data (transmissions).
- Johannes Lohmöller, Jan Pennekamp, and Klaus Wehrle. Toward Technically Enforceable Consent in Healthcare Research. In Proceedings of the 9th Privacy Platform (Interdisciplinary Annual Conference) 2024, 10 2024.
[BibTeX] [DOI] [PDF]
- Markus Dahlmanns, Felix Heidenreich, Johannes Lohmöller, Jan Pennekamp, Klaus Wehrle, and Martin Henze. Unconsidered Installations: Discovering IoT Deployments in the IPv6 Internet. In Proceedings of the 2024 IEEE/IFIP Network Operations and Management Symposium (NOMS ’24), 05 2024.
[BibTeX] [Abstract] [DOI] [PDF]
Internet-wide studies provide extremely valuable insight into how operators manage their Internet of Things (IoT) deployments in reality and often reveal grievances, e.g., significant security issues. However, while IoT devices often use IPv6, past studies resorted to comprehensively scan the IPv4 address space. To fully understand how the IoT and all its services and devices is operated, including IPv6-reachable deployments is inevitable, although scanning the entire IPv6 address space is infeasible. In this paper, we close this gap and examine how to best discover IPv6-reachable IoT deployments. To this end, we propose a methodology that allows combining various IPv6 scan direction approaches to understand the findability and prevalence of IPv6-reachable IoT deployments. Using three sources of active IPv6 addresses and eleven address generators, we discovered 6658 IoT deployments. We derive that the available address sources are a good starting point for finding IoT deployments. Additionally, we show that using two address generators is sufficient to cover most found deployments and save time as well as resources. Assessing the security of the deployments, we surprisingly find similar issues as in the IPv4 Internet, although IPv6 deployments might be newer and generally more up-to-date: Only 39% of deployments have access control in place and only 6.2% make use of TLS, inviting attackers, e.g., to eavesdrop sensitive data.
- Jan Pennekamp. Evolving the Industrial Internet of Things: The Advent of Secure Collaborations. In Proceedings of the 2024 IEEE/IFIP Network Operations and Management Symposium (NOMS ’24), 05 2024.
[BibTeX] [Abstract] [DOI] [PDF]
The Industrial Internet of Things (IIoT) leads to increasingly-interconnected industrial processes and environments, which, in turn, result in stakeholders collecting a plethora of information. Even though the global sharing of information and industrial collaborations in the IIoT promise significant improvements concerning productivity, sustainability, and product quality, among others, the majority of stakeholders is hesitant to implement them due to confidentiality and reliability concerns. However, strong technical guarantees could convince them of the contrary. Thus, to address these concerns, our interdisciplinary efforts focus on establishing and realizing secure industrial collaborations in the IIoT. By applying private computing, we are indeed able to reliably secure collaborations that not only scale to industry-sized applications but also allow for use case-specific confidentiality guarantees. Hence, improvements that follow from industrial collaborations with (strong) technical guarantees are within reach, even when dealing with cautious stakeholders. Still, until we can fully exploit these benefits, several challenges remain, primarily regarding collaboration management, introduced overhead, interoperability, and universality of proposed protocols.
- Johannes Lohmöller, Jan Pennekamp, Roman Matzutt, Carolin Victoria Schneider, Eduard Vlad, Christian Trautwein, and Klaus Wehrle. The Unresolved Need for Dependable Guarantees on Security, Sovereignty, and Trust in Data Ecosystems. Data & Knowledge Engineering, 151, 05 2024.
[BibTeX] [Abstract] [DOI] [PDF]
Data ecosystems emerged as a new paradigm to facilitate the automated and massive exchange of data from heterogeneous information sources between different stakeholders. However, the corresponding benefits come with unforeseen risks as sensitive information is potentially exposed, questioning their reliability. Consequently, data security is of utmost importance and, thus, a central requirement for successfully realizing data ecosystems. Academia has recognized this requirement, and current initiatives foster sovereign participation via a federated infrastructure where participants retain local control over what data they offer to whom. However, recent proposals place significant trust in remote infrastructure by implementing organizational security measures such as certification processes before the admission of a participant. At the same time, the data sensitivity incentivizes participants to bypass the organizational security measures to maximize their benefit. This issue significantly weakens security, sovereignty, and trust guarantees and highlights that organizational security measures are insufficient in this context. In this paper, we argue that data ecosystems must be extended with technical means to (re)establish dependable guarantees. We underpin this need with three representative use cases for data ecosystems, which cover personal, economic, and governmental data, and systematically map the lack of dependable guarantees in related work. To this end, we identify three enablers of dependable guarantees, namely trusted remote policy enforcement, verifiable data tracking, and integration of resource-constrained participants. These enablers are critical for securely implementing data ecosystems in data-sensitive contexts.
- Jan Pennekamp. Secure Collaborations for the Industrial Internet of Things. PhD thesis, RWTH Aachen University, 04 2024.
[BibTeX] [Abstract] [PDF]
The Industrial Internet of Things (IIoT) is leading to increasingly-interconnected and networked industrial processes and environments, which, in turn, results in stakeholders gathering vast amounts of information. Although the global sharing of information and industrial collaborations in the IIoT promise to enhance productivity, sustainability, and product quality, among other benefits, most information is still commonly encapsulated in local information silos. In addition to interoperability issues, confidentiality concerns of involved stakeholders remain the main obstacle to fully realizing these improvements in practice as they largely hinder real-world industrial collaborations today. Therefore, this dissertation addresses this mission-critical research gap. Since existing approaches to privacy-preserving information sharing are not scalable to industry-sized applications in the IIoT, we present solutions that enable secure collaborations in the IIoT while providing technical (confidentiality) guarantees to the involved stakeholders. Our research is crucial (i) for demonstrating the potential and added value of (secure) collaborations and (ii) for convincing cautious stakeholders of the usefulness and benefits of technical building blocks, enabling reliable sharing of confidential information, even among direct competitors. Our interdisciplinary research thus focuses on establishing and realizing secure industrial collaborations in the IIoT. In this regard, we study two overarching angles of collaborations in detail. First, we distinguish between collaborations along and across supply chains, with the former type entailing more relaxed confidentiality requirements. Second, whether or not collaborators know each other in advance implies different levels of trust and requires different technical guarantees. We rely on well-established building blocks from private computing (i.e., privacy-preserving computation and confidential computing) to reliably realize secure collaborations. We thoroughly evaluate each of our designs, using multiple real-world use cases from production technology, to prove their practical feasibility for the IIoT. By applying private computing, we are indeed able to secure collaborations that not only scale to industry-sized applications but also allow for use case-specific configurations of confidentiality guarantees. In this dissertation, we use well-established building blocks to assemble novel solutions with technical guarantees for all types of collaborations (along and across supply chains as well as with known or unknown collaborators). Finally, on the basis of our experience with engineers, we have derived a research methodology for future use that structures the process of interdisciplinary development and evaluation of secure collaborations in the evolving IIoT. Overall, given the aforementioned improvements, our research should greatly contribute to convincing even cautious stakeholders to participate in (reliably-secured) industrial collaborations. Our work is an essential first step toward establishing widespread information sharing among stakeholders in the IIoT.
We further conclude: (i) collaborations can be reliably secured, and we can even provide technical guarantees while doing so; (ii) building blocks from private computing scale to industrial applications and satisfy the outlined confidentiality needs; (iii) improvements resulting from industrial collaborations are within reach, even when dealing with cautious stakeholders; and (iv) the interdisciplinary development of sophisticated yet appropriate designs for use case-driven secure collaborations can succeed in practice.
- Jan Pennekamp, Lennart Bader, Eric Wagner, Jens Hiller, Roman Matzutt, and Klaus Wehrle. Blockchain Technology Accelerating Industry 4.0. In Blockchains: A Handbook on Fundamentals, Platforms and Applications, volume 105 of Advances in Information Security, pages 531–564. Springer, 03 2024.
[BibTeX] [Abstract] [DOI]
Competitive industrial environments impose significant requirements on data sharing as well as the accountability and verifiability of related processes. Here, blockchain technology emerges as a possible driver that satisfies demands even in settings with mutually distrustful stakeholders. We identify significant benefits achieved by blockchain technology for Industry 4.0 but also point out challenges and corresponding design options when applying blockchain technology in the industrial domain. Furthermore, we survey diverse industrial sectors to shed light on the current intersection between blockchain technology and industry, which provides the foundation for ongoing as well as upcoming research. As industrial blockchain applications are still in their infancy, we expect that new designs and concepts will develop gradually, creating both supporting tools and groundbreaking innovations.
- Jan Pennekamp, Roman Matzutt, Christopher Klinkmüller, Lennart Bader, Martin Serror, Eric Wagner, Sidra Malik, Maria Spiß, Jessica Rahn, Tan Gürpinar, Eduard Vlad, Sander J. J. Leemans, Salil S. Kanhere, Volker Stich, and Klaus Wehrle. An Interdisciplinary Survey on Information Flows in Supply Chains. ACM Computing Surveys, 56(2), 02 2024.
[BibTeX] [Abstract] [DOI] [PDF]
Supply chains form the backbone of modern economies and therefore require reliable information flows. In practice, however, supply chains face severe technical challenges, especially regarding security and privacy. In this work, we consolidate studies from supply chain management, information systems, and computer science from 2010–2021 in an interdisciplinary meta-survey to make this topic holistically accessible to interdisciplinary research. In particular, we identify a significant potential for computer scientists to remedy technical challenges and improve the robustness of information flows. We subsequently present a concise information flow-focused taxonomy for supply chains before discussing future research directions to provide possible entry points.
- Jan Pennekamp, Fritz Alder, Lennart Bader, Gianluca Scopelliti, Klaus Wehrle, and Jan Tobias Mühlberg. Securing Sensing in Supply Chains: Opportunities, Building Blocks, and Designs. IEEE Access, 12, 01 2024.
[BibTeX] [Abstract] [DOI] [PDF] [CODE]
Supply chains increasingly develop toward complex networks, both technically in terms of devices and connectivity, and also anthropogenic with a growing number of actors. The lack of mutual trust in such networks results in challenges that are exacerbated by stringent requirements for shipping conditions or quality, and where actors may attempt to reduce costs or cover up incidents. In this paper, we develop and comprehensively study four scenarios that eventually lead to end-to-end-secured sensing in complex IoT-based supply chains with many mutually distrusting actors, while highlighting relevant pitfalls and challenges – details that are still missing in related work. Our designs ensure that sensed data is securely transmitted and stored, and can be verified by all parties. To prove practical feasibility, we evaluate the most elaborate design with regard to performance, cost, deployment, and also trust implications on the basis of prevalent (mis)use cases. Our work enables a notion of secure end-to-end sensing with minimal trust across the system stack, even for complex and opaque supply chain networks.
2023
- Roman Matzutt, Jan Pennekamp, and Klaus Wehrle. Poster: Accountable Processing of Reported Street Problems. In Proceedings of the 30th ACM SIGSAC Conference on Computer and Communications Security (CCS ’23), 11 2023.
[BibTeX] [Abstract] [DOI] [PDF]
Municipalities increasingly depend on citizens to file digital reports about issues such as potholes or illegal trash dumps to improve their response time. However, the responsible authorities may be incentivized to ignore certain reports, e.g., when addressing them inflicts high costs. In this work, we explore the applicability of blockchain technology to hold authorities accountable regarding filed reports. Our initial assessment indicates that our approach can be extended to benefit citizens and authorities in the future.
- Jan Pennekamp, Markus Dahlmanns, Frederik Fuhrmann, Timo Heutmann, Alexander Kreppein, Dennis Grunert, Christoph Lange, Robert H. Schmitt, and Klaus Wehrle. Offering Two-Way Privacy for Evolved Purchase Inquiries. ACM Transactions on Internet Technology, 23(4), 11 2023.
[BibTeX] [Abstract] [DOI] [PDF] [CODE]
Dynamic and flexible business relationships are expected to become more important in the future to accommodate specialized change requests or small-batch production. Today, buyers and sellers must disclose sensitive information on products upfront before the actual manufacturing. However, without a trust relation, this situation is precarious for the involved companies as they fear for their competitiveness. Related work overlooks this issue so far: Existing approaches protect the information of a single party only, hindering dynamic and on-demand business relationships. To account for the corresponding research gap of inadequately privacy-protected information and to deal with companies without an established trust relation, we pursue the direction of innovative privacy-preserving purchase inquiries that seamlessly integrate into today’s established supplier management and procurement processes. Utilizing well-established building blocks from private computing, such as private set intersection and homomorphic encryption, we propose two designs with slightly different privacy and performance implications to securely realize purchase inquiries over the Internet. In particular, we allow buyers to consider more potential sellers without sharing sensitive information and relieve sellers of the burden of repeatedly preparing elaborate yet discarded offers. We demonstrate our approaches’ scalability using two real-world use cases from the domain of production technology. Overall, we present deployable designs that offer two-way privacy for purchase inquiries and, in turn, fill a gap that currently hinders establishing dynamic and flexible business relationships. In the future, we expect significantly increasing research activity in this overlooked area to address the needs of an evolving production landscape.
- Lennart Bader, Jan Pennekamp, Emildeon Thevaraj, Maria Spiß, Salil S. Kanhere, and Klaus Wehrle. Reputation Systems for Supply Chains: The Challenge of Achieving Privacy Preservation. In Proceedings of the 20th EAI International Conference on Mobile and Ubiquitous Systems: Computing, Networking and Services (MobiQuitous ’23), 11 2023.
[BibTeX] [Abstract] [DOI] [PDF]
Consumers frequently interact with reputation systems to rate products, services, and deliveries. While past research extensively studied different conceptual approaches to realize such systems securely and privacy-preservingly, these concepts are not yet in use in business-to-business environments. In this paper, (1) we thus outline which specific challenges privacy-cautious stakeholders in volatile supply chain networks introduce, (2) give an overview of the diverse landscape of privacy-preserving reputation systems and their properties, and (3) based on well-established concepts from supply chain information systems and cryptography, we further propose an initial concept that accounts for the aforementioned challenges by utilizing fully homomorphic encryption. For future work, we identify the need of evaluating whether novel systems address the supply chain-specific privacy and confidentiality needs.
- Olav Lamberts, Konrad Wolsing, Eric Wagner, Jan Pennekamp, Jan Bauer, Klaus Wehrle, and Martin Henze. SoK: Evaluations in Industrial Intrusion Detection Research. Journal of Systems Research, 3(1), 10 2023.
[BibTeX] [Abstract] [DOI] [PDF] [CODE]
Industrial systems are increasingly threatened by cyberattacks with potentially disastrous consequences. To counter such attacks, industrial intrusion detection systems strive to timely uncover even the most sophisticated breaches. Due to its criticality for society, this fast-growing field attracts researchers from diverse backgrounds, resulting in 130 new detection approaches in 2021 alone. This huge momentum facilitates the exploration of diverse promising paths but likewise risks fragmenting the research landscape and burying promising progress. Consequently, it needs sound and comprehensible evaluations to mitigate this risk and catalyze efforts into sustainable scientific progress with real-world applicability. In this paper, we therefore systematically analyze the evaluation methodologies of this field to understand the current state of industrial intrusion detection research. Our analysis of 609 publications shows that the rapid growth of this research field has positive and negative consequences. While we observe an increased use of public datasets, publications still only evaluate 1.3 datasets on average, and frequently used benchmarking metrics are ambiguous. At the same time, the adoption of newly developed benchmarking metrics sees little advancement. Finally, our systematic analysis enables us to provide actionable recommendations for all actors involved and thus bring the entire research field forward.
- Niklas Hauser and Jan Pennekamp. Tool: Automatically Extracting Hardware Descriptions from PDF Technical Documentation. Journal of Systems Research, 3(1), 10 2023.
[BibTeX] [Abstract] [DOI] [PDF] [CODE]
The ever-increasing variety of microcontrollers aggravates the challenge of porting embedded software to new devices through much manual work, whereas code generators can be used only in special cases. Moreover, only little technical documentation for these devices is available in machine-readable formats that could facilitate automating porting efforts. Instead, the bulk of documentation comes as print-oriented PDFs. We hence identify a strong need for a processor to access the PDFs and extract their data with a high quality to improve the code generation for embedded software. In this paper, we design and implement a modular processor for extracting detailed datasets from PDF files containing technical documentation using deterministic table processing for thousands of microcontrollers. Namely, we systematically extract device identifiers, interrupt tables, package and pinouts, pin functions, and register maps. In our evaluation, we compare the documentation from STMicro against existing machine-readable sources. Our results show that our processor matches 96.5% of almost 6 million reference data points, and we further discuss identified issues in both sources. Hence, our tool yields very accurate data with only limited manual effort and can enable and enhance a significant amount of existing and new code generation use cases in the embedded software domain that are currently limited by a lack of machine-readable data sources.
- Konrad Wolsing, Dominik Kus, Eric Wagner, Jan Pennekamp, Klaus Wehrle, and Martin Henze. One IDS is not Enough! Exploring Ensemble Learning for Industrial Intrusion Detection. In Proceedings of the 28th European Symposium on Research in Computer Security (ESORICS ’23), 09 2023.
[BibTeX] [Abstract] [DOI] [PDF] [CODE]
Industrial Intrusion Detection Systems (IIDSs) play a critical role in safeguarding Industrial Control Systems (ICSs) against targeted cyberattacks. Unsupervised anomaly detectors, capable of learning the expected behavior of physical processes, have proven effective in detecting even novel cyberattacks. While offering decent attack detection, these systems, however, still suffer from too many False-Positive Alarms (FPAs) that operators need to investigate, eventually leading to alarm fatigue. To address this issue, in this paper, we challenge the notion of relying on a single IIDS and explore the benefits of combining multiple IIDSs. To this end, we examine the concept of ensemble learning, where a collection of classifiers (IIDSs in our case) are combined to optimize attack detection and reduce FPAs. While training ensembles for supervised classifiers is relatively straightforward, retaining the unsupervised nature of IIDSs proves challenging. In that regard, novel time-aware ensemble methods that incorporate temporal correlations between alerts and transfer-learning to best utilize the scarce training data constitute viable solutions. By combining diverse IIDSs, the detection performance can be improved beyond the individual approaches with close to no FPAs, resulting in a promising path for strengthening ICS cybersecurity.
- Matthias Bodenbenner, Jan Pennekamp, Benjamin Montavon, Klaus Wehrle, and Robert H. Schmitt. FAIR Sensor Ecosystem: Long-Term (Re-)Usability of FAIR Sensor Data through Contextualization. In Proceedings of the 21st IEEE International Conference on Industrial Informatics (INDIN ’23), 07 2023.
[BibTeX] [Abstract] [PDF]
The long-term utility and reusability of measurement data from production processes depend on the appropriate contextualization of the measured values. These requirements further mandate that modifications to the context need to be recorded. To be (re-)used at all, the data must be easily findable in the first place, which requires arbitrary filtering and searching routines. Following the FAIR guiding principles, fostering findable, accessible, interoperable and reusable (FAIR) data, in this paper, the FAIR Sensor Ecosystem is proposed, which provides a contextualization middleware based on a unified data metamodel. All information and relations which might change over time are versioned and associated with temporal validity intervals to enable full reconstruction of a system’s state at any point in time. A technical validation demonstrates the correctness of the FAIR Sensor Ecosystem, including its contextualization model and filtering techniques. State-of-the-art FAIRness assessment frameworks rate the proposed FAIR Sensor Ecosystem with an average FAIRness of 71%. The obtained rating can be considered remarkable, as deductions mainly result from the lack of fully appropriate FAIRness metrics and the absence of relevant community standards for the domain of the manufacturing industry.
@inproceedings{BPM+23, author = {Bodenbenner, Matthias and Pennekamp, Jan and Montavon, Benjamin and Wehrle, Klaus and Schmitt, Robert H.}, title = {{FAIR Sensor Ecosystem: Long-Term (Re-)Usability of FAIR Sensor Data through Contextualization}}, booktitle = {Proceedings of the 21st IEEE International Conference on Industrial Informatics (INDIN '23)}, year = {2023}, month = {07}, abstract = {The long-term utility and reusability of measurement data from production processes depend on the appropriate contextualization of the measured values. These requirements further mandate that modifications to the context need to be recorded. To be (re-)used at all, the data must be easily findable in the first place, which requires arbitrary filtering and searching routines. Following the FAIR guiding principles, fostering findable, accessible, interoperable and reusable (FAIR) data, in this paper, the FAIR Sensor Ecosystem is proposed, which provides a contextualization middleware based on a unified data metamodel. All information and relations which might change over time are versioned and associated with temporal validity intervals to enable full reconstruction of a system's state at any point in time. A technical validation demonstrates the correctness of the FAIR Sensor Ecosystem, including its contextualization model and filtering techniques. State-of-the-art FAIRness assessment frameworks rate the proposed FAIR Sensor Ecosystem with an average FAIRness of 71{\%}. The obtained rating can be considered remarkable, as deductions mainly result from the lack of fully appropriate FAIRness metrics and the absence of relevant community standards for the domain of the manufacturing industry.}, meta = {}, }
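The core mechanism described in the abstract, versioned context records with temporal validity intervals, can be sketched in a few lines of Python. The class and field names below are purely hypothetical and do not reflect the FAIR Sensor Ecosystem's actual data model or API.

    # Illustrative sketch (not the FAIR Sensor Ecosystem API): versioned context
    # records with temporal validity intervals, so the state valid at any point
    # in time can be reconstructed. All names and fields are hypothetical.
    from dataclasses import dataclass
    from datetime import datetime
    from typing import Optional

    @dataclass
    class ContextVersion:
        key: str                                  # e.g., "sensor-42/location"
        value: str
        valid_from: datetime
        valid_until: Optional[datetime] = None    # None = still valid

    class ContextStore:
        def __init__(self):
            self.versions = []

        def update(self, key, value, timestamp):
            # Close the currently valid version instead of overwriting it.
            for v in self.versions:
                if v.key == key and v.valid_until is None:
                    v.valid_until = timestamp
            self.versions.append(ContextVersion(key, value, timestamp))

        def state_at(self, key, timestamp):
            # Reconstruct which value was valid at the given point in time.
            for v in self.versions:
                if (v.key == key and v.valid_from <= timestamp
                        and (v.valid_until is None or timestamp < v.valid_until)):
                    return v.value
            return None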
- Jan Pennekamp, Johannes Lohmöller, Eduard Vlad, Joscha Loos, Niklas Rodemann, Patrick Sapel, Ina Berenice Fink, Seth Schmitz, Christian Hopmann, Matthias Jarke, Günther Schuh, Klaus Wehrle, and Martin Henze. Designing Secure and Privacy-Preserving Information Systems for Industry Benchmarking. In Proceedings of the 35th International Conference on Advanced Information Systems Engineering (CAiSE ’23), 06 2023.
[BibTeX] [Abstract] [DOI] [PDF] [CODE]Benchmarking is an essential tool for industrial organizations to identify potentials that allows them to improve their competitive position through operational and strategic means. However, the handling of sensitive information, in terms of (i) internal company data and (ii) the underlying algorithm to compute the benchmark, demands strict (technical) confidentiality guarantees – an aspect that existing approaches fail to address adequately. Still, advances in private computing provide us with building blocks to reliably secure even complex computations and their inputs, as present in industry benchmarks. In this paper, we thus compare two promising and fundamentally different concepts (hardware- and software-based) to realize privacy-preserving benchmarks. Thereby, we provide detailed insights into the concept-specific benefits. Our evaluation of two real-world use cases from different industries underlines that realizing and deploying secure information systems for industry benchmarking is possible with today’s building blocks from private computing.
@inproceedings{PLV+23, author = {Pennekamp, Jan and Lohm{\"o}ller, Johannes and Vlad, Eduard and Loos, Joscha and Rodemann, Niklas and Sapel, Patrick and Fink, Ina Berenice and Schmitz, Seth and Hopmann, Christian and Jarke, Matthias and Schuh, G{\"u}nther and Wehrle, Klaus and Henze, Martin}, title = {{Designing Secure and Privacy-Preserving Information Systems for Industry Benchmarking}}, booktitle = {Proceedings of the 35th International Conference on Advanced Information Systems Engineering (CAiSE '23)}, year = {2023}, month = {06}, doi = {10.1007/978-3-031-34560-9_29}, abstract = {Benchmarking is an essential tool for industrial organizations to identify potentials that allows them to improve their competitive position through operational and strategic means. However, the handling of sensitive information, in terms of (i) internal company data and (ii) the underlying algorithm to compute the benchmark, demands strict (technical) confidentiality guarantees---an aspect that existing approaches fail to address adequately. Still, advances in private computing provide us with building blocks to reliably secure even complex computations and their inputs, as present in industry benchmarks. In this paper, we thus compare two promising and fundamentally different concepts (hardware- and software-based) to realize privacy-preserving benchmarks. Thereby, we provide detailed insights into the concept-specific benefits. Our evaluation of two real-world use cases from different industries underlines that realizing and deploying secure information systems for industry benchmarking is possible with today's building blocks from private computing.}, code = {https://github.com/COMSYS/industry-benchmarking}, meta = {}, }
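To give a flavor of software-based private computing in this setting, the sketch below uses additive secret sharing over a prime field so that an aggregator only ever learns a sum of confidential KPIs (e.g., for an industry average). It is a generic building block shown for illustration only, not the hardware- or software-based system designed and evaluated in the paper; the modulus and KPI values are arbitrary.

    # Purely illustrative building block for software-based privacy-preserving
    # benchmarking (not the system evaluated in the paper): each company splits
    # its confidential KPI into additive shares modulo a prime, so aggregators
    # only ever learn the sum of all KPIs, never an individual value.
    import secrets

    PRIME = 2**61 - 1   # arbitrary public modulus for this toy example

    def share(value, n_parties):
        shares = [secrets.randbelow(PRIME) for _ in range(n_parties - 1)]
        shares.append((value - sum(shares)) % PRIME)
        return shares

    def reconstruct(partial_sums):
        return sum(partial_sums) % PRIME

    if __name__ == "__main__":
        kpis = [120, 95, 143]                    # one confidential KPI per company
        all_shares = [share(k, 3) for k in kpis]
        # Each aggregator i only sees the i-th share of every company ...
        partial = [sum(s[i] for s in all_shares) % PRIME for i in range(3)]
        # ... yet the partial sums reconstruct exactly the industry total.
        print(reconstruct(partial), sum(kpis))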
- Adrian Karl Rüppel, Muzaffer Ay, Benedikt Biernat, Ike Kunze, Markus Landwehr, Samuel Mann, Jan Pennekamp, Pascal Rabe, Mark P. Sanders, Dominik Scheurenberg, Sven Schiller, Tiandong Xi, Dirk Abel, Thomas Bergs, Christian Brecher, Uwe Reisgen, Robert H. Schmitt, and Klaus Wehrle. Model-Based Controlling Approaches for Manufacturing Processes. In Internet of Production: Fundamentals, Applications and Proceedings, Interdisciplinary Excellence Accelerator Series, pages 221–246. Springer, 02 2023.
[BibTeX] [Abstract] [DOI] [PDF]The main objectives in production technology are quality assurance, cost reduction, and guaranteed process safety and stability. Digital shadows enable a more comprehensive understanding and monitoring of processes on shop floor level. Thus, process information becomes available between decision levels, and the aforementioned criteria regarding quality, cost, or safety can be included in control decisions for production processes. The contextual data for digital shadows typically arises from heterogeneous sources. At shop floor level, the proximity to the process requires usage of available data as well as domain knowledge. Data sources need to be selected, synchronized, and processed. Especially high-frequency data requires algorithms for intelligent distribution and efficient filtering of the main information using real-time devices and in-network computing. Real-time data is enriched by simulations, metadata from product planning, and information across the whole process chain. Well-established analytical and empirical models serve as the base for new hybrid, gray box approaches. These models are then applied to optimize production process control by maximizing the productivity under given quality and safety constraints. To store and reuse the developed models, ontologies are developed and a data lake infrastructure is utilized and constantly enlarged laying the basis for a World Wide Lab (WWL). Finally, closing the control loop requires efficient quality assessment, immediately after the process and directly on the machine. This chapter addresses works in a connected job shop to acquire data, identify and optimize models, and automate systems and their deployment in the Internet of Production (IoP).
@incollection{RAB+23, author = {R{\"u}ppel, Adrian Karl and Ay, Muzaffer and Biernat, Benedikt and Kunze, Ike and Landwehr, Markus and Mann, Samuel and Pennekamp, Jan and Rabe, Pascal and Sanders, Mark P. and Scheurenberg, Dominik and Schiller, Sven and Xi, Tiandong and Abel, Dirk and Bergs, Thomas and Brecher, Christian and Reisgen, Uwe and Schmitt, Robert H. and Wehrle, Klaus}, title = {{Model-Based Controlling Approaches for Manufacturing Processes}}, booktitle = {Internet of Production: Fundamentals, Applications and Proceedings}, publisher = {Springer}, year = {2023}, series = {Interdisciplinary Excellence Accelerator Series}, pages = {221--246}, month = {02}, abstract = {The main objectives in production technology are quality assurance, cost reduction, and guaranteed process safety and stability. Digital shadows enable a more comprehensive understanding and monitoring of processes on shop floor level. Thus, process information becomes available between decision levels, and the aforementioned criteria regarding quality, cost, or safety can be included in control decisions for production processes. The contextual data for digital shadows typically arises from heterogeneous sources. At shop floor level, the proximity to the process requires usage of available data as well as domain knowledge. Data sources need to be selected, synchronized, and processed. Especially high-frequency data requires algorithms for intelligent distribution and efficient filtering of the main information using real-time devices and in-network computing. Real-time data is enriched by simulations, metadata from product planning, and information across the whole process chain. Well-established analytical and empirical models serve as the base for new hybrid, gray box approaches. These models are then applied to optimize production process control by maximizing the productivity under given quality and safety constraints. To store and reuse the developed models, ontologies are developed and a data lake infrastructure is utilized and constantly enlarged laying the basis for a World Wide Lab (WWL). Finally, closing the control loop requires efficient quality assessment, immediately after the process and directly on the machine. This chapter addresses works in a connected job shop to acquire data, identify and optimize models, and automate systems and their deployment in the Internet of Production (IoP).}, doi = {10.1007/978-3-031-44497-5_7}, isbn = {978-3-031-44496-8}, meta = {}, }
- Jan Pennekamp, Anastasiia Belova, Thomas Bergs, Matthias Bodenbenner, Andreas Bührig-Polaczek, Markus Dahlmanns, Ike Kunze, Moritz Kröger, Sandra Geisler, Martin Henze, Daniel Lütticke, Benjamin Montavon, Philipp Niemietz, Lucia Ortjohann, Maximilian Rudack, Robert H. Schmitt, Uwe Vroomen, Klaus Wehrle, and Michael Zeng. Evolving the Digital Industrial Infrastructure for Production: Steps Taken and the Road Ahead. In Internet of Production: Fundamentals, Applications and Proceedings, Interdisciplinary Excellence Accelerator Series, pages 35–60. Springer, 02 2023.
[BibTeX] [Abstract] [DOI] [PDF]The Internet of Production (IoP) leverages concepts such as digital shadows, data lakes, and a World Wide Lab (WWL) to advance today’s production. Consequently, it requires a technical infrastructure that can support the agile deployment of these concepts and corresponding high-level applications, which, e.g., demand the processing of massive data in motion and at rest. As such, key research aspects are the support for low-latency control loops, concepts on scalable data stream processing, deployable information security, and semantically rich and efficient long-term storage. In particular, such an infrastructure cannot continue to be limited to machines and sensors, but additionally needs to encompass networked environments: production cells, edge computing, and location-independent cloud infrastructures. Finally, in light of the envisioned WWL, i.e., the interconnection of production sites, the technical infrastructure must be advanced to support secure and privacy-preserving industrial collaboration. To evolve today’s production sites and lay the infrastructural foundation for the IoP, we identify five broad streams of research: (1) adapting data and stream processing to heterogeneous data from distributed sources, (2) ensuring data interoperability between systems and production sites, (3) exchanging and sharing data with different stakeholders, (4) network security approaches addressing the risks of increasing interconnectivity, and (5) security architectures to enable secure and privacy-preserving industrial collaboration. With our research, we evolve the underlying infrastructure from isolated, sparsely networked production sites toward an architecture that supports high-level applications and sophisticated digital shadows while facilitating the transition toward a WWL.
@incollection{PBB+23, author = {Pennekamp, Jan and Belova, Anastasiia and Bergs, Thomas and Bodenbenner, Matthias and B{\"u}hrig-Polaczek, Andreas and Dahlmanns, Markus and Kunze, Ike and Kr{\"o}ger, Moritz and Geisler, Sandra and Henze, Martin and L{\"u}tticke, Daniel and Montavon, Benjamin and Niemietz, Philipp and Ortjohann, Lucia and Rudack, Maximilian and Schmitt, Robert H. and Vroomen, Uwe and Wehrle, Klaus and Zeng, Michael}, title = {{Evolving the Digital Industrial Infrastructure for Production: Steps Taken and the Road Ahead}}, booktitle = {Internet of Production: Fundamentals, Applications and Proceedings}, publisher = {Springer}, year = {2023}, series = {Interdisciplinary Excellence Accelerator Series}, pages = {35--60}, month = {02}, abstract = {The Internet of Production (IoP) leverages concepts such as digital shadows, data lakes, and a World Wide Lab (WWL) to advance today's production. Consequently, it requires a technical infrastructure that can support the agile deployment of these concepts and corresponding high-level applications, which, e.g., demand the processing of massive data in motion and at rest. As such, key research aspects are the support for low-latency control loops, concepts on scalable data stream processing, deployable information security, and semantically rich and efficient long-term storage. In particular, such an infrastructure cannot continue to be limited to machines and sensors, but additionally needs to encompass networked environments: production cells, edge computing, and location-independent cloud infrastructures. Finally, in light of the envisioned WWL, i.e., the interconnection of production sites, the technical infrastructure must be advanced to support secure and privacy-preserving industrial collaboration. To evolve today's production sites and lay the infrastructural foundation for the IoP, we identify five broad streams of research: (1) adapting data and stream processing to heterogeneous data from distributed sources, (2) ensuring data interoperability between systems and production sites, (3) exchanging and sharing data with different stakeholders, (4) network security approaches addressing the risks of increasing interconnectivity, and (5) security architectures to enable secure and privacy-preserving industrial collaboration. With our research, we evolve the underlying infrastructure from isolated, sparsely networked production sites toward an architecture that supports high-level applications and sophisticated digital shadows while facilitating the transition toward a WWL.}, doi = {10.1007/978-3-031-44497-5_2}, isbn = {978-3-031-44496-8}, meta = {}, }
2022
- Dominik Kus, Konrad Wolsing, Jan Pennekamp, Eric Wagner, Martin Henze, and Klaus Wehrle. Poster: Ensemble Learning for Industrial Intrusion Detection. Technical Report RWTH-2022-10809, RWTH Aachen University, 12 2022. 38th Annual Computer Security Applications Conference (ACSAC ’22).
[BibTeX] [Abstract] [DOI] [PDF]Industrial intrusion detection promises to protect networked industrial control systems by monitoring them and raising an alarm in case of suspicious behavior. Many monolithic intrusion detection systems are proposed in literature. These detectors are often specialized and, thus, work particularly well on certain types of attacks or monitor different parts of the system, e.g., the network or the physical process. Combining multiple such systems promises to leverage their joint strengths, allowing the detection of a wider range of attacks due to their diverse specializations and reducing false positives. We study this concept’s feasibility with initial results of various methods to combine detectors.
@techreport{KWP+22b, author = {Kus, Dominik and Wolsing, Konrad and Pennekamp, Jan and Wagner, Eric and Henze, Martin and Wehrle, Klaus}, title = {{Poster: Ensemble Learning for Industrial Intrusion Detection}}, institution = {RWTH Aachen University}, year = {2022}, number = {RWTH-2022-10809}, month = {12}, note = {38th Annual Computer Security Applications Conference (ACSAC '22)}, doi = {10.18154/RWTH-2022-10809}, abstract = {Industrial intrusion detection promises to protect networked industrial control systems by monitoring them and raising an alarm in case of suspicious behavior. Many monolithic intrusion detection systems are proposed in literature. These detectors are often specialized and, thus, work particularly well on certain types of attacks or monitor different parts of the system, e.g., the network or the physical process. Combining multiple such systems promises to leverage their joint strengths, allowing the detection of a wider range of attacks due to their diverse specializations and reducing false positives. We study this concept's feasibility with initial results of various methods to combine detectors.}, meta = {}, }
- Jan Pennekamp, Martin Henze, Andreas Zinnen, Fabian Lanze, Klaus Wehrle, and Andriy Panchenko. CUMUL & Co: High-Impact Artifacts for Website Fingerprinting Research. Cybersecurity Artifacts Competition and Impact Award at 38th Annual Computer Security Applications Conference (ACSAC ’22), 12 2022.
[BibTeX] [Abstract] [DOI] [PDF] [CODE]Anonymous communication on the Internet is about hiding the relationship between communicating parties. At NDSS ’16, we presented a new website fingerprinting approach, CUMUL, that utilizes novel features and a simple yet powerful algorithm to attack anonymization networks such as Tor. Based on pattern observation of data flows, this attack aims at identifying the content of encrypted and anonymized connections. Apart from the feature generation and the used classifier, we also provided a large dataset to the research community to study the attack at Internet scale. In this paper, we emphasize the impact of our artifacts by analyzing publications referring to our work with respect to the dataset, feature extraction method, and source code of the implementation. Based on this data, we draw conclusions about the impact of our artifacts on the research field and discuss their influence on related cybersecurity topics. Overall, from 393 unique citations, we discover more than 130 academic references that utilize our artifacts, 61 among them are highly influential (according to SemanticScholar), and at least 35 are from top-ranked security venues. This data underlines the significant relevance and impact of our work as well as of our artifacts in the community and beyond.
@misc{PHZ+22, title = {{CUMUL {\&} Co: High-Impact Artifacts for Website Fingerprinting Research}}, author = {Pennekamp, Jan and Henze, Martin and Zinnen, Andreas and Lanze, Fabian and Wehrle, Klaus and Panchenko, Andriy}, howpublished = {Cybersecurity Artifacts Competition and Impact Award at 38th Annual Computer Security Applications Conference (ACSAC '22)}, month = {12}, year = {2022}, doi = {10.18154/RWTH-2022-10811}, abstract = {Anonymous communication on the Internet is about hiding the relationship between communicating parties. At NDSS '16, we presented a new website fingerprinting approach, CUMUL, that utilizes novel features and a simple yet powerful algorithm to attack anonymization networks such as Tor. Based on pattern observation of data flows, this attack aims at identifying the content of encrypted and anonymized connections. Apart from the feature generation and the used classifier, we also provided a large dataset to the research community to study the attack at Internet scale. In this paper, we emphasize the impact of our artifacts by analyzing publications referring to our work with respect to the dataset, feature extraction method, and source code of the implementation. Based on this data, we draw conclusions about the impact of our artifacts on the research field and discuss their influence on related cybersecurity topics. Overall, from 393 unique citations, we discover more than 130 academic references that utilize our artifacts, 61 among them are highly influential (according to SemanticScholar), and at least 35 are from top-ranked security venues. This data underlines the significant relevance and impact of our work as well as of our artifacts in the community and beyond.}, code = {https://www.informatik.tu-cottbus.de/~andriy/zwiebelfreunde/}, meta = {}, }
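As a pointer to what the released artifacts revolve around, the sketch below shows a simplified version of the cumulative-trace representation underlying CUMUL: signed packet sizes are accumulated and the resulting curve is sampled at a fixed number of equidistant points. It is a didactic simplification, not the published feature extraction code; the trace values and number of sampling points are illustrative.

    # Simplified sketch of the cumulative-trace representation used in website
    # fingerprinting attacks such as CUMUL: signed packet sizes are accumulated
    # and the curve is sampled at n equidistant points to obtain a fixed-length
    # feature vector. Parameter choices here are illustrative only.
    import numpy as np

    def cumulative_features(packet_sizes, n_points=100):
        """packet_sizes: list of ints, positive = outgoing, negative = incoming."""
        cum = np.cumsum(packet_sizes)
        # Interpolate the cumulative curve onto n equidistant sampling points.
        xs = np.linspace(0, len(cum) - 1, num=n_points)
        return np.interp(xs, np.arange(len(cum)), cum)

    if __name__ == "__main__":
        trace = [-565, 1460, 1460, -565, 1460, -1460, 1460]
        print(cumulative_features(trace, n_points=10))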
- Johannes Lohmöller, Jan Pennekamp, Roman Matzutt, and Klaus Wehrle. On the Need for Strong Sovereignty in Data Ecosystems. In Proceedings of the 1st International Workshop on Data Ecosystems (DEco ’22), 09 2022.
[BibTeX] [Abstract] [PDF]Data ecosystems are the foundation of emerging data-driven business models as they (i) enable an automated exchange between their participants and (ii) provide them with access to huge and heterogeneous data sources. However, the corresponding benefits come with unforeseen risks as also sensitive information is potentially exposed. Consequently, data security is of utmost importance and, thus, a central requirement for the successful implementation of these ecosystems. Current initiatives, such as IDS and GAIA-X, hence foster sovereign participation via a federated infrastructure where participants retain local control. However, these designs place significant trust in remote infrastructure by mostly implementing organizational security measures such as certification processes prior to admission of a participant. At the same time, due to the sensitive nature of involved data, participants are incentivized to bypass security measures to maximize their own benefit: In practice, this issue significantly weakens sovereignty guarantees. In this paper, we hence claim that data ecosystems must be extended with technical means to reestablish such guarantees. To underpin our position, we analyze promising building blocks and identify three core research directions toward stronger data sovereignty, namely trusted remote policy enforcement, verifiable data tracking, and integration of resource-constrained participants. We conclude that these directions are critical to securely implement data ecosystems in data-sensitive contexts.
@inproceedings{LPMW22, author = {Lohm{\"o}ller, Johannes and Pennekamp, Jan and Matzutt, Roman and Wehrle, Klaus}, title = {{On the Need for Strong Sovereignty in Data Ecosystems}}, booktitle = {Proceedings of the 1st International Workshop on Data Ecosystems (DEco '22)}, year = {2022}, month = {09}, abstract = {Data ecosystems are the foundation of emerging data-driven business models as they (i) enable an automated exchange between their participants and (ii) provide them with access to huge and heterogeneous data sources. However, the corresponding benefits come with unforeseen risks as also sensitive information is potentially exposed. Consequently, data security is of utmost importance and, thus, a central requirement for the successful implementation of these ecosystems. Current initiatives, such as IDS and GAIA-X, hence foster sovereign participation via a federated infrastructure where participants retain local control. However, these designs place significant trust in remote infrastructure by mostly implementing organizational security measures such as certification processes prior to admission of a participant. At the same time, due to the sensitive nature of involved data, participants are incentivized to bypass security measures to maximize their own benefit: In practice, this issue significantly weakens sovereignty guarantees. In this paper, we hence claim that data ecosystems must be extended with technical means to reestablish such guarantees. To underpin our position, we analyze promising building blocks and identify three core research directions toward stronger data sovereignty, namely trusted remote policy enforcement, verifiable data tracking, and integration of resource-constrained participants. We conclude that these directions are critical to securely implement data ecosystems in data-sensitive contexts.}, meta = {}, }
- Markus Dahlmanns, Johannes Lohmöller, Jan Pennekamp, Jörn Bodenhausen, Klaus Wehrle, and Martin Henze. Missed Opportunities: Measuring the Untapped TLS Support in the Industrial Internet of Things. In Proceedings of the 17th ACM ASIA Conference on Computer and Communications Security (ASIACCS ’22), 06 2022.
[BibTeX] [Abstract] [DOI] [PDF] [CODE]The ongoing trend to move industrial appliances from previously isolated networks to the Internet requires fundamental changes in security to uphold secure and safe operation. Consequently, to ensure end-to-end secure communication and authentication, (i) traditional industrial protocols, e.g., Modbus, are retrofitted with TLS support, and (ii) modern protocols, e.g., MQTT, are directly designed to use TLS. To understand whether these changes indeed lead to secure Industrial Internet of Things deployments, i.e., using TLS-based protocols, which are configured according to security best practices, we perform an Internet-wide security assessment of ten industrial protocols covering the complete IPv4 address space. Our results show that both, retrofitted existing protocols and newly developed secure alternatives, are barely noticeable in the wild. While we find that new protocols have a higher TLS adoption rate than traditional protocols (7.2 % vs. 0.4 %), the overall adoption of TLS is comparably low (6.5 % of hosts). Thus, most industrial deployments (934,736 hosts) are insecurely connected to the Internet. Furthermore, we identify that 42 % of hosts with TLS support (26,665 hosts) show security deficits, e.g., missing access control. Finally, we show that support in configuring systems securely, e.g., via configuration templates, is promising to strengthen security.
@inproceedings{DLP+22, author = {Dahlmanns, Markus and Lohm{\"o}ller, Johannes and Pennekamp, Jan and Bodenhausen, J{\"o}rn and Wehrle, Klaus and Henze, Martin}, title = {{Missed Opportunities: Measuring the Untapped TLS Support in the Industrial Internet of Things}}, booktitle = {Proceedings of the 17th ACM ASIA Conference on Computer and Communications Security (ASIACCS '22)}, year = {2022}, month = {06}, doi = {10.1145/3488932.3497762}, abstract = {The ongoing trend to move industrial appliances from previously isolated networks to the Internet requires fundamental changes in security to uphold secure and safe operation. Consequently, to ensure end-to-end secure communication and authentication, (i) traditional industrial protocols, e.g., Modbus, are retrofitted with TLS support, and (ii) modern protocols, e.g., MQTT, are directly designed to use TLS. To understand whether these changes indeed lead to secure Industrial Internet of Things deployments, i.e., using TLS-based protocols, which are configured according to security best practices, we perform an Internet-wide security assessment of ten industrial protocols covering the complete IPv4 address space. Our results show that both, retrofitted existing protocols and newly developed secure alternatives, are barely noticeable in the wild. While we find that new protocols have a higher TLS adoption rate than traditional protocols (7.2 {\%} vs. 0.4 {\%}), the overall adoption of TLS is comparably low (6.5 {\%} of hosts). Thus, most industrial deployments (934,736 hosts) are insecurely connected to the Internet. Furthermore, we identify that 42 {\%} of hosts with TLS support (26,665 hosts) show security deficits, e.g., missing access control. Finally, we show that support in configuring systems securely, e.g., via configuration templates, is promising to strengthen security.}, code = {https://github.com/COMSYS/zgrab2}, meta = {}, }
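The measurement itself relies on the extended zgrab2 scanner linked above. For readers who want a feel for the kind of check involved, the minimal probe below tests whether a single host offers TLS on an industrial protocol port (here MQTT's TLS port 8883) and reports the negotiated version and cipher. It is not the paper's measurement pipeline, the host name is a placeholder, and such probes should only ever be run against hosts one is authorized to scan.

    # Minimal, illustrative probe (not the paper's zgrab2-based pipeline): check
    # whether a host offers TLS on an industrial protocol port and report basic
    # parameters. "broker.example.org" is a placeholder; scan only authorized hosts.
    import socket
    import ssl

    def check_tls(host, port=8883, timeout=5):
        ctx = ssl.create_default_context()
        ctx.check_hostname = False
        ctx.verify_mode = ssl.CERT_NONE      # we inspect parameters, not authenticate
        try:
            with socket.create_connection((host, port), timeout=timeout) as sock:
                with ctx.wrap_socket(sock, server_hostname=host) as tls:
                    return {"tls": True,
                            "version": tls.version(),
                            "cipher": tls.cipher()[0]}
        except (OSError, ssl.SSLError):
            return {"tls": False}

    if __name__ == "__main__":
        print(check_tls("broker.example.org"))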
- Dominik Kus, Eric Wagner, Jan Pennekamp, Konrad Wolsing, Ina Berenice Fink, Markus Dahlmanns, Klaus Wehrle, and Martin Henze. A False Sense of Security? Revisiting the State of Machine Learning-Based Industrial Intrusion Detection. In Proceedings of the 8th ACM Cyber-Physical System Security Workshop (CPSS ’22), 05 2022.
[BibTeX] [Abstract] [DOI] [PDF] [CODE]Anomaly-based intrusion detection promises to detect novel or unknown attacks on industrial control systems by modeling expected system behavior and raising corresponding alarms for any deviations. As manually creating these behavioral models is tedious and error-prone, research focuses on machine learning to train them automatically, achieving detection rates upwards of 99 %. However, these approaches are typically trained not only on benign traffic but also on attacks and then evaluated against the same type of attack used for training. Hence, their actual, real-world performance on unknown (not trained on) attacks remains unclear. In turn, the reported near-perfect detection rates of machine learning-based intrusion detection might create a false sense of security. To assess this situation and clarify the real potential of machine learning-based industrial intrusion detection, we develop an evaluation methodology and examine multiple approaches from literature for their performance on unknown attacks (excluded from training). Our results highlight an ineffectiveness in detecting unknown attacks, with detection rates dropping to between 3.2 % and 14.7 % for some types of attacks. Moving forward, we derive recommendations for further research on machine learning-based approaches to ensure clarity on their ability to detect unknown attacks.
@inproceedings{KWP+22, author = {Kus, Dominik and Wagner, Eric and Pennekamp, Jan and Wolsing, Konrad and Fink, Ina Berenice and Dahlmanns, Markus and Wehrle, Klaus and Henze, Martin}, title = {{A False Sense of Security? Revisiting the State of Machine Learning-Based Industrial Intrusion Detection}}, booktitle = {Proceedings of the 8th ACM Cyber-Physical System Security Workshop (CPSS '22)}, year = {2022}, month = {05}, doi = {10.1145/3494107.3522773}, abstract = {Anomaly-based intrusion detection promises to detect novel or unknown attacks on industrial control systems by modeling expected system behavior and raising corresponding alarms for any deviations. As manually creating these behavioral models is tedious and error-prone, research focuses on machine learning to train them automatically, achieving detection rates upwards of 99 {\%}. However, these approaches are typically trained not only on benign traffic but also on attacks and then evaluated against the same type of attack used for training. Hence, their actual, real-world performance on unknown (not trained on) attacks remains unclear. In turn, the reported near-perfect detection rates of machine learning-based intrusion detection might create a false sense of security. To assess this situation and clarify the real potential of machine learning-based industrial intrusion detection, we develop an evaluation methodology and examine multiple approaches from literature for their performance on unknown attacks (excluded from training). Our results highlight an ineffectiveness in detecting unknown attacks, with detection rates dropping to between 3.2 {\%} and 14.7 {\%} for some types of attacks. Moving forward, we derive recommendations for further research on machine learning-based approaches to ensure clarity on their ability to detect unknown attacks.}, code = {https://github.com/COMSYS/ML-IIDS-generalizability}, meta = {}, }
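The evaluation idea of withholding one attack type during training can be sketched as a leave-one-attack-type-out loop. The snippet below is a generic illustration under assumed inputs (a feature matrix, binary labels, and per-sample attack-type tags); the classifier, features, and datasets are placeholders and do not correspond to the approaches or datasets studied in the paper.

    # Illustrative leave-one-attack-type-out evaluation (placeholder classifier,
    # features, and data; not the paper's setup): train on benign traffic plus all
    # attack types except one and test on the held-out type. The recall on the
    # held-out samples corresponds to the detection rate for that unknown attack.
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.metrics import recall_score

    def leave_one_attack_out(X, y, attack_types):
        """X: np.ndarray of features, y: 0 = benign / 1 = attack,
        attack_types: per-sample attack label (None for benign samples)."""
        results = {}
        for held_out in {t for t in attack_types if t is not None}:
            train = [i for i, t in enumerate(attack_types) if t != held_out]
            test = [i for i, t in enumerate(attack_types) if t == held_out]
            clf = RandomForestClassifier(n_estimators=100, random_state=0)
            clf.fit(X[train], y[train])
            results[held_out] = recall_score(y[test], clf.predict(X[test]))
        return results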
- Eric Wagner, Roman Matzutt, Jan Pennekamp, Lennart Bader, Irakli Bajelidze, Klaus Wehrle, and Martin Henze. Scalable and Privacy-Focused Company-Centric Supply Chain Management. In Proceedings of the 2022 IEEE International Conference on Blockchain and Cryptocurrency (ICBC ’22), 05 2022.
[BibTeX] [Abstract] [DOI] [PDF]Blockchain technology promises to overcome trust and privacy concerns inherent to centralized information sharing. However, current decentralized supply chain management systems do either not meet privacy and scalability requirements or require a trustworthy consortium, which is challenging for increasingly dynamic supply chains with constantly changing participants. In this paper, we propose CCChain, a scalable and privacy-aware supply chain management system that stores all information locally to give companies complete sovereignty over who accesses their data. Still, tamper protection of all data through a permissionless blockchain enables on-demand tracking and tracing of products as well as reliable information sharing while affording the detection of data inconsistencies. Our evaluation confirms that CCChain offers superior scalability in comparison to alternatives while also enabling near real-time tracking and tracing for many, less complex products.
@inproceedings{WMP+22, author = {Wagner, Eric and Matzutt, Roman and Pennekamp, Jan and Bader, Lennart and Bajelidze, Irakli and Wehrle, Klaus and Henze, Martin}, title = {{Scalable and Privacy-Focused Company-Centric Supply Chain Management}}, booktitle = {Proceedings of the 2022 IEEE International Conference on Blockchain and Cryptocurrency (ICBC '22)}, year = {2022}, month = {05}, doi = {10.1109/ICBC54727.2022.9805503}, abstract = {Blockchain technology promises to overcome trust and privacy concerns inherent to centralized information sharing. However, current decentralized supply chain management systems do either not meet privacy and scalability requirements or require a trustworthy consortium, which is challenging for increasingly dynamic supply chains with constantly changing participants. In this paper, we propose CCChain, a scalable and privacy-aware supply chain management system that stores all information locally to give companies complete sovereignty over who accesses their data. Still, tamper protection of all data through a permissionless blockchain enables on-demand tracking and tracing of products as well as reliable information sharing while affording the detection of data inconsistencies. Our evaluation confirms that CCChain offers superior scalability in comparison to alternatives while also enabling near real-time tracking and tracing for many, less complex products.}, meta = {}, }
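A recurring pattern in this design is to keep records local and only anchor tamper-evident digests on a permissionless blockchain. The sketch below illustrates that pattern in isolation; the record layout and the publish() stub are hypothetical and do not reflect CCChain's actual protocol or data structures.

    # Illustrative "store locally, anchor a digest on-chain" pattern (not CCChain's
    # actual protocol): companies keep supply chain records locally and publish only
    # a tamper-evident digest, so inconsistencies become detectable later without
    # revealing the record itself. Record layout and publish() are hypothetical.
    import hashlib
    import json

    def record_digest(record: dict) -> str:
        # Canonical serialization so the same record always yields the same digest.
        blob = json.dumps(record, sort_keys=True, separators=(",", ":")).encode()
        return hashlib.sha256(blob).hexdigest()

    def publish(digest: str) -> None:
        # Placeholder for submitting the digest within a blockchain transaction.
        print("anchoring", digest)

    if __name__ == "__main__":
        record = {"product": "gearbox-7", "step": "assembly", "batch": 42}
        digest = record_digest(record)
        publish(digest)
        # A partner receiving the record later can recompute and compare:
        assert record_digest(record) == digest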
- Roman Matzutt, Vincent Ahlrichs, Jan Pennekamp, Roman Karwacik, and Klaus Wehrle. A Moderation Framework for the Swift and Transparent Removal of Illicit Blockchain Content. In Proceedings of the 2022 IEEE International Conference on Blockchain and Cryptocurrency (ICBC ’22), 05 2022.
[BibTeX] [Abstract] [DOI] [PDF] [CODE]Blockchains gained tremendous attention for their capability to provide immutable and decentralized event ledgers that can facilitate interactions between mutually distrusting parties. However, precisely this immutability and the openness of permissionless blockchains raised concerns about the consequences of illicit content being irreversibly stored on them. Related work coined the notion of redactable blockchains, which allow for removing illicit content from their history without affecting the blockchain’s integrity. While honest users can safely prune identified content, current approaches either create trust issues by empowering fixed third parties to rewrite history, cannot react quickly to reported content due to using lengthy public votings, or create large per-redaction overheads. In this paper, we instead propose to outsource redactions to small and periodically exchanged juries, whose members can only jointly redact transactions using chameleon hash functions and threshold cryptography. Multiple juries are active at the same time to swiftly redact reported content. They oversee their activities via a global redaction log, which provides transparency and allows for appealing and reversing a rogue jury’s decisions. Hence, our approach establishes a framework for the swift and transparent moderation of blockchain content. Our evaluation shows that our moderation scheme can be realized with feasible per-block and per-redaction overheads, i.e., the redaction capabilities do not impede the blockchain’s normal operation.
@inproceedings{MAPKW22, author = {Matzutt, Roman and Ahlrichs, Vincent and Pennekamp, Jan and Karwacik, Roman and Wehrle, Klaus}, title = {{A Moderation Framework for the Swift and Transparent Removal of Illicit Blockchain Content}}, booktitle = {Proceedings of the 2022 IEEE International Conference on Blockchain and Cryptocurrency (ICBC '22)}, year = {2022}, month = {05}, doi = {10.1109/ICBC54727.2022.9805508}, abstract = {Blockchains gained tremendous attention for their capability to provide immutable and decentralized event ledgers that can facilitate interactions between mutually distrusting parties. However, precisely this immutability and the openness of permissionless blockchains raised concerns about the consequences of illicit content being irreversibly stored on them. Related work coined the notion of redactable blockchains, which allow for removing illicit content from their history without affecting the blockchain's integrity. While honest users can safely prune identified content, current approaches either create trust issues by empowering fixed third parties to rewrite history, cannot react quickly to reported content due to using lengthy public votings, or create large per-redaction overheads. In this paper, we instead propose to outsource redactions to small and periodically exchanged juries, whose members can only jointly redact transactions using chameleon hash functions and threshold cryptography. Multiple juries are active at the same time to swiftly redact reported content. They oversee their activities via a global redaction log, which provides transparency and allows for appealing and reversing a rogue jury's decisions. Hence, our approach establishes a framework for the swift and transparent moderation of blockchain content. Our evaluation shows that our moderation scheme can be realized with feasible per-block and per-redaction overheads, i.e., the redaction capabilities do not impede the blockchain's normal operation.}, code = {https://github.com/COMSYS/redactchain}, meta = {}, }
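The redaction capability rests on chameleon hash functions: whoever holds the trapdoor can compute a colliding preimage so that the hash, and thus the chain, remains valid after a transaction is replaced. The toy sketch below shows the classic discrete-log construction in a deliberately tiny group; it is for intuition only and is not the paper's construction, which additionally secret-shares the trapdoor among jury members via threshold cryptography and uses cryptographically sized parameters.

    # Toy chameleon hash for intuition only (tiny, insecure parameters): holding the
    # trapdoor x allows computing a second preimage so the hash stays unchanged
    # after the hashed content is replaced (redacted).
    import hashlib

    P, Q, G = 23, 11, 4            # toy group: G has prime order Q in Z_P*
    x = 7                          # trapdoor (secret-shared among a jury in practice)
    h = pow(G, x, P)               # public key

    def digest(msg: bytes) -> int:
        return int.from_bytes(hashlib.sha256(msg).digest(), "big") % Q

    def chameleon_hash(msg: bytes, r: int) -> int:
        return (pow(G, digest(msg), P) * pow(h, r, P)) % P

    def find_collision(old: bytes, r: int, new: bytes) -> int:
        # Solve digest(old) + x*r = digest(new) + x*r' (mod Q) for r'.
        return (r + (digest(old) - digest(new)) * pow(x, -1, Q)) % Q

    if __name__ == "__main__":
        r = 5
        h_old = chameleon_hash(b"illicit tx", r)
        r_new = find_collision(b"illicit tx", r, b"redacted tx")
        assert chameleon_hash(b"redacted tx", r_new) == h_old
        print("hash unchanged after redaction:", h_old)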
- Philipp Brauner, Manuela Dalibor, Matthias Jarke, Ike Kunze, István Koren, Gerhard Lakemeyer, Martin Liebenberg, Judith Michael, Jan Pennekamp, Christoph Quix, Bernhard Rumpe, Wil van der Aalst, Klaus Wehrle, Andreas Wortmann, and Martina Ziefle. A Computer Science Perspective on Digital Transformation in Production. ACM Transactions on Internet of Things, 3(2), 05 2022.
[BibTeX] [Abstract] [DOI] [PDF]The Industrial Internet-of-Things (IIoT) promises significant improvements for the manufacturing industry by facilitating the integration of manufacturing systems by Digital Twins. However, ecological and economic demands also require a cross-domain linkage of multiple scientific perspectives from material sciences, engineering, operations, business, and ergonomics, as optimization opportunities can be derived from any of these perspectives. To extend the IIoT to a true Internet of Production, two concepts are required: first, a complex, interrelated network of Digital Shadows which combine domain-specific models with data-driven AI methods; and second, the integration of a large number of research labs, engineering, and production sites as a World Wide Lab which offers controlled exchange of selected, innovation-relevant data even across company boundaries. In this article, we define the underlying Computer Science challenges implied by these novel concepts in four layers: Smart human interfaces provide access to information that has been generated by model-integrated AI. Given the large variety of manufacturing data, new data modeling techniques should enable efficient management of Digital Shadows, which is supported by an interconnected infrastructure. Based on a detailed analysis of these challenges, we derive a systematized research roadmap to make the vision of the Internet of Production a reality.
@article{BDJ+22, author = {Brauner, Philipp and Dalibor, Manuela and Jarke, Matthias and Kunze, Ike and Koren, Istv{\'a}n and Lakemeyer, Gerhard and Liebenberg, Martin and Michael, Judith and Pennekamp, Jan and Quix, Christoph and Rumpe, Bernhard and van der Aalst, Wil and Wehrle, Klaus and Wortmann, Andreas and Ziefle, Martina}, title = {{A Computer Science Perspective on Digital Transformation in Production}}, journal = {ACM Transactions on Internet of Things}, year = {2022}, volume = {3}, number = {2}, publisher = {ACM}, month = {05}, doi = {10.1145/3502265}, issn = {2691-1914}, abstract = {The Industrial Internet-of-Things (IIoT) promises significant improvements for the manufacturing industry by facilitating the integration of manufacturing systems by Digital Twins. However, ecological and economic demands also require a cross-domain linkage of multiple scientific perspectives from material sciences, engineering, operations, business, and ergonomics, as optimization opportunities can be derived from any of these perspectives. To extend the IIoT to a true Internet of Production, two concepts are required: first, a complex, interrelated network of Digital Shadows which combine domain-specific models with data-driven AI methods; and second, the integration of a large number of research labs, engineering, and production sites as a World Wide Lab which offers controlled exchange of selected, innovation-relevant data even across company boundaries. In this article, we define the underlying Computer Science challenges implied by these novel concepts in four layers: Smart human interfaces provide access to information that has been generated by model-integrated AI. Given the large variety of manufacturing data, new data modeling techniques should enable efficient management of Digital Shadows, which is supported by an interconnected infrastructure. Based on a detailed analysis of these challenges, we derive a systematized research roadmap to make the vision of the Internet of Production a reality.}, meta = {}, }
2021
- Jan Pennekamp, Erik Buchholz, Markus Dahlmanns, Ike Kunze, Stefan Braun, Eric Wagner, Matthias Brockmann, Klaus Wehrle, and Martin Henze. Collaboration is not Evil: A Systematic Look at Security Research for Industrial Use. In Proceedings of the Workshop on Learning from Authoritative Security Experiment Results (LASER ’20), 12 2021.
[BibTeX] [Abstract] [DOI] [PDF]Following the recent Internet of Things-induced trends on digitization in general, industrial applications will further evolve as well. With a focus on the domains of manufacturing and production, the Internet of Production pursues the vision of a digitized, globally interconnected, yet secure environment by establishing a distributed knowledge base. Background. As part of our collaborative research of advancing the scope of industrial applications through cybersecurity and privacy, we identified a set of common challenges and pitfalls that surface in such applied interdisciplinary collaborations. Aim. Our goal with this paper is to support researchers in the emerging field of cybersecurity in industrial settings by formalizing our experiences as reference for other research efforts, in industry and academia alike. Method. Based on our experience, we derived a process cycle of performing such interdisciplinary research, from the initial idea to the eventual dissemination and paper writing. This presented methodology strives to successfully bootstrap further research and to encourage further work in this emerging area. Results. Apart from our newly proposed process cycle, we report on our experiences and conduct a case study applying this methodology, raising awareness for challenges in cybersecurity research for industrial applications. We further detail the interplay between our process cycle and the data lifecycle in applied research data management. Finally, we augment our discussion with an industrial as well as an academic view on this research area and highlight that both areas still have to overcome significant challenges to sustainably and securely advance industrial applications. Conclusions. With our proposed process cycle for interdisciplinary research in the intersection of cybersecurity and industrial application, we provide a foundation for further research. We look forward to promising research initiatives, projects, and directions that emerge based on our methodological work.
@inproceedings{PBD+21, author = {Pennekamp, Jan and Buchholz, Erik and Dahlmanns, Markus and Kunze, Ike and Braun, Stefan and Wagner, Eric and Brockmann, Matthias and Wehrle, Klaus and Henze, Martin}, title = {{Collaboration is not Evil: A Systematic Look at Security Research for Industrial Use}}, booktitle = {Proceedings of the Workshop on Learning from Authoritative Security Experiment Results (LASER '20)}, year = {2021}, month = {12}, doi = {10.14722/laser-acsac.2020.23088}, abstract = {Following the recent Internet of Things-induced trends on digitization in general, industrial applications will further evolve as well. With a focus on the domains of manufacturing and production, the Internet of Production pursues the vision of a digitized, globally interconnected, yet secure environment by establishing a distributed knowledge base. Background. As part of our collaborative research of advancing the scope of industrial applications through cybersecurity and privacy, we identified a set of common challenges and pitfalls that surface in such applied interdisciplinary collaborations. Aim. Our goal with this paper is to support researchers in the emerging field of cybersecurity in industrial settings by formalizing our experiences as reference for other research efforts, in industry and academia alike. Method. Based on our experience, we derived a process cycle of performing such interdisciplinary research, from the initial idea to the eventual dissemination and paper writing. This presented methodology strives to successfully bootstrap further research and to encourage further work in this emerging area. Results. Apart from our newly proposed process cycle, we report on our experiences and conduct a case study applying this methodology, raising awareness for challenges in cybersecurity research for industrial applications. We further detail the interplay between our process cycle and the data lifecycle in applied research data management. Finally, we augment our discussion with an industrial as well as an academic view on this research area and highlight that both areas still have to overcome significant challenges to sustainably and securely advance industrial applications. Conclusions. With our proposed process cycle for interdisciplinary research in the intersection of cybersecurity and industrial application, we provide a foundation for further research. We look forward to promising research initiatives, projects, and directions that emerge based on our methodological work.}, meta = {}, }
- Raphael Kiesel, Falk Boehm, Jan Pennekamp, and Robert H. Schmitt. Development of a Model to Evaluate the Potential of 5G Technology for Latency-Critical Applications in Production. In Proceedings of the 28th IEEE International Conference on Industrial Engineering and Engineering Management (IEEM ’21), 12 2021.
[BibTeX] [Abstract] [DOI] [PDF]Latency-critical applications in production promise to be essential enablers for performance improvement in production. However, they require the right and often wireless communication system. 5G technology appears to be an effective way to achieve communication system for these applications. Its estimated economic benefit on production gross domestic product is immense ($740 billion Euro until 2030). However, 55% of production companies state that 5G technology deployment is currently not a subject matter for them and mainly state the lack of knowledge on benefits as a reason. Currently, it is missing an approach or model for a use case specific, data-based evaluation of 5G technology influence on the performance of production applications. Therefore, this paper presents a model to evaluate the potential of 5G technology for latency-critical applications in production. First, we derive requirements for the model to fulfill the decision-makers’ needs. Second, we analyze existing evaluation approaches regarding their fulfillment of the derived requirements. Third, based on outlined research gaps, we develop a model fulfilling the requirements. Fourth, we give an outlook for further research needs.
@inproceedings{KBPS21, author = {Kiesel, Raphael and Boehm, Falk and Pennekamp, Jan and Schmitt, Robert H.}, title = {{Development of a Model to Evaluate the Potential of 5G Technology for Latency-Critical Applications in Production}}, booktitle = {Proceedings of the 28th IEEE International Conference on Industrial Engineering and Engineering Management (IEEM '21)}, year = {2021}, month = {12}, doi = {10.1109/IEEM50564.2021.9673074}, abstract = {Latency-critical applications in production promise to be essential enablers for performance improvement in production. However, they require the right and often wireless communication system. 5G technology appears to be an effective way to achieve communication system for these applications. Its estimated economic benefit on production gross domestic product is immense ({\$}740 billion Euro until 2030). However, 55{\%} of production companies state that 5G technology deployment is currently not a subject matter for them and mainly state the lack of knowledge on benefits as a reason. Currently, it is missing an approach or model for a use case specific, data-based evaluation of 5G technology influence on the performance of production applications. Therefore, this paper presents a model to evaluate the potential of 5G technology for latency-critical applications in production. First, we derive requirements for the model to fulfill the decision-makers' needs. Second, we analyze existing evaluation approaches regarding their fulfillment of the derived requirements. Third, based on outlined research gaps, we develop a model fulfilling the requirements. Fourth, we give an outlook for further research needs.}, meta = {}, }
- Asya Mitseva, Jan Pennekamp, Johannes Lohmöller, Torsten Ziemann, Carl Hoerchner, Klaus Wehrle, and Andriy Panchenko. POSTER: How Dangerous is My Click? Boosting Website Fingerprinting By Considering Sequences of Webpages. In Proceedings of the 28th ACM SIGSAC Conference on Computer and Communications Security (CCS ’21), 11 2021.
[BibTeX] [Abstract] [DOI] [PDF]Website fingerprinting (WFP) is a special case of traffic analysis, where a passive attacker infers information about the content of encrypted and anonymized connections by observing patterns of data flows. Although modern WFP attacks pose a serious threat to online privacy of users, including Tor users, they usually aim to detect single pages only. By ignoring the browsing behavior of users, the attacker excludes valuable information: users visit multiple pages of a single website consecutively, e.g., by following links. In this paper, we propose two novel methods that can take advantage of the consecutive visits of multiple pages to detect websites. We show that two up to three clicks within a site allow attackers to boost the accuracy by more than 20% and to dramatically increase the threat to users’ privacy. We argue that WFP defenses have to consider this new dimension of the attack surface.
@inproceedings{MPL+21, author = {Mitseva, Asya and Pennekamp, Jan and Lohm{\"o}ller, Johannes and Ziemann, Torsten and Hoerchner, Carl and Wehrle, Klaus and Panchenko, Andriy}, title = {{POSTER: How Dangerous is My Click? Boosting Website Fingerprinting By Considering Sequences of Webpages}}, booktitle = {Proceedings of the 28th ACM SIGSAC Conference on Computer and Communications Security (CCS '21)}, year = {2021}, month = {11}, doi = {10.1145/3460120.3485347}, abstract = {Website fingerprinting (WFP) is a special case of traffic analysis, where a passive attacker infers information about the content of encrypted and anonymized connections by observing patterns of data flows. Although modern WFP attacks pose a serious threat to online privacy of users, including Tor users, they usually aim to detect single pages only. By ignoring the browsing behavior of users, the attacker excludes valuable information: users visit multiple pages of a single website consecutively, e.g., by following links. In this paper, we propose two novel methods that can take advantage of the consecutive visits of multiple pages to detect websites. We show that two up to three clicks within a site allow attackers to boost the accuracy by more than 20{\%} and to dramatically increase the threat to users' privacy. We argue that WFP defenses have to consider this new dimension of the attack surface.}, meta = {}, }
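To convey the underlying intuition, the sketch below fuses per-page classifier scores over a sequence of clicks by summing log-probabilities per candidate site, which sharpens the prediction compared to a single page load. This is a simplified illustration, not either of the two methods proposed in the poster; the site names and probabilities are invented.

    # Simplified illustration of score fusion across consecutive page loads (not the
    # poster's methods): summing log-probabilities per candidate site over a click
    # sequence boosts confidence compared to classifying a single page in isolation.
    import math

    def fuse_sequence(per_page_scores):
        """per_page_scores: list of dicts mapping site -> probability for one page."""
        fused = {}
        for scores in per_page_scores:
            for site, p in scores.items():
                fused[site] = fused.get(site, 0.0) + math.log(max(p, 1e-12))
        return max(fused, key=fused.get)

    if __name__ == "__main__":
        clicks = [
            {"siteA": 0.40, "siteB": 0.35, "siteC": 0.25},
            {"siteA": 0.45, "siteB": 0.30, "siteC": 0.25},
            {"siteA": 0.50, "siteB": 0.20, "siteC": 0.30},
        ]
        print(fuse_sequence(clicks))   # -> "siteA"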
- Jan Pennekamp, Frederik Fuhrmann, Markus Dahlmanns, Timo Heutmann, Alexander Kreppein, Dennis Grunert, Christoph Lange, Robert H. Schmitt, and Klaus Wehrle. Confidential Computing-Induced Privacy Benefits for the Bootstrapping of New Business Relationships. Technical Report RWTH-2021-09499, RWTH Aachen University, 11 2021. Blitz Talk at the 2021 Cloud Computing Security Workshop (CCSW ’21).
[BibTeX] [Abstract] [DOI] [PDF]In addition to quality improvements and cost reductions, dynamic and flexible business relationships are expected to become more important in the future to account for specific customer change requests or small-batch production. Today, despite reservation, sensitive information must be shared upfront between buyers and sellers. However, without a trust relation, this situation is precarious for the involved companies as they fear for their competitiveness following information leaks or breaches of their privacy. To address this issue, the concepts of confidential computing and cloud computing come to mind as they promise to offer scalable approaches that preserve the privacy of participating companies. In particular, designs building on confidential computing can help to technically enforce privacy. Moreover, cloud computing constitutes an elegant design choice to scale these novel protocols to industry needs while limiting the setup and management overhead for practitioners. Thus, novel approaches in this area can advance the status quo of bootstrapping new relationships as they provide privacy-preserving alternatives that are suitable for immediate deployment.
@techreport{PFD+21, author = {Pennekamp, Jan and Fuhrmann, Frederik and Dahlmanns, Markus and Heutmann, Timo and Kreppein, Alexander and Grunert, Dennis and Lange, Christoph and Schmitt, Robert H. and Wehrle, Klaus}, title = {{Confidential Computing-Induced Privacy Benefits for the Bootstrapping of New Business Relationships}}, institution = {RWTH Aachen University}, year = {2021}, number = {RWTH-2021-09499}, month = {11}, note = {Blitz Talk at the 2021 Cloud Computing Security Workshop (CCSW '21)}, doi = {10.18154/RWTH-2021-09499}, abstract = {In addition to quality improvements and cost reductions, dynamic and flexible business relationships are expected to become more important in the future to account for specific customer change requests or small-batch production. Today, despite reservation, sensitive information must be shared upfront between buyers and sellers. However, without a trust relation, this situation is precarious for the involved companies as they fear for their competitiveness following information leaks or breaches of their privacy. To address this issue, the concepts of confidential computing and cloud computing come to mind as they promise to offer scalable approaches that preserve the privacy of participating companies. In particular, designs building on confidential computing can help to technically enforce privacy. Moreover, cloud computing constitutes an elegant design choice to scale these novel protocols to industry needs while limiting the setup and management overhead for practitioners. Thus, novel approaches in this area can advance the status quo of bootstrapping new relationships as they provide privacy-preserving alternatives that are suitable for immediate deployment.}, meta = {}, }
- Sebastian Reuter, Jens Hiller, Jan Pennekamp, Andriy Panchenko, and Klaus Wehrle. Demo: Traffic Splitting for Tor – A Defense against Fingerprinting Attacks. In Proceedings of the 2021 International Conference on Networked Systems (NetSys ’21), 09 2021.
[BibTeX] [Abstract] [DOI] [PDF] [CODE]Website fingerprinting (WFP) attacks on the anonymity network Tor have become ever more effective. Furthermore, research discovered that proposed defenses are insufficient or cause high overhead. In previous work, we presented a new WFP defense for Tor that incorporates multipath transmissions to repel malicious Tor nodes from conducting WFP attacks. In this demo, we showcase the operation of our traffic splitting defense by visually illustrating the underlying Tor multipath transmission using LED-equipped Raspberry Pis.
@inproceedings{RHP+21, author = {Reuter, Sebastian and Hiller, Jens and Pennekamp, Jan and Panchenko, Andriy and Wehrle, Klaus}, title = {{Demo: Traffic Splitting for Tor --- A Defense against Fingerprinting Attacks}}, booktitle = {Proceedings of the 2021 International Conference on Networked Systems (NetSys '21)}, year = {2021}, month = {09}, doi = {10.14279/tuj.eceasst.80.1151}, abstract = {Website fingerprinting (WFP) attacks on the anonymity network Tor have become ever more effective. Furthermore, research discovered that proposed defenses are insufficient or cause high overhead. In previous work, we presented a new WFP defense for Tor that incorporates multipath transmissions to repel malicious Tor nodes from conducting WFP attacks. In this demo, we showcase the operation of our traffic splitting defense by visually illustrating the underlying Tor multipath transmission using LED-equipped Raspberry Pis.}, code = {https://github.com/TrafficSliver/trafficsliver-net-demo}, journal = {Electronic Communications of the EASST}, meta = {}, }
- Jan Pennekamp, Roman Matzutt, Salil S. Kanhere, Jens Hiller, and Klaus Wehrle. The Road to Accountable and Dependable Manufacturing. Automation, 2(3), 09 2021.
[BibTeX] [Abstract] [DOI] [PDF]The Internet of Things provides manufacturing with rich data for increased automation. Beyond company-internal data exploitation, the sharing of product and manufacturing process data along and across supply chains enables more efficient production flows and product lifecycle management. Even more, data-based automation facilitates short-lived ad hoc collaborations, realizing highly dynamic business relationships for sustainable exploitation of production resources and capacities. However, the sharing and use of business data across manufacturers and with end customers add requirements on data accountability, verifiability, and reliability and needs to consider security and privacy demands. While research has already identified blockchain technology as a key technology to address these challenges, current solutions mainly evolve around logistics or focus on established business relationships instead of automated but highly dynamic collaborations that cannot draw upon long-term trust relationships. We identify three open research areas on the road to such a truly accountable and dependable manufacturing enabled by blockchain technology: blockchain-inherent challenges, scenario-driven challenges, and socio-economic challenges. Especially tackling the scenario-driven challenges, we discuss requirements and options for realizing a blockchain-based trustworthy information store and outline its use for automation to achieve a reliable sharing of product information, efficient and dependable collaboration, and dynamic distributed markets without requiring established long-term trust.
@article{PMK+21, author = {Pennekamp, Jan and Matzutt, Roman and Kanhere, Salil S. and Hiller, Jens and Wehrle, Klaus}, title = {{The Road to Accountable and Dependable Manufacturing}}, journal = {Automation}, year = {2021}, volume = {2}, number = {3}, publisher = {MDPI}, month = {09}, doi = {10.3390/automation2030013}, issn = {2673-4052}, abstract = {The Internet of Things provides manufacturing with rich data for increased automation. Beyond company-internal data exploitation, the sharing of product and manufacturing process data along and across supply chains enables more efficient production flows and product lifecycle management. Even more, data-based automation facilitates short-lived ad hoc collaborations, realizing highly dynamic business relationships for sustainable exploitation of production resources and capacities. However, the sharing and use of business data across manufacturers and with end customers add requirements on data accountability, verifiability, and reliability and needs to consider security and privacy demands. While research has already identified blockchain technology as a key technology to address these challenges, current solutions mainly evolve around logistics or focus on established business relationships instead of automated but highly dynamic collaborations that cannot draw upon long-term trust relationships. We identify three open research areas on the road to such a truly accountable and dependable manufacturing enabled by blockchain technology: blockchain-inherent challenges, scenario-driven challenges, and socio-economic challenges. Especially tackling the scenario-driven challenges, we discuss requirements and options for realizing a blockchain-based trustworthy information store and outline its use for automation to achieve a reliable sharing of product information, efficient and dependable collaboration, and dynamic distributed markets without requiring established long-term trust.}, meta = {}, }
- Roman Matzutt, Benedikt Kalde, Jan Pennekamp, Arthur Drichel, Martin Henze, and Klaus Wehrle. CoinPrune: Shrinking Bitcoin’s Blockchain Retrospectively. IEEE Transactions on Network and Service Management, 18(3), 09 2021.
[BibTeX] [Abstract] [DOI] [PDF] [CODE]Popular cryptocurrencies continue to face serious scalability issues due to their ever-growing blockchains. Thus, modern blockchain designs began to prune old blocks and rely on recent snapshots for their bootstrapping processes instead. Unfortunately, established systems are often considered incapable of adopting these improvements. In this work, we present CoinPrune, our block-pruning scheme with full Bitcoin compatibility, to revise this popular belief. CoinPrune bootstraps joining nodes via snapshots that are periodically created from Bitcoin’s set of unspent transaction outputs (UTXO set). Our scheme establishes trust in these snapshots by relying on CoinPrune-supporting miners to mutually reaffirm a snapshot’s correctness on the blockchain. This way, snapshots remain trustworthy even if adversaries attempt to tamper with them. Our scheme maintains its retrospective deployability by relying on positive feedback only, i.e., blocks containing invalid reaffirmations are not rejected, but invalid reaffirmations are outpaced by the benign ones created by an honest majority among CoinPrune-supporting miners. Already today, CoinPrune reduces the storage requirements for Bitcoin nodes by two orders of magnitude, as joining nodes need to fetch and process only 6 GiB instead of 271 GiB of data in our evaluation, reducing the synchronization time of powerful devices from currently 7 h to 51 min, with even larger potential drops for less powerful devices. CoinPrune is further aware of higher-level application data, i.e., it conserves otherwise pruned application data and allows nodes to obfuscate objectionable and potentially illegal blockchain content from their UTXO set and the snapshots they distribute.
@article{MKP+21, author = {Matzutt, Roman and Kalde, Benedikt and Pennekamp, Jan and Drichel, Arthur and Henze, Martin and Wehrle, Klaus}, title = {{CoinPrune: Shrinking Bitcoin's Blockchain Retrospectively}}, journal = {IEEE Transactions on Network and Service Management}, year = {2021}, volume = {18}, number = {3}, publisher = {IEEE}, month = {09}, doi = {10.1109/TNSM.2021.3073270}, issn = {1932-4537}, abstract = {Popular cryptocurrencies continue to face serious scalability issues due to their ever-growing blockchains. Thus, modern blockchain designs began to prune old blocks and rely on recent snapshots for their bootstrapping processes instead. Unfortunately, established systems are often considered incapable of adopting these improvements. In this work, we present CoinPrune, our block-pruning scheme with full Bitcoin compatibility, to revise this popular belief. CoinPrune bootstraps joining nodes via snapshots that are periodically created from Bitcoin's set of unspent transaction outputs (UTXO set). Our scheme establishes trust in these snapshots by relying on CoinPrune-supporting miners to mutually reaffirm a snapshot's correctness on the blockchain. This way, snapshots remain trustworthy even if adversaries attempt to tamper with them. Our scheme maintains its retrospective deployability by relying on positive feedback only, i.e., blocks containing invalid reaffirmations are not rejected, but invalid reaffirmations are outpaced by the benign ones created by an honest majority among CoinPrune-supporting miners. Already today, CoinPrune reduces the storage requirements for Bitcoin nodes by two orders of magnitude, as joining nodes need to fetch and process only 6 GiB instead of 271 GiB of data in our evaluation, reducing the synchronization time of powerful devices from currently 7 h to 51 min, with even larger potential drops for less powerful devices. CoinPrune is further aware of higher-level application data, i.e., it conserves otherwise pruned application data and allows nodes to obfuscate objectionable and potentially illegal blockchain content from their UTXO set and the snapshots they distribute.}, code = {https://github.com/COMSYS/coinprune}, meta = {}, }
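The snapshot-reaffirmation idea sketched in the abstract above can be illustrated with a small Python toy (hypothetical block records and an assumed majority threshold; not Bitcoin consensus code or the actual CoinPrune implementation linked above): a joining node accepts a snapshot only if enough blocks in the reaffirmation window reaffirm it, so tampered identifiers are simply outvoted by the honest majority.

```python
from collections import Counter

def accept_snapshot(window_blocks, threshold_ratio=0.5):
    """Accept the snapshot reaffirmed by more than threshold_ratio of the blocks
    in the reaffirmation window; blocks without (or with bogus) reaffirmations
    simply do not count (positive feedback only, as described in the abstract)."""
    reaffirmations = Counter(
        block["snapshot_id"] for block in window_blocks if block.get("snapshot_id")
    )
    if not reaffirmations:
        return None
    snapshot_id, votes = reaffirmations.most_common(1)[0]
    return snapshot_id if votes > threshold_ratio * len(window_blocks) else None

# Toy window: seven blocks reaffirm snapshot "S-700000", one carries no
# reaffirmation, and one carries a conflicting (tampered) identifier.
window = [{"snapshot_id": "S-700000"}] * 7 + [{}, {"snapshot_id": "S-bogus"}]
print(accept_snapshot(window))  # S-700000
```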
- Jan Pennekamp, Martin Henze, and Klaus Wehrle. Unlocking Secure Industrial Collaborations through Privacy-Preserving Computation. ERCIM News, 126, 07 2021.
[BibTeX] [Abstract] [PDF]In industrial settings, significant process improvements can be achieved when utilising and sharing information across stakeholders. However, traditionally conservative companies impose significant confidentiality requirements for any (external) data processing. We discuss how privacy-preserving computation can unlock secure and private collaborations even in such competitive environments.
@article{PHW21, author = {Pennekamp, Jan and Henze, Martin and Wehrle, Klaus}, title = {{Unlocking Secure Industrial Collaborations through Privacy-Preserving Computation}}, journal = {ERCIM News}, year = {2021}, volume = {126}, publisher = {ERCIM EEIG}, month = {07}, issn = {0926-4981}, abstract = {In industrial settings, significant process improvements can be achieved when utilising and sharing information across stakeholders. However, traditionally conservative companies impose significant confidentiality requirements for any (external) data processing. We discuss how privacy-preserving computation can unlock secure and private collaborations even in such competitive environments.}, meta = {}, }
- Simon Mangel, Lars Gleim, Jan Pennekamp, Klaus Wehrle, and Stefan Decker. Data Reliability and Trustworthiness through Digital Transmission Contracts. In Proceedings of the 18th Extended Semantic Web Conference (ESWC ’21), 06 2021.
[BibTeX] [Abstract] [DOI] [PDF] [CODE]As decision-making is increasingly data-driven, trustworthiness and reliability of the underlying data, e.g., maintained in knowledge graphs or on the Web, are essential requirements for their usability in the industry. However, neither traditional solutions, such as paper-based data curation processes, nor state-of-the-art approaches, such as distributed ledger technologies, adequately scale to the complex requirements and high throughput of continuously evolving industrial data. Motivated by a practical use case with high demands towards data trustworthiness and reliability, we identify the need for digitally-verifiable data immutability as a still insufficiently addressed dimension of data quality. Based on our discussion of shortcomings in related work, we thus propose ReShare, our novel concept of digital transmission contracts with bilateral signatures, to address this open issue for both RDF knowledge graphs and arbitrary data on the Web. Our quantitative evaluation of ReShare’s performance and scalability reveals only moderate computation and communication overhead, indicating significant potential for cost-reductions compared to today’s approaches. By cleverly integrating digital transmission contracts with existing Web-based information systems, ReShare provides a promising foundation for data sharing and reuse in Industry 4.0 and beyond, enabling digital accountability through easily-adoptable digitally-verifiable data immutability and non-repudiation.
@inproceedings{MGPWD21, author = {Mangel, Simon and Gleim, Lars and Pennekamp, Jan and Wehrle, Klaus and Decker, Stefan}, title = {{Data Reliability and Trustworthiness through Digital Transmission Contracts}}, booktitle = {Proceedings of the 18th Extended Semantic Web Conference (ESWC '21)}, year = {2021}, month = {06}, doi = {10.1007/978-3-030-77385-4_16}, abstract = {As decision-making is increasingly data-driven, trustworthiness and reliability of the underlying data, e.g., maintained in knowledge graphs or on the Web, are essential requirements for their usability in the industry. However, neither traditional solutions, such as paper-based data curation processes, nor state-of-the-art approaches, such as distributed ledger technologies, adequately scale to the complex requirements and high throughput of continuously evolving industrial data. Motivated by a practical use case with high demands towards data trustworthiness and reliability, we identify the need for digitally-verifiable data immutability as a still insufficiently addressed dimension of data quality. Based on our discussion of shortcomings in related work, we thus propose ReShare, our novel concept of digital transmission contracts with bilateral signatures, to address this open issue for both RDF knowledge graphs and arbitrary data on the Web. Our quantitative evaluation of ReShare's performance and scalability reveals only moderate computation and communication overhead, indicating significant potential for cost-reductions compared to today's approaches. By cleverly integrating digital transmission contracts with existing Web-based information systems, ReShare provides a promising foundation for data sharing and reuse in Industry 4.0 and beyond, enabling digital accountability through easily-adoptable digitally-verifiable data immutability and non-repudiation.}, code = {https://git.rwth-aachen.de/i5/factdag/factcheck.js}, code2 = {http://i5.pages.rwth-aachen.de/factdag/reshare-ontology/0.1/}, meta = {}, }
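To make the notion of a digital transmission contract with bilateral signatures more tangible, here is a minimal Python sketch assuming the cryptography package and hypothetical metadata fields (it is not ReShare's actual protocol or ontology): sender and receiver both sign a digest that binds the transmitted resource to its transmission metadata, giving either side a verifiable, non-repudiable record.

```python
import hashlib
import json
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

sender_key, receiver_key = Ed25519PrivateKey.generate(), Ed25519PrivateKey.generate()

def contract_digest(resource: bytes, metadata: dict) -> bytes:
    """Hash the transmitted resource together with its transmission metadata."""
    blob = hashlib.sha256(resource).hexdigest() + json.dumps(metadata, sort_keys=True)
    return hashlib.sha256(blob.encode()).digest()

resource = b"<rdf:RDF>...</rdf:RDF>"  # the transmitted data (placeholder content)
metadata = {"from": "supplier.example", "to": "oem.example", "ts": "2021-06-01T12:00:00Z"}
digest = contract_digest(resource, metadata)

# Both parties sign the same digest: a bilateral, digitally verifiable record.
contract = {
    "digest": digest.hex(),
    "sender_sig": sender_key.sign(digest).hex(),
    "receiver_sig": receiver_key.sign(digest).hex(),
}

# Later, anyone holding the public keys can check non-repudiation; verify()
# raises InvalidSignature if the resource, metadata, or signatures were altered.
sender_key.public_key().verify(bytes.fromhex(contract["sender_sig"]), digest)
receiver_key.public_key().verify(bytes.fromhex(contract["receiver_sig"]), digest)
print("transmission contract verified")
```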
- Lars Gleim, Jan Pennekamp, Liam Tirpitz, Sascha Welten, Florian Brillowski, and Stefan Decker. FactStack: Interoperable Data Management and Preservation for the Web and Industry 4.0. In Proceedings of the 19th Symposium for Database Systems for Business, Technology and Web (BTW ’21), 05 2021.
[BibTeX] [Abstract] [DOI] [PDF] [CODE]Data exchange throughout the supply chain is essential for the agile and adaptive manufacturing processes of Industry 4.0. As companies employ numerous, frequently mutually incompatible data management and preservation approaches, interorganizational data sharing and reuse regularly requires human interaction and is thus associated with high overhead costs. An interoperable system, supporting the unified management, preservation and exchange of data across organizational boundaries is missing to date. We propose FactStack, a unified approach to data management and preservation based upon a novel combination of existing Web-standards and tightly integrated with the HTTP protocol itself. Based on the FactDAG model, FactStack guides and supports the full data lifecycle in a FAIR and interoperable manner, independent of individual software solutions and backward-compatible with existing resource oriented architectures. We describe our reference implementation of the approach and evaluate its performance, showcasing scalability even to high-throughput applications. We analyze the system’s applicability to industry using a representative real-world use case in aircraft manufacturing based on principal requirements identified in prior work. We conclude that FactStack fulfills all requirements and provides a promising solution for the on-demand integration of persistence and provenance into existing resource-oriented architectures, facilitating data management and preservation for the agile and interorganizational manufacturing processes of Industry 4.0. Through its open source distribution, it is readily available for adoption by the community, paving the way for improved utility and usability of data management and preservation in digital manufacturing and supply chains.
@inproceedings{GPT+21, author = {Gleim, Lars and Pennekamp, Jan and Tirpitz, Liam and Welten, Sascha and Brillowski, Florian and Decker, Stefan}, title = {{FactStack: Interoperable Data Management and Preservation for the Web and Industry 4.0}}, booktitle = {Proceedings of the 19th Symposium for Database Systems for Business, Technology and Web (BTW '21)}, year = {2021}, month = {05}, doi = {10.18420/btw2021-20}, abstract = {Data exchange throughout the supply chain is essential for the agile and adaptive manufacturing processes of Industry 4.0. As companies employ numerous, frequently mutually incompatible data management and preservation approaches, interorganizational data sharing and reuse regularly requires human interaction and is thus associated with high overhead costs. An interoperable system, supporting the unified management, preservation and exchange of data across organizational boundaries is missing to date. We propose FactStack, a unified approach to data management and preservation based upon a novel combination of existing Web-standards and tightly integrated with the HTTP protocol itself. Based on the FactDAG model, FactStack guides and supports the full data lifecycle in a FAIR and interoperable manner, independent of individual software solutions and backward-compatible with existing resource oriented architectures. We describe our reference implementation of the approach and evaluate its performance, showcasing scalability even to high-throughput applications. We analyze the system's applicability to industry using a representative real-world use case in aircraft manufacturing based on principal requirements identified in prior work. We conclude that FactStack fulfills all requirements and provides a promising solution for the on-demand integration of persistence and provenance into existing resource-oriented architectures, facilitating data management and preservation for the agile and interorganizational manufacturing processes of Industry 4.0. Through its open source distribution, it is readily available for adoption by the community, paving the way for improved utility and usability of data management and preservation in digital manufacturing and supply chains.}, code = {https://git.rwth-aachen.de/i5/factdag/factlibjs}, code2 = {https://git.rwth-aachen.de/i5/factdag/trellis}, meta = {}, }
- Armin F. Buckhorst, Benjamin Montavon, Dominik Wolfschläger, Melanie Buchsbaum, Amir Shahidi, Henning Petruck, Ike Kunze, Jan Pennekamp, Christian Brecher, Mathias Hüsing, Burkhard Corves, Verena Nitsch, Klaus Wehrle, and Robert H. Schmitt. Holarchy for Line-less Mobile Assembly Systems Operation in the Context of the Internet of Production. Procedia CIRP, 99, 05 2021. Proceedings of the 14th CIRP Conference on Intelligent Computation in Manufacturing Engineering (ICME ’20).
[BibTeX] [Abstract] [DOI] [PDF]Assembly systems must provide maximum flexibility qualified by organization and technology to offer cost-compliant performance features to differentiate themselves from competitors in buyers’ markets. By mobilization of multipurpose resources and dynamic planning, Line-less Mobile Assembly Systems (LMASs) offer organizational reconfigurability. By proposing a holarchy to combine LMASs with the concept of an Internet of Production (IoP), we enable LMASs to source valuable information from cross-level production networks, physical resources, software nodes, and data stores that are interconnected in an IoP. The presented holarchy provides a concept of how to address future challenges, meet the requirements of shorter lead times, and unique lifecycle support. The paper suggests an application of decision making, distributed sensor services, recommender-based data reduction, and in-network computing while considering safety and human usability alike.
@article{BMW+21, author = {Buckhorst, Armin F. and Montavon, Benjamin and Wolfschl{\"a}ger, Dominik and Buchsbaum, Melanie and Shahidi, Amir and Petruck, Henning and Kunze, Ike and Pennekamp, Jan and Brecher, Christian and H{\"u}sing, Mathias and Corves, Burkhard and Nitsch, Verena and Wehrle, Klaus and Schmitt, Robert H.}, title = {{Holarchy for Line-less Mobile Assembly Systems Operation in the Context of the Internet of Production}}, journal = {Procedia CIRP}, year = {2021}, volume = {99}, publisher = {Elsevier}, month = {05}, doi = {10.1016/j.procir.2021.03.064}, issn = {2212-8271}, note = {Proceedings of the 14th CIRP Conference on Intelligent Computation in Manufacturing Engineering (ICME '20)}, abstract = {Assembly systems must provide maximum flexibility qualified by organization and technology to offer cost-compliant performance features to differentiate themselves from competitors in buyers' markets. By mobilization of multipurpose resources and dynamic planning, Line-less Mobile Assembly Systems (LMASs) offer organizational reconfigurability. By proposing a holarchy to combine LMASs with the concept of an Internet of Production (IoP), we enable LMASs to source valuable information from cross-level production networks, physical resources, software nodes, and data stores that are interconnected in an IoP. The presented holarchy provides a concept of how to address future challenges, meet the requirements of shorter lead times, and unique lifecycle support. The paper suggests an application of decision making, distributed sensor services, recommender-based data reduction, and in-network computing while considering safety and human usability alike.}, meta = {}, }
- Lennart Bader, Jan Pennekamp, Roman Matzutt, David Hedderich, Markus Kowalski, Volker Lücken, and Klaus Wehrle. Blockchain-Based Privacy Preservation for Supply Chains Supporting Lightweight Multi-Hop Information Accountability. Information Processing & Management, 58(3), 05 2021.
[BibTeX] [Abstract] [DOI] [PDF] [CODE]The benefits of information sharing along supply chains are well known for improving productivity and reducing costs. However, with the shift towards more dynamic and flexible supply chains, privacy concerns severely challenge the required information retrieval. A lack of trust between the different involved stakeholders inhibits advanced, multi-hop information flows, as valuable information for tracking and tracing products and parts is either unavailable or only retained locally. Our extensive literature review of previous approaches shows that these needs for cross-company information retrieval are widely acknowledged, but related work currently only addresses them insufficiently. To overcome these concerns, we present PrivAccIChain, a secure, privacy-preserving architecture for improving the multi-hop information retrieval with stakeholder accountability along supply chains. To address use case-specific needs, we particularly introduce an adaptable configuration of transparency and data privacy within our design. Hence, we enable the benefits of information sharing as well as multi-hop tracking and tracing even in supply chains that include mutually distrusting stakeholders. We evaluate the performance of PrivAccIChain and demonstrate its real-world feasibility based on the information of a purchasable automobile, the e.GO Life. We further conduct an in-depth security analysis and propose tunable mitigations against common attacks. As such, we attest PrivAccIChain’s practicability for information management even in complex supply chains with flexible and dynamic business relationships.
@article{BPM+21, author = {Bader, Lennart and Pennekamp, Jan and Matzutt, Roman and Hedderich, David and Kowalski, Markus and L{\"u}cken, Volker and Wehrle, Klaus}, title = {{Blockchain-Based Privacy Preservation for Supply Chains Supporting Lightweight Multi-Hop Information Accountability}}, journal = {Information Processing {\&} Management}, year = {2021}, volume = {58}, number = {3}, publisher = {Elsevier}, month = {05}, doi = {10.1016/j.ipm.2021.102529}, issn = {0306-4573}, abstract = {The benefits of information sharing along supply chains are well known for improving productivity and reducing costs. However, with the shift towards more dynamic and flexible supply chains, privacy concerns severely challenge the required information retrieval. A lack of trust between the different involved stakeholders inhibits advanced, multi-hop information flows, as valuable information for tracking and tracing products and parts is either unavailable or only retained locally. Our extensive literature review of previous approaches shows that these needs for cross-company information retrieval are widely acknowledged, but related work currently only addresses them insufficiently. To overcome these concerns, we present PrivAccIChain, a secure, privacy-preserving architecture for improving the multi-hop information retrieval with stakeholder accountability along supply chains. To address use case-specific needs, we particularly introduce an adaptable configuration of transparency and data privacy within our design. Hence, we enable the benefits of information sharing as well as multi-hop tracking and tracing even in supply chains that include mutually distrusting stakeholders. We evaluate the performance of PrivAccIChain and demonstrate its real-world feasibility based on the information of a purchasable automobile, the e.GO Life. We further conduct an in-depth security analysis and propose tunable mitigations against common attacks. As such, we attest PrivAccIChain's practicability for information management even in complex supply chains with flexible and dynamic business relationships.}, code = {https://github.com/COMSYS/PrivAccIChain}, meta = {}, }
- Markus Dahlmanns, Jan Pennekamp, Ina Berenice Fink, Bernd Schoolmann, Klaus Wehrle, and Martin Henze. Transparent End-to-End Security for Publish/Subscribe Communication in Cyber-Physical Systems. In Proceedings of the 1st ACM Workshop on Secure and Trustworthy Cyber-Physical Systems (SaT-CPS ’21), 04 2021.
[BibTeX] [Abstract] [DOI] [PDF]The ongoing digitization of industrial manufacturing leads to a decisive change in industrial communication paradigms. Moving from traditional one-to-one to many-to-many communication, publish/subscribe systems promise a more dynamic and efficient exchange of data. However, the resulting significantly more complex communication relationships render traditional end-to-end security futile for sufficiently protecting the sensitive and safety-critical data transmitted in industrial systems. Most notably, the central message brokers inherent in publish/subscribe systems introduce a designated weak spot for security as they can access all communication messages. To address this issue, we propose ENTRUST, a novel solution for key server-based end-to-end security in publish/subscribe systems. ENTRUST transparently realizes confidentiality, integrity, and authentication for publish/subscribe systems without any modification of the underlying protocol. We exemplarily implement ENTRUST on top of MQTT, the de-facto standard for machine-to-machine communication, showing that ENTRUST can integrate seamlessly into existing publish/subscribe systems.
@inproceedings{DPF+21, author = {Dahlmanns, Markus and Pennekamp, Jan and Fink, Ina Berenice and Schoolmann, Bernd and Wehrle, Klaus and Henze, Martin}, title = {{Transparent End-to-End Security for Publish/Subscribe Communication in Cyber-Physical Systems}}, booktitle = {Proceedings of the 1st ACM Workshop on Secure and Trustworthy Cyber-Physical Systems (SaT-CPS '21)}, year = {2021}, month = {04}, doi = {10.1145/3445969.3450423}, abstract = {The ongoing digitization of industrial manufacturing leads to a decisive change in industrial communication paradigms. Moving from traditional one-to-one to many-to-many communication, publish/subscribe systems promise a more dynamic and efficient exchange of data. However, the resulting significantly more complex communication relationships render traditional end-to-end security futile for sufficiently protecting the sensitive and safety-critical data transmitted in industrial systems. Most notably, the central message brokers inherent in publish/subscribe systems introduce a designated weak spot for security as they can access all communication messages. To address this issue, we propose ENTRUST, a novel solution for key server-based end-to-end security in publish/subscribe systems. ENTRUST transparently realizes confidentiality, integrity, and authentication for publish/subscribe systems without any modification of the underlying protocol. We exemplarily implement ENTRUST on top of MQTT, the de-facto standard for machine-to-machine communication, showing that ENTRUST can integrate seamlessly into existing publish/subscribe systems.}, meta = {}, }
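A minimal sketch of the broker-oblivious principle described above, assuming a pre-shared topic key and the Python cryptography package (ENTRUST itself derives and distributes keys via a key server and works transparently underneath unmodified MQTT): the payload is encrypted and authenticated end-to-end, so the broker only ever forwards ciphertext.

```python
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

# Assumption for this sketch: publisher and subscribers already share a topic
# key (ENTRUST instead derives and distributes keys via a key server).
topic = "factory/press-7/temperature"
topic_key = AESGCM.generate_key(bit_length=256)

def seal(payload: bytes) -> bytes:
    """Encrypt-and-authenticate a payload; the topic is bound as associated data."""
    nonce = os.urandom(12)
    return nonce + AESGCM(topic_key).encrypt(nonce, payload, topic.encode())

def unseal(sealed: bytes) -> bytes:
    nonce, ciphertext = sealed[:12], sealed[12:]
    return AESGCM(topic_key).decrypt(nonce, ciphertext, topic.encode())

# The broker only ever sees (topic, sealed); e.g., with paho-mqtt one would pass
# the sealed bytes to an ordinary publish() call and unseal in on_message.
sealed = seal(b'{"temperature": 73.4}')
print(unseal(sealed))  # b'{"temperature": 73.4}'
```

Binding the topic name as associated data ensures a ciphertext replayed on a different topic fails authentication, which matches the end-to-end integrity goal stated in the abstract.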
2020
- Jan Pennekamp, Patrick Sapel, Ina Berenice Fink, Simon Wagner, Sebastian Reuter, Christian Hopmann, Klaus Wehrle, and Martin Henze. Revisiting the Privacy Needs of Real-World Applicable Company Benchmarking. In Proceedings of the 8th Workshop on Encrypted Computing & Applied Homomorphic Cryptography (WAHC ’20), 12 2020.
[BibTeX] [Abstract] [DOI] [PDF]Benchmarking the performance of companies is essential to identify improvement potentials in various industries. Due to a competitive environment, this process imposes strong privacy needs, as leaked business secrets can have devastating effects on participating companies. Consequently, related work proposes to protect sensitive input data of companies using secure multi-party computation or homomorphic encryption. However, related work so far does not consider that also the benchmarking algorithm, used in today’s applied real-world scenarios to compute all relevant statistics, itself contains significant intellectual property, and thus needs to be protected. Addressing this issue, we present PCB – a practical design for Privacy-preserving Company Benchmarking that utilizes homomorphic encryption and a privacy proxy – which is specifically tailored for realistic real-world applications in which we protect companies’ sensitive input data and the valuable algorithms used to compute underlying key performance indicators. We evaluate PCB’s performance using synthetic measurements and showcase its applicability alongside an actual company benchmarking performed in the domain of injection molding, covering 48 distinct key performance indicators calculated out of hundreds of different input values. By protecting the privacy of all participants, we enable them to fully profit from the benefits of company benchmarking.
@inproceedings{PSF+20, author = {Pennekamp, Jan and Sapel, Patrick and Fink, Ina Berenice and Wagner, Simon and Reuter, Sebastian and Hopmann, Christian and Wehrle, Klaus and Henze, Martin}, title = {{Revisiting the Privacy Needs of Real-World Applicable Company Benchmarking}}, booktitle = {Proceedings of the 8th Workshop on Encrypted Computing {\&} Applied Homomorphic Cryptography (WAHC '20)}, year = {2020}, month = {12}, doi = {10.25835/0072999}, abstract = {Benchmarking the performance of companies is essential to identify improvement potentials in various industries. Due to a competitive environment, this process imposes strong privacy needs, as leaked business secrets can have devastating effects on participating companies. Consequently, related work proposes to protect sensitive input data of companies using secure multi-party computation or homomorphic encryption. However, related work so far does not consider that also the benchmarking algorithm, used in today's applied real-world scenarios to compute all relevant statistics, itself contains significant intellectual property, and thus needs to be protected. Addressing this issue, we present PCB --- a practical design for Privacy-preserving Company Benchmarking that utilizes homomorphic encryption and a privacy proxy --- which is specifically tailored for realistic real-world applications in which we protect companies' sensitive input data and the valuable algorithms used to compute underlying key performance indicators. We evaluate PCB's performance using synthetic measurements and showcase its applicability alongside an actual company benchmarking performed in the domain of injection molding, covering 48 distinct key performance indicators calculated out of hundreds of different input values. By protecting the privacy of all participants, we enable them to fully profit from the benefits of company benchmarking.}, meta = {}, }
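The combination of homomorphic encryption and a privacy proxy can be illustrated with a short sketch using the third-party python-paillier package (phe); note that this only shows additively homomorphic aggregation of company inputs and does not capture PCB's protection of the operator's KPI algorithms.

```python
from phe import paillier  # third-party python-paillier package (assumption for this sketch)

# The analyst (key holder) generates the keypair; companies only see the public key.
public_key, private_key = paillier.generate_paillier_keypair()

# Each company encrypts its sensitive figure (e.g., scrap rate in basis points).
company_inputs = [312, 287, 455, 198]
ciphertexts = [public_key.encrypt(x) for x in company_inputs]

# An untrusted privacy proxy can aggregate the ciphertexts without learning any input.
encrypted_sum = sum(ciphertexts[1:], ciphertexts[0])

# Only the key holder decrypts the aggregate, e.g., to derive the mean KPI.
mean = private_key.decrypt(encrypted_sum) / len(company_inputs)
print(mean)  # 313.0
```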
- Jan Pennekamp, Erik Buchholz, Yannik Lockner, Markus Dahlmanns, Tiandong Xi, Marcel Fey, Christian Brecher, Christian Hopmann, and Klaus Wehrle. Privacy-Preserving Production Process Parameter Exchange. In Proceedings of the 36th Annual Computer Security Applications Conference (ACSAC ’20), 12 2020.
[BibTeX] [Abstract] [DOI] [PDF] [CODE]Nowadays, collaborations between industrial companies always go hand in hand with trust issues, i.e., exchanging valuable production data entails the risk of improper use of potentially sensitive information. Therefore, companies hesitate to offer their production data, e.g., process parameters that would allow other companies to establish new production lines faster, against a quid pro quo. Nevertheless, the expected benefits of industrial collaboration, data exchanges, and the utilization of external knowledge are significant. In this paper, we introduce our Bloom filter-based Parameter Exchange (BPE), which enables companies to exchange process parameters privacy-preservingly. We demonstrate the applicability of our platform based on two distinct real-world use cases: injection molding and machine tools. We show that BPE is both scalable and deployable for different needs to foster industrial collaborations. Thereby, we reward data-providing companies with payments while preserving their valuable data and reducing the risks of data leakage.
@inproceedings{PBL+20, author = {Pennekamp, Jan and Buchholz, Erik and Lockner, Yannik and Dahlmanns, Markus and Xi, Tiandong and Fey, Marcel and Brecher, Christian and Hopmann, Christian and Wehrle, Klaus}, title = {{Privacy-Preserving Production Process Parameter Exchange}}, booktitle = {Proceedings of the 36th Annual Computer Security Applications Conference (ACSAC '20)}, year = {2020}, month = {12}, doi = {10.1145/3427228.3427248}, abstract = {Nowadays, collaborations between industrial companies always go hand in hand with trust issues, i.e., exchanging valuable production data entails the risk of improper use of potentially sensitive information. Therefore, companies hesitate to offer their production data, e.g., process parameters that would allow other companies to establish new production lines faster, against a quid pro quo. Nevertheless, the expected benefits of industrial collaboration, data exchanges, and the utilization of external knowledge are significant. In this paper, we introduce our Bloom filter-based Parameter Exchange (BPE), which enables companies to exchange process parameters privacy-preservingly. We demonstrate the applicability of our platform based on two distinct real-world use cases: injection molding and machine tools. We show that BPE is both scalable and deployable for different needs to foster industrial collaborations. Thereby, we reward data-providing companies with payments while preserving their valuable data and reducing the risks of data leakage.}, code = {https://github.com/COMSYS/parameter-exchange}, meta = {}, }
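As an illustration of the Bloom-filter matching idea named in the abstract above, the following minimal Python sketch (hypothetical parameter identifiers; not the paper's BPE platform, which additionally handles key protection, payments, and leakage risks) shows how a provider could advertise available process parameters so that interested parties can probe for matches without receiving the raw data set.

```python
import hashlib
import math

class BloomFilter:
    """Tiny Bloom filter over string items (illustrative; not the paper's BPE)."""

    def __init__(self, capacity: int, error_rate: float = 0.01):
        # Standard sizing formulas for m bits and k hash functions.
        self.m = math.ceil(-capacity * math.log(error_rate) / (math.log(2) ** 2))
        self.k = max(1, round(self.m / capacity * math.log(2)))
        self.bits = bytearray((self.m + 7) // 8)

    def _positions(self, item: str):
        # Double hashing: derive k bit positions from one SHA-256 digest.
        digest = hashlib.sha256(item.encode()).digest()
        h1 = int.from_bytes(digest[:8], "big")
        h2 = int.from_bytes(digest[8:16], "big") or 1
        return [(h1 + i * h2) % self.m for i in range(self.k)]

    def add(self, item: str) -> None:
        for pos in self._positions(item):
            self.bits[pos // 8] |= 1 << (pos % 8)

    def __contains__(self, item: str) -> bool:
        return all(self.bits[pos // 8] >> (pos % 8) & 1 for pos in self._positions(item))

# A provider publishes a filter over hashed parameter identifiers ...
available = BloomFilter(capacity=1000)
available.add("injection-molding|material=PA66|wall=2.0mm")

# ... and an interested party probes for matching parameter sets without the
# provider revealing its full data set upfront.
print("injection-molding|material=PA66|wall=2.0mm" in available)  # True
print("machine-tool|spindle=12000rpm" in available)  # almost surely False (only false positives possible)
```

The filter trades a small, tunable false-positive rate for never leaking which other parameter sets the provider holds, which is why this primitive fits the privacy-preserving matchmaking setting described above.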
- Lars Gleim, Liam Tirpitz, Jan Pennekamp, and Stefan Decker. Expressing FactDAG Provenance with PROV-O. In Proceedings of the 6th Workshop on Managing the Evolution and Preservation of the Data Web (MEPDaW ’20), 11 2020.
[BibTeX] [Abstract] [PDF]To foster data sharing and reuse across organizational boundaries, provenance tracking is of vital importance for the establishment of trust and accountability, especially in industrial applications, but often neglected due to associated overhead. The abstract FactDAG data interoperability model strives to address this challenge by simplifying the creation of provenance-linked knowledge graphs of revisioned (and thus immutable) resources. However, to date, it lacks a practical provenance implementation. In this work, we present a concrete alignment of all roles and relations in the FactDAG model to the W3C PROV provenance standard, allowing future software implementations to directly produce standard-compliant provenance information. Maintaining compatibility with existing PROV tooling, an implementation of this mapping will pave the way for practical FactDAG implementations and deployments, improving trust and accountability for Open Data through simplified provenance management.
@inproceedings{GTPD20, author = {Gleim, Lars and Tirpitz, Liam and Pennekamp, Jan and Decker, Stefan}, title = {{Expressing FactDAG Provenance with PROV-O}}, booktitle = {Proceedings of the 6th Workshop on Managing the Evolution and Preservation of the Data Web (MEPDaW '20)}, year = {2020}, month = {11}, abstract = {To foster data sharing and reuse across organizational boundaries, provenance tracking is of vital importance for the establishment of trust and accountability, especially in industrial applications, but often neglected due to associated overhead. The abstract FactDAG data interoperability model strives to address this challenge by simplifying the creation of provenance-linked knowledge graphs of revisioned (and thus immutable) resources. However, to date, it lacks a practical provenance implementation. In this work, we present a concrete alignment of all roles and relations in the FactDAG model to the W3C PROV provenance standard, allowing future software implementations to directly produce standard-compliant provenance information. Maintaining compatibility with existing PROV tooling, an implementation of this mapping will pave the way for practical FactDAG implementations and deployments, improving trust and accountability for Open Data through simplified provenance management.}, meta = {}, }
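As a rough illustration of expressing revisioned resources and their derivation in PROV-O (using the rdflib package and hypothetical resource IRIs; this is not the paper's actual FactDAG-to-PROV alignment), consider:

```python
from rdflib import Graph, Literal, Namespace, URIRef
from rdflib.namespace import PROV, RDF

# Hypothetical namespace for a factory's resources (illustrative only).
EX = Namespace("https://factory.example/")

g = Graph()
g.bind("prov", PROV)

rev1 = URIRef(EX["part-42/rev/1"])
rev2 = URIRef(EX["part-42/rev/2"])
machine = URIRef(EX["milling-station-3"])

# Each immutable revision becomes a prov:Entity; a new revision is derived from
# its predecessor and attributed to the producing actor.
g.add((rev1, RDF.type, PROV.Entity))
g.add((rev2, RDF.type, PROV.Entity))
g.add((machine, RDF.type, PROV.Agent))
g.add((rev2, PROV.wasDerivedFrom, rev1))
g.add((rev2, PROV.wasAttributedTo, machine))
g.add((rev2, PROV.generatedAtTime, Literal("2020-11-01T09:30:00Z")))

print(g.serialize(format="turtle"))
```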
- Wladimir De la Cadena, Asya Mitseva, Jens Hiller, Jan Pennekamp, Sebastian Reuter, Julian Filter, Klaus Wehrle, Thomas Engel, and Andriy Panchenko. TrafficSliver: Fighting Website Fingerprinting Attacks with Traffic Splitting. In Proceedings of the 27th ACM SIGSAC Conference on Computer and Communications Security (CCS ’20), 11 2020.
[BibTeX] [Abstract] [DOI] [PDF] [CODE]Website fingerprinting (WFP) aims to infer information about the content of encrypted and anonymized connections by observing patterns of data flows based on the size and direction of packets. By collecting traffic traces at a malicious Tor entry node – one of the weakest adversaries in the attacker model of Tor – a passive eavesdropper can leverage the captured meta-data to reveal the websites visited by a Tor user. As recently shown, WFP is significantly more effective and realistic than assumed. Concurrently, former WFP defenses are either infeasible for deployment in real-world settings or defend against specific WFP attacks only. To limit the exposure of Tor users to WFP, we propose novel lightweight WFP defenses, TrafficSliver, which successfully counter today’s WFP classifiers with reasonable bandwidth and latency overheads and, thus, make them attractive candidates for adoption in Tor. Through user-controlled splitting of traffic over multiple Tor entry nodes, TrafficSliver limits the data a single entry node can observe and distorts repeatable traffic patterns exploited by WFP attacks. We first propose a network-layer defense, in which we apply the concept of multipathing entirely within the Tor network. We show that our network-layer defense reduces the accuracy from more than 98% to less than 16% for all state-of-the-art WFP attacks without adding any artificial delays or dummy traffic. We further suggest an elegant client-side application-layer defense, which is independent of the underlying anonymization network. By sending single HTTP requests for different web objects over distinct Tor entry nodes, our application-layer defense reduces the detection rate of WFP classifiers by almost 50 percentage points. Although it offers lower protection than our network-layer defense, it provides a security boost at the cost of a very low implementation overhead and is fully compatible with today’s Tor network.
@inproceedings{DMH+20, author = {De la Cadena, Wladimir and Mitseva, Asya and Hiller, Jens and Pennekamp, Jan and Reuter, Sebastian and Filter, Julian and Wehrle, Klaus and Engel, Thomas and Panchenko, Andriy}, title = {{TrafficSliver: Fighting Website Fingerprinting Attacks with Traffic Splitting}}, booktitle = {Proceedings of the 27th ACM SIGSAC Conference on Computer and Communications Security (CCS '20)}, year = {2020}, month = {11}, doi = {10.1145/3372297.3423351}, abstract = {Website fingerprinting (WFP) aims to infer information about the content of encrypted and anonymized connections by observing patterns of data flows based on the size and direction of packets. By collecting traffic traces at a malicious Tor entry node --- one of the weakest adversaries in the attacker model of Tor --- a passive eavesdropper can leverage the captured meta-data to reveal the websites visited by a Tor user. As recently shown, WFP is significantly more effective and realistic than assumed. Concurrently, former WFP defenses are either infeasible for deployment in real-world settings or defend against specific WFP attacks only. To limit the exposure of Tor users to WFP, we propose novel lightweight WFP defenses, TrafficSliver, which successfully counter today's WFP classifiers with reasonable bandwidth and latency overheads and, thus, make them attractive candidates for adoption in Tor. Through user-controlled splitting of traffic over multiple Tor entry nodes, TrafficSliver limits the data a single entry node can observe and distorts repeatable traffic patterns exploited by WFP attacks. We first propose a network-layer defense, in which we apply the concept of multipathing entirely within the Tor network. We show that our network-layer defense reduces the accuracy from more than 98{\%} to less than 16{\%} for all state-of-the-art WFP attacks without adding any artificial delays or dummy traffic. We further suggest an elegant client-side application-layer defense, which is independent of the underlying anonymization network. By sending single HTTP requests for different web objects over distinct Tor entry nodes, our application-layer defense reduces the detection rate of WFP classifiers by almost 50 percentage points. Although it offers lower protection than our network-layer defense, it provides a security boost at the cost of a very low implementation overhead and is fully compatible with today's Tor network.}, code = {https://github.com/TrafficSliver}, meta = {}, }
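The core splitting idea can be illustrated with a small Python sketch (dummy cells, hypothetical circuit names, and a simple randomized weighting; the actual network-layer defense operates on real Tor cells inside modified Tor nodes): traffic is distributed over several entry circuits with split ratios that are not predictable to any single observer, so no individual entry node sees the complete pattern.

```python
import random
from collections import defaultdict

def split_cells(cells, circuits, alpha=1.0, rng=random):
    """Assign each cell to one of several entry circuits using random weights.

    Illustrative only: weights are drawn once (normalized Gamma samples, i.e., a
    Dirichlet-like draw), mimicking the idea that the split ratio should not be
    predictable to a single observing entry node.
    """
    raw = [rng.gammavariate(alpha, 1.0) for _ in circuits]
    weights = [w / sum(raw) for w in raw]
    assignment = defaultdict(list)
    for cell in cells:
        circuit = rng.choices(circuits, weights=weights, k=1)[0]
        assignment[circuit].append(cell)
    return assignment

# Example: 100 dummy "cells" split over three entry circuits.
cells = [f"cell-{i}" for i in range(100)]
split = split_cells(cells, circuits=["entry-A", "entry-B", "entry-C"])
print({circuit: len(chunk) for circuit, chunk in split.items()})
```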
- Markus Dahlmanns, Johannes Lohmöller, Ina Berenice Fink, Jan Pennekamp, Klaus Wehrle, and Martin Henze. Easing the Conscience with OPC UA: An Internet-Wide Study on Insecure Deployments. In Proceedings of the Internet Measurement Conference (IMC ’20), 10 2020.
[BibTeX] [Abstract] [DOI] [PDF] [CODE]Due to increasing digitalization, formerly isolated industrial networks, e.g., for factory and process automation, move closer and closer to the Internet, mandating secure communication. However, securely setting up OPC UA, the prime candidate for secure industrial communication, is challenging due to a large variety of insecure options. To study whether Internet-facing OPC UA appliances are configured securely, we actively scan the IPv4 address space for publicly reachable OPC UA systems and assess the security of their configurations. We observe problematic security configurations such as missing access control (on 24% of hosts), disabled security functionality (24%), or use of deprecated cryptographic primitives (25%) on in total 92% of the reachable deployments. Furthermore, we discover several hundred devices in multiple autonomous systems sharing the same security certificate, opening the door for impersonation attacks. Overall, in this paper, we highlight commonly found security misconfigurations and underline the importance of appropriate configuration for security-featuring protocols.
@inproceedings{DLF+20, author = {Dahlmanns, Markus and Lohm{\"o}ller, Johannes and Fink, Ina Berenice and Pennekamp, Jan and Wehrle, Klaus and Henze, Martin}, title = {{Easing the Conscience with OPC UA: An Internet-Wide Study on Insecure Deployments}}, booktitle = {Proceedings of the Internet Measurement Conference (IMC '20)}, year = {2020}, month = {10}, doi = {10.1145/3419394.3423666}, abstract = {Due to increasing digitalization, formerly isolated industrial networks, e.g., for factory and process automation, move closer and closer to the Internet, mandating secure communication. However, securely setting up OPC UA, the prime candidate for secure industrial communication, is challenging due to a large variety of insecure options. To study whether Internet-facing OPC UA appliances are configured securely, we actively scan the IPv4 address space for publicly reachable OPC UA systems and assess the security of their configurations. We observe problematic security configurations such as missing access control (on 24{\%} of hosts), disabled security functionality (24{\%}), or use of deprecated cryptographic primitives (25{\%}) on in total 92{\%} of the reachable deployments. Furthermore, we discover several hundred devices in multiple autonomous systems sharing the same security certificate, opening the door for impersonation attacks. Overall, in this paper, we highlight commonly found security misconfigurations and underline the importance of appropriate configuration for security-featuring protocols.}, code = {https://github.com/COMSYS/opcua}, code2 = {https://github.com/COMSYS/zgrab2}, meta = {}, }
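The kinds of misconfiguration checks reported above can be illustrated with a short sketch over hypothetical scan records (the field names are assumptions for illustration; the study itself relies on the zgrab2-based scanner linked above):

```python
# Security policies deprecated in current OPC UA specifications.
DEPRECATED_POLICIES = {"Basic128Rsa15", "Basic256"}

def audit_host(host: dict) -> list:
    """Return findings for one (hypothetical) scanned OPC UA host record, e.g.
    {"anonymous_auth": True, "security_modes": ["None"], "policies": ["Basic256"]}."""
    findings = []
    if host.get("anonymous_auth"):
        findings.append("missing access control (anonymous authentication enabled)")
    if "None" in host.get("security_modes", []):
        findings.append("security functionality disabled (MessageSecurityMode None)")
    if DEPRECATED_POLICIES & set(host.get("policies", [])):
        findings.append("deprecated cryptographic primitives offered")
    return findings

scan = {
    "198.51.100.7": {"anonymous_auth": True, "security_modes": ["None"],
                     "policies": ["Basic256"]},
    "203.0.113.9": {"anonymous_auth": False, "security_modes": ["SignAndEncrypt"],
                    "policies": ["Aes256_Sha256_RsaPss"]},
}
for ip, host in scan.items():
    print(ip, audit_host(host) or ["no obvious misconfiguration"])
```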
- Roman Matzutt, Jan Pennekamp, Erik Buchholz, and Klaus Wehrle. Utilizing Public Blockchains for the Sybil-Resistant Bootstrapping of Distributed Anonymity Services. In Proceedings of the 15th ACM ASIA Conference on Computer and Communications Security (ASIACCS ’20), 10 2020.
[BibTeX] [Abstract] [DOI] [PDF] [CODE]Distributed anonymity services, such as onion routing networks or cryptocurrency tumblers, promise privacy protection without trusted third parties. While the security of these services is often well-researched, security implications of their required bootstrapping processes are usually neglected: Users either jointly conduct the anonymization themselves or they need to rely on a set of non-colluding privacy peers. However, the typically small number of privacy peers enable single adversaries to mimic distributed services. We thus present AnonBoot, a Sybil-resistant medium to securely bootstrap distributed anonymity services via public blockchains. AnonBoot enforces that peers periodically create a small proof of work to refresh their eligibility of providing secure anonymity services. A pseudo-random, locally replicable bootstrapping process using on-chain entropy then prevents biasing the election of eligible peers. Our evaluation using Bitcoin as AnonBoot’s underlying blockchain shows its feasibility to maintain a trustworthy repository of 1000 peers with only a small storage footprint while supporting arbitrarily large user bases on top of most blockchains.
@inproceedings{MPBW20, author = {Matzutt, Roman and Pennekamp, Jan and Buchholz, Erik and Wehrle, Klaus}, title = {{Utilizing Public Blockchains for the Sybil-Resistant Bootstrapping of Distributed Anonymity Services}}, booktitle = {Proceedings of the 15th ACM ASIA Conference on Computer and Communications Security (ASIACCS '20)}, year = {2020}, month = {10}, doi = {10.1145/3320269.3384729}, abstract = {Distributed anonymity services, such as onion routing networks or cryptocurrency tumblers, promise privacy protection without trusted third parties. While the security of these services is often well-researched, security implications of their required bootstrapping processes are usually neglected: Users either jointly conduct the anonymization themselves or they need to rely on a set of non-colluding privacy peers. However, the typically small number of privacy peers enable single adversaries to mimic distributed services. We thus present AnonBoot, a Sybil-resistant medium to securely bootstrap distributed anonymity services via public blockchains. AnonBoot enforces that peers periodically create a small proof of work to refresh their eligibility of providing secure anonymity services. A pseudo-random, locally replicable bootstrapping process using on-chain entropy then prevents biasing the election of eligible peers. Our evaluation using Bitcoin as AnonBoot's underlying blockchain shows its feasibility to maintain a trustworthy repository of 1000 peers with only a small storage footprint while supporting arbitrarily large user bases on top of most blockchains.}, code = {https://github.com/COMSYS/anonboot}, meta = {}, }
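Two mechanisms named in the abstract, the periodic proof of work and the pseudo-random peer election from on-chain entropy, can be illustrated with a toy Python sketch (toy difficulty, SHA-256, and a placeholder block hash; not the AnonBoot protocol itself): peers refresh their eligibility with a small proof of work, and every client derives the same committee deterministically from public blockchain entropy.

```python
import hashlib

DIFFICULTY_PREFIX = "000"  # toy difficulty for illustration only

def mine_advertisement(peer_id: str) -> int:
    """Find a nonce such that H(peer_id || nonce) meets the toy difficulty."""
    nonce = 0
    while not hashlib.sha256(f"{peer_id}:{nonce}".encode()).hexdigest().startswith(DIFFICULTY_PREFIX):
        nonce += 1
    return nonce

def verify_advertisement(peer_id: str, nonce: int) -> bool:
    return hashlib.sha256(f"{peer_id}:{nonce}".encode()).hexdigest().startswith(DIFFICULTY_PREFIX)

def elect_peers(eligible_peers, block_hash: str, committee_size: int):
    """Rank peers by H(block_hash || peer_id); any client seeing the same
    on-chain entropy computes the same committee, preventing biased elections."""
    ranked = sorted(
        eligible_peers,
        key=lambda p: hashlib.sha256(f"{block_hash}:{p}".encode()).hexdigest(),
    )
    return ranked[:committee_size]

peers = [f"peer-{i}" for i in range(10)]
proofs = {p: mine_advertisement(p) for p in peers}  # peers periodically refresh eligibility
assert all(verify_advertisement(p, n) for p, n in proofs.items())
print(elect_peers(peers, block_hash="placeholder-block-hash", committee_size=3))
```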
- Philipp Niemietz, Jan Pennekamp, Ike Kunze, Daniel Trauth, Klaus Wehrle, and Thomas Bergs. Stamping Process Modelling in an Internet of Production. Procedia Manufacturing, 49, 07 2020. Proceedings of the 8th International Conference on Through-Life Engineering Service (TESConf ’19).
[BibTeX] [Abstract] [DOI] [PDF]Sharing data between companies throughout the supply chain is expected to be beneficial for product quality as well as for the economical savings in the manufacturing industry. To utilize the available data in the vision of an Internet of Production (IoP) a precise condition monitoring of manufacturing and production processes that facilitates the quantification of influences throughout the supply chain is inevitable. In this paper, we consider stamping processes in the context of an Internet of Production and the preliminaries for analytical models that utilize the ever-increasing available data. Three research objectives to cope with the amount of data and for a methodology to monitor, analyze and evaluate the influence of available data onto stamping processes have been identified: (i) State detection based on cyclic sensor signals, (ii) mapping of in- and output parameter variations onto process states, and (iii) models for edge and in-network computing approaches. After discussing state-of-the-art approaches to monitor stamping processes and the introduction of the fineblanking process as an exemplary stamping process, a research roadmap for an IoP enabling modeling framework is presented.
@article{NPK+20, author = {Niemietz, Philipp and Pennekamp, Jan and Kunze, Ike and Trauth, Daniel and Wehrle, Klaus and Bergs, Thomas}, title = {{Stamping Process Modelling in an Internet of Production}}, journal = {Procedia Manufacturing}, year = {2020}, volume = {49}, publisher = {Elsevier}, month = {07}, doi = {10.1016/j.promfg.2020.06.012}, issn = {2351-9789}, note = {Proceedings of the 8th International Conference on Through-Life Engineering Service (TESConf '19)}, abstract = {Sharing data between companies throughout the supply chain is expected to be beneficial for product quality as well as for the economical savings in the manufacturing industry. To utilize the available data in the vision of an Internet of Production (IoP) a precise condition monitoring of manufacturing and production processes that facilitates the quantification of influences throughout the supply chain is inevitable. In this paper, we consider stamping processes in the context of an Internet of Production and the preliminaries for analytical models that utilize the ever-increasing available data. Three research objectives to cope with the amount of data and for a methodology to monitor, analyze and evaluate the influence of available data onto stamping processes have been identified: (i) State detection based on cyclic sensor signals, (ii) mapping of in- and output parameter variations onto process states, and (iii) models for edge and in-network computing approaches. After discussing state-of-the-art approaches to monitor stamping processes and the introduction of the fineblanking process as an exemplary stamping process, a research roadmap for an IoP enabling modeling framework is presented.}, meta = {}, }
- Jan Pennekamp, Fritz Alder, Roman Matzutt, Jan Tobias Mühlberg, Frank Piessens, and Klaus Wehrle. Secure End-to-End Sensing in Supply Chains. In Proceedings of the 5th International Workshop on Cyber-Physical Systems Security (CPS-Sec ’20). IEEE, 07 2020.
[BibTeX] [Abstract] [DOI] [PDF]Trust along digitalized supply chains is challenged by the aspect that monitoring equipment may not be trustworthy or unreliable as respective measurements originate from potentially untrusted parties. To allow for dynamic relationships along supply chains, we propose a blockchain-backed supply chain monitoring architecture relying on trusted hardware. Our design provides a notion of secure end-to-end sensing of interactions even when originating from untrusted surroundings. Due to attested checkpointing, we can identify misinformation early on and reliably pinpoint the origin. A blockchain enables long-term verifiability for all (now trustworthy) IoT data within our system even if issues are detected only after the fact. Our feasibility study and cost analysis further show that our design is indeed deployable in and applicable to today’s supply chain settings.
@inproceedings{PAM+20, author = {Pennekamp, Jan and Alder, Fritz and Matzutt, Roman and M{\"u}hlberg, Jan Tobias and Piessens, Frank and Wehrle, Klaus}, title = {{Secure End-to-End Sensing in Supply Chains}}, booktitle = {Proceedings of the 5th International Workshop on Cyber-Physical Systems Security (CPS-Sec '20)}, year = {2020}, publisher = {IEEE}, month = {07}, doi = {10.1109/CNS48642.2020.9162337}, abstract = {Trust along digitalized supply chains is challenged by the aspect that monitoring equipment may not be trustworthy or unreliable as respective measurements originate from potentially untrusted parties. To allow for dynamic relationships along supply chains, we propose a blockchain-backed supply chain monitoring architecture relying on trusted hardware. Our design provides a notion of secure end-to-end sensing of interactions even when originating from untrusted surroundings. Due to attested checkpointing, we can identify misinformation early on and reliably pinpoint the origin. A blockchain enables long-term verifiability for all (now trustworthy) IoT data within our system even if issues are detected only after the fact. Our feasibility study and cost analysis further show that our design is indeed deployable in and applicable to today's supply chain settings.}, meta = {}, }
- Roman Matzutt, Benedikt Kalde, Jan Pennekamp, Arthur Drichel, Martin Henze, and Klaus Wehrle. How to Securely Prune Bitcoin’s Blockchain. In Proceedings of the 19th IFIP Networking 2020 Conference (NETWORKING ’20), 06 2020.
[BibTeX] [Abstract] [PDF] [CODE]Bitcoin was the first successful decentralized cryptocurrency and remains the most popular of its kind to this day. Despite the benefits of its blockchain, Bitcoin still faces serious scalability issues, most importantly its ever-increasing blockchain size. While alternative designs introduced schemes to periodically create snapshots and thereafter prune older blocks, already-deployed systems such as Bitcoin are often considered incapable of adopting corresponding approaches. In this work, we revise this popular belief and present CoinPrune, a snapshot-based pruning scheme that is fully compatible with Bitcoin. CoinPrune can be deployed through an opt-in velvet fork, i.e., without impeding the established Bitcoin network. By requiring miners to publicly announce and jointly reaffirm recent snapshots on the blockchain, CoinPrune establishes trust into the snapshots’ correctness even in the presence of powerful adversaries. Our evaluation shows that CoinPrune reduces the storage requirements of Bitcoin already by two orders of magnitude today, with further relative savings as the blockchain grows. In our experiments, nodes only have to fetch and process 5 GiB instead of 230 GiB of data when joining the network, reducing the synchronization time on powerful devices from currently 5 h to 46 min, with even more savings for less powerful devices.
@inproceedings{MKP+20, author = {Matzutt, Roman and Kalde, Benedikt and Pennekamp, Jan and Drichel, Arthur and Henze, Martin and Wehrle, Klaus}, title = {{How to Securely Prune Bitcoin's Blockchain}}, booktitle = {Proceedings of the 19th IFIP Networking 2020 Conference (NETWORKING '20)}, year = {2020}, month = {06}, abstract = {Bitcoin was the first successful decentralized cryptocurrency and remains the most popular of its kind to this day. Despite the benefits of its blockchain, Bitcoin still faces serious scalability issues, most importantly its ever-increasing blockchain size. While alternative designs introduced schemes to periodically create snapshots and thereafter prune older blocks, already-deployed systems such as Bitcoin are often considered incapable of adopting corresponding approaches. In this work, we revise this popular belief and present CoinPrune, a snapshot-based pruning scheme that is fully compatible with Bitcoin. CoinPrune can be deployed through an opt-in velvet fork, i.e., without impeding the established Bitcoin network. By requiring miners to publicly announce and jointly reaffirm recent snapshots on the blockchain, CoinPrune establishes trust into the snapshots' correctness even in the presence of powerful adversaries. Our evaluation shows that CoinPrune reduces the storage requirements of Bitcoin already by two orders of magnitude today, with further relative savings as the blockchain grows. In our experiments, nodes only have to fetch and process 5 GiB instead of 230 GiB of data when joining the network, reducing the synchronization time on powerful devices from currently 5 h to 46 min, with even more savings for less powerful devices.}, code = {https://github.com/COMSYS/coinprune}, meta = {}, }
- Jan Pennekamp, Lennart Bader, Roman Matzutt, Philipp Niemietz, Daniel Trauth, Martin Henze, Thomas Bergs, and Klaus Wehrle. Private Multi-Hop Accountability for Supply Chains. In Proceedings of the 2020 IEEE International Conference on Communications Workshops (ICC Workshops ’20), 1st Workshop on Blockchain for IoT and Cyber-Physical Systems (BIoTCPS ’20), 06 2020.
[BibTeX] [Abstract] [DOI] [PDF] [CODE]Today’s supply chains are becoming increasingly flexible in nature. While adaptability is vastly increased, these more dynamic associations necessitate more extensive data sharing among different stakeholders while simultaneously overturning previously established levels of trust. Hence, manufacturers’ demand to track goods and to investigate root causes of issues across their supply chains becomes more challenging to satisfy within these now untrusted environments. Complementarily, suppliers need to keep any data irrelevant to such routine checks secret to remain competitive. To bridge the needs of contractors and suppliers in increasingly flexible supply chains, we thus propose to establish a privacy-preserving and distributed multi-hop accountability log among the involved stakeholders based on Attribute-based Encryption and backed by a blockchain. Our large-scale feasibility study is motivated by a real-world manufacturing process, i.e., a fine blanking line, and reveals only modest costs for multi-hop tracing and tracking of goods.
@inproceedings{PBM+20, author = {Pennekamp, Jan and Bader, Lennart and Matzutt, Roman and Niemietz, Philipp and Trauth, Daniel and Henze, Martin and Bergs, Thomas and Wehrle, Klaus}, title = {{Private Multi-Hop Accountability for Supply Chains}}, booktitle = {Proceedings of the 2020 IEEE International Conference on Communications Workshops (ICC Workshops '20), 1st Workshop on Blockchain for IoT and Cyber-Physical Systems (BIoTCPS '20)}, year = {2020}, month = {06}, doi = {10.1109/ICCWorkshops49005.2020.9145100}, abstract = {Today's supply chains are becoming increasingly flexible in nature. While adaptability is vastly increased, these more dynamic associations necessitate more extensive data sharing among different stakeholders while simultaneously overturning previously established levels of trust. Hence, manufacturers' demand to track goods and to investigate root causes of issues across their supply chains becomes more challenging to satisfy within these now untrusted environments. Complementarily, suppliers need to keep any data irrelevant to such routine checks secret to remain competitive. To bridge the needs of contractors and suppliers in increasingly flexible supply chains, we thus propose to establish a privacy-preserving and distributed multi-hop accountability log among the involved stakeholders based on Attribute-based Encryption and backed by a blockchain. Our large-scale feasibility study is motivated by a real-world manufacturing process, i.e., a fine blanking line, and reveals only modest costs for multi-hop tracing and tracking of goods.}, code = {https://github.com/COMSYS/PrivAccIChain}, meta = {}, }
- Lars Gleim, Jan Pennekamp, Martin Liebenberg, Melanie Buchsbaum, Philipp Niemietz, Simon Knape, Alexander Epple, Simon Storms, Daniel Trauth, Thomas Bergs, Christian Brecher, Stefan Decker, Gerhard Lakemeyer, and Klaus Wehrle. FactDAG: Formalizing Data Interoperability in an Internet of Production. IEEE Internet of Things Journal, 7(4), 04 2020.
[BibTeX] [Abstract] [DOI] [PDF]In the production industry, the volume, variety and velocity of data as well as the number of deployed protocols increase exponentially due to the influences of IoT advances. While hundreds of isolated solutions exist to utilize this data, e.g., optimizing processes or monitoring machine conditions, the lack of a unified data handling and exchange mechanism hinders the implementation of approaches to improve the quality of decisions and processes in such an interconnected environment. The vision of an Internet of Production promises the establishment of a Worldwide Lab, where data from every process in the network can be utilized, even interorganizational and across domains. While numerous existing approaches consider interoperability from an interface and communication system perspective, fundamental questions of data and information interoperability remain insufficiently addressed. In this paper, we identify ten key issues, derived from three distinctive real-world use cases, that hinder large-scale data interoperability for industrial processes. Based on these issues we derive a set of five key requirements for future (IoT) data layers, building upon the FAIR data principles. We propose to address them by creating FactDAG, a conceptual data layer model for maintaining a provenance-based, directed acyclic graph of facts, inspired by successful distributed version-control and collaboration systems. Eventually, such a standardization should greatly shape the future of interoperability in an interconnected production industry.
@article{GPL+20, author = {Gleim, Lars and Pennekamp, Jan and Liebenberg, Martin and Buchsbaum, Melanie and Niemietz, Philipp and Knape, Simon and Epple, Alexander and Storms, Simon and Trauth, Daniel and Bergs, Thomas and Brecher, Christian and Decker, Stefan and Lakemeyer, Gerhard and Wehrle, Klaus}, title = {{FactDAG: Formalizing Data Interoperability in an Internet of Production}}, journal = {IEEE Internet of Things Journal}, year = {2020}, volume = {7}, number = {4}, publisher = {IEEE}, month = {04}, doi = {10.1109/JIOT.2020.2966402}, issn = {2327-4662}, abstract = {In the production industry, the volume, variety and velocity of data as well as the number of deployed protocols increase exponentially due to the influences of IoT advances. While hundreds of isolated solutions exist to utilize this data, e.g., optimizing processes or monitoring machine conditions, the lack of a unified data handling and exchange mechanism hinders the implementation of approaches to improve the quality of decisions and processes in such an interconnected environment. The vision of an Internet of Production promises the establishment of a Worldwide Lab, where data from every process in the network can be utilized, even interorganizational and across domains. While numerous existing approaches consider interoperability from an interface and communication system perspective, fundamental questions of data and information interoperability remain insufficiently addressed. In this paper, we identify ten key issues, derived from three distinctive real-world use cases, that hinder large-scale data interoperability for industrial processes. Based on these issues we derive a set of five key requirements for future (IoT) data layers, building upon the FAIR data principles. We propose to address them by creating FactDAG, a conceptual data layer model for maintaining a provenance-based, directed acyclic graph of facts, inspired by successful distributed version-control and collaboration systems. Eventually, such a standardization should greatly shape the future of interoperability in an interconnected production industry.}, meta = {}, }
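Illustrative sketch (field names are mine, not the FactDAG specification): the article above proposes a provenance-based, directed acyclic graph of facts. A minimal data-structure sketch of such immutable, revisioned facts with provenance edges could look like this.

```python
# Minimal data-structure sketch of a DAG of immutable fact revisions with provenance
# edges; identifiers and fields are illustrative and not taken from FactDAG.
from dataclasses import dataclass
from typing import Tuple


@dataclass(frozen=True)
class FactRevision:
    authority: str                        # who is accountable for this fact
    fact_id: str                          # stable identifier of the fact
    revision: int                         # immutable revision number
    payload: str                          # the data itself (or a reference to it)
    derived_from: Tuple[str, ...] = ()    # provenance edges to parent revisions

    @property
    def key(self) -> str:
        return f"{self.authority}/{self.fact_id}@{self.revision}"


raw = FactRevision("press-42", "force-trace", 1, "raw sensor data")
clean = FactRevision("analytics", "force-trace", 2, "filtered data", (raw.key,))
print(clean.key, "derived from", clean.derived_from)
```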
- Linus Roepert, Markus Dahlmanns, Ina Berenice Fink, Jan Pennekamp, and Martin Henze. Assessing the Security of OPC UA Deployments. In Proceedings of the 1st ITG Workshop on IT Security (ITSec ’20), 04 2020.
[BibTeX] [Abstract] [DOI] [PDF] [CODE]To address the increasing security demands of industrial deployments, OPC UA is one of the first industrial protocols explicitly designed with security in mind. However, deploying it securely requires a thorough configuration of a wide range of options. Thus, assessing the security of OPC UA deployments and their configuration is necessary to ensure secure operation, most importantly confidentiality and integrity of industrial processes. In this work, we present extensions to the popular Metasploit Framework to ease network-based security assessments of OPC UA deployments. To this end, we discuss methods to discover OPC UA servers, test their authentication, obtain their configuration, and check for vulnerabilities. Ultimately, our work enables operators to verify the (security) configuration of their systems and identify potential attack vectors.
@inproceedings{RDF+20, author = {Roepert, Linus and Dahlmanns, Markus and Fink, Ina Berenice and Pennekamp, Jan and Henze, Martin}, title = {{Assessing the Security of OPC UA Deployments}}, booktitle = {Proceedings of the 1st ITG Workshop on IT Security (ITSec '20)}, year = {2020}, month = {04}, doi = {10.15496/publikation-41813}, abstract = {To address the increasing security demands of industrial deployments, OPC UA is one of the first industrial protocols explicitly designed with security in mind. However, deploying it securely requires a thorough configuration of a wide range of options. Thus, assessing the security of OPC UA deployments and their configuration is necessary to ensure secure operation, most importantly confidentiality and integrity of industrial processes. In this work, we present extensions to the popular Metasploit Framework to ease network-based security assessments of OPC UA deployments. To this end, we discuss methods to discover OPC UA servers, test their authentication, obtain their configuration, and check for vulnerabilities. Ultimately, our work enables operators to verify the (security) configuration of their systems and identify potential attack vectors.}, code = {https://github.com/COMSYS/msf-opcua}, meta = {}, }
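Illustrative sketch (not the authors' Metasploit modules): a very first discovery step for OPC UA deployments, as discussed in the entry above, is simply checking which hosts accept connections on the registered OPC UA TCP port 4840; the hosts below are documentation-range example addresses.

```python
# Probe the default OPC UA port (TCP 4840) as a naive discovery step; any
# protocol-level assessment (endpoints, authentication, configuration) would follow.
import socket


def probe_opcua(host, port=4840, timeout=1.0):
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False


for host in ["192.0.2.10", "192.0.2.11"]:   # example addresses only
    print(host, "open" if probe_opcua(host) else "closed/filtered")
```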
- Samuel Mann, Jan Pennekamp, Tobias Brockhoff, Anahita Farhang, Mahsa Pourbafrani, Lukas Oster, Merih Seran Uysal, Rahul Sharma, Uwe Reisgen, Klaus Wehrle, and Wil van der Aalst. Connected, digitalized welding production – Secure, ubiquitous utilization of data across process layers. Advanced Structured Materials, 125, 2020. Proceedings of the 1st International Conference on Advanced Joining Processes (AJP ’19).
[BibTeX] [Abstract] [DOI] [PDF]A connected, digitalized welding production unlocks vast and dynamic potentials: from improving state of the art welding to new business models in production. For this reason, offering frameworks, which are capable of addressing multiple layers of applications on the one hand and providing means of data security and privacy for ubiquitous dataflows on the other hand, is an important step to enable the envisioned advances. In this context, welding production has been introduced from the perspective of interlaced process layers connecting information sources across various entities. Each layer has its own distinct challenges from both a process view and a data perspective. Besides, investigating each layer promises to reveal insight into (currently unknown) process interconnections. This approach has been substantiated by methods for data security and privacy to draw a line between secure handling of data and the need of trustworthy dealing with sensitive data among different parties and therefore partners. In conclusion, the welding production has to develop itself from an accumulation of local and isolated data sources towards a secure industrial collaboration in an Internet of Production.
@article{MPB+20, author = {Mann, Samuel and Pennekamp, Jan and Brockhoff, Tobias and Farhang, Anahita and Pourbafrani, Mahsa and Oster, Lukas and Uysal, Merih Seran and Sharma, Rahul and Reisgen, Uwe and Wehrle, Klaus and van der Aalst, Wil}, title = {{Connected, digitalized welding production --- Secure, ubiquitous utilization of data across process layers}}, journal = {Advanced Structured Materials}, year = {2020}, volume = {125}, publisher = {Springer}, doi = {10.1007/978-981-15-2957-3_8}, issn = {1869-8433}, note = {Proceedings of the 1st International Conference on Advanced Joining Processes (AJP '19)}, abstract = {A connected, digitalized welding production unlocks vast and dynamic potentials: from improving state of the art welding to new business models in production. For this reason, offering frameworks, which are capable of addressing multiple layers of applications on the one hand and providing means of data security and privacy for ubiquitous dataflows on the other hand, is an important step to enable the envisioned advances. In this context, welding production has been introduced from the perspective of interlaced process layers connecting information sources across various entities. Each layer has its own distinct challenges from both a process view and a data perspective. Besides, investigating each layer promises to reveal insight into (currently unknown) process interconnections. This approach has been substantiated by methods for data security and privacy to draw a line between secure handling of data and the need of trustworthy dealing with sensitive data among different parties and therefore partners. In conclusion, the welding production has to develop itself from an accumulation of local and isolated data sources towards a secure industrial collaboration in an Internet of Production.}, meta = {}, }
- Roman Matzutt, Jan Pennekamp, and Klaus Wehrle. A Secure and Practical Decentralized Ecosystem for Shareable Education Material. In Proceedings of the 34th International Conference on Information Networking (ICOIN ’20), 01 2020.
[BibTeX] [Abstract] [DOI] [PDF]Traditionally, the university landscape is highly federated, which hinders potentials for coordinated collaborations. While the lack of a strict hierarchy on the inter-university level is critical for ensuring free research and higher education, this concurrency limits the access to high-quality education materials. Especially regarding resources such as lecture notes or exercise tasks we observe a high susceptibility to redundant work and lacking quality assessment of material created in isolation by individual university institutes. To remedy this situation, in this paper we propose CORALIS, a decentralized marketplace for offering, acquiring, discussing, and improving education resources across university borders. Our design is based on a permissioned blockchain to (a) realize accountable access control via simple on-chain license terms, (b) trace the evolution of encrypted containers accumulating bundles of shareable education resources, and (c) record user comments and ratings for further improving the quality of offered education material.
@inproceedings{MPW20, author = {Matzutt, Roman and Pennekamp, Jan and Wehrle, Klaus}, title = {{A Secure and Practical Decentralized Ecosystem for Shareable Education Material}}, booktitle = {Proceedings of the 34th International Conference on Information Networking (ICOIN '20)}, year = {2020}, month = {01}, doi = {10.1109/ICOIN48656.2020.9016478}, abstract = {Traditionally, the university landscape is highly federated, which hinders potentials for coordinated collaborations. While the lack of a strict hierarchy on the inter-university level is critical for ensuring free research and higher education, this concurrency limits the access to high-quality education materials. Especially regarding resources such as lecture notes or exercise tasks we observe a high susceptibility to redundant work and lacking quality assessment of material created in isolation by individual university institutes. To remedy this situation, in this paper we propose CORALIS, a decentralized marketplace for offering, acquiring, discussing, and improving education resources across university borders. Our design is based on a permissioned blockchain to (a) realize accountable access control via simple on-chain license terms, (b) trace the evolution of encrypted containers accumulating bundles of shareable education resources, and (c) record user comments and ratings for further improving the quality of offered education material.}, meta = {}, }
2019
- Jan Pennekamp, Markus Dahlmanns, Lars Gleim, Stefan Decker, and Klaus Wehrle. Security Considerations for Collaborations in an Industrial IoT-based Lab of Labs. In Proceedings of the 3rd IEEE Global Conference on Internet of Things (GCIoT ’19), 12 2019.
[BibTeX] [Abstract] [DOI] [PDF]The productivity and sustainability advances for (smart) manufacturing resulting from (globally) interconnected Industrial IoT devices in a lab of labs are expected to be significant. While such visions introduce opportunities for the involved parties, the associated risks must be considered as well. In particular, security aspects are crucial challenges and remain unsolved. So far, single stakeholders only had to consider their local view on security. However, for a global lab, we identify several fundamental research challenges in (dynamic) scenarios with multiple stakeholders: While information security mandates that models must be adapted wrt. confidentiality to address these new influences on business secrets, from a network perspective, the drastically increasing amount of possible attack vectors challenges today’s approaches. Finally, concepts addressing these security challenges should provide backwards compatibility to enable a smooth transition from today’s isolated landscape towards globally interconnected IIoT environments.
@inproceedings{PDGDW19, author = {Pennekamp, Jan and Dahlmanns, Markus and Gleim, Lars and Decker, Stefan and Wehrle, Klaus}, title = {{Security Considerations for Collaborations in an Industrial IoT-based Lab of Labs}}, booktitle = {Proceedings of the 3rd IEEE Global Conference on Internet of Things (GCIoT '19)}, year = {2019}, month = {12}, doi = {10.1109/GCIoT47977.2019.9058413}, abstract = {The productivity and sustainability advances for (smart) manufacturing resulting from (globally) interconnected Industrial IoT devices in a lab of labs are expected to be significant. While such visions introduce opportunities for the involved parties, the associated risks must be considered as well. In particular, security aspects are crucial challenges and remain unsolved. So far, single stakeholders only had to consider their local view on security. However, for a global lab, we identify several fundamental research challenges in (dynamic) scenarios with multiple stakeholders: While information security mandates that models must be adapted wrt. confidentiality to address these new influences on business secrets, from a network perspective, the drastically increasing amount of possible attack vectors challenges today's approaches. Finally, concepts addressing these security challenges should provide backwards compatibility to enable a smooth transition from today's isolated landscape towards globally interconnected IIoT environments.}, meta = {}, }
- Jan Pennekamp, Martin Henze, Simo Schmidt, Philipp Niemietz, Marcel Fey, Daniel Trauth, Thomas Bergs, Christian Brecher, and Klaus Wehrle. Dataflow Challenges in an Internet of Production: A Security & Privacy Perspective. In Proceedings of the 5th ACM Workshop on Cyber-Physical Systems Security and PrivaCy (CPS-SPC ’19), 11 2019.
[BibTeX] [Abstract] [DOI] [PDF]The Internet of Production (IoP) envisions the interconnection of previously isolated CPS in the area of manufacturing across institutional boundaries to realize benefits such as increased profit margins and product quality as well as reduced product development costs and time to market. This interconnection of CPS will lead to a plethora of new dataflows, especially between (partially) distrusting entities. In this paper, we identify and illustrate these envisioned inter-organizational dataflows and the participating entities alongside two real-world use cases from the production domain: a fine blanking line and a connected job shop. Our analysis allows us to identify distinct security and privacy demands and challenges for these new dataflows. As a foundation to address the resulting requirements, we provide a survey of promising technical building blocks to secure inter-organizational dataflows in an IoP and propose next steps for future research. Consequently, we move an important step forward to overcome security and privacy concerns as an obstacle for realizing the promised potentials in an Internet of Production.
@inproceedings{PHS+19, author = {Pennekamp, Jan and Henze, Martin and Schmidt, Simo and Niemietz, Philipp and Fey, Marcel and Trauth, Daniel and Bergs, Thomas and Brecher, Christian and Wehrle, Klaus}, title = {{Dataflow Challenges in an Internet of Production: A Security {\&} Privacy Perspective}}, booktitle = {Proceedings of the 5th ACM Workshop on Cyber-Physical Systems Security and PrivaCy (CPS-SPC '19)}, year = {2019}, month = {11}, doi = {10.1145/3338499.3357357}, abstract = {The Internet of Production (IoP) envisions the interconnection of previously isolated CPS in the area of manufacturing across institutional boundaries to realize benefits such as increased profit margins and product quality as well as reduced product development costs and time to market. This interconnection of CPS will lead to a plethora of new dataflows, especially between (partially) distrusting entities. In this paper, we identify and illustrate these envisioned inter-organizational dataflows and the participating entities alongside two real-world use cases from the production domain: a fine blanking line and a connected job shop. Our analysis allows us to identify distinct security and privacy demands and challenges for these new dataflows. As a foundation to address the resulting requirements, we provide a survey of promising technical building blocks to secure inter-organizational dataflows in an IoP and propose next steps for future research. Consequently, we move an important step forward to overcome security and privacy concerns as an obstacle for realizing the promised potentials in an Internet of Production.}, meta = {}, }
- Wladimir De la Cadena, Asya Mitseva, Jan Pennekamp, Jens Hiller, Fabian Lanze, Thomas Engel, Klaus Wehrle, and Andriy Panchenko. POSTER: Traffic Splitting to Counter Website Fingerprinting. In Proceedings of the 26th ACM SIGSAC Conference on Computer and Communications Security (CCS ’19), 11 2019.
[BibTeX] [Abstract] [DOI] [PDF]Website fingerprinting (WFP) is a special type of traffic analysis, which aims to infer the websites visited by a user. Recent studies have shown that WFP targeting Tor users is notably more effective than previously expected. Concurrently, state-of-the-art defenses have been proven to be less effective. In response, we present a novel WFP defense that splits traffic over multiple entry nodes to limit the data a single malicious entry can use. Here, we explore several traffic-splitting strategies to distribute user traffic. We establish that our \emph{weighted random} strategy dramatically reduces the accuracy from nearly 95\% to less than 35\% for \emph{four} state-of-the-art WFP attacks without adding any artificial delays or dummy traffic.
@inproceedings{DMP+19, author = {De la Cadena, Wladimir and Mitseva, Asya and Pennekamp, Jan and Hiller, Jens and Lanze, Fabian and Engel, Thomas and Wehrle, Klaus and Panchenko, Andriy}, title = {{POSTER: Traffic Splitting to Counter Website Fingerprinting}}, booktitle = {Proceedings of the 26th ACM SIGSAC Conference on Computer and Communications Security (CCS '19)}, year = {2019}, month = {11}, doi = {10.1145/3319535.3363249}, abstract = {Website fingerprinting (WFP) is a special type of traffic analysis, which aims to infer the websites visited by a user. Recent studies have shown that WFP targeting Tor users is notably more effective than previously expected. Concurrently, state-of-the-art defenses have been proven to be less effective. In response, we present a novel WFP defense that splits traffic over multiple entry nodes to limit the data a single malicious entry can use. Here, we explore several traffic-splitting strategies to distribute user traffic. We establish that our \emph{weighted random} strategy dramatically reduces the accuracy from nearly 95\% to less than 35\% for \emph{four} state-of-the-art WFP attacks without adding any artificial delays or dummy traffic.}, meta = {}, }
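Illustrative sketch (not the evaluated defense itself): the poster above splits a client's traffic over multiple entry nodes using, among others, a weighted random strategy. A minimal toy version of that splitting idea, with illustrative parameters, is shown below.

```python
# Toy weighted-random splitting: assign each cell of a page load to one of several
# entry nodes so that no single entry observes the full traffic pattern.
import random


def split_cells(num_cells, num_entries=4, seed=None):
    """Assign each cell to one of `num_entries` entry nodes using random weights."""
    rng = random.Random(seed)
    weights = [rng.random() for _ in range(num_entries)]           # drawn once per page load
    assignment = rng.choices(range(num_entries), weights=weights, k=num_cells)
    per_entry = [assignment.count(e) for e in range(num_entries)]
    return assignment, per_entry


_, per_entry = split_cells(num_cells=1000, seed=1)
print(per_entry)   # each entry only observes a fraction of the cells
```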
- Jens Hiller, Jan Pennekamp, Markus Dahlmanns, Martin Henze, Andriy Panchenko, and Klaus Wehrle. Tailoring Onion Routing to the Internet of Things: Security and Privacy in Untrusted Environments. In Proceedings of the 27th IEEE International Conference on Network Protocols (ICNP ’19), 10 2019.
[BibTeX] [Abstract] [DOI] [PDF] [CODE]An increasing number of IoT scenarios involve mobile, resource-constrained IoT devices that rely on untrusted networks for Internet connectivity. In such environments, attackers can derive sensitive private information of IoT device owners, e.g., daily routines or secret supply chain procedures, when sniffing on IoT communication and linking IoT devices and owner. Furthermore, untrusted networks do not provide IoT devices with any protection against attacks from the Internet. Anonymous communication using onion routing provides a well-proven mechanism to keep the relationship between communication partners secret and (optionally) protect against network attacks. However, the application of onion routing is challenged by protocol incompatibilities and demanding cryptographic processing on constrained IoT devices, rendering its use infeasible. To close this gap, we tailor onion routing to the IoT by bridging protocol incompatibilities and offloading expensive cryptographic processing to a router or web server of the IoT device owner. Thus, we realize resource-conserving access control and end-to-end security for IoT devices. To prove applicability, we deploy onion routing for the IoT within the well-established Tor network enabling IoT devices to leverage its resources to achieve the same grade of anonymity as readily available to traditional devices.
@inproceedings{HPD+19, author = {Hiller, Jens and Pennekamp, Jan and Dahlmanns, Markus and Henze, Martin and Panchenko, Andriy and Wehrle, Klaus}, title = {{Tailoring Onion Routing to the Internet of Things: Security and Privacy in Untrusted Environments}}, booktitle = {Proceedings of the 27th IEEE International Conference on Network Protocols (ICNP '19)}, year = {2019}, month = {10}, doi = {10.1109/ICNP.2019.8888033}, abstract = {An increasing number of IoT scenarios involve mobile, resource-constrained IoT devices that rely on untrusted networks for Internet connectivity. In such environments, attackers can derive sensitive private information of IoT device owners, e.g., daily routines or secret supply chain procedures, when sniffing on IoT communication and linking IoT devices and owner. Furthermore, untrusted networks do not provide IoT devices with any protection against attacks from the Internet. Anonymous communication using onion routing provides a well-proven mechanism to keep the relationship between communication partners secret and (optionally) protect against network attacks. However, the application of onion routing is challenged by protocol incompatibilities and demanding cryptographic processing on constrained IoT devices, rendering its use infeasible. To close this gap, we tailor onion routing to the IoT by bridging protocol incompatibilities and offloading expensive cryptographic processing to a router or web server of the IoT device owner. Thus, we realize resource-conserving access control and end-to-end security for IoT devices. To prove applicability, we deploy onion routing for the IoT within the well-established Tor network enabling IoT devices to leverage its resources to achieve the same grade of anonymity as readily available to traditional devices.}, code = {https://github.com/COMSYS/tor4iot-tor}, code2 = {https://github.com/COMSYS/tor4iot-contiki}, meta = {}, }
- Jan Pennekamp, Jens Hiller, Sebastian Reuter, Wladimir De la Cadena, Asya Mitseva, Martin Henze, Thomas Engel, Klaus Wehrle, and Andriy Panchenko. Multipathing Traffic to Reduce Entry Node Exposure in Onion Routing. In Proceedings of the 27th IEEE International Conference on Network Protocols (ICNP ’19), 10 2019.
[BibTeX] [Abstract] [DOI] [PDF]Users of an onion routing network, such as Tor, depend on its anonymity properties. However, especially malicious entry nodes, which know the client’s identity, can also observe the whole communication on their link to the client and, thus, conduct several de-anonymization attacks. To limit this exposure and to impede corresponding attacks, we propose to multipath traffic between the client and the middle node to reduce the information an attacker can obtain at a single vantage point. To facilitate the deployment, only clients and selected middle nodes need to implement our approach, which works transparently for the remaining legacy nodes. Furthermore, we let clients control the splitting strategy to prevent any external manipulation.
@inproceedings{PHR+19, author = {Pennekamp, Jan and Hiller, Jens and Reuter, Sebastian and De la Cadena, Wladimir and Mitseva, Asya and Henze, Martin and Engel, Thomas and Wehrle, Klaus and Panchenko, Andriy}, title = {{Multipathing Traffic to Reduce Entry Node Exposure in Onion Routing}}, booktitle = {Proceedings of the 27th IEEE International Conference on Network Protocols (ICNP '19)}, year = {2019}, month = {10}, doi = {10.1109/ICNP.2019.8888029}, abstract = {Users of an onion routing network, such as Tor, depend on its anonymity properties. However, especially malicious entry nodes, which know the client's identity, can also observe the whole communication on their link to the client and, thus, conduct several de-anonymization attacks. To limit this exposure and to impede corresponding attacks, we propose to multipath traffic between the client and the middle node to reduce the information an attacker can obtain at a single vantage point. To facilitate the deployment, only clients and selected middle nodes need to implement our approach, which works transparently for the remaining legacy nodes. Furthermore, we let clients control the splitting strategy to prevent any external manipulation.}, meta = {}, }
- Markus Dahlmanns, Chris Dax, Roman Matzutt, Jan Pennekamp, Jens Hiller, and Klaus Wehrle. Privacy-Preserving Remote Knowledge System. In Proceedings of the 27th IEEE International Conference on Network Protocols (ICNP ’19), 10 2019.
[BibTeX] [Abstract] [DOI] [PDF]More and more traditional services, such as malware detectors or collaboration services in industrial scenarios, move to the cloud. However, this behavior poses a risk for the privacy of clients since these services are able to generate profiles containing very sensitive information, e.g., vulnerability information or collaboration partners. Hence, a rising need for protocols that enable clients to obtain knowledge without revealing their requests exists. To address this issue, we propose a protocol that enables clients (i) to query large cloud-based knowledge systems in a privacy-preserving manner using Private Set Intersection and (ii) to subsequently obtain individual knowledge items without leaking the client’s requests via few Oblivious Transfers. With our preliminary design, we allow clients to save a significant amount of time in comparison to performing Oblivious Transfers only.
@inproceedings{DDM+19, author = {Dahlmanns, Markus and Dax, Chris and Matzutt, Roman and Pennekamp, Jan and Hiller, Jens and Wehrle, Klaus}, title = {{Privacy-Preserving Remote Knowledge System}}, booktitle = {Proceedings of the 27th IEEE International Conference on Network Protocols (ICNP '19)}, year = {2019}, month = {10}, doi = {10.1109/ICNP.2019.8888121}, abstract = {More and more traditional services, such as malware detectors or collaboration services in industrial scenarios, move to the cloud. However, this behavior poses a risk for the privacy of clients since these services are able to generate profiles containing very sensitive information, e.g., vulnerability information or collaboration partners. Hence, a rising need for protocols that enable clients to obtain knowledge without revealing their requests exists. To address this issue, we propose a protocol that enables clients (i) to query large cloud-based knowledge systems in a privacy-preserving manner using Private Set Intersection and (ii) to subsequently obtain individual knowledge items without leaking the client's requests via few Oblivious Transfers. With our preliminary design, we allow clients to save a significant amount of time in comparison to performing Oblivious Transfers only.}, meta = {}, }
- Jan Pennekamp, Martin Henze, Oliver Hohlfeld, and Andriy Panchenko. Hi Doppelgänger: Towards Detecting Manipulation in News Comments. In Companion Proceedings of the 2019 World Wide Web Conference (WWW ’19 Companion), 4th Workshop on Computational Methods in Online Misbehavior (CyberSafety ’19), 05 2019.
[BibTeX] [Abstract] [DOI] [PDF]Public opinion manipulation is a serious threat to society, potentially influencing elections and the political situation even in established democracies. The prevalence of online media and the opportunity for users to express opinions in comments magnifies the problem. Governments, organizations, and companies can exploit this situation for biasing opinions. Typically, they deploy a large number of pseudonyms to create an impression of a crowd that supports specific opinions. Side channel information (such as IP addresses or identities of browsers) often allows a reliable detection of pseudonyms managed by a single person. However, while spoofing and anonymizing data that links these accounts is simple, a linking without is very challenging. In this paper, we evaluate whether stylometric features allow a detection of such doppelgängers within comment sections on news articles. To this end, we adapt a state-of-the-art doppelgängers detector to work on small texts (such as comments) and apply it on three popular news sites in two languages. Our results reveal that detecting potential doppelgängers based on linguistics is a promising approach even when no reliable side channel information is available. Preliminary results following an application in the wild shows indications for doppelgängers in real world data sets.
@inproceedings{PHHP19, author = {Pennekamp, Jan and Henze, Martin and Hohlfeld, Oliver and Panchenko, Andriy}, title = {{Hi Doppelg{\"a}nger: Towards Detecting Manipulation in News Comments}}, booktitle = {Companion Proceedings of the 2019 World Wide Web Conference (WWW '19 Companion), 4th Workshop on Computational Methods in Online Misbehavior (CyberSafety '19)}, year = {2019}, month = {05}, doi = {10.1145/3308560.3316496}, abstract = {Public opinion manipulation is a serious threat to society, potentially influencing elections and the political situation even in established democracies. The prevalence of online media and the opportunity for users to express opinions in comments magnifies the problem. Governments, organizations, and companies can exploit this situation for biasing opinions. Typically, they deploy a large number of pseudonyms to create an impression of a crowd that supports specific opinions. Side channel information (such as IP addresses or identities of browsers) often allows a reliable detection of pseudonyms managed by a single person. However, while spoofing and anonymizing data that links these accounts is simple, a linking without is very challenging. In this paper, we evaluate whether stylometric features allow a detection of such doppelg{\"a}ngers within comment sections on news articles. To this end, we adapt a state-of-the-art doppelg{\"a}ngers detector to work on small texts (such as comments) and apply it on three popular news sites in two languages. Our results reveal that detecting potential doppelg{\"a}ngers based on linguistics is a promising approach even when no reliable side channel information is available. Preliminary results following an application in the wild shows indications for doppelg{\"a}ngers in real world data sets.}, meta = {}, }
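Illustrative sketch (a generic stylometric-similarity toy, not the adapted doppelgänger detector from the paper): comparing two short comments via cosine similarity of character 3-gram counts gives a first intuition of how linguistic features can link pseudonyms.

```python
# Compare two comments via cosine similarity of character 3-gram frequency vectors.
from collections import Counter
from math import sqrt


def ngrams(text, n=3):
    text = text.lower()
    return Counter(text[i:i + n] for i in range(len(text) - n + 1))


def cosine(a, b):
    dot = sum(a[g] * b[g] for g in set(a) & set(b))
    norm = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0


c1 = "This policy is clearly the only sensible option!!"
c2 = "Clearly, this policy is the only sensible option!!"
print(round(cosine(ngrams(c1), ngrams(c2)), 2))   # a high score hints at the same author
```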
- Jan Pennekamp, René Glebke, Martin Henze, Tobias Meisen, Christoph Quix, Rihan Hai, Lars Gleim, Philipp Niemietz, Maximilian Rudack, Simon Knape, Alexander Epple, Daniel Trauth, Uwe Vroomen, Thomas Bergs, Christian Brecher, Andreas Bührig-Polaczek, Matthias Jarke, and Klaus Wehrle. Towards an Infrastructure Enabling the Internet of Production. In Proceedings of the 2nd IEEE International Conference on Industrial Cyber-Physical Systems (ICPS ’19), 05 2019.
[BibTeX] [Abstract] [DOI] [PDF]New levels of cross-domain collaboration between manufacturing companies throughout the supply chain are anticipated to bring benefits to both suppliers and consumers of products. Enabling a fine-grained sharing and analysis of data among different stakeholders in an automated manner, such a vision of an Internet of Production (IoP) introduces demanding challenges to the communication, storage, and computation infrastructure in production environments. In this work, we present three example cases that would benefit from an IoP (a fine blanking line, a high pressure die casting process, and a connected job shop) and derive requirements that cannot be met by today’s infrastructure. In particular, we identify three orthogonal research objectives: (i) real-time control of tightly integrated production processes to offer seamless low-latency analysis and execution, (ii) storing and processing heterogeneous production data to support scalable data stream processing and storage, and (iii) secure privacy-aware collaboration in production to provide a basis for secure industrial collaboration. Based on a discussion of state-of-the-art approaches for these three objectives, we create a blueprint for an infrastructure acting as an enabler for an IoP.
@inproceedings{PGH+19, author = {Pennekamp, Jan and Glebke, Ren{\'e} and Henze, Martin and Meisen, Tobias and Quix, Christoph and Hai, Rihan and Gleim, Lars and Niemietz, Philipp and Rudack, Maximilian and Knape, Simon and Epple, Alexander and Trauth, Daniel and Vroomen, Uwe and Bergs, Thomas and Brecher, Christian and B{\"u}hrig-Polaczek, Andreas and Jarke, Matthias and Wehrle, Klaus}, title = {{Towards an Infrastructure Enabling the Internet of Production}}, booktitle = {Proceedings of the 2nd IEEE International Conference on Industrial Cyber-Physical Systems (ICPS '19)}, year = {2019}, month = {05}, doi = {10.1109/ICPHYS.2019.8780276}, abstract = {New levels of cross-domain collaboration between manufacturing companies throughout the supply chain are anticipated to bring benefits to both suppliers and consumers of products. Enabling a fine-grained sharing and analysis of data among different stakeholders in an automated manner, such a vision of an Internet of Production (IoP) introduces demanding challenges to the communication, storage, and computation infrastructure in production environments. In this work, we present three example cases that would benefit from an IoP (a fine blanking line, a high pressure die casting process, and a connected job shop) and derive requirements that cannot be met by today's infrastructure. In particular, we identify three orthogonal research objectives: (i) real-time control of tightly integrated production processes to offer seamless low-latency analysis and execution, (ii) storing and processing heterogeneous production data to support scalable data stream processing and storage, and (iii) secure privacy-aware collaboration in production to provide a basis for secure industrial collaboration. Based on a discussion of state-of-the-art approaches for these three objectives, we create a blueprint for an infrastructure acting as an enabler for an IoP.}, meta = {}, }
2017
- Jan Pennekamp, Martin Henze, and Klaus Wehrle. A Survey on the Evolution of Privacy Enforcement on Smartphones and the Road Ahead. Pervasive and Mobile Computing, 42, 12 2017.
[BibTeX] [Abstract] [DOI] [PDF]With the increasing proliferation of smartphones, enforcing privacy of smartphone users becomes evermore important. Nowadays, one of the major privacy challenges is the tremendous amount of permissions requested by applications, which can significantly invade users’ privacy, often without their knowledge. In this paper, we provide a comprehensive review of approaches that can be used to report on applications’ permission usage, tune permission access, contain sensitive information, and nudge users towards more privacy-conscious behavior. We discuss key shortcomings of privacy enforcement on smartphones so far and identify suitable actions for the future.
@article{PHW17, author = {Pennekamp, Jan and Henze, Martin and Wehrle, Klaus}, title = {{A Survey on the Evolution of Privacy Enforcement on Smartphones and the Road Ahead}}, journal = {Pervasive and Mobile Computing}, year = {2017}, volume = {42}, publisher = {Elsevier}, month = {12}, doi = {10.1016/j.pmcj.2017.09.005}, issn = {1574-1192}, abstract = {With the increasing proliferation of smartphones, enforcing privacy of smartphone users becomes evermore important. Nowadays, one of the major privacy challenges is the tremendous amount of permissions requested by applications, which can significantly invade users' privacy, often without their knowledge. In this paper, we provide a comprehensive review of approaches that can be used to report on applications' permission usage, tune permission access, contain sensitive information, and nudge users towards more privacy-conscious behavior. We discuss key shortcomings of privacy enforcement on smartphones so far and identify suitable actions for the future.}, meta = {}, }
- Martin Henze, Jan Pennekamp, David Hellmanns, Erik Mühmer, Jan Henrik Ziegeldorf, Arthur Drichel, and Klaus Wehrle. CloudAnalyzer: Uncovering the Cloud Usage of Mobile Apps. In Proceedings of the 14th EAI International Conference on Mobile and Ubiquitous Systems: Computing, Networking and Services (MobiQuitous), 11 2017.
[BibTeX] [Abstract] [DOI] [PDF] [CODE]Developers of smartphone apps increasingly rely on cloud services for ready-made functionalities, e.g., to track app usage, to store data, or to integrate social networks. At the same time, mobile apps have access to various private information, ranging from users’ contact lists to their precise locations. As a result, app deployment models and data flows have become too complex and entangled for users to understand. We present CloudAnalyzer, a transparency technology that reveals the cloud usage of smartphone apps and hence provides users with the means to reclaim informational self-determination. We apply CloudAnalyzer to study the cloud exposure of 29 volunteers over the course of 19 days. In addition, we analyze the cloud usage of the 5000 most accessed mobile websites as well as 500 popular apps from five different countries. Our results reveal an excessive exposure to cloud services: 90 % of apps use cloud services and 36 % of apps used by volunteers solely communicate with cloud services. Given the information provided by CloudAnalyzer, users can critically review the cloud usage of their apps.
@inproceedings{HPH+17, author = {Henze, Martin and Pennekamp, Jan and Hellmanns, David and M{\"u}hmer, Erik and Ziegeldorf, Jan Henrik and Drichel, Arthur and Wehrle, Klaus}, title = {{CloudAnalyzer: Uncovering the Cloud Usage of Mobile Apps}}, booktitle = {Proceedings of the 14th EAI International Conference on Mobile and Ubiquitous Systems: Computing, Networking and Services (MobiQuitous)}, year = {2017}, month = {11}, doi = {10.1145/3144457.3144471}, abstract = {Developers of smartphone apps increasingly rely on cloud services for ready-made functionalities, e.g., to track app usage, to store data, or to integrate social networks. At the same time, mobile apps have access to various private information, ranging from users' contact lists to their precise locations. As a result, app deployment models and data flows have become too complex and entangled for users to understand. We present CloudAnalyzer, a transparency technology that reveals the cloud usage of smartphone apps and hence provides users with the means to reclaim informational self-determination. We apply CloudAnalyzer to study the cloud exposure of 29 volunteers over the course of 19 days. In addition, we analyze the cloud usage of the 5000 most accessed mobile websites as well as 500 popular apps from five different countries. Our results reveal an excessive exposure to cloud services: 90 % of apps use cloud services and 36 % of apps used by volunteers solely communicate with cloud services. Given the information provided by CloudAnalyzer, users can critically review the cloud usage of their apps.}, code = {https://github.com/COMSYS/CloudAnalyzer}, meta = {}, }
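Illustrative sketch (not the CloudAnalyzer pipeline, which works on on-device traffic and a curated provider mapping): the basic classification step of attributing observed destination hostnames to cloud providers can be approximated by suffix matching against a provider list; the list below is deliberately tiny and incomplete.

```python
# Classify observed destination hostnames by provider domain suffix (toy mapping).
PROVIDER_SUFFIXES = {
    "amazonaws.com": "AWS",
    "googleapis.com": "Google Cloud",
    "azurewebsites.net": "Microsoft Azure",
}


def classify(hostname):
    for suffix, provider in PROVIDER_SUFFIXES.items():
        if hostname == suffix or hostname.endswith("." + suffix):
            return provider
    return "unknown/self-hosted"


for host in ["s3.eu-central-1.amazonaws.com", "www.example.org"]:
    print(host, "->", classify(host))
```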
- Jan Henrik Ziegeldorf, Jan Pennekamp, David Hellmanns, Felix Schwinger, Ike Kunze, Martin Henze, Jens Hiller, Roman Matzutt, and Klaus Wehrle. BLOOM: BLoom filter based Oblivious Outsourced Matchings. BMC Medical Genomics, 10(Suppl 2), 07 2017. Proceedings of the 5th iDASH Privacy and Security Workshop 2016.
[BibTeX] [Abstract] [DOI] [PDF] [CODE]Whole genome sequencing has become fast, accurate, and cheap, paving the way towards the large-scale collection and processing of human genome data. Unfortunately, this dawning genome era does not only promise tremendous advances in biomedical research but also causes unprecedented privacy risks for the many. Handling storage and processing of large genome datasets through cloud services greatly aggravates these concerns. Current research efforts thus investigate the use of strong cryptographic methods and protocols to implement privacy-preserving genomic computations. We propose FHE-Bloom and PHE-Bloom, two efficient approaches for genetic disease testing using homomorphically encrypted Bloom filters. Both approaches allow the data owner to securely outsource storage and computation to an untrusted cloud. FHE-Bloom is fully secure in the semi-honest model while PHE-Bloom slightly relaxes security guarantees in a trade-off for highly improved performance. We implement and evaluate both approaches on a large dataset of up to 50 patient genomes each with up to 1000000 variations (single nucleotide polymorphisms). For both implementations, overheads scale linearly in the number of patients and variations, while PHE-Bloom is faster by at least three orders of magnitude. For example, testing disease susceptibility of 50 patients with 100000 variations requires only a total of 308.31 s (σ=8.73 s) with our first approach and a mere 0.07 s (σ=0.00 s) with the second. We additionally discuss security guarantees of both approaches and their limitations as well as possible extensions towards more complex query types, e.g., fuzzy or range queries. Both approaches handle practical problem sizes efficiently and are easily parallelized to scale with the elastic resources available in the cloud. The fully homomorphic scheme, FHE-Bloom, realizes a comprehensive outsourcing to the cloud, while the partially homomorphic scheme, PHE-Bloom, trades a slight relaxation of security guarantees against performance improvements by at least three orders of magnitude.
@article{ZPH+17, author = {Ziegeldorf, Jan Henrik and Pennekamp, Jan and Hellmanns, David and Schwinger, Felix and Kunze, Ike and Henze, Martin and Hiller, Jens and Matzutt, Roman and Wehrle, Klaus}, title = {{BLOOM: BLoom filter based Oblivious Outsourced Matchings}}, journal = {BMC Medical Genomics}, year = {2017}, volume = {10}, number = {Suppl 2}, month = {07}, doi = {10.1186/s12920-017-0277-y}, issn = {1755-8794}, note = {Proceedings of the 5th iDASH Privacy and Security Workshop 2016}, abstract = {Whole genome sequencing has become fast, accurate, and cheap, paving the way towards the large-scale collection and processing of human genome data. Unfortunately, this dawning genome era does not only promise tremendous advances in biomedical research but also causes unprecedented privacy risks for the many. Handling storage and processing of large genome datasets through cloud services greatly aggravates these concerns. Current research efforts thus investigate the use of strong cryptographic methods and protocols to implement privacy-preserving genomic computations. We propose FHE-Bloom and PHE-Bloom, two efficient approaches for genetic disease testing using homomorphically encrypted Bloom filters. Both approaches allow the data owner to securely outsource storage and computation to an untrusted cloud. FHE-Bloom is fully secure in the semi-honest model while PHE-Bloom slightly relaxes security guarantees in a trade-off for highly improved performance. We implement and evaluate both approaches on a large dataset of up to 50 patient genomes each with up to 1000000 variations (single nucleotide polymorphisms). For both implementations, overheads scale linearly in the number of patients and variations, while PHE-Bloom is faster by at least three orders of magnitude. For example, testing disease susceptibility of 50 patients with 100000 variations requires only a total of 308.31 s (σ=8.73 s) with our first approach and a mere 0.07 s (σ=0.00 s) with the second. We additionally discuss security guarantees of both approaches and their limitations as well as possible extensions towards more complex query types, e.g., fuzzy or range queries. Both approaches handle practical problem sizes efficiently and are easily parallelized to scale with the elastic resources available in the cloud. The fully homomorphic scheme, FHE-Bloom, realizes a comprehensive outsourcing to the cloud, while the partially homomorphic scheme, PHE-Bloom, trades a slight relaxation of security guarantees against performance improvements by at least three orders of magnitude.}, code = {https://github.com/COMSYS/bloom}, meta = {}, }
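Illustrative sketch (plaintext only): the article above encodes genetic variants in Bloom filters before encrypting them homomorphically. The sketch below shows just the Bloom-filter encoding of a variant set; the homomorphic encryption and outsourced matching are omitted, and the SNP identifiers serve purely as examples.

```python
# Plaintext Bloom filter over a patient's variant set (encryption omitted).
import hashlib


class BloomFilter:
    def __init__(self, size=1 << 16, hashes=4):
        self.size, self.hashes, self.bits = size, hashes, bytearray(size // 8)

    def _positions(self, item):
        for i in range(self.hashes):
            digest = hashlib.sha256(f"{i}:{item}".encode()).digest()
            yield int.from_bytes(digest[:8], "big") % self.size

    def add(self, item):
        for pos in self._positions(item):
            self.bits[pos // 8] |= 1 << (pos % 8)

    def __contains__(self, item):
        return all(self.bits[pos // 8] & (1 << (pos % 8)) for pos in self._positions(item))


patient = BloomFilter()
for snp in ["rs429358:C", "rs7412:C"]:
    patient.add(snp)
print("rs429358:C" in patient, "rs123:A" in patient)   # True, (almost certainly) False
```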
2016
- Andriy Panchenko, Fabian Lanze, Andreas Zinnen, Martin Henze, Jan Pennekamp, Klaus Wehrle, and Thomas Engel. Website Fingerprinting at Internet Scale. In Proceedings of the 23rd Annual Network and Distributed System Security Symposium (NDSS ’16), 02 2016.
[BibTeX] [Abstract] [DOI] [PDF] [CODE]The website fingerprinting attack aims to identify the content (i.e., a webpage accessed by a client) of encrypted and anonymized connections by observing patterns of data flows such as packet size and direction. This attack can be performed by a local passive eavesdropper – one of the weakest adversaries in the attacker model of anonymization networks such as Tor. In this paper, we present a novel website fingerprinting attack. Based on a simple and comprehensible idea, our approach outperforms all state-of-the-art methods in terms of classification accuracy while being computationally dramatically more efficient. In order to evaluate the severity of the website fingerprinting attack in reality, we collected the most representative dataset that has ever been built, where we avoid simplified assumptions made in the related work regarding selection and type of webpages and the size of the universe. Using this data, we explore the practical limits of website fingerprinting at Internet scale. Although our novel approach is by orders of magnitude computationally more efficient and superior in terms of detection accuracy, for the first time we show that no existing method – including our own – scales when applied in realistic settings. With our analysis, we explore neglected aspects of the attack and investigate the realistic probability of success for different strategies a real-world adversary may follow.
@inproceedings{PLZ+16, author = {Panchenko, Andriy and Lanze, Fabian and Zinnen, Andreas and Henze, Martin and Pennekamp, Jan and Wehrle, Klaus and Engel, Thomas}, title = {{Website Fingerprinting at Internet Scale}}, booktitle = {Proceedings of the 23rd Annual Network and Distributed System Security Symposium (NDSS '16)}, year = {2016}, month = {02}, doi = {10.14722/ndss.2016.23477}, abstract = {The website fingerprinting attack aims to identify the content (i.e., a webpage accessed by a client) of encrypted and anonymized connections by observing patterns of data flows such as packet size and direction. This attack can be performed by a local passive eavesdropper - one of the weakest adversaries in the attacker model of anonymization networks such as Tor. In this paper, we present a novel website fingerprinting attack. Based on a simple and comprehensible idea, our approach outperforms all state-of-the-art methods in terms of classification accuracy while being computationally dramatically more efficient. In order to evaluate the severity of the website fingerprinting attack in reality, we collected the most representative dataset that has ever been built, where we avoid simplified assumptions made in the related work regarding selection and type of webpages and the size of the universe. Using this data, we explore the practical limits of website fingerprinting at Internet scale. Although our novel approach is by orders of magnitude computationally more efficient and superior in terms of detection accuracy, for the first time we show that no existing method - including our own - scales when applied in realistic settings. With our analysis, we explore neglected aspects of the attack and investigate the realistic probability of success for different strategies a real-world adversary may follow.}, code = {https://www.informatik.tu-cottbus.de/~andriy/zwiebelfreunde/}, meta = {}, }
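Illustrative sketch (a simplified rendering of the cumulative-trace idea, not the paper's feature set or classifier): take the signed packet sizes of a page load, compute their running sum, and sample it at a fixed number of points so traces of any length become comparable feature vectors. The number of points and the nearest-sample interpolation below are simplifications.

```python
# Cumulative-trace features: running sum of signed packet sizes, sampled at a
# fixed number of points (nearest-sample interpolation; the classifier is omitted).
from itertools import accumulate


def cumulative_features(signed_packet_sizes, points=100):
    cum = list(accumulate(signed_packet_sizes))
    if len(cum) == 1:
        cum = cum * 2                            # avoid degenerate sampling
    step = (len(cum) - 1) / (points - 1)
    return [cum[round(i * step)] for i in range(points)]


trace = [600, -1500, -1500, 550, -1500, -1500, -1500]    # +outgoing / -incoming bytes
print(cumulative_features(trace, points=10))
```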
University Papers
- Master Thesis: Uncovering Doppelgängers in Online Communities
Advised by Dr. Andriy Panchenko¹ (SecanLab, University of Luxembourg) & Dr. Oliver Hohlfeld² (COMSYS, RWTH Aachen University)
- Seminar: Challenges for Privacy Enforcing on Smartphones *2nd best paper* (COMSYS, RWTH Aachen University)
[Journal-Submission]
- Seminar: MOOCs and Authentication *Best Paper Award* (School of Science, Aalto University)
[Proceedings]
- Bachelor Thesis: Evaluating Website Fingerprinting Attacks in Real-World Settings
Advised by Dr. Andriy Panchenko¹ (SecanLab, University of Luxembourg) & Martin Henze³ (COMSYS, RWTH Aachen University)
Code collaboration (Until 05/2018):
- CloudAnalyzer (COMSYS, RWTH Aachen University)
[APK] [Code]
Among others used in:
– “CloudAnalyzer: Uncovering the Cloud Usage of Mobile Apps” Proceedings of the 14th EAI International Conference on Mobile and Ubiquitous Systems: Computing, Networking and Services (MobiQuitous 2017)
– “Privacy-preserving Comparison of Cloud Exposure Induced by Mobile Apps” Proceedings of the 14th EAI International Conference on Mobile and Ubiquitous Systems: Computing, Networking and Services (MobiQuitous 2017)
- MailAnalyzer (COMSYS, RWTH Aachen University)
[Code]
Used in “Veiled in Clouds? Assessing the Prevalence of Cloud Computing in the Email Landscape”
Proceedings of the 2017 Network Traffic Measurement and Analysis Conference (TMA 2017)
- Secure Genome Outsourcing (COMSYS, RWTH Aachen University)
[Code]
Used in “BLOOM: BLoom-filter-based Oblivious Outsourced Matchings”
BMC Medical Genomics 2017, Volume 10, July 2017, Issue 2 Supplement.
- Website Fingerprinting Toolkit (SecanLab, University of Luxembourg)
[Code] Only a subset, the CUMUL classifier, is publicly available.
Among others used in:
– “Analysis of Fingerprinting Techniques for Tor Hidden Services” Proceedings of the 16th Workshop on Privacy in the Electronic Society (WPES 2017)
– “POSTER: Fingerprinting Tor Hidden Services” Proceedings of the 23rd ACM Conference on Computer and Communications Security (CCS 2016)
¹ [Now full professor for IT Security at Brandenburg University of Technology]
² [Now full professor for Distributed Systems at University of Kassel]
³ [Now junior professor for Security and Privacy in Industrial Cooperation at RWTH Aachen University and postdoctoral researcher at Cyber Analysis & Defense, Fraunhofer FKIE]