An Overview of Metrics for Evaluating the Quality of Graph Partitioners
Mustapha Abdulkadir Sani,
Abdulmalik Ahmad Lawan,
Ayaz Khalid Mohammed,
Abdulkadir Ahmad,
Yusuf Haruna
Issue:
Volume 7, Issue 1, January 2022
Pages:
1-8
Received:
12 January 2022
Accepted:
3 February 2022
Published:
16 February 2022
Abstract: Nowadays, many applications that involve big data can be modelled as graphs. In many cases these graphs are too large to be loaded and processed on a single commodity computer, which has necessitated the development of frameworks that process large graphs by distributing them among the nodes of a cluster. To process a graph with these frameworks, the graph is first partitioned into smaller components called subgraphs or partitions, and these subgraphs are then assigned to different nodes for parallel processing. Depending on the type of processing (for example, computing PageRank or counting the number of triangles), there will be some communication between nodes during execution, and this communication affects execution time. Graph partitioning is therefore an important step in distributed graph processing. Being able to determine the quality of a partition prior to processing is important, as it allows us to predict the execution time before the actual processing. A number of metrics for evaluating the quality of a graph partition exist, but studies show that these metrics may not serve as accurate predictors in many cases. In this work, we reviewed published papers on graph partitioning and identified and defined additional metrics in order to build a catalogue of these metrics.
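As an illustration of the kind of partition-quality metrics the paper catalogues, the sketch below computes two commonly used ones, edge-cut (edges crossing partition boundaries, a proxy for inter-node communication) and load balance. This is a minimal sketch on a toy graph, not code from the paper; the function names and the example graph are assumptions.

```python
def edge_cut(edges, part):
    """Count edges whose endpoints lie in different partitions."""
    return sum(1 for u, v in edges if part[u] != part[v])

def balance(part, k):
    """Largest partition size divided by the ideal size (1.0 = perfect)."""
    sizes = [0] * k
    for p in part.values():
        sizes[p] += 1
    ideal = len(part) / k
    return max(sizes) / ideal

# Toy graph: a 4-cycle split into two partitions of two vertices each.
edges = [(0, 1), (1, 2), (2, 3), (0, 3)]
part = {0: 0, 1: 0, 2: 1, 3: 1}
print(edge_cut(edges, part))  # 2: edges (1,2) and (0,3) cross the cut
print(balance(part, 2))       # 1.0: both partitions hold 2 vertices
```

A lower edge-cut generally means less communication between nodes during execution, though, as the abstract notes, such metrics are not always accurate predictors of actual execution time.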
Secure Contact Agreement Protocol for Messenger Services Through Randomized ID Assignments
Michael Maigwa Martin Kangethe,
Elisha Odira Abade
Issue:
Volume 7, Issue 1, January 2022
Pages:
9-17
Received:
27 January 2022
Accepted:
23 February 2022
Published:
4 March 2022
Abstract: Messenger services have over the recent decade been the most dominant, ubiquitous, and widespread form of communication globally. While these services have evolved to enable reliable real-time communication between people across different technologies, privacy and security concerns arose that were initially remediated by adopting TLS/SSL and end-to-end (E2E) encryption as a standard. However, new privacy challenges and security gaps have been identified and exploited through the capture and analysis of communication traffic metadata. These attacks use readily available machine learning and data mining algorithms to identify users’ communication networks and patterns without reading the actual messages sent between end-users, relying only on metadata such as sender and receiver IDs, time sent, and communication frequency. Closing these gaps requires anonymizing users’ communications while maintaining reliable contact between them. Randomized anonymization of metadata parameters can make it nearly impossible for current analytics algorithms to identify user patterns in communication traffic over time. To guarantee seamless communication between users with changing identities, there must also be a real-time contact exchange protocol that lets users randomly change their IDs and secretly inform the other users in their contact lists without the physical intervention or involvement of the human user. This paper proposes such a solution: a randomized contact reassignment and exchange protocol in which a client uses PKI encryption to share its new identity with its existing contacts, defeating the creation of traceable logs over time.
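The ID-rotation flow the abstract describes can be sketched as follows: a user picks a fresh random identifier and notifies each existing contact of the change. This is a minimal illustration, not the paper's protocol; all class and function names are assumptions, and the notification step is a placeholder where the full protocol would encrypt the new ID with each contact's public key (PKI) before transmission.

```python
import secrets

class User:
    def __init__(self, name):
        self.name = name
        self.current_id = secrets.token_hex(8)  # random, unlinkable ID
        self.contacts = {}                      # contact name -> their current ID

    def add_contact(self, other):
        """Exchange current IDs with another user."""
        self.contacts[other.name] = other.current_id
        other.contacts[self.name] = self.current_id

    def rotate_id(self, directory):
        """Adopt a fresh random ID and notify each existing contact.

        In the full protocol, each notification would be encrypted with
        the recipient's public key so only they can link old ID to new.
        """
        old_id = self.current_id
        self.current_id = secrets.token_hex(8)
        for contact_name in self.contacts:
            directory[contact_name].contacts[self.name] = self.current_id
        return old_id

# Usage: Alice rotates her ID; Bob's contact list is updated transparently.
directory = {}
alice, bob = User("alice"), User("bob")
directory.update({"alice": alice, "bob": bob})
alice.add_contact(bob)
old_id = alice.rotate_id(directory)
print(bob.contacts["alice"] == alice.current_id)  # True: Bob learned the new ID
print(alice.current_id != old_id)                 # True: the old ID is abandoned
```

Because each new ID is drawn uniformly at random and communicated only to existing contacts, an observer of the traffic metadata cannot link a user's messages across rotations, which is the property the proposed protocol targets.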