NPA PhD Students, Interns and Postdocs Day
2 July 2020, 9 AM
Session I - Smart Cities (9h - 10h40)
|Assessing personal exposure to air pollution using portable sensors
|C-Continuum: Edge-to-Cloud computing for distributed AI
|Persistence of Vehicular-Augmented Mobile Edges
|Improving Congestion Control Mechanisms of CoAP in Constrained IoT Networks
Session II - Blockchains (10h50 - 12h30)
|Game theoretical analysis of Atomic Cross-Chain Swaps
|Atomic Cross Chain Swaps via Relays and Adapters
|Game theoretical frameworks for blockchains
|Rational Behavior in Committee-Based Blockchains
Lunch (12h30 - 13h30)
Session III - Networks (13h30 - 15h20)
|E-IOTA: an efficient and fast metamorphism for IOTA
|DHT-Based Lightning Channels
|More bang for your buck: optimal advertising through online social influencers
|Longitudinal study of a geolocation database
Session IV - Distributed (15h40 - 17h)
|Distributed Recoloring
|An overview on the reliable communication problem
Session I - Smart Cities
Boris Dessimond: Assessing Personal Exposure to Air Pollution using Portable Sensors
A quick introduction to air pollution, followed by a presentation of the Polluscope project, its aims and challenges, and preliminary results.
Sadia Khizar: Persistence of Vehicular-Augmented Mobile Edges
Mobile edge computing (MEC) allows for the execution of delay-sensitive and context-aware applications close to the end-users while alleviating backhaul utilization and computation at the core network. A recent trend is to extend the capabilities of the MEC using the available resources of end-user devices beyond the edge, such as vehicles. Because of the highly mobile nature of such devices, the beyond-the-edge computing resources available at the time the MEC offloads a task decay over time. In this paper, we propose and investigate the persistence of groups of computational nodes in a vehicular-augmented MEC scenario. The groups are formed according to the time of arrival of vehicles and their co-location within a cell. We analyze two real-world vehicular scenarios in the cities of Rome and Rio de Janeiro. On the one hand, our results show that persistence is highly heterogeneous, depending on the vehicular traffic density and the time of day. On the other hand, we are able to identify periods during which persistence is short and periods that show stronger stability; it then becomes possible to schedule tasks to specific time slots depending on the requirements and the expected persistence.
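The grouping idea above (vehicles arriving close in time within the same cell form a group, and the group persists until a member leaves) can be sketched as follows. This is an illustrative toy, not the paper's method: the function name, the `group_window` parameter, and the input format are all invented for the example.

```python
def group_persistence(cell_log, group_window):
    """Toy sketch: vehicles that arrive in the same cell within
    `group_window` seconds of the group's first member join that group;
    a group's persistence is how long all its members stay co-located.

    `cell_log` maps a vehicle id to its (arrival, departure) interval
    in the cell. All names here are illustrative, not the paper's API.
    """
    vehicles = sorted(cell_log.items(), key=lambda kv: kv[1][0])
    groups, current = [], []
    for vid, (arrival, departure) in vehicles:
        # open a new group when the arrival falls outside the window
        if current and arrival - current[0][1][0] > group_window:
            groups.append(current)
            current = []
        current.append((vid, (arrival, departure)))
    if current:
        groups.append(current)
    # a group persists from its last arrival until its first departure
    persistence = [min(dep for _, (_, dep) in g) - max(arr for _, (arr, _) in g)
                   for g in groups]
    return groups, persistence

log = {"a": (0, 120), "b": (10, 90), "c": (200, 260)}
groups, persist = group_persistence(log, group_window=60)
# two groups: {a, b} persisting 90 - 10 = 80 s, and {c} alone for 60 s
```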
Nabil Makarem: Improving Congestion Control Mechanisms of CoAP in Constrained IoT Networks
The Constrained Application Protocol (CoAP) is a lightweight core protocol used for communication between devices in the Internet of Things. The CoAP standard defines a simple congestion control mechanism based mainly on retransmissions after timeouts and a binary exponential backoff procedure. However, the default parameters of this mechanism can negatively affect the efficiency of the protocol under severe network conditions, leading to high retransmission timeouts due to the backoff mechanism and to unneeded retransmissions. Therefore, we propose new exact mathematical models to study the reliability and the overall performance of CoAP in lossy networks with and without bursty losses. This study provides useful insights for tuning CoAP parameters and also highlights CoAP properties and limitations, so that better future mechanisms can be designed adequately. We then propose an enhanced algorithm for better performance in lossy networks, with evaluation results showing the accuracy of our proposed models and the benefit of our algorithm.
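As background, the default CoAP retransmission schedule mentioned above draws an initial timeout with some jitter and then doubles it after each unacknowledged retry. A minimal sketch with the standard's default parameters (RFC 7252) might look like:

```python
import random

# Default CoAP congestion control parameters (RFC 7252)
ACK_TIMEOUT = 2.0        # seconds
ACK_RANDOM_FACTOR = 1.5
MAX_RETRANSMIT = 4

def retransmission_timeouts(rng=random.random):
    """Return the sequence of timeouts CoAP waits before each (re)transmission.

    The initial timeout is drawn uniformly from
    [ACK_TIMEOUT, ACK_TIMEOUT * ACK_RANDOM_FACTOR], then doubled after
    every unacknowledged transmission (binary exponential backoff).
    """
    initial = ACK_TIMEOUT + rng() * ACK_TIMEOUT * (ACK_RANDOM_FACTOR - 1)
    return [initial * 2 ** i for i in range(MAX_RETRANSMIT + 1)]

timeouts = retransmission_timeouts(rng=lambda: 0.0)  # no jitter: lower bound
# With no jitter: 2, 4, 8, 16, 32 seconds before the exchange is abandoned.
```

The long tail of this schedule under heavy loss is exactly the behavior the talk's models quantify.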
Davide Aguiari: C-Continuum: Edge-to-Cloud computing for distributed AI
Mobile autonomous systems are expected to deeply impact manufacturing, space exploration, rescue, defense, transportation, and everyday life; they will, however, require massive amounts of data and computation. Distributed AI applications call for effective, seamless, and efficient communication and computation mechanisms across the whole computing spectrum: edge, fog, and cloud. C-Continuum, a Computing Continuum framework for distributed AI, aims to define a new generation of tools and mechanisms enabling fine-granularity computation, coordination, and mobility management across the mobile-computing spectrum from the edge to the core, embracing Named Data Networking (NDN) to name any computational entity and to use those names for resource location, data transfers, and computing functions as well.
Session II - Blockchains
Marianna Belotti: Game theoretical analysis of Atomic Cross-Chain Swaps
We address the distributed cross-chain swap problem in the blockchain context, where multiple agents exchange assets across multiple blockchain systems (e.g. trading Bitcoins for Litecoins or Ethers). We present a mathematical framework that characterizes blockchain swap protocols as the combination of a publishing phase and a commitment phase, in which contracts are respectively published and then committed. We characterize the equilibria of existing cross-chain swap protocols (i.e., blockchain swap protocols exchanging assets among different blockchains). More precisely, we prove that following a swap protocol characterized by concurrent publishing of exchange contracts and snap (immediate) asset transfers is a Nash equilibrium. Furthermore, we prove that for protocols with sequential publishing and commitment of the asset transfers, following the prescribed protocol is a subgame-perfect equilibrium.
Léonard Lys: Atomic Cross Chain Swaps via Relays and Adapters
Blockchain technologies have proven their potential when it comes to storing assets and value. However, swapping assets across chains, for example trading ethers for bitcoins, is still a challenging problem. Current solutions widely rely on trusted third parties such as exchanges, which is not acceptable for a distributed technology. Atomic cross-chain swap protocols that use hash time-locked contracts have been proposed as a solution for inter-chain exchanges, but the atomicity of the transaction can be violated in case of a client crash or packet loss. Furthermore, those protocols are impractical, as they require the user to perform complex audits and transactions on several blockchains over a relatively long period of time. In this paper, we propose to solve the shortcomings of hash time-locked cross-chain transactions by using relays and adapters to automate contract auditing and user actions, as well as to prevent atomicity violations.
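For readers unfamiliar with the primitive, a hash time-locked contract (HTLC) pays out to whoever reveals the preimage of a hash before a deadline, and refunds the funder afterwards; the atomicity risk the talk addresses arises when one party misses that deadline. A minimal, chain-agnostic toy model (the class and its interface are invented for illustration):

```python
import hashlib
import time

class HashTimeLockedContract:
    """Toy HTLC: funds go to the claimant who reveals the hash preimage
    before the deadline; otherwise they can be refunded to the funder."""

    def __init__(self, secret_hash: bytes, amount: int, timeout: float):
        self.secret_hash = secret_hash
        self.amount = amount
        self.deadline = time.monotonic() + timeout
        self.settled = False

    def claim(self, preimage: bytes, now=None) -> bool:
        """Release funds to the counterparty if the preimage matches in time."""
        now = time.monotonic() if now is None else now
        if self.settled or now > self.deadline:
            return False
        if hashlib.sha256(preimage).digest() != self.secret_hash:
            return False
        self.settled = True
        return True

    def refund(self, now=None) -> bool:
        """After the deadline, return unclaimed funds to the funder."""
        now = time.monotonic() if now is None else now
        if not self.settled and now > self.deadline:
            self.settled = True
            return True
        return False

secret = b"swap-secret"
htlc = HashTimeLockedContract(hashlib.sha256(secret).digest(),
                              amount=10, timeout=3600)
assert htlc.claim(b"wrong-secret") is False  # bad preimage rejected
assert htlc.claim(secret) is True            # correct preimage claims in time
```

In a real swap, two such contracts on two chains share the same `secret_hash`, so revealing the secret on one chain lets the counterparty claim on the other; relays and adapters, as proposed in the talk, automate that auditing step.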
Paolo Zappala: Game Theoretical Frameworks for Blockchains
In the literature, the robustness of blockchains is analysed in different ways. Most models are built ad hoc for specific contexts and are not comparable. After a brief review of game-theoretical models, we introduce a new framework to analyse the robustness of blockchains. We apply it to a class of second-layer protocols and to a cross-chain swap protocol. Finally, we show how the model can be generalised to any specific blockchain.
Yackolley Amoussou-Guenou: Rational Behavior in Committee-Based Blockchains
We study the rational behavior of participants in committee-based blockchains. Committee-based blockchains rely on a specific blockchain consensus that must be guaranteed in the presence of rational participants. We consider a simplified blockchain consensus algorithm, based on existing or proposed committee-based blockchains, that encapsulates the main actions of the participants: voting for a block and checking its validity. Knowing that those actions have costs and that reaching consensus rewards committee members, we use game theory to study how strategic players behave while trying to maximize their gains.
Session III - Réseaux
Gewu Bu: E-IOTA: An Efficient and Fast Metamorphism for IOTA
IOTA recently opened a new line of research in the distributed ledger area by targeting algorithms that ensure a high throughput for the transactions generated in IoT systems. Transactions are continuously appended to an acyclic structure called the Tangle. Each new transaction selects and approves, as parents, two existing transactions that have not been approved yet (called tips). Recent research indicates that IOTA suffers from a fairness problem: a part of the submitted transactions might never be approved by newly arriving transactions. These are called left-behind transactions. Recently, G-IOTA has been proposed: using a left-behind protection mechanism, G-IOTA fixes the fairness problem. G-IOTA, however, needs additional computational resources for this mechanism, which is not suitable for an energy-aware IoT system. Furthermore, both IOTA and G-IOTA suffer from a potential security defect: a lack of unpredictability in where transactions are appended to the Tangle. Exploiting this predictability, an adversary can easily influence where new transactions attach; this is called a prediction attack. In this paper, we propose E-IOTA to solve the fairness problem while guaranteeing the unpredictability of the Tangle. Interestingly, our realistic simulations show that E-IOTA offers better performance and efficiency than G-IOTA and even the original IOTA. At the same time, E-IOTA retains the core data structure of IOTA, which makes it compatible with the current IOTA implementation. Our solution can therefore be a drop-in alternative to the current IOTA.
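The tip-selection process described above can be sketched with a toy tangle in which every new transaction approves two tips. Note the simplification: IOTA's actual tip selection is a weighted random walk, whereas here tips are drawn uniformly at random; the function and variable names are invented for the example.

```python
import random

def attach_transactions(n, rng):
    """Grow a toy tangle: each new transaction approves up to two tips
    (transactions not yet approved), chosen uniformly at random here --
    a simplification of IOTA's weighted-walk tip selection.
    """
    approvals = {0: []}          # genesis transaction has no parents
    tips = {0}
    for tx in range(1, n + 1):
        parents = rng.sample(sorted(tips), k=min(2, len(tips)))
        approvals[tx] = parents
        tips.difference_update(parents)  # parents are no longer tips
        tips.add(tx)                     # the new transaction is a tip
    return approvals, tips

rng = random.Random(42)
approvals, tips = attach_transactions(100, rng)
# Tips that remain in `tips` for a long time are candidates for
# becoming "left-behind" transactions, the fairness problem above.
unapproved = len(tips)
```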
Hamza Tadlaoui: DHT-Based Lightning Channels
The subject concerns the Lightning Network, a second layer on top of Bitcoin that improves the scalability of the blockchain. My internship focuses in particular on routing techniques over the Lightning Network.
Ricardo Lopez Dawn: More Bang for your Buck: Optimal Advertising through Online Social Influencers
Since their appearance, online social platforms have always been the natural environment for viral marketing. To study the impact of advertising campaigns orchestrated inside these platforms, it is common to formulate an influence maximisation problem. Its solution determines a set of users who can maximise the spread of influence, under a given influence propagation model. Normally such problems are combinatorial in nature, thus hard to solve, and research has focused on sub-optimal solutions. In this presentation, we introduce a continuous variation of the influence maximisation problem. A specific user (the advertiser), with a certain monetary budget at her disposal, orchestrates an advertising campaign by investing in other users to create posts in her favour. We use a propagation model with the property that a higher posting activity leads to proportionally higher influence. The proposed problem formulation is flexible enough to include various requirements of the campaign. The solution algorithm has low computational complexity, allowing applicability in real platforms with a large number of candidate influencers.
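To give a feel for the continuous formulation, here is a toy budget-allocation sketch. It is not the talk's algorithm: the square-root diminishing-returns utility and the per-influencer `rates` coefficients are assumptions made for the example; greedy small-step allocation merely approximates the optimum of such concave separable objectives.

```python
from math import sqrt

def allocate_budget(rates, budget, step=0.01):
    """Greedily split a continuous budget across candidate influencers.

    Assumption for this sketch: spending x on influencer i yields
    rates[i] * sqrt(x) influence (diminishing returns). Greedy
    small-step allocation approaches the optimum as step -> 0.
    """
    spend = [0.0] * len(rates)
    for _ in range(int(round(budget / step))):
        # hand the next increment to the best marginal gain
        best = max(range(len(rates)),
                   key=lambda i: rates[i] * (sqrt(spend[i] + step)
                                             - sqrt(spend[i])))
        spend[best] += step
    return spend

# Influencer 0 is twice as efficient; under the sqrt model the optimum
# spends proportionally to rates squared, i.e. ~80% on influencer 0.
spend = allocate_budget([2.0, 1.0], budget=1.0)
```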
Kévin Vermeulen: Longitudinal study of a geolocation database
IP geolocation -- the process of mapping network identifiers to physical locations -- has myriad applications. We examine a popular commercial geolocation database in detail and take a first look at its longitudinal properties over 10 years through metrics of persistence, prevalence, coverage, and movement at different location granularities and across different classes of IP addresses. We find that significant location differences can exist even between instances of the database closely separated in time. We then demonstrate via a case study that these differences can materially impact the results of research that depends on geolocation. Based on our findings, we make recommendations on best practices for using geolocation databases, to encourage reproducibility and support sound measurement.
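As a rough illustration of two of the longitudinal metrics named above, one plausible reading (the talk's exact definitions may differ; these formulas are assumptions made for the example) is:

```python
from collections import Counter

def persistence_and_prevalence(snapshots):
    """Toy versions of two longitudinal metrics for a single IP address.

    `snapshots` is the chronological list of locations the database
    reported for the address. Persistence: fraction of consecutive
    snapshot pairs where the location did not move. Prevalence: share
    of snapshots spent at the most common location. Both definitions
    are illustrative guesses, not the study's exact metrics.
    """
    if len(snapshots) < 2:
        return 1.0, 1.0
    unchanged = sum(a == b for a, b in zip(snapshots, snapshots[1:]))
    persistence = unchanged / (len(snapshots) - 1)
    prevalence = Counter(snapshots).most_common(1)[0][1] / len(snapshots)
    return persistence, prevalence

history = ["Paris", "Paris", "Lyon", "Paris"]
p, q = persistence_and_prevalence(history)  # p = 1/3, q = 3/4
```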
Session IV - Distribuée
Mikaël Rabie: Distributed Recoloring
In graph theory, a recoloring problem is as follows: given two colorings of a graph, is there a path of valid solutions from the first to the second, changing the color of one node at each step (the new coloring remaining proper)? What is the length of the shortest recoloring path? What complexities are involved? In this talk, we consider a distributed version of recoloring. To parallelise the process, we allow changing the state of an independent set of nodes at once. The question is, in the LOCAL model, how much communication is needed (i.e., how far a node needs to know its neighborhood) in order to produce a recoloring schedule of a given length. We prove that in some cases extra colors are needed for the intermediate steps in order for a solution to exist. I will provide the analysis of trees, where we want to go from one 3-coloring to another with the use of a 4th color, and show that a constant-length schedule can be found after O(log n) communication rounds. I will also present an impossibility proof for toroidal grids. This work is the result of collaborations with Marthe Bonamy, Paul Ouvrard, Jara Uitto and Jukka Suomela.
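The single recoloring step described above (recolor an independent set at once, keeping the coloring proper) can be sketched as follows; the function and the small path-graph example are illustrative, including the use of a spare 4th color to move between two 3-colorings.

```python
def recolor_step(adj, coloring, new_colors):
    """One step of distributed recoloring: change the colors of an
    independent set of nodes simultaneously, keeping the coloring proper.

    `adj` is an adjacency dict, `coloring` maps node -> color, and
    `new_colors` maps the subset of nodes being recolored to their
    desired colors. The step is legal only if that subset is
    independent and the resulting coloring is proper.
    """
    for u in new_colors:                  # recolored nodes must be independent
        if any(v in new_colors for v in adj[u]):
            raise ValueError("recolored nodes must form an independent set")
    updated = dict(coloring)
    updated.update(new_colors)
    for u, neighbours in adj.items():     # result must remain a proper coloring
        if any(updated[u] == updated[v] for v in neighbours):
            raise ValueError("resulting coloring is not proper")
    return updated

# A 4-node path 0-1-2-3, going from coloring (1,2,1,2) to (2,1,2,1)
# via a spare color 3 used on the intermediate steps.
adj = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
c0 = {0: 1, 1: 2, 2: 1, 3: 2}
c1 = recolor_step(adj, c0, {1: 3, 3: 3})   # park {1, 3} on the spare color
c2 = recolor_step(adj, c1, {0: 2, 2: 2})   # move {0, 2} to their targets
c3 = recolor_step(adj, c2, {1: 1, 3: 1})   # finish {1, 3}
```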
Giovanni Farina: An overview on the reliable communication problem
Reliable communication is a fundamental primitive in fault-tolerant distributed systems, providing correct message exchanges between processes interconnected through an incomplete communication network.
In this talk, we give an overview of the reliable communication problem, of the solving protocols available in the literature and their limitations, and of the solutions we have developed in recent years.