Ideas list
As new people join the Linux XIA project, they often want to have a smallish project that gives them an entry point and time to learn XIA as they go. There are many ways to find this entry point. For example, you can:
- grep (search) the code for the XXX tag, which identifies where improvements can be made;
- fix a bug you find while experimenting with Linux XIA;
- write a new application;
- port an existing application;
- offer help to another developer already working on a project;
- read our Roadmap for ideas;
- devise a new project of your own creation.
Note: this project was first proposed in 2018. We are offering it again as a project idea for GSoC 2019.
Gatekeeper is an open source project related to Linux XIA. It is a defense system against denial-of-service (DoS) attacks. To prevent attack packets from reaching their target destinations, Gatekeeper redirects all packets to an intermediate location (such as a cloud data center), where they are first processed by Gatekeeper servers. At this intermediate location, the packets are examined and either forwarded to the destination or dropped.
To determine the fate of these packets, Grantor servers in the destination network inform the Gatekeeper servers of their decisions about whether to forward or drop. However, in the current Gatekeeper system, the Grantor to Gatekeeper channel that carries these important decisions is not protected. In theory, someone could spoof or replay decision packets to say that certain attack packets should be allowed through, or that certain legitimate packets should be dropped. We need to protect this channel by authenticating the Grantor decision packets using cryptographic techniques.
This project comprises the following steps:
1. Add a digital signature and expiration timestamp to decision packets that are sent from Grantor to Gatekeeper. This allows Grantor to safely send decision packets to Gatekeeper and blocks spoofing and replay attacks. Note that this only requires that Gatekeeper know the public key of Grantor; it does not require Grantor to know the public keys of all the Gatekeeper servers that it serves.
Gatekeeper is implemented using a set of libraries named DPDK, which provides a symmetric-key cryptography library for authentication and encryption. If possible, use this library, and any hardware support that comes with it, for the digital signature by integrating it with the Gatekeeper code. If DPDK does not provide any digital signature functionality, use a public-key cryptography library such as LibreSSL. At a minimum, you will likely have to alter the GT block (running at Grantor) and the GT-GK Unit block (running at Gatekeeper) to do this step (a verification sketch appears after this list).
2. Add functionality to enable updating of Grantor's public key without stopping operation of the Gatekeeper system. To do so, first the new Grantor public key should be added to Gatekeeper so that two public keys are allowed to verify signatures from Grantor. Second, replace the old private key at the Grantor. Third, remove the old public key at Gatekeeper. This will require adding key update functionality to the Dynamic Configuration blocks for both Gatekeeper and Grantor.
3. Design an experiment showing two scenarios: an authenticated Grantor server successfully informing Gatekeeper of a decision, and an unauthenticated host unsuccessfully spoofing a decision packet.
4. Complete a code review with the project mentor(s) and have your code upstreamed into the Gatekeeper repository.
5. Summarize your work in up to five slides. These slides will be merged into our presentation of your work during the 2019 GSoC mentor summit.
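The following is a minimal sketch, in C, of the kind of verification the GT-GK Unit block could perform on an incoming decision packet. It assumes an ECDSA/SHA-256 signature checked through LibreSSL/OpenSSL's EVP interface and a hypothetical decision layout that carries an absolute expiration timestamp; the real packet format, key type, and any DPDK offload are design decisions left to the project.

```c
#include <stdint.h>
#include <time.h>
#include <openssl/evp.h>

/* Hypothetical decision layout: the Grantor decision plus an expiration time. */
struct decision {
    uint8_t payload[64];    /* the decision itself (placeholder size) */
    uint64_t expire_at;     /* absolute expiration, seconds since the epoch */
};

/* Return 1 if the decision is fresh and signed by Grantor's key, 0 otherwise. */
static int verify_decision(EVP_PKEY *grantor_pub, const struct decision *d,
                           const uint8_t *sig, size_t sig_len)
{
    EVP_MD_CTX *ctx;
    int ok = 0;

    if ((uint64_t)time(NULL) > d->expire_at)
        return 0;           /* expired: reject to block replays */

    ctx = EVP_MD_CTX_new();
    if (ctx &&
        EVP_DigestVerifyInit(ctx, NULL, EVP_sha256(), NULL, grantor_pub) == 1 &&
        EVP_DigestVerifyUpdate(ctx, d, sizeof(*d)) == 1 &&
        EVP_DigestVerifyFinal(ctx, sig, sig_len) == 1)
        ok = 1;             /* signature covers payload and timestamp */

    EVP_MD_CTX_free(ctx);
    return ok;
}
```

Because the signature covers the expiration timestamp as well as the decision itself, an attacker cannot extend the lifetime of a captured packet without invalidating the signature.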
Being proficient in C programming. Familiarity with computer networking and cryptography would be useful, but is not essential.
Since Gatekeeper is a DoS defense project, it would be useful to learn the basics of denial of service attacks.
If you are unfamiliar with cryptography, it will be useful to try to understand some topics relevant to this project, such as public-key cryptography and symmetric-key cryptography. Once you understand the theory, it would be useful to know how these techniques are used in DPDK. There is a cryptographic device library as well as a sample application.
To see where the new authentication code will be added, it could be useful to look at the Gatekeeper source code. The decision packets will be sent from the GT block (gt/main.c), and will be received by the GT-GK Unit block (ggu/main.c).
Potential mentors:
Allocated mentor:
GSoC student:
The ability to understand and record how various types of resources are used, and how this usage correlates with DDoS attacks, is essential to better mitigate these attacks. In order to understand the performance of the Gatekeeper system and correlate the performance metrics with DDoS attack events, one needs to monitor the running states of both Gatekeeper servers and Grantor servers, including hardware and software states. Moreover, this time series data can be used to further optimize the performance of Gatekeeper (e.g., by identifying bottlenecks), detect DDoS anomalies, generate global policies that allow Gatekeeper instances to cooperate to better defend against DDoS attacks, dynamically scale the Gatekeeper system, and so on.
Thus, there is a strong need to implement a telemetry system to quickly and easily observe a variety of metrics and produce a record of those metrics for future analysis. This project will create Grafana measurement infrastructure on the Chameleon testbed, which is a configurable experimental environment for large-scale cloud research. The infrastructure allows administrators to easily and quickly display metrics about ongoing system and network states to gain interactive insight into their dynamics, or to use stored data to explore correlations in subsequent analysis.
This project comprises the following steps:
1. Set up InfluxDB and Grafana on the Chameleon testbed. Our monitoring infrastructure should be deployed on the Chameleon testbed and should consist of two components: (a) Grafana, a visualization tool for displaying time series data; (b) InfluxDB, a time series database that stores the data collected by the monitoring agents described in step 2. Overall, the system works as follows: an agent located on the server instance collects metrics and pushes them into InfluxDB; Grafana pulls the data from InfluxDB and plots resource usage graphs for each participating node. Moreover, one needs to find ways to aggregate metrics from a subset of all the participating nodes and display them on Grafana.
2. Design and implement monitoring agents that continuously collect various metrics. Specifically, each Gatekeeper and Grantor server needs to start custom agents specific to the metrics the administrators want to measure. Once started, the agents will generate telemetry and store it in InfluxDB, and the administrator will be able to open a Grafana browser window, hosted and managed at a special node, to see the live recorded data for particular metrics. In this step, monitoring of resource consumption (e.g., CPU, network) on each node should be implemented. Moreover, the agents should allow the administrators to configure the data collection period (a minimal agent sketch appears after this list).
3. Extend the monitoring agents to collect data from three sources: (a) system states of the Gatekeeper servers and Grantor servers; (b) DDoS metrics; (c) infrastructure states. For (a), one can write a standalone program or use an existing one (e.g., OpenStack) to collect the system states; for (b), one needs to patch the Gatekeeper project to add a monitoring component; for (c), one needs to implement an agent to actively probe the latencies and loss rates among the Gatekeeper servers and the Grantor servers.
4. Test your code extensively. The code should be robust. You need to design a large experiment (16+ nodes) to demonstrate that the measurement infrastructure works, and add the experiment to Gatekeeper's wiki.
5. Complete a code review with the project mentor(s) and have your code upstreamed into the Gatekeeper repository.
6. Summarize your work in up to five slides. These slides will be merged into our presentation of your work during the 2019 GSoC mentor summit.
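As one concrete, hypothetical illustration of step 2, the sketch below shows a tiny C agent that samples the 1-minute load average from /proc/loadavg and pushes it to InfluxDB's HTTP write endpoint using libcurl and the line protocol (InfluxDB 1.x assumed). The endpoint URL, database name, measurement name, and collection period are placeholders; a real agent would also handle errors and expose its configuration.

```c
#include <stdio.h>
#include <signal.h>
#include <unistd.h>
#include <curl/curl.h>

static volatile sig_atomic_t running = 1;  /* cleared by a signal handler in a real agent */

int main(void)
{
    CURL *curl = curl_easy_init();

    if (!curl)
        return 1;

    while (running) {
        double load1;
        FILE *f = fopen("/proc/loadavg", "r");

        if (f && fscanf(f, "%lf", &load1) == 1) {
            char body[128];

            /* InfluxDB line protocol: measurement,tag field=value */
            snprintf(body, sizeof(body),
                     "cpu_load,host=gatekeeper1 load1=%f", load1);
            curl_easy_setopt(curl, CURLOPT_URL,
                             "http://monitor.example:8086/write?db=gatekeeper");
            curl_easy_setopt(curl, CURLOPT_POSTFIELDS, body);
            curl_easy_perform(curl);
        }
        if (f)
            fclose(f);
        sleep(10);  /* data collection period; should be configurable */
    }

    curl_easy_cleanup(curl);
    return 0;
}
```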
Being proficient in SQL-like query languages and C programming, and familiar with OpenStack.
Since Gatekeeper is a DoS defense project, it would be useful to learn the basics of denial of service attacks.
Since most nodes in the Chameleon testbed are managed by OpenStack, it would be useful to learn the basics of OpenStack, including its tools, utilities, etc.
Moreover, since this project needs to actively probe several network metrics, such as latency and loss rate, it would be useful to read Sections 3.1 and 3.3 of the paper Inferring Persistent Interdomain Congestion.
Potential mentors:
Allocated mentor:
GSoC student:
Grantor is a part of Gatekeeper, our open source defense system against denial-of-service (DoS) attacks. All packets that pass through Gatekeeper servers in vantage points are forwarded to Grantor servers in the destination network. Grantors have the task of granting or denying client requests to transmit to servers in the destination network.
Grantor decides whether to grant or deny requests based on a policy defined by the destination network operator. For example, the policy may dictate that all packets sent on a certain port should be denied, or that all traffic to a given web server should be granted but rate limited.
However, these examples of policies are simple and based on only the packet at hand. We would also like to have a database of information that Grantor can draw from when making policy decisions, allowing the system to make policy decisions using algorithms on data collected over time. Specifically, this project will create the database and collect source address information from packets to try to detect IP address spoofing.
This project comprises the following steps:
1. Set up a Redis database to operate alongside Grantor. Redis is a distributed in-memory key-value database, which is desirable for our purposes because there may be multiple Grantor instances in a single deployment.
2. Implement the ability to update the database from Grantor. Grantor should be patched to allow Redis traffic, and then should pass any data (e.g. packet headers) to a Lua script that writes to the Redis database. There are multiple guides and libraries for interacting with Redis from Lua; for example, here and here. The exact Lua script that you compose should write the client source IP address and the Gatekeeper server’s IP address to the database; this will be used for detecting IP address spoofing in the next step.
3. Implement a Grantor policy in Lua -- invoked when requests are received -- to detect a spoofed IP address. The policy should perform a lookup on the Redis database using the client's source IP address as the key, and check whether the associated Gatekeeper server IP address in the database matches the Gatekeeper server IP address in the packet. If it doesn't match, the request should be flagged as spoofed and denied (a sketch of this lookup appears after this list).
4. Create a demo showing your anti-spoofing policy at work. Create multiple Grantor instances and Gatekeeper instances, and show that when multiple clients use the same IP address to go through different Gatekeeper servers, Grantor will flag the second client as using a spoofed IP address.
5. Complete a code review with the project mentor(s) and have your code upstreamed into the Gatekeeper repository.
6. Summarize your work in up to five slides. These slides will be merged into our presentation of your work during the 2019 GSoC mentor summit.
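For illustration only: the actual database writes and policy lookups are expected to live in Lua (see lua/policy.lua), but the small sketch below, written with the hiredis C client, shows one possible key-value layout -- client source IP as the key, Gatekeeper server IP as the value -- and the lookup that the anti-spoofing check in step 3 performs. Host, port, and addresses are placeholders.

```c
#include <stdio.h>
#include <hiredis/hiredis.h>

int main(void)
{
    redisContext *c = redisConnect("127.0.0.1", 6379);
    redisReply *r;

    if (!c || c->err)
        return 1;

    /* Step 2: record which Gatekeeper server a client was first seen behind. */
    r = redisCommand(c, "SET client:203.0.113.7 gk:198.51.100.2");
    freeReplyObject(r);

    /* Step 3: on a later request, compare the stored Gatekeeper IP with the
     * one in the packet; a mismatch suggests a spoofed source address. */
    r = redisCommand(c, "GET client:203.0.113.7");
    if (r && r->type == REDIS_REPLY_STRING)
        printf("client 203.0.113.7 first seen behind %s\n", r->str);
    freeReplyObject(r);

    redisFree(c);
    return 0;
}
```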
Familiarity with C is important, but proficiency in the Lua programming language will be essential. Familiarity with computer networking and databases, particularly NoSQL databases like Redis, will also be useful.
Since Gatekeeper is a DoS defense project, it would be useful to learn the basics of denial of service attacks. Since this project will be focused on setting up and interacting with a Redis database, it will be useful to read the Redis documentation as well. Moreover, since this project uses Lua to interact with Redis, it will be useful to read the Lua Reference Manual.
To see where the new database insertion and lookup code will be added, it could be useful to look at the Gatekeeper source code. The Grantor functionality is contained in gt/main.c, where you can find code for receiving requests and invoking policy decision lookups. There is an example of a Grantor policy in a Lua script that you can find in lua/policy.lua.
Potential mentors:
Allocated mentor:
GSoC student:
Currently, a very simple version of the Neighbor Watch Protocol (NWP), which fills a role similar to that of ARP in IPv4, is implemented inside HID's kernel module. This is not ideal because the evolution of NWP is attached to the evolution of HID.
In GSoC 2017, the Ethernet principal was implemented by student Saurav Kumar, working with mentors Pranav Goswami and Qiaobin Fu. With the Ethernet principal, NWP can be implemented in userland. A userland implementation of NWP would simplify the code and lower the effort needed to bring out the potential of the protocol.
This project comprises the following steps:
1. Implementing NWP in userland. Note that, once NWP is in userland, the current HID principal will lose its meaning and should be removed entirely; the AD principal should take the role of the HID principal. Thus, the NWP daemon should maintain two sets of tables: (1) for the Ethernet principal, NWP should maintain the tables for neighbors and local entries, respectively; (2) for the AD principal, NWP should maintain the table that maps AD XIDs to Ethernet XIDs. To implement the NWP daemon, we need to use two libraries: (1) libevent, an event notification library, which provides a mechanism to execute a callback function when a specific event occurs; (2) libmnl, a minimalistic user-space library oriented to Netlink developers, which provides simple helpers that allow developers to reuse code for common repetitive and error-prone tasks. The NWP daemon should eventually be merged into the xiaconf repository (a skeleton of the daemon's event loop appears after this list).
2. Configuring the NWP daemon using the libucl library. This library has a clear design that should be very convenient for reading and writing, makes it easier to evolve the configuration files, and gives network operators a syntax that is already familiar to them.
3. Testing your code extensively. The code should be robust. You need to design an experiment to demonstrate that the NWP daemon works, and add the experiment to Linux XIA's wiki.
4. Having your code merged into Linux XIA. This will require you to go through a careful code review, but once you meet our standards, your code will be part of Linux XIA and will evolve with it.
5. Summarizing your work in up to five slides. These slides will be merged into our presentation of your work during the 2018 GSoC mentor summit.
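Below is a rough skeleton, under stated assumptions, of the event loop the userland NWP daemon could be built around: libmnl provides the Netlink socket, and libevent dispatches a callback whenever that socket becomes readable. NETLINK_ROUTE is only a placeholder for whatever Netlink bus the XIA stack actually exposes, and the callback is where NWP messages would be parsed and the neighbor/AD tables updated.

```c
#include <stdlib.h>
#include <event2/event.h>
#include <libmnl/libmnl.h>
#include <linux/netlink.h>

/* Called by libevent whenever the Netlink socket has data to read. */
static void nwp_netlink_ready(evutil_socket_t fd, short events, void *arg)
{
    struct mnl_socket *nl = arg;
    char buf[MNL_SOCKET_BUFFER_SIZE];
    ssize_t len = mnl_socket_recvfrom(nl, buf, sizeof(buf));

    (void)fd;
    (void)events;
    if (len > 0) {
        /* Parse the message and update the NWP tables here. */
    }
}

int main(void)
{
    struct mnl_socket *nl = mnl_socket_open(NETLINK_ROUTE); /* placeholder bus */
    struct event_base *base = event_base_new();
    struct event *ev;

    if (!nl || !base || mnl_socket_bind(nl, 0, MNL_SOCKET_AUTOPID) < 0)
        return EXIT_FAILURE;

    ev = event_new(base, mnl_socket_get_fd(nl), EV_READ | EV_PERSIST,
                   nwp_netlink_ready, nl);
    event_add(ev, NULL);
    event_base_dispatch(base);

    event_free(ev);
    event_base_free(base);
    mnl_socket_close(nl);
    return EXIT_SUCCESS;
}
```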
Being proficient in C programming.
The essential items to learn before starting this project are How to write a principal, RCU, the HID principal, and the AD principal. Moreover, we have centralized all the information that we have accumulated about NWP so far. It should be a good starting point to implement NWP in userland.
Besides that, you should be able to compile the Linux XIA kernel, and execute one of the experiments described on this wiki.
Potential mentor: Saurav Kumar, Pranav Goswami, and Qiaobin Fu.
Allocated mentor: Pranav Goswami.
GSoC student: Vibhav Pant (submitted proposal).
To manage the Linux XIA network stack running in the kernel, we use a userspace tool named xip. xip is analogous to ip(8), which allows users to manipulate the IP routing tables, devices, and policies.
A big portion of the xip code consists of files that each control a single Linux XIA principal: AD, HID, U4ID, etc. Each principal has different formats and options for how to manipulate it in the kernel. For example, users may add entries in the routing table for how to forward AD XIDs, or may add Ethernet XIDs using a local interface name.
As a result, xip has grown with each principal added to the stack, but its code hasn't been carefully designed to avoid duplicate code. We want to improve the design of xip so that we can more easily add new principal types in the future.
This project comprises the following steps:
1. Refactor xip to keep the code clean as the number of principals increases. As part of this task, you should consider whether it makes sense to push the code for each principal into a dynamic object, and whether there should be an internal library to reduce code duplication and simplify the code of each principal.
2. Drop the current Route Netlink (rtnetlink) code in favor of adopting the library libmnl. The current rtnetlink code is borrowed from the ip tool, and it does more than what xip needs. Library libmnl is simple, clean, and has been successfully used in net-eval.
3. Alter xip to automatically load the kernel modules of requested principals. Right now, users must manually load each needed kernel module before using xip. The new version of xip should automatically load a kernel module (if it’s not already loaded) when the user tries to manipulate that principal. An example piece of code that loads a module at runtime can be found in the repository of the Gatekeeper project; look for how the function init_kni() loads the kernel module. A simple fork-and-exec sketch also appears after this list.
4. Remove the various assert() statements that are in the code and replace them with user-friendly error messages that more accurately describe what issue has been encountered. For example, if one of the kernel modules cannot be loaded, the code should issue an error message that describes the problem.
5. Complete a code review with the project mentor(s) and have your code upstreamed into the main xiaconf repository.
6. Summarize your work in up to five slides. These slides will be merged into our presentation of your work during the 2018 GSoC mentor summit.
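One simple way step 3 could be approached, sketched below under the assumption that shelling out to modprobe is acceptable (modprobe resolves module dependencies for us): fork, exec modprobe, and report failure with a readable message. The module name used here is a placeholder; the real names come from the Linux XIA tree, and Gatekeeper's init_kni() is worth studying for a lower-level alternative.

```c
#include <stdio.h>
#include <unistd.h>
#include <sys/wait.h>

/* Load a kernel module by name; return 0 on success, -1 on failure. */
static int xip_load_module(const char *module)
{
    pid_t pid = fork();
    int status;

    if (pid < 0)
        return -1;
    if (pid == 0) {
        execlp("modprobe", "modprobe", module, (char *)NULL);
        _exit(127);  /* only reached if exec failed */
    }
    if (waitpid(pid, &status, 0) < 0)
        return -1;
    return (WIFEXITED(status) && WEXITSTATUS(status) == 0) ? 0 : -1;
}

int main(void)
{
    /* "xia_example" is a placeholder module name. */
    if (xip_load_module("xia_example") != 0)
        fprintf(stderr, "xip: could not load kernel module xia_example\n");
    return 0;
}
```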
Being proficient in C programming.
It will be useful to understand the source code of the xip tool as found in the xiaconf repository. This will help you to see how the code is currently structured and how it can be refactored.
It will also be helpful to understand what the rtnetlink library does and how libmnl can replace it. In case you are not familiar with Netlink sockets, you will want to read about them here.
Potential mentor: Nishanth Devarajan, Sachin Paryani, and Cody Doucette.
Allocated mentor: Sachin Paryani.
GSoC student: Pranjan Sana (submitted proposal).
In order to be a practical and deployable architecture, Linux XIA must on some level coexist with IP networks. Linux XIA has implemented the U4ID principal, which has proven to be a useful principal to promote interoperability between XIA and TCP/IP. This project applies the same ideas to the IPv6 architecture, and implements the U6ID principal to utilize IPv6 network functionality. The code of U6ID should follow the code of U4ID closely to keep the general behavior and simplify the implementation.
More specifically, one needs to implement the U6ID principal, which allows XIP packets to be encapsulated into the payload of UDP/IPv6 packets so that XIP can effectively be tunneled through legacy networks. Similar to U4ID, U6ID tunnels are implicit in the network. A host can locally add a tunnel destination (and optionally, a tunnel source), but there is no agreement to set up an explicit tunnel between hosts.
This project comprises the following steps:
1. Implementing the U6ID principal in Linux XIA. Notice that there is only a local routing table for U6IDs; main U6IDs, or routes, are not recorded. This is because we need to be able to identify when there is a U6ID representing a local socket to be able to do encapsulation and decapsulation. However, the delivery of packets to other hosts is not the responsibility of XIA when using the U6ID principal. Forwarding packets to other hosts is done in the space of the IP stack; therefore, we do not need to keep U6ID route information in the XIA routing table.
2. Add the U6ID principal to xiaconf. You need to change the xip tool to add the ability to manipulate the U6ID principal via xip u6id. More specifically, you need to define functions for adding/deleting/dumping entries in the principal’s forwarding table via the xip tool. For example, to add a local U6ID entry, the following command can be used:
# xip u6id add 2018:09::1 0x41d0
This will create a UDP socket bound to the (IP address, port) tuple (2018:09::1, 0x41d0); a socket-level sketch of what this corresponds to appears after this list.
However, you can optionally specify that a local U6ID entry also represents the source of a tunnel, in addition to representing the destination of a tunnel:
# xip u6id add 2018:09::1 0x41d0 -tunnel
In addition, you need to implement the functionality to enable/disable the UDP checksumming for every tunnel socket created using the "-tunnel" flag.
3. Test your code, and design experiments to demonstrate that the U6ID principal works using two other applications: (1) net-echo, an echo application that works with TCP and UDP in IP as well as Serval and XDP in XIA; (2) XLXC, a lightweight virtualization solution for quickly simulating many XIA hosts.
4. Having your code merged into Linux XIA. This will require you to go through a careful code review, but once you meet our standards, your code will be part of Linux XIA and will evolve with it.
5. Besides the implementation of the U6ID, this project should document a demo to be posted on this wiki.
6. Summarizing your work in up to five slides. These slides will be merged into our presentation of your work during the 2018 GSoC mentor summit.
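To make the xip u6id add 2018:09::1 0x41d0 example concrete, the userspace sketch below shows the (IP address, port) tuple it corresponds to: a UDP/IPv6 endpoint bound to that address and port, which is where encapsulated XIP packets would arrive for decapsulation. This is an illustration only; in the actual project the socket lives in the U6ID kernel module, mirroring U4ID.

```c
#include <string.h>
#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>

int main(void)
{
    int fd = socket(AF_INET6, SOCK_DGRAM, 0);
    struct sockaddr_in6 addr;

    memset(&addr, 0, sizeof(addr));
    addr.sin6_family = AF_INET6;
    addr.sin6_port = htons(0x41d0);                      /* port from the example */
    inet_pton(AF_INET6, "2018:09::1", &addr.sin6_addr);  /* address from the example */

    if (fd < 0 || bind(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0)
        return 1;

    /* XIP-in-UDP packets addressed to (2018:09::1, 0x41d0) would arrive here. */
    return 0;
}
```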
Being proficient in C programming.
The essential items to learn before starting this project are How to write a principal, RCU, and the U4ID principal.
Besides that, you should be able to compile the Linux XIA kernel, and execute one of the experiments described on this wiki.
Potential mentor: Saurav Kumar, Pranav Goswami, and Qiaobin Fu.
Allocated mentor: Saurav Kumar.
GSoC student: Daivik Dave (submitted proposal).
Gatekeeper is an open source project related to Linux XIA. It is a defense system against denial-of-service (DoS) attacks. To prevent attack packets from reaching their target destinations, Gatekeeper redirects all packets to an intermediate location (such as a cloud data center), where they are first processed by Gatekeeper servers. At this intermediate location, the packets are examined and either forwarded to the destination or dropped.
To determine the fate of these packets, Grantor servers in the destination network inform the Gatekeeper servers of their decisions about whether to forward or drop. However, in the current Gatekeeper system, the Grantor to Gatekeeper channel that carries these important decisions is not protected. In theory, someone could spoof or replay decision packets to say that certain attack packets should be allowed through, or that certain legitimate packets should be dropped. We need to protect this channel by authenticating the Grantor decision packets using cryptographic techniques.
This project comprises the following steps:
1. Add a digital signature and expiration timestamp to decision packets that are sent from Grantor to Gatekeeper. This allows Grantor to safely send decision packets to Gatekeeper and blocks spoofing and replay attacks. Note that this only requires that Gatekeeper know the public key of Grantor; it does not require Grantor to know the public keys of all the Gatekeeper servers that it serves.
Gatekeeper is implemented using a set of libraries named DPDK, which provides a symmetric-key cryptography library for authentication and encryption. If possible, use this library, and any hardware support that comes with it, for the digital signature by integrating it with the Gatekeeper code. If DPDK does not provide any digital signature functionality, use a public-key cryptography library such as LibreSSL. At a minimum, you will likely have to alter the GT block (running at Grantor) and the GT-GK Unit block (running at Gatekeeper) to do this step.
2. Add functionality to enable updating of Grantor's public key without stopping operation of the Gatekeeper system. To do so, first the new Grantor public key should be added to Gatekeeper so that two public keys are allowed to verify signatures from Grantor. Second, replace the old private key at the Grantor. Third, remove the old public key at Gatekeeper. This will require adding key update functionality to the Dynamic Configuration blocks for both Gatekeeper and Grantor.
3. Design an experiment showing two scenarios: an authenticated Grantor server successfully informing Gatekeeper of a decision, and an unauthenticated host unsuccessfully spoofing a decision packet.
4. Complete a code review with the project mentor(s) and have your code upstreamed into the Gatekeeper repository.
5. Summarize your work in up to five slides. These slides will be merged into our presentation of your work during the 2018 GSoC mentor summit.
Being proficient in C programming. Familiarity with computer networking and cryptography would be useful, but is not essential.
Since Gatekeeper is a DoS defense project, it would be useful to learn the basics of denial of service attacks.
If you are unfamiliar with cryptography, it will be useful to try to understand some topics relevant to this project, such as public-key cryptography and symmetric-key cryptography. Once you understand the theory, it would be useful to know how these techniques are used in DPDK. There is a cryptographic device library as well as a sample application.
To see where the new authentication code will be added, it could be useful to look at the Gatekeeper source code. The decision packets will be sent from the GT block (gt/main.c), and will be received by the GT-GK Unit block (ggu/main.c).
Potential mentor: Nishanth Devarajan, Sachin Paryani, and Cody Doucette.
Allocated mentor: Nishanth Devarajan.
GSoC student: Ka Ming Hui (submitted proposal) (link temporarily disabled for GSoC 2019).
In the past, we have used a lightweight virtualization mechanism, Linux containers, to test and evaluate Linux XIA. Linux containers allow us to emulate a network of multiple Linux XIA hosts on a single machine, helping us to generate and forward packets between XIA hosts. However, the topologies that we can create with these containers are limited to either a complete graph or a star topology.
Mininet is a tool that we could use to generate more complex networks for testing purposes. Mininet emulates a network of hosts, links, and switches on a single machine, and allows a user to specify custom network topologies. This project extends Mininet to support XIA and alters some of our testing scripts to use the new Mininet functionality.
This project comprises the following steps:
1. Extend Mininet to support configuration of XIA hosts. In designing the Mininet network, a user may want to perform some XIA-specific configuration of the hosts. Configuration of XIA hosts is typically done using the userspace tool xip, so your work here should bring to Mininet the same type of configuration that xip can do. For example, with respect to the host principal (HID), users should be able to specify in Mininet scripts that they want to assign one or more HIDs to a host, or add an HID to a host's main packet forwarding table.
2. Redesign the zFilter experiment to use Mininet instead of Linux containers. Update the wiki page with the new experiment details.
3. Design a large experiment (8+ nodes) for Linux XIA that uses a custom topology in Mininet. Use Mininet’s Python API to create a script for the new experiment, and add the experiment to Linux XIA’s wiki.
4. Complete a code review with the project mentor(s) and have your code upstreamed into the main Mininet repository.
5. Summarize your work in up to five slides. These slides will be merged into our presentation of your work during the 2018 GSoC mentor summit.
Being proficient in Python programming.
A good starting point would be just to set up Mininet, perhaps with the aid of the walkthrough in the Mininet documentation. Understanding the Linux XIA experiments posted on this wiki and thinking about how you can change them to use Mininet could be useful for your proposal as well.
Potential mentor: Nishanth Devarajan, Sachin Paryani, Qiaobin Fu, and Cody Doucette.
Allocated mentor: Cody Doucette.
GSoC student: Hrishav Mukherjee (submitted proposal).
Gatekeeper is an open source defense system against denial-of-service (DoS) attacks. Although the bandwidth of the link that connects a Gatekeeper server to an IX is always large, it can be overloaded. During such overwhelming loads, the only solution is to black hole the IP destinations/prefixes/flows that are responsible for most of the traffic. Gatekeeper will black hole IP destinations/prefixes/flows only as a last resort to keep the rest of the services running, because black-holed destinations have effectively lost the battle against the attackers.
The challenge is to quickly and efficiently identify the IP destinations/prefixes/flows that need to be black holed. We need to identify and implement an algorithm that can find the smallest set of IP destinations/prefixes/flows that is using a given percentage of the whole bandwidth of the link. Once these IP destinations/prefixes/flows are identified, they can be announced through BGP to be black holed throughout the Internet.
Gatekeeper should black hole IP destinations/prefixes/flows for a maximum period described in its configuration file. After this expiration, the IP destinations/prefixes/flows are released, and Gatekeeper will black hole them again if the overwhelming attacks continue.
This project comprises the following steps:
1. Implementing the Randomized Hierarchical Heavy Hitter (RHHH) algorithm in Gatekeeper. The RHHH algorithm was published in an ACM SIGCOMM '17 paper: "Constant Time Updates in Hierarchical Heavy Hitters".
2. Enhancing the RHHH algorithm with Cold Filter, a meta-framework for faster and more accurate stream processing published in SIGMOD '18. Since the implementation of the RHHH algorithm is based on counter algorithms, Cold Filter can be applied to them for better performance.
3. Evaluating the implemented algorithm. One needs to evaluate the performance of the implemented algorithm in Gatekeeper in terms of throughput (packets per second), accuracy, errors, etc. This step should produce graphs that show the results.
4. Summarize your work in up to five slides. These slides will be merged into our presentation of your work during the 2018 GSoC mentor summit.
Being proficient in C programming.
Previously, Qiaobin conducted a survey of methods for finding frequent items in data streams, which is a good starting point.
Besides, since Gatekeeper is a DoS defense project, it would be useful to learn the basics of denial of service attacks.
Potential mentor: Qiaobin Fu.
Allocated mentor: Qiaobin Fu.
GSoC student: Prashant Prajapati (submitted proposal).
Gatekeeper is an open source defense against denial-of-service (DoS) attacks. To protect resources from an attack, Gatekeeper servers request permission on behalf of a sender to be able to send traffic to a receiver. These requests are limited to take up at most 5% of a link's bandwidth and are queued according to an assigned priority -- the higher the priority, the closer to the exit of the queue a request is placed. This allows requests with higher priorities to be serviced first, which can help isolate and protect legitimate traffic during a DoS attack.
Currently, we can use Linux's tc(8) utility to instruct the kernel to queue packets according to their priority as specified by the DSCP field in an IP packet. However, this has the problem of creating a separate queue for each priority. Instead, we want a single priority queue that holds all requests, and drops low priority requests when resources are low. This allows us to allocate as many resources as possible to high priority requests, while only servicing low priority requests when we have idle resources.
The priority queue has already been implemented in userspace using the Intel DPDK framework, and is structured as follows:
We maintain a linked list of packets listed in order of highest priority to lowest priority, where cur_pkt references the next request to be serviced. We keep an array where each index represents a priority, and each element of that array holds a reference to the last packet of that priority. This allows us to quickly insert new packets of any priority into the linked list and drop the packet of lowest priority if necessary.
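A minimal userspace sketch of that structure is shown below, with hypothetical type and field names: a doubly linked list kept sorted from highest to lowest priority, plus an array of per-priority tail pointers so that insertion and lowest-priority drops are both cheap. The real implementation uses DPDK (and later kernel) data structures and must add locking or RCU.

```c
#include <stddef.h>
#include <stdint.h>

#define NUM_PRIORITIES 64  /* placeholder; the real range comes from the DSCP mapping */

struct req_pkt {
    struct req_pkt *prev, *next;  /* neighbors towards higher/lower priority */
    uint8_t prio;                 /* larger value = higher priority */
    /* a reference to the actual packet (e.g., an rte_mbuf) would go here */
};

struct req_queue {
    struct req_pkt *head;                     /* next request to service (cur_pkt) */
    struct req_pkt *tail;                     /* lowest priority; dropped first when full */
    struct req_pkt *last_of[NUM_PRIORITIES];  /* last packet of each priority, or NULL */
    size_t len;
};

/* Insert pkt after the last packet of its priority, or after the last packet
 * of the nearest higher priority, keeping the list sorted by priority. */
static void pq_insert(struct req_queue *q, struct req_pkt *pkt)
{
    struct req_pkt *after = q->last_of[pkt->prio];
    int p;

    if (!after)
        for (p = pkt->prio + 1; p < NUM_PRIORITIES && !after; p++)
            after = q->last_of[p];

    if (!after) {                 /* highest priority so far: new head */
        pkt->prev = NULL;
        pkt->next = q->head;
        if (q->head)
            q->head->prev = pkt;
        q->head = pkt;
    } else {                      /* splice pkt right after `after` */
        pkt->prev = after;
        pkt->next = after->next;
        if (after->next)
            after->next->prev = pkt;
        after->next = pkt;
    }
    if (!pkt->next)
        q->tail = pkt;
    q->last_of[pkt->prio] = pkt;
    q->len++;
}
```

When the queue is full, the packet at q->tail is the lowest-priority request and is the one to drop before inserting a higher-priority arrival.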
This project comprises the following steps:
1. Adapt the Gatekeeper priority queue as described above for use in the Linux kernel. You will need to find the appropriate place in the kernel and implement the priority queue using kernel data structures. The priority queue should also fit into the existing traffic control framework in order to keep functionality such as matching the IP DSCP field and limiting requests to at most 5% of the link bandwidth.
2. Adapt the tc utility to enable a user to create, configure, and use a priority queue for requests. This will likely involve adding a new queueing discipline to tc and expanding the command line interface to allow users to choose it. For example, once the new priority queue is available, the commands may be something like:
tc qdisc add dev eth0 root handle 1:0 htb
tc class add dev eth0 parent 1:1 classid 1:2 htb rate 5mbit ceil 5mbit prio 1
tc qdisc add dev eth0 parent 1:2 handle 20: priority_queue
This would create a queueing discipline that limits a class of traffic to 5 Mbps using the Hierarchy Token Bucket algorithm ("htb") and would do so using the newly implemented priority_queue type. More options and filters may be needed to make packets match this priority queue.
3. Change the tc command line interface to allow the bandwidth limits to be specified as a percentage of the interface's capacity. In other words, the second command above would be easier to specify as:
tc class add dev eth0 parent 1:1 classid 1:2 htb rate 5% ceil 5% prio 1
That way, we won't have to compute the bandwidth limit each time we add these kinds of class rules.
4. Stress test the priority queue by generating large amounts of traffic with a realistic distribution of request priorities.
5. Summarize your work in up to five slides. These slides will be merged into our presentation of your work during the 2017 GSoC mentor summit.
Potential mentor: Sachin Paryani, and Cody Doucette.
Allocated mentor: Sachin Paryani.
GSoC student: Nishanth Devarajan (submitted proposal).
Being proficient in C programming, and comfortable with non-trivial data structures, is an essential requirement for this project. While one does not need a large background in kernel development for this project, your proposal should show that you can deal with it.
To understand the Gatekeeper request priority queue, you can read through the implementation here. This is a draft version; when the priority queue is merged into the official Gatekeeper repository, we will update this page.
The priority queue kernel implementation will likely be placed in the net/sched folder of the Linux source code. It will be useful to develop an understanding of any entry points into this code and the structure of the packet scheduling system overall.
It will also be useful to understand the basics of the tc application. You can start by reading the manual page and continue to the source code to see which parts will need to be changed to create the priority queue in the kernel. You should also look for system calls to the kernel from this repository since they will likely be received in the net/sched directory linked above. Knowing where the entry point into the kernel is will help you when you go to implement the priority queue.
To become comfortable writing code in the kernel, it would be useful to read chapters 2, 6, 9, 10, and 18 of the book Linux Kernel Development. Chapters 9 and 10 cover synchronization topics, which you will need to know when implementing the priority queue. You may need to use RCU to enable multiple reading and writing threads to access the priority queue consistently.
The Ethernet principal, or Ether principal for short, will highlight Linux XIA as a layerless network stack, and enable a cleaner implementation of NWP in userland. Currently, Linux XIA implements the functionality of sending a packet out of a network interface within the HID principal. This is not ideal because the evolution of NWP is attached to the evolution of HID, and HID without NWP is equivalent to the AD principal. Thus, there is a demand for having a link layer principal that forwards packets on devices represented by XIDs.
The general idea for the Ether principal is that each Ether XID represents a network interface. Other principals can send packets out of a specific network interface by simply having XIDs that redirect to the Ether principal. The Ethernet principal will have a local and a main table, like the AD principal for example. Both tables are going to be populated by the command xip in userland. With the Ethernet principal, we will eventually be able to implement NWP in userland, and this NWP daemon will keep those tables. The Ether principal should follow the example of Linux's "neighbour" (British spelling) subsystem and cache link-layer headers. Finally, in order to allow an XID to represent a network interface, we define the 20-byte Ether XIDs with the first 4 bytes representing the interface identifier, followed by 6 bytes representing the MAC address, and the remaining 10 bytes set to 0. The following XID is an example with network interface identifier 0x0a and MAC address 01:02:03:04:05:06:
eth-0000000a01020304050600000000000000000000
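A small sketch of how such an XID could be assembled, assuming the byte layout described above (4-byte interface identifier in network byte order, then the 6-byte MAC address, then 10 zero bytes); the helper name is hypothetical:

```c
#include <stdint.h>
#include <string.h>
#include <arpa/inet.h>

#define XIA_XID_LEN 20

/* Build a 20-byte Ether XID from an interface index and a MAC address. */
static void ether_build_xid(uint8_t xid[XIA_XID_LEN], uint32_t ifindex,
                            const uint8_t mac[6])
{
    uint32_t be_ifindex = htonl(ifindex);

    memset(xid, 0, XIA_XID_LEN);
    memcpy(xid, &be_ifindex, sizeof(be_ifindex)); /* bytes 0..3: interface id */
    memcpy(xid + 4, mac, 6);                      /* bytes 4..9: MAC address  */
    /* bytes 10..19 stay zero */
}

/* Example: ifindex 0x0a and MAC 01:02:03:04:05:06 yield the XID shown above,
 * eth-0000000a01020304050600000000000000000000. */
```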
More specifically, this project comprises the following steps:
1. Implementing the Ether principal in Linux XIA. As discussed above, one needs to implement two tables: the local table is for the XIDs that are local, so they move the last node pointer in the XIP header; the main table is for the XIDs that the Ethernet principal will forward. In order to cache L2 headers, this principal needs to support MAC-address changes of interfaces like the one shown in this example:
ip link set eth0 address 01:02:03:04:05:06
This is essential because if the MAC address of an interface for which there are entries in the main table changes, the principal also needs to change the cached header, so that it can update the source MAC addresses of the headers that go out through that interface.
2. Add the Ether principal to xiaconf. You need to change the xip tool to add the ability to manipulate the Ether principal via xip eth. More specifically, you need to define functions for adding/deleting/dumping entries in the principal’s forwarding table via the xip tool. Notice that you can get the interface identifier in user space using an ioctl call with SIOCGIFINDEX.
3. Comparing the performance of the Ether principal with the HID principal, e.g., the number of resolutions per second and the packet forwarding speed. One can do this comparison by patching net-eval. This step should produce a graph that assures us that adopting the Ether principal will not degrade performance, as well as test the correctness of the new code.
4. Having your code merged into Linux XIA. This will require you to go through a careful code review, but once you meet our standards, your code will be part of Linux XIA and will evolve with it.
5. Summarizing your work in up to five slides. These slides will be merged into our presentation of your work during the 2017 GSoC mentor summit.
Potential mentor: Pranav Goswami, and Qiaobin Fu.
Allocated mentor: Pranav Goswami.
GSoC student: Saurav Kumar (submitted proposal).
Being proficient in C programming, and comfortable with non-trivial data structures, is an essential requirement for this project. While one does not need a large background in kernel development for this project, your proposal should show that you can deal with it.
The essential items to learn before starting this project are How to write a principal, Linux's neighbour subsystem, RCU, the HID principal, and the Neighbourhood Watch Protocol (NWP) from Cody’s presentation.
One can learn more about Linux's neighbour subsystem by reading chapter 7 of the book Linux Kernel Networking: Implementation and Theory, and chapter 27 of the book Understanding Linux Network Internals, as well as its code in the kernel. The entry point to the kernel code of Linux’s neighbour subsystem is its header file neighbour.h. Besides that, you should be able to compile the Linux XIA kernel, and execute one of the experiments described on this wiki.
The Longest Prefix Matching (LPM) principal enables Linux XIA to leverage IP routing infrastructure to build a fully routable XIA network. The student André Eleuterio worked with mentor Cody Doucette to implement the LPM principal during Google Summer of Code 2015. The goal of this project is to retool André’s contribution with poptrie, a recent data structure that employs a number of features available in modern general-purpose processors to look up the longest prefix lightning fast.
This project comprises the following steps:
1. Adapting poptrie’s code to XIDs. Poptrie’s code is tuned to 32-bit IPv4 addresses, whereas XIA’s XIDs are 160 bits long. Moreover, poptrie’s code uses a dedicated buddy memory allocator, which is not needed in the kernel. Therefore, one has to add support for XIDs, and replace the buddy allocator with malloc(3).
2. Testing the correctness of the XIA poptrie. One can reuse Garnaik Sumeet’s evaluation framework to test the XIA poptrie against the other algorithms supported in his framework with a number of different FIBs. This test will ensure that the XIA poptrie is dependable.
3. Integrating the XIA poptrie into the LPM principal. Besides adjustments that will emerge during the integration, we expect that you will adapt the XIA poptrie to support RCU in order to enable Linux XIA to forward on LPM XIDs without acquiring locks.
4. Comparing the performance of the new LPM principal against the old one. One can do this comparison by patching net-eval. This step should produce a graph that highlights the improvement accomplished.
5. Having your code merged into Linux XIA. This will require you to go through a careful code review, but once you meet our standards, your code will be part of Linux XIA, and will evolve with it.
6. Summarizing your work in up to five slides. These slides will be merged into our presentation of your work during the 2016 GSoC mentor summit.
Potential mentor: Cody Doucette.
Allocated mentor: Cody Doucette.
GSoC student: Vaibhav Raj Gupta (blog).
Being proficient in C programming, and comfortable with non-trivial data structures, is an essential requirement for this project. While one does not need a large background in kernel development for this project, your proposal should show that you can deal with it.
The essential items to learn before starting this project are poptrie, poptrie’s code, RCU, and the LPM principal. Besides that, you should be able to compile the Linux XIA kernel, and execute one of the experiments described on this wiki.
Principals that have flat XIDs, such as the AD and HID principals, use the default hash table available in Linux XIA to implement their forwarding information base (FIB). This hash table supports concurrent writers per bucket, lockless readers, and automatically grows its bucket array. Michel Machado designed and implemented this hash table for Linux XIA, which is represented in struct fib_xid_table, in 2011.
Kernel developers implemented the relativistic hash table in the kernel in 2014. This hash table, also known as rhashtable, includes all the features of the one available in Linux XIA; in addition, it can shrink its bucket array and requires less memory per item in the table. Thus, the goal of this project is to replace the original FIB hash table of Linux XIA with rhashtable. Besides the obvious benefits of this upgrade, it will reduce the codebase of Linux XIA. Reducing the codebase has two positive side effects: less code to maintain, and less code to review during an eventual merge of Linux XIA into the mainline kernel.
More specifically, this project comprises the following steps:
1. Integrating rhashtable into Linux XIA using the generic FIB API of XIA. Cody Doucette designed and implemented the generic FIB API of XIA to enable principals to use any data structure to implement their FIBs. The first principal to take advantage of this API was the LPM principal, so the code of the LPM principal is a good starting example. The idea in this step is to create a new principal by copying the code of the AD principal, and then replace the default hash table with rhashtable (a sketch of the rhashtable API appears at the end of this project's description).
2. Comparing the performance of the alternative AD principal with the original AD principal. One can do this comparison by patching net-eval. This step should produce a graph that assures us that adopting rhashtable will not be a performance regression, as well as test the correctness of the new code.
3. Integrating the rhashtable FIB in the XIA kernel module. The alternative AD principal is a good way to start implementing and testing the rhashtable, but once it proves to be a fruitful path, one has to move the rhashtable integration into the XIA kernel module, so all principals that use the default hash table are upgraded.
4. Checking that the final integration did not break Linux XIA. The idea is to rerun the comparison using the original AD principal that now should be using rhashtable. The performance should match the one obtained by the alternative AD principal in the previous comparison, and no bugs should be found.
5. Having your code merged into Linux XIA. This will require you to go through a careful code review, but once you meet our standards, your code will be part of Linux XIA, and will evolve with it.
6. Summarizing your work in up to five slides. These slides will be merged into our presentation of your work during the 2016 GSoC mentor summit.
Potential mentor: Qiaobin Fu.
Allocated mentor: Qiaobin Fu.
GSoC student: Sachin Paryani (GitHub).
Being proficient in C programming, and comfortable with non-trivial data structures, is an essential requirement for this project. While one does not need a large background in kernel development for this project, your proposal should show that you can deal with it.
The essential items to learn before starting this project are rhashtable, RCU, and the code of the original FIB hash table of Linux XIA. One can learn more about rhashtable by reading the paper Resizable, Scalable, Concurrent Hash Tables via Relativistic Programming, and its code in the kernel. The entry point to the kernel code of rhashtable is its header file rhashtable.h. Besides that, you should be able to compile the Linux XIA kernel, and execute one of the experiments described on this wiki.
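For reference, here is a minimal sketch of how an XID-keyed FIB entry could be plugged into the kernel's rhashtable, as step 1 would do. The entry structure and parameter values are illustrative; the real integration happens through XIA's generic FIB API.

```c
#include <linux/rhashtable.h>

/* Illustrative FIB entry keyed by a 20-byte XID. */
struct xia_fib_entry {
    u8 xid[20];
    struct rhash_head node;
    /* routing information would go here */
};

static const struct rhashtable_params xia_fib_params = {
    .key_len             = 20,
    .key_offset          = offsetof(struct xia_fib_entry, xid),
    .head_offset         = offsetof(struct xia_fib_entry, node),
    .automatic_shrinking = true,
};

static struct rhashtable xia_fib;

static int xia_fib_init(void)
{
    return rhashtable_init(&xia_fib, &xia_fib_params);
}

static int xia_fib_add(struct xia_fib_entry *e)
{
    return rhashtable_insert_fast(&xia_fib, &e->node, xia_fib_params);
}

static struct xia_fib_entry *xia_fib_lookup(const u8 *xid)
{
    /* rhashtable lookups are RCU-protected, matching XIA's lockless readers. */
    return rhashtable_lookup_fast(&xia_fib, xid, xia_fib_params);
}
```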
XIA Linux Containers (XLXC) is a set of scripts written in Ruby that emulates networks on a single host. These scripts significantly lower the amount of effort required to test Linux XIA, which means less debugging time, an easy way to experiment with Linux XIA, and a valuable testing environment for a future control plane of Linux XIA. XLXC leverages Linux containers to virtualize network nodes (i.e. end-hosts and routers), and Linux bridges to connect these network nodes into a couple of topologies. The goal of this project is to replace the Linux bridges with Open vSwitch (OVS) and support arbitrary topologies.
More specifically, this project comprises the following steps:
1. Defining a topology language. This language will be used to represent the topology and parameters of the nodes and links. Examples of parameters: HID XIDs of nodes, IP addresses of nodes, and capacity and loss rate of the links. Instead of dealing with the design of a new language, the suggested approach is to define a friendly library in Ruby that enables one to describe the topology using Ruby code.
2. Replacing Linux bridges with OVS. Although this step is not strictly necessary in this project, it's here because adopting OVS in XLXC will help in extending OVS to support XIA's headers, which is another project.
3. Extending XLXC to support arbitrary topologies. Currently, XLXC only supports complete and star topologies, but more complicated network configurations would be useful for experiments. This is where we'll use OVS.
4. Exporting the topology files to yEd. Being able to visualize a topology in a graphical way will be important for debugging complex topologies. Not to mention that one can use the figures in presentations and publications.
5. (Extra credits) Importing topologies from RocketFuel. The idea is to be able to read topologies from RocketFuel and generate them in the language of XLXC, so XLXC can emulate them.
Potential mentors: Cody Doucette, Yuguang Li.
Allocated mentor: Rahul Kumar.
GSoC student: Aryaman Gupta.
Given that XLXC is written in Ruby, knowing how to code in Ruby is the most critical background. Knowing LXC and OVS certainly helps, but these two can be learned, and our mentor can answer your questions. yEd's files are XML files, so one can figure them out by just generating a couple of files and reading them.
The open source project Mininet has a goal similar to XLXC's, but for IP. Mininet is written in Python, and should be mostly readable for someone proficient in Ruby but not in Python. Thus, studying the code of Mininet will likely uncover solutions for some of the challenges of this project.
Linux XIA needs to hash XID types to unique buckets in a couple of places of its codebase. This mapping must be highly efficient because it affects the speed of routing packets, so if there is a guarantee that no two XID types hash to the same bucket, the code can make fewer memory accesses. This problem is called perfect hashing in the literature. This need of Linux XIA is similar to the one IPv4 has with its protocol field; the equivalent field in IPv6 is the next header field. However, IP's problem is much simpler because its protocol field is only a byte long, whereas an XID type is four bytes long. The currently implemented solution in Linux XIA is limited and will become unwieldy as the number of principals grows. So the goal of this project is to investigate what is the best perfect hashing for mapping XID types, and implement it in Linux XIA.
This project comprises the following steps:
1. Researching perfect hashing algorithms to implement.
2. Implementing an evaluation environment for the chosen algorithms.
3. Implementing the chosen algorithms to run in the evaluation environment.
4. Evaluating all implemented algorithms.
5. (Research credits) Designing and implementing a perfect hashing better than all others already evaluated. If you get this extra item done, your mentor will evaluate if your algorithm is suitable for publication, and work with you to write the paper.
6. Implementing the best algorithm into Linux XIA. Notice that the new algorithm may require a change of interface, which would trigger the need for patches for all implemented principals. But given that the interface is small, and will still be small, these patches should not be a burden. The current interface used by principals includes the following functions: vxt_register_xidty(), vxt_unregister_xidty(), xt_to_vxt(), and xt_to_vxt_rcu().
Potential mentors: Michel Machado, Qiaobin Fu.
Allocated mentor: Qiaobin Fu.
GSoC student: Pranav Goswami.
Student: Kritik Bhimani.
The developer of this project does not need previous knowledge of perfect hashing or kernel programming. Knowledge of perfect hashing is not required because perusing the literature is part of the project; only an expert in the field could skip that step. Although the last item of this project necessarily involves kernel programming, the work is narrowly scoped to the files include/net/xia_vxidty.h and net/xia/vxidty.c, so you will not need to learn much to implement it in the kernel. In fact, this project is a good opportunity for someone who wants to learn kernel programming.
Having said all that, the developer must be proficient in C programming.
There is a lot of literature about perfect hashing. This section only intends to get you started. An entry point into the literature is the PhD thesis Near-Optimal Space Perfect Hashing Algorithms by Fabiano Cupertino Botelho.
There are two open source libraries of perfect hashing functions: CMPH - C Minimal Perfect Hashing Library and GNU gperf. Writing small test applications with both libraries is a good way to get your feet wet.
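As a starting point, here is a small test program, assuming CMPH is installed, that builds a minimal perfect hash over a handful of keys (arbitrary strings here, standing in for XID types) and looks one of them up; it follows the example in CMPH's documentation, so double-check it against the current API.

```c
#include <stdio.h>
#include <string.h>
#include <cmph.h>

int main(void)
{
    const char *keys[] = { "ad", "hid", "u4id", "lpm", "xdp", "serval" };
    unsigned int nkeys = 6;

    /* Build a minimal perfect hash function over the key set. */
    cmph_io_adapter_t *source = cmph_io_vector_adapter((char **)keys, nkeys);
    cmph_config_t *config = cmph_config_new(source);
    cmph_config_set_algo(config, CMPH_CHD);
    cmph_t *hash = cmph_new(config);
    cmph_config_destroy(config);

    /* Each key maps to a unique bucket in [0, nkeys). */
    const char *key = "u4id";
    unsigned int id = cmph_search(hash, key, (cmph_uint32)strlen(key));
    printf("%s -> bucket %u\n", key, id);

    cmph_destroy(hash);
    cmph_io_vector_adapter_destroy(source);
    return 0;
}
```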
The Longest Prefix Matching (LPM) principal is a multiuse principal. One can populate IP routing information from routing protocols such as BGP, OSPF, and IS-IS into the LPM principal's routing table to build a fully routable XIA network reusing legacy routing infrastructure. Or, one can use LPM to emulate the routing of the original Serval's ServiceID. Or yet, one can use LPM to dynamically manage the partitions of a memcached cluster in the network instead of relying on the applications to do this. The goal of this project is to implement this Swiss-army-knife principal.
This project has two implementation paths that developers can choose from. The first one figures out what data structure one should use to implement the LPM, whereas the second recognizes that LPM is so useful that having a simpler version implemented is better than having a perfect one on paper. Of course, each path requires a different set of skills, and the results of one path influence the other.
The first path comprises the following steps:
1. Researching data structures to implement the LPM principal.
2. Implementing an evaluation environment for the chosen algorithms.
3. Implementing the chosen algorithms to run in the evaluation environment.
4. Evaluating all implemented algorithms.
5. (Research credits) Designing and implementing an algorithm better than all others already evaluated. If you get this extra item done, your mentor will evaluate if your algorithm is suitable for publication, and work with you to write the paper.
The second path comprises the following steps:
1. Identifying a reasonable data structure to implement the LPM principal.
2. Implementing the chosen data structure in the kernel.
3. Stress testing the implemented data structure.
4. Implementing the LPM principal.
5. Extending the xip command to control the LPM principal.
6. Documenting a simple demo of LPM principal using XLXC. An example is the zFilter principal page, which shows a simple demo of principal zFilter.
Potential mentors: Cody Doucette, Qiaobin Fu, Michel Machado.
Allocated mentor for first path: Michel Machado.
Student for first path: Garnaik Sumeet (blog).
Allocated mentor for second path: Cody Doucette.
GSoC student for second path: André Ferreira Eleuterio (blog).
Both paths require a good understanding of XIA, especially how routing is performed, and how LPM is expected to work. Either the developer already knows it, or she must get to know XIA well. This wiki and our mentor can guide this part of the reading.
The first path has two critical challenges. One is to make sure the research literature is perused thoroughly, and the second is to design and implement a realistic evaluation environment. Failing on one of these two aspects compromises the outcome of the whole project. Thus, a promising developer must focus on demonstrating how she will address these challenges.
The second path is more focused on kernel programming. The developer must either know or get comfortable with RCU since it is largely used in Linux XIA. Familiarizing yourself with XIA's internals, and having a mental image of what implementing a principal takes, are going to be important. The former demand is addressed by reading the PhD thesis Linux XIA: An Interoperable Meta Network Architecture, and the source code of Linux XIA. The latter has been summarized on the page How to write a principal. This project does not require a seasoned kernel programmer, but one will not have time to learn everything from scratch.
Open vSwitch (OVS) has been slated for managing the network infrastructure of Massachusetts Open Cloud (MOC). Since OVS works at the link layer, and Linux XIA leverages the network infrastructure of the Linux kernel, OVS and Linux XIA already interoperate. However, OVS is currently oblivious to XIA. So one cannot employ OpenFlow rules, which inspect headers above the link layer, over XIP packets. Given that we expect that MOC will afford Linux XIA its first large-scale deployment when it becomes operational, we want to make sure that Linux XIA can take full advantage of MOC's network infrastructure. Not to mention that OVS is largely used by others for similar goals. So the goal of this project is to extend OVS to support the XIP header to widen the deployability of Linux XIA.
More specifically, this project comprises the following steps:
1. Extending OVS to support the XIP header. A good entry point for this step is section Development of OVS' FAQ.
2. Extending OVS' OpenFlow to support the implementation of the LPM principal.
3. Extending OVS's tools (i.e. ovs-vsctl, ovs-ofctl, ovs-appctl, etc.) to support the extensions introduced by the previous steps.
4. Implementing a demo that stress-tests, evaluates, and highlights the new functionality. If XLXC already supports OVS at this point, it would offer a valuable environment to build this demo. The demo could use OVS rules to load balance connections between two Serval servers, filter packets according to a given criterion, reroute an XID to another network switch, or show the LPM in action.
5. (Extra credits) If this project is successful, our mentor will work with you to submit your code upstream.
Potential mentors: Michel Machado, Cody Doucette.
This project has not been allocated.
This project requires a lot of kernel programming, so those still getting used to it should skip this project; there will not be time to learn kernel programming and still finish the project. Having some experience with OVS is helpful, but given that there is abundant documentation online about it, the needed knowledge can be quickly picked up. Developers interested in this project must either know the internals of OVS in the kernel, or be willing to read large chunks of code to grasp it. While being familiar with XIA helps to understand the project, the necessary XIA knowledge for executing it is limited.
Read as much as possible of how OVS works, and experiment with it as you learn; this will help you to read OVS' kernel code. Finally, dive into OVS' kernel code.
All grants that have generously supported the development of Linux XIA are listed on our Funding page.