Introduction

It is sometimes desirable to fine-tune the selection of local/remote NIs used for communication. For example, if a node is currently configured with two networks, an o2ib network and a tcp network, both will be used. In particular, when traffic volume is low the credits criteria will be equivalent across the two networks, so both networks will be used in round robin. However, the user might want to direct all traffic to one network and keep the other network free unless the first one goes down.

User Defined Selection Policies (UDSP) will allow this type of control. 

UDSPs are configured from lnetctl, either on the command line or via YAML config files, and are then passed to the kernel. Policies are applied to all local networks and remote peers and then stored in the kernel. Whenever new peers/peer_nis/local networks/local NIs are added, they are matched against the rules.

The user interface is recorded here.

Use Cases

Preferred Network

If a node can be reached on two LNet networks, it is sometimes desirable to designate a fail-over network. Currently Lustre has the concept of High Availability (HA), which allows servicenode NIDs to be defined as described in the Lustre manual, section 11.2. Using the syntax described in that section, two NIDs to the same peer can also be defined. However, this approach suffers from a current limitation in the Lustre software: the NIDs are exposed to layers above LNet. Ideally, network failure handling should be contained within LNet, leaving Lustre to worry only about defining HA.

Given this, it is desirable to have two LNet networks defined on a node, each possibly with multiple interfaces, and to have a way to tell LNet to always use one network until it is no longer available, i.e. until all interfaces in that network are down.

In this manner we separate the functionality of defining fail-over pairs from defining fail-over networks.

Preferred NIDs

In a scenario where servers are being upgraded with new interfaces to be used in Multi-Rail, it is possible to add interfaces, for example MLX-EDR interfaces, to the server. The user might want existing QDR clients to keep using the QDR interface, while new clients can use the EDR interface or even both interfaces. This behavior can be achieved by specifying rules on the clients that prefer a specific interface.

Preferred local/remote NID pairs

This is a finer-grained method of specifying an exact path: rather than only assigning a priority to a local interface or a remote interface, concrete pairs of interfaces are designated as most preferred. A peer interface can be associated with multiple local interfaces if necessary, giving an N:1 relationship between local interfaces and remote interfaces.

Refer to Olaf's LUG 2016/LAD 2016 PPT for more context.

DLC APIs

The DLC library will provide the APIs outlined below to create, delete and show rules.

Once rules are created and stored in the kernel, they are assigned an ID. This ID is returned and shown by the show command, which dumps the rules, and can be referenced later to delete a rule. The process is described in more detail below.

/*
 * lustre_lnet_add_net_sel_pol
 *   Add a net selection policy. If there already exists a 
 *   policy for this net it will be updated.
 *      net - Network for the selection policy
 *      priority - priority of the rule
 */
int lustre_lnet_add_net_sel_pol(char *net, int priority);
 
/*
 * lustre_lnet_del_net_sel_pol
 *   Delete a net selection policy.
 *      net - Network for the selection policy
 *      id - [OPTIONAL] ID of the policy. This can be retrieved via a show command.
 */
int lustre_lnet_del_net_sel_pol(char *net, int id);
 
/*
 * lustre_lnet_show_net_sel_pol
 *   Show configured net selection policies.
 *      net - filter on the net provided.
 */
int lustre_lnet_show_net_sel_pol(char *net);
 
/*
 * lustre_lnet_add_nid_sel_pol
 *   Add a nid selection policy. If there already exists a 
 *   policy for this nid it will be updated. NIDs can be either
 *   local NIDs or remote NIDs.
 *      nid - NID for the selection policy
 *      priority - priority of the rule
 */
int lustre_lnet_add_nid_sel_pol(char *nid, int priority);
 
/*
 * lustre_lnet_del_nid_sel_pol
 *   Delete a nid selection policy.
 *      nid - NID for the selection policy
 *      id - [OPTIONAL] ID of the policy. This can be retrieved via a show command.
 */
int lustre_lnet_del_nid_sel_pol(char *nid, int id);
 
/*
 * lustre_lnet_show_nid_sel_pol
 *   Show configured nid selection policies.
 *      nid - filter on the NID provided.
 */
int lustre_lnet_show_nid_sel_pol(char *nid);
 
/*
 * lustre_lnet_add_peer_sel_pol
 *   Add a peer to peer selection policy. If there already exists a 
 *   policy for the pair it will be updated.
 *      src_nid - source NID
 *      dst_nid - destination NID
 *      priority - priority of the rule
 */
int lustre_lnet_add_peer_sel_pol(char *src_nid, char *dst_nid, int priority);
 
/*
 * lustre_lnet_del_peer_sel_pol
 *   Delete a peer to peer selection policy.
 *      src_nid - source NID
 *      dst_nid - destination NID
 *      id - [OPTIONAL] ID of the policy. This can be retrieved via a show command.
 */
int lustre_lnet_del_peer_sel_pol(char *src_nid, char *dst_nid, int id);


/*
 * lustre_lnet_show_peer_sel_pol
 *   Show peer to peer selection policies.
 *      src_nid - [OPTIONAL] source NID. If provided the output will be filtered
 *                on this value.
 *      dst_nid - [OPTIONAL] destination NID. If provided the output will be filtered
 *                on this value.
 */
int lustre_lnet_show_peer_sel_pol(char *src_nid, char *dst_nid);
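
As a usage sketch, the snippet below shows how a caller (for example an lnetctl command handler) might combine these APIs to prefer the o2ib network over tcp. The priority convention used here (a larger value meaning more preferred) and passing NULL to show all configured policies are illustrative assumptions, not part of the API definitions above.

/*
 * Hedged usage sketch. Assumptions: larger priority == more preferred,
 * and NULL means "no filter" for the show call.
 */
int rc;

rc = lustre_lnet_add_net_sel_pol("o2ib", 10);	/* preferred network */
if (rc != 0)
	return rc;

rc = lustre_lnet_add_net_sel_pol("tcp", 1);	/* fall-back network */
if (rc != 0)
	return rc;

/* dump the configured net policies along with their assigned IDs */
rc = lustre_lnet_show_net_sel_pol(NULL);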


Data structures

User space/Kernel space Data structures

/*
 * describes a network:
 *  nw_id: can be the base network name, ex: o2ib or a full network id, ex: o2ib3.
 *  nw_expr: an expression to describe the variable part of the network ID
 *           ex: tcp* - all tcp networks
 *           ex: tcp[1-5] - resolves to tcp1, tcp2, tcp3, tcp4 and tcp5.
 */
struct lustre_lnet_network_descr {
	__u32 nw_id;
	struct cfs_expr_list *nw_expr;
};
 
/*
 * lustre_lnet_network_rule
 *   network rule
 *      nwr_link - link on rule list
 *      nwr_descr - network descriptor
 *      nwr_priority - priority of the rule.
 * 		nwr_id - ID of the rule assigned while deserializing if not already assigned.
 */
struct lustre_lnet_network_rule {
	struct list_head nwr_link;
	struct lustre_lnet_network_descr nwr_descr;
	__u32 nwr_priority;
	__u32 nwr_id;
};
 
/*
 * lustre_lnet_nidr_range_descr
 *   nidr_expr - expression describing the IP part of the NID
 *   nidr_nw - a description of the network
 */
struct lustre_lnet_nidr_range_descr {
	struct list_head nidr_expr;
	struct lustre_lnet_network_descr nidr_nw;
};

/*
 * lustre_lnet_nidr_range_rule
 *  Rule for the nid range.
 *     nidr_link - link on the rule list
 *     nidr_descr - descriptor of the nid range
 *     priority - priority of the rule
 */
struct lustre_lnet_nidr_range_rule {
	struct list_head nidr_link;
	struct lustre_lnet_nidr_range_descr nidr_descr;
	int priority;
};

/*
 * lustre_lnet_p2p_rule
 *  Rule for the peer to peer.
 *     p2p_link - link on the rule list
 *     p2p_src_descr - source nid range
 *     p2p_dst_descr - destination nid range
 *     priority - priority of the rule
 */
struct lustre_lnet_p2p_rule {
	struct list_head p2p_link;
	struct lustre_lnet_nidr_range_descr p2p_src_descr;
	struct lustre_lnet_nidr_range_descr p2p_dst_descr;
	int priority;
};

IOCTL Data structures

enum lnet_sel_rule_type {
	LNET_SEL_RULE_NET = 0,
	LNET_SEL_RULE_NID,
	LNET_SEL_RULE_P2P
};
 
struct lnet_expr {
	__u32 ex_lo;
	__u32 ex_hi;
	__u32 ex_stride;	
};
 
struct lnet_net_descr {
	__u32 nsd_net_id;
	struct lnet_expr nsd_expr;
};
 
struct lnet_nid_descr {
	struct lnet_expr nir_ip[4];
	struct lnet_net_descr nir_net;
};
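 
As an illustration, a NID pattern such as 10.10.[1-3].*@tcp[1-5] might be encoded in these structures as shown below. Representing the '*' wildcard as the full [0-255] range and using the LND type constant (SOCKLND) for the base tcp network ID are assumptions made for this example only; the structures above only fix the lo/hi/stride form of each expression.

/* Hedged encoding example for "10.10.[1-3].*@tcp[1-5]". */
struct lnet_nid_descr descr = {
	.nir_ip = {
		{ .ex_lo = 10, .ex_hi = 10,  .ex_stride = 1 },	/* 10    */
		{ .ex_lo = 10, .ex_hi = 10,  .ex_stride = 1 },	/* 10    */
		{ .ex_lo = 1,  .ex_hi = 3,   .ex_stride = 1 },	/* [1-3] */
		{ .ex_lo = 0,  .ex_hi = 255, .ex_stride = 1 },	/* "*"   */
	},
	.nir_net = {
		.nsd_net_id = SOCKLND,	/* assumed: base net type (tcp) */
		.nsd_expr = { .ex_lo = 1, .ex_hi = 5, .ex_stride = 1 },	/* tcp[1-5] */
	},
};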
 
struct lnet_ioctl_net_rule {
	struct lnet_net_descr nsr_descr;
	__u32 nsr_prio;
	__u32 nsr_id;
};

struct lnet_ioctl_nid_rule {
	struct lnet_nid_descr nir_descr;
	__u32 nir_prio;
	__u32 nir_id;
};
 
struct lnet_ioctl_net_p2p_rule {
	struct lnet_nid_descr p2p_src_descr;
	struct lnet_nid_descr p2p_dst_descr;
	__u32 p2p_prio;
	__u32 p2p_id;
};
 
/* 
 * lnet_ioctl_rule_blk
 *  describes a set of rules of the same type to transfer to the kernel.
 *		rule_hdr - header information describing the total size of the transfer
 * 		rule_type - type of rules included
 * 		rule_size - size of each individual rule. Can be used to check backwards compatibility
 * 		rule_count - number of rules included in the bulk.
 * 		rule_bulk - pointer to the user space allocated memory containing the rules.
 */
struct lnet_ioctl_rule_blk {
	struct libcfs_ioctl_hdr rule_hdr;
	enum lnet_sel_rule_type rule_type;
	__u32 rule_size;
	__u32 rule_count;
	void __user *rule_bulk;
};

Serialization/Deserialization

Both user space and kernel space will store the rules in the data structures described above. However, once user space has parsed and stored the rules, it needs to serialize them and send them to the kernel.

The serialization process will use the IOCTL data structures defined above. The process itself is straightforward: the rules, as stored in user space or the kernel, live on a linked list, but each rule is of deterministic size and form. For example, an IP address is described by four struct cfs_range_expr structures, which can be translated into four struct lnet_expr structures.

As an example, a serialized list of net rules is expected to be laid out roughly as sketched below:
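
/*
 * Hedged layout sketch (illustrative only): rule_bulk points at a
 * contiguous array of rule_count fixed-size entries of the type named
 * by rule_type.
 *
 * +----------------------------------------------------+
 * | struct lnet_ioctl_rule_blk                          |
 * |   rule_hdr   (libcfs ioctl header)                  |
 * |   rule_type  = LNET_SEL_RULE_NET                    |
 * |   rule_size  = sizeof(struct lnet_ioctl_net_rule)   |
 * |   rule_count = N                                    |
 * |   rule_bulk  ---------------------------------+     |
 * +------------------------------------------------|----+
 *                                                  v
 * +----------------------------------------------------+
 * | struct lnet_ioctl_net_rule [0]                      |
 * | struct lnet_ioctl_net_rule [1]                      |
 * | ...                                                 |
 * | struct lnet_ioctl_net_rule [N-1]                    |
 * +----------------------------------------------------+
 */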

The rest of the rule types will look very similar to the above, except that the list of rules in the memory pointed to by rule_bulk will contain the pertinent structure format.

On the receiving end the process is reversed to rebuild the linked lists.

Common functions that can be called from user space and kernel space will be created to serialize and deserialize the rules:

/*
 * lnet_sel_rule_serialize()
 *	Serialize the rules pointed to by rules into the memory block that is provided. In order for
 *	this API to work in both kernel and user space, the bulk pointer needs to be passed in. When
 *	this API is called in the kernel, to serialize the rules before sending them to user space,
 *	the bulk memory is expected to have been allocated in user space.
 * 		rules [IN] - rules to be serialized
 * 		rule_type [IN] - rule type to be serialized
 * 		bulk_size [IN] - size of memory allocated.
 *  	bulk [OUT] - allocated block of memory where the serialized rules are stored.
 */
int lnet_sel_rule_serialize(struct list_head *rules, enum lnet_sel_rule_type rule_type, __u32 *bulk_size, void __user *bulk);
 
/*
 * lnet_sel_rule_deserialize()
 * 	Given a bulk of rule_type rules, deserialize and append rules to the linked
 *  list passed in. Each rule is assigned an ID > 0 if an ID is not already assigned 
 * 		bulk [IN] - memory block containing serialized rules
 * 		bulk_size [IN] - size of bulk memory block
 * 		rule_type [IN] - type of rule to deserialize
 * 		rules [OUT] - linked list to append the deserialized rules to
 */
int lnet_sel_rule_deserialize(void __user *bulk, __u32 bulk_size, enum lnet_sel_rule_type rule_type, struct list_head *rules);
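
A hedged sketch of what the kernel-side serialization of net rules could look like; the helper name, the omitted descriptor translation, and the direct use of copy_to_user() are assumptions rather than the final implementation (a common implementation would abstract the user/kernel copy so the same code can run in both contexts).

/* Hedged sketch only: serialize the net rule list into a user space bulk. */
static int lnet_net_rules_serialize(struct list_head *rules, __u32 bulk_size,
				    void __user *bulk)
{
	struct lustre_lnet_network_rule *rule;
	struct lnet_ioctl_net_rule ioc_rule;
	__u32 used = 0;

	list_for_each_entry(rule, rules, nwr_link) {
		if (used + sizeof(ioc_rule) > bulk_size)
			return -ENOSPC;

		memset(&ioc_rule, 0, sizeof(ioc_rule));
		ioc_rule.nsr_prio = rule->nwr_priority;
		ioc_rule.nsr_id = rule->nwr_id;
		/* translation of nwr_descr into nsr_descr omitted here */

		if (copy_to_user((char __user *)bulk + used, &ioc_rule,
				 sizeof(ioc_rule)))
			return -EFAULT;

		used += sizeof(ioc_rule);
	}

	return 0;
}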

 

Policy IOCTL Handling

Three new IOCTLs will need to be added: IOC_LIBCFS_ADD_RULES, IOC_LIBCFS_DEL_RULES, and IOC_LIBCFS_GET_RULES.

IOC_LIBCFS_ADD_RULES

The handler for the IOC_LIBCFS_ADD_RULES will perform the following operations:

  1. Call lnet_sel_rule_deserialize().
  2. Iterate through all the local networks and apply the rules.
  3. Iterate through all the peers and apply the rules.
  4. Splice the new list with the existing rules, resolving any conflicts in the process. New rules always trump old rules.

Application of the rules will be done under api_mutex_lock and the exclusive lnet_net_lock to avoid having the peer or local net lists changed while the rules are being applied.
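
A hedged sketch of the handler, using the locking described above; lnet_apply_rules_to_nets(), lnet_apply_rules_to_peers() and lnet_splice_rules() are hypothetical placeholder names for steps 2-4, not existing LNet functions.

/* Hedged sketch of the IOC_LIBCFS_ADD_RULES handler. */
static int lnet_add_rules_handler(struct lnet_ioctl_rule_blk *blk)
{
	LIST_HEAD(new_rules);
	int rc;

	/* step 1: rebuild the rule list from the user space bulk */
	rc = lnet_sel_rule_deserialize(blk->rule_bulk,
				       blk->rule_size * blk->rule_count,
				       blk->rule_type, &new_rules);
	if (rc < 0)
		return rc;

	/* take the API mutex and the exclusive net lock as described above */
	mutex_lock(&the_lnet.ln_api_mutex);
	lnet_net_lock(LNET_LOCK_EX);

	/* steps 2 and 3: apply the new rules to existing objects */
	lnet_apply_rules_to_nets(&new_rules, blk->rule_type);
	lnet_apply_rules_to_peers(&new_rules, blk->rule_type);

	/* step 4: merge into the master list; new rules win conflicts */
	lnet_splice_rules(&new_rules, blk->rule_type);

	lnet_net_unlock(LNET_LOCK_EX);
	mutex_unlock(&the_lnet.ln_api_mutex);

	return 0;
}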

There will be different lists, one for each rule type. The rules are iterated and applied whenever:

  1. A local network interface is added.
  2. A remote peer/peer_net/peer_ni is added

IOC_LIBCFS_DEL_RULES

The handler for IOC_LIBCFS_DEL_RULES will delete the rule whose ID matches the ID passed in; if no ID is passed in, then the rule that exactly matches the supplied description is deleted.

There will be no other actions taken on rule removal. Once the rule has been applied it will remain applied until the objects it has been applied to are removed.

IOC_LIBCFS_GET_RULES

The handler for IOC_LIBCFS_GET_RULES will call lnet_sel_rule_serialize() on the master linked list for the rule type identified in struct lnet_ioctl_rule_blk.

It fills in as many rules as fit in the bulk, computed as (rule_hdr.ioc_len - sizeof(struct lnet_ioctl_rule_blk)) / rule_size. That number of rules is serialized and placed in the bulk memory block. The IOCTL returns -ENOSPC if the given bulk memory block is not large enough to hold all the rules, and sets rule_count to the number of rules actually serialized. The user space process can then make another call with rule_count set to the number of rules to skip; the handler skips that many rules and fills the new bulk memory with the remaining rules. This process is repeated until all the rules have been returned to user space.
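
A hedged sketch of the user space paging loop described above. The buffer sizing, the error reporting convention (negative return with errno set to ENOSPC), and the reuse of the existing l_ioctl()/LIBCFS_IOC_INIT_V2() utilities are assumptions for illustration only.

/* Hedged sketch: retrieve all net rules from the kernel in pages. */
#define MAX_RULES_PER_CALL 32

struct lnet_ioctl_rule_blk blk;
struct lnet_ioctl_net_rule rules[MAX_RULES_PER_CALL];
__u32 skip = 0;
int rc;

do {
	LIBCFS_IOC_INIT_V2(blk, rule_hdr);
	blk.rule_hdr.ioc_len = sizeof(blk) + sizeof(rules);
	blk.rule_type = LNET_SEL_RULE_NET;
	blk.rule_size = sizeof(struct lnet_ioctl_net_rule);
	blk.rule_count = skip;		/* number of rules to skip this call */
	blk.rule_bulk = rules;

	rc = l_ioctl(LNET_DEV_ID, IOC_LIBCFS_GET_RULES, &blk);
	if (rc < 0 && errno != ENOSPC)
		break;

	/* blk.rule_count now holds the number of rules returned in rules[] */
	/* ... print them in YAML ... */
	skip += blk.rule_count;
} while (rc < 0 && errno == ENOSPC);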

In user space the rules are printed in the same YAML format in which they are parsed.

Policy Application

Net Rule

The net which matches the rule will be assigned the priority defined in the rule.

NID Rule

The local_ni or the peer_ni which matches that NID will be assigned the priority defined in the rule.

Peer to Peer Rule

The NIDs of local_nis matching the source NID pattern in the peer to peer rule will be added to a preferred-NID list on each peer_ni whose NID matches the destination NID pattern.
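
A minimal sketch of this step for a single (local NI, peer NI) pair; lnet_nid_matches_descr() and lnet_peer_ni_add_pref_nid() are hypothetical helper names, and the iteration over existing NIs and peers (done under the locks mentioned earlier) is not shown.

/* Hedged sketch: apply a peer to peer rule to one local NI / peer_ni pair. */
static void lnet_apply_p2p_rule(struct lustre_lnet_p2p_rule *rule,
				struct lnet_ni *ni, struct lnet_peer_ni *lpni)
{
	/* both ends must match their respective NID range patterns */
	if (!lnet_nid_matches_descr(ni->ni_nid, &rule->p2p_src_descr) ||
	    !lnet_nid_matches_descr(lpni->lpni_nid, &rule->p2p_dst_descr))
		return;

	/* remember this local NID as preferred on the matching peer_ni */
	lnet_peer_ni_add_pref_nid(lpni, ni->ni_nid);
}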

Selection Algorithm Integration

Currently the selection algorithm performs its job in the following general steps:

  1. Determine the best network for communicating with the destination peer by looking at all the LNet networks the peer is on.
    1. Select the network with the highest priority.
  2. For each selected network, go through all the local NIs and keep track of the best_ni based on:
    1. Its priority
    2. NUMA distance
    3. available credits
    4. round robin
  3. Skip any networks which are lower priority than the "active" one. If there are multiple networks with the same priority then the best_ni is selected from amongst them using the above criteria.
  4. Once the best_ni has been selected, select the best available peer_ni by going through the list of the peer_nis on the selected network. Select the peer_ni based on:
    1. The priority of the peer_ni.
    2. Whether the NID of the best_ni is on the preferred local NID list of the peer_ni. It is placed there through the application of the peer to peer rules.
    3. available credits
    4. round robin
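
To make the ordering above concrete, here is a hedged sketch of the best_ni comparison in step 2. The field names (ni_sel_priority, ni_numa_dist, ni_tx_credits, ni_seq) and the convention that a larger priority value wins are illustrative assumptions, not the existing LNet code.

/*
 * Hedged sketch: compare a candidate local NI against the current best_ni
 * in the order: rule-assigned priority, NUMA distance, available credits,
 * round robin. Field names are hypothetical.
 */
static bool lnet_ni_is_better(struct lnet_ni *ni, struct lnet_ni *best_ni)
{
	if (best_ni == NULL)
		return true;

	/* 1. priority assigned by a matching net/NID rule */
	if (ni->ni_sel_priority != best_ni->ni_sel_priority)
		return ni->ni_sel_priority > best_ni->ni_sel_priority;

	/* 2. NUMA distance from the originating CPU partition */
	if (ni->ni_numa_dist != best_ni->ni_numa_dist)
		return ni->ni_numa_dist < best_ni->ni_numa_dist;

	/* 3. available transmit credits */
	if (ni->ni_tx_credits != best_ni->ni_tx_credits)
		return ni->ni_tx_credits > best_ni->ni_tx_credits;

	/* 4. round robin: pick the NI used least recently */
	return ni->ni_seq < best_ni->ni_seq;
}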

Misc

As an example, in a dragonfly topology as diagrammed below, a node can have multiple interfaces on the same network, but some interfaces are not optimized to go directly to the destination group. So if the selection algorithm is operating without any rules, it could select a local interface which is less than optimal.

The clouds in the diagram below each represent a group of LNet nodes on the o2ib network. The admin should know which node interfaces resolve to a direct path to the destination group. Therefore, giving priority to a local NID within a network is a way to ensure that messages always prefer the optimized paths.

The diagram above was inspired by: https://www.ece.tufts.edu/~karen/classes/final_presentation/Dragonfly_Topology_Long.pptx

Refer to the above PowerPoint for further discussion of the dragonfly topology.

Example

TBD: I'm thinking that in a topology such as the one represented above, the sysadmin would configure the routing properly, so that messages heading to a particular IP destination in a different group would get routed to the correct edge router, and from there to the destination group. When LNet is layered on top of this topology there will be no need to explicitly specify a rule, as all the necessary routing rules will be defined in the kernel routing tables. The assumption here is that InfiniBand IP over IB (IPoIB) obeys the standard Linux routing rules.