Adding Resiliency to LNet
...
In the following discussion, node will often be the shorthand for local node, while peer will be shorthand for peer node or remote node.
...
LNet defines four message types: PUT, ACK, GET, and REPLY.
...
Within LNet there are three cases of interest: PUT, PUT+ACK, and GET+REPLY.
- PUT: The simplest message type is a bare PUT message. This message is sent on the wire, and after that no further response is expected by LNet. In terms of error handling, this means that a failure to send can result in an error, but any kind of failure after the message has been sent will result in the message being dropped without notification. There is nothing LNet can do in the latter case; it is up to the higher layers that use LNet to detect what is going on and take whatever corrective action is required.
- PUT+ACK: The sender of a PUT can ask for an ACK from the receiver. The ACK message is generated by LNet on the receiving node (the peer), and this is done as soon as the PUT has been received. In contrast to a bare PUT, this means LNet on the sender can track whether an ACK arrives, and if it does not promptly arrive it can take some action. Such an action can take two forms: inform the upper layers that the ACK took too long to arrive, or retry within LNet itself.
- GET+REPLY: With a GET message, there is always a REPLY from the peer. Prior to the GET, the sender and receiver arrange an agreement on the MD that the data for the REPLY must be obtained from, so LNet on the receiving node can generate the REPLY message as soon as the GET has been received. Failure detection and handling is similar to the PUT+ACK case.
The protocols used to implement LNet tend to be connection-oriented, and implement some kind of handshake or ack protocol that tells the sender that a message has been received. As long as the LND actually reports errors to LNet (not a given, alas), this means that in practice the sender of a message can reliably determine whether the message was successfully sent to another node. When the destination node is on the same LNet network, this is sufficient to enable LNet itself to detect failures even in the bare PUT case. But in a routed configuration this only guarantees that the LNet router received the message, and if the LNet router then fails to forward it, a bare PUT will be lost without trace.
...
Users of LNet that send bare PUT messages must implement their own methods to detect whether a message was lost. The general rule is simple: the recipient of a PUT is supposed to react somehow, and if the reaction doesn't happen within a set amount of time, the sender assumes that either the PUT was lost, or the recipient is in some other kind of trouble.
For our purposes PtlRPC is of interest. PtlRPC messages can be classified as Request+Response pairs. Both a Request and a Response are built from one or more GET or PUT messages. A node that sends a PtlRPC Request requires the receiver to send a Response within a set amount of time; failing this, the Request times out and PtlRPC takes corrective action.
...
The interfaces that LNet provides to the upper layers should work as follows. Set up an MD (Memory Descriptor) to send data from (for a PUT) or receive data into (for a GET). An event handler is associated with the MD. Then call LNetGet() or LNetPut() as appropriate.
...
If all goes well, the event handler sees two events: LNET_EVENT_SEND to indicate the GET message was sent, and LNET_EVENT_REPLY to indicate the REPLY message was received. Note that the send event can happen after the reply event (this is actually the typical case).
If sending the GET message failed, LNET_EVENT_SEND will include an error status, no LNET_EVENT_REPLY will happen, and clean-up must be done accordingly. If the return value of LNetGet() indicates an error then sending the message certainly failed, but a 0 return does not imply success, only that no failure has yet been encountered.
A damaged REPLY message will be dropped, and does not result in an LNET_EVENT_REPLY. Effectively the only way for LNET_EVENT_REPLY to have an error status is if LNet detects a timeout before the REPLY is received.
LNetPut() LNET_ACK_REQ
The caller of LNetPut() requests an ACK by using LNET_ACK_REQ as the value of the ack parameter.
A PUT with an ACK is similar to a GET+REPLY pair. The events in this case are LNET_EVENT_SEND and LNET_EVENT_ACK.
For a PUT, the LNET_EVENT_SEND indicates that the MD is no longer used by the LNet code and the caller is free to discard or re-use it.
As with send, LNET_EVENT_ACK is expected to only carry an error indication if there was a timeout before the ACK was received.
LNetPut() LNET_NOACK_REQ
The caller of LNetPut() requests no ACK by using LNET_NOACK_REQ as the value of the ack parameter.
A PUT without an ACK will only generate an LNET_EVENT_SEND, which indicates that the MD can now be re-used or discarded.
...
- Node interface reports failure. This includes the case where the interface itself is healthy but it detects that the cable connecting it to a switch, or the switch port, is somehow not working right.
- Peer interface not reachable. A peer interface that should be reachable from the node interface cannot be reached. Depending on the error this can result in a "fast" error or a timeout in the LND-level protocol.
- Some peer interfaces on a net not reachable. The node interface appears to be OK, but there are interfaces on several peers that it cannot talk to.
- All peer interfaces on a net not reachable. The node interface appears to be OK, but cannot talk to any peer interface.
- All interfaces of a peer not reachable. All LNDs report errors when talking to a specific peer, but have no problem talking to other peers.
- PUT+ACK or GET+REPLY timeout. The LND gives no failure indication, but the ACK or REPLY takes too long to arrive.
- Dropped PUT. Everything appears to work, except it doesn't.
...
LNet can mark the interface down, and depending on the capabilities of the LND either recheck periodically or wait for the LND to mark the interface up.
...
LNet might treat this as the "remote interface not reachable" case applied to all the interfaces of the remote node. That is, the handling is not much different from all interfaces of the remote node apparently being down, except for a log message indicating this.
...
PUT+ACK or GET+REPLY Timeout
This is the case where the LND does not signal any problem, so the ACK for a PUT or REPLY for a GET should arrive promptly, with the only delays due to credit-based throttling, and yet it does not do so. Note that this assumes that, where possible, the LND layer already implements reasonably tight timeouts, so that LNet can assume the problem is somewhere else.
LNet can impose a "reply timeout", and retry over a different path if there is one available. However, if the assumption about the LND is valid, then the implication is that the node is in trouble. So an alternative is to force the upper layers to cope.
One argument for nevertheless implementing this facility in LNet is that it saves the upper layers from having to re-invent and re-implement this wheel time and again.
Dropped PUT
The PUT/GET will time out after the configured or passed-in transaction timeout, and LNet will send an event to the ULP indicating that the PUT/GET timed out without receiving the expected ACK/REPLY.
No problem was signalled by the LND, and there is no ACK that we could time out waiting for. LNet does not have enough information to do anything, so the ULP must do so instead.
If this case must be made tractable, LNet can be changed to make the ACK non-optional.
...
LNet Resend Handling
When there are multiple paths available for a message, it makes sense to try to resend it on failure. But where should the resending logic be implemented?
The easiest path is to tell the upper layers to resend. For example, PtlRPC has some related logic already. Except that when PtlRPC detects a failure, it disconnects, reconnects, and triggers a recovery operation. This is a fairly heavy-weight process, while the type of resending logic desired is to "just try another path"; this differs from what exists today and would need to be implemented for each user of LNet.
The alternative then is to have LNet resend a message. There should be some limit to the number of retries, and a limit to the amount of time spent retrying. Otherwise we are requiring the upper layers to implement a timer on each LNetGet() and LNetPut() call to guarantee progress. This introduces an LNet "retry timeout" (as opposed to the "reply timeout" discussed above) as the amount of time after which LNet gives up.
In terms of timeouts, this then gives us the following relationships, from shortest to longest:
- LND Timeout: LND declares that a message won't arrive.
  - The IB timeout is (by default?) slightly less than 4 seconds.
  - The LND timeout is the timeout module parameter for o2ib and gni, and the sock_timeout module parameter for sock(?).
- LNet Reply Timeout: LNet declares an ACK/REPLY won't arrive. > LND Timeout * (max hops - 1)
  - Depends on the route!
- LNet Retry Timeout: LNet gives up on retries. > LNet Reply Timeout * max LNet retries
  - Depends on the route!
- peer_timeout module parameter: peer is declared dead. Either use for the LNet Retry Timeout, or > LNet Retry Timeout. Using the peer_timeout for the LNet Retry Timeout has the advantage of reducing the number of tunable parameters. A disadvantage is that the peer_timeout is currently a per-LND parameter (each LND has its own tunable value), effectively limiting the number of retries to 1 when the LND times out.
It is not completely obvious how this scheme interacts with Lustre's timeout parameter (the Lustre RPC timeout, from which a number of timeouts are derived), but at first glance it seems that at least peer_timeout < Lustre timeout.
LNet Health Version 2.0
There are three types of failures that LNet needs to deal with:
- Local Interface failure
- Remote Interface failure
- Timeouts
- LND detected Timeout
- LNet detected Timeout
Local Interface Failure
Local interface failures will be detected in one of two ways:
- Synchronously, as a failure returned by the call to lnd_send()
- Asynchronously, as an event that could be detected at a later point.
  - These asynchronous events can be the result of a send operation.
  - They can also be independent of send operations, as failures detected with the underlying device, for example a "link down" event.
Synchronous Send Failures
lnet_select_pathway() can fail for the following reasons:
- Shutdown in progress
- Out of memory
- Interrupt signal received
- Discovery error.
- An MD bind failure (-EINVAL)
- Host unreachable (-EHOSTUNREACH)
- Invalid information given
- Message dropped
- Aborting message
- no route found
- Internal failure
All these cases are resource errors, and it does not make sense to resend the message, as any resend will likely run into the same problem.
Asynchronous Send Failures
LNet should resend the message:
- On LND transmit timeout
- On LND connection failure
- On LND send failure
When there is a message send failure due to the reasons outlined above, the behavior should be as follows:
- The local or remote interface health is updated
- Failure statistics incremented
- A resend is issued on a different local interface if there is one available. If none is available, the same interface is attempted again.
- The message will continuously be resent until the timeout expires or the send succeeds.
A new field in the message, msg_status, will be added. This field will hold the send status of the message.
When a message encounters one of the errors above, the LND will update the msg_status field appropriately and call lnet_finalize().
lnet_finalize() will check if the message has timed out or if it needs to be resent, and will take action on it. lnet_finalize() currently calls lnet_complete_msg_locked() to continue the processing. If the message has not been sent, then lnet_finalize() should call another function to resend, lnet_resend_msg_locked().
When a message is initially sent it is tagged with a deadline. The deadline is the current time + peer_timeout. While the message has not timed out, it will be resent if it needs to be. The deadline is checked every time we enter lnet_finalize(). When the deadline is reached without a successful send, the MD will be detached.
While the message is in the sending state the MD will not be detached.
lnet_finalize() will also update the statistics (or call a function to update the statistics).
Resending
It is possible that a send can fail immediately. In this case we need to take active measures to ensure that we do not enter a tight loop, resending until the timeout expires. This could spike CPU consumption unexpectedly.
To do that, the last sent time will be kept in the message. If the message is not sent successfully on any of the existing interfaces, then it will be placed on a queue and resent after a specific deadline expires. This will be termed a "(re)send procedure". An interval must expire between each (re)send procedure. A (re)send procedure will iterate through all local and remote peer interfaces, depending on the source of the send failure.
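The pacing rule above can be sketched as a single predicate: a queued message is only eligible for another attempt while its overall deadline has not expired, and never before the resend interval has elapsed since its last attempt. The struct and names below are illustrative, not the actual lnet_msg fields.

```c
#include <assert.h>

/* Illustrative per-message state for (re)send pacing. */
struct pending_msg {
    long deadline;    /* send time + peer_timeout: when LNet gives up */
    long last_sent;   /* time of the last send attempt */
};

/* Eligible for a resend only while the deadline has not expired and
 * at least resend_interval has passed since the last attempt. This is
 * what prevents a tight resend loop on an immediately failing send. */
static int should_resend(const struct pending_msg *m, long now,
                         long resend_interval)
{
    if (now >= m->deadline)
        return 0;  /* timed out: finalize and detach the MD instead */
    return (now - m->last_sent) >= resend_interval;
}
```

A (re)send procedure would walk the queue and attempt only the messages for which this predicate holds, updating last_sent on each attempt.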
The router pinger thread will be refactored to handle resending messages. The router pinger thread is started regardless of configuration, but is currently only used to ping gateways if any are configured. Its operation will be expanded to check the pending message queue and re-send messages.
To keep the code simple, when resending, the health value of the interface that had the problem will be updated. If we are sending to a non-MR peer, then we will use the same src_nid, since otherwise the peer would be confused. The selection algorithm will then consider the health value when it is selecting the NI or peer NI.
There are two aspects to consider when sending a message:
- The congestion of the local and remote interfaces
- The health of the local and remote interfaces
The selection algorithm will take an average of these two values and will determine the best interface to select. To make the comparison proper, the health value of the interface will be set to the same value as the credits on initialization. It will be decremented on failure to send and incremented on successful send.
A health_value_range module parameter will be added to control the sensitivity of the selection. If it is set to 0, then the best interface will be selected. A value higher than 0 will give a range within which to select the interface. If the value is large enough it will in effect be equivalent to turning off health comparison.
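The two paragraphs above can be sketched as follows. This is an illustrative selection routine, not the actual LNet implementation: the selection value is the average of health and credits, and a hypothetical health_value_range widens the set of acceptable candidates (0 means only the best value qualifies).

```c
#include <assert.h>
#include <stddef.h>

/* Illustrative candidate NI: health value and available credits. */
struct ni { int health; int credits; };

/* The document's averaging rule: health is initialized to the same
 * scale as credits so the average compares like with like. */
static int sel_value(const struct ni *n)
{
    return (n->health + n->credits) / 2;
}

/* With range == 0 the single best NI is chosen; a larger range lets
 * any NI within `range` of the best value be picked (here: the first
 * such NI, e.g. to spread load). A very large range effectively turns
 * off the health comparison, as the text describes. */
static size_t select_ni(const struct ni *nis, size_t n, int range)
{
    size_t i, best_i = 0;
    int best = sel_value(&nis[0]);

    for (i = 1; i < n; i++) {
        int v = sel_value(&nis[i]);
        if (v > best) {
            best = v;
            best_i = i;
        }
    }
    for (i = 0; i < n; i++)
        if (best - sel_value(&nis[i]) <= range)
            return i;
    return best_i;
}
```

The design choice here is the trade-off the parameter controls: range 0 always concentrates traffic on the healthiest interface, while a non-zero range trades some health sensitivity for better interface utilization.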
...
The LNet Resiliency feature shall ensure that failures are dealt with at the LNet level by resending the message on the available local and remote interfaces within a provided timeout.
LNet shall use a trickle-down approach for managing timeouts. The ULP (ptlrpc or other upper layer protocol) shall provide a timeout value in the call to LNetPut() or LNetGet(). LNet shall use that as the transaction timeout value to wait for an ACK or REPLY. LNet shall further provide a configuration parameter for the number of retries. The number of retries shall allow the user to specify a maximum number of times LNet shall attempt to resend an unsuccessful message. LNet shall then calculate the message timeout by dividing the transaction timeout by the number of retries. LNet shall pass the calculated message timeout to the LND, which will use it to ensure that the LND protocol completes an LNet message within the message timeout. If the LND is not able to complete the message within the provided timeout it will close the connection and drop all messages on that connection. It will afterward proceed to call into LNet via lnet_finalize() to notify it of the error encountered.
For a PUT that doesn't require an ACK, the timeout will be used to provide the transaction timeout to the LND. In that case LNet will resend the PUT if the LND detects an issue with the transmit. LNet shall be able to send a TIMEOUT event to the ULP if the PUT was not successfully transmitted. However, if the PUT is successfully transmitted, there is no way for LNet to determine if it has been processed properly on the receiving end.
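The trickle-down calculation above is simple division, sketched here for clarity. The function name is illustrative; the edge-case behavior for a zero retry count is an assumption, not specified by the text.

```c
#include <assert.h>

/* Trickle-down sketch: the ULP's transaction timeout divided by the
 * configured retry count gives the per-attempt message timeout that
 * LNet hands to the LND. */
static int message_timeout(int transaction_timeout, int retries)
{
    if (retries <= 0)
        return transaction_timeout;  /* assumed: one full-length attempt */
    return transaction_timeout / retries;
}
```

For example, a 100-second transaction timeout with 5 retries gives the LND 20 seconds per attempt, so all retries fit inside the window the ULP is willing to wait.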
Feature Specification
System Overview
[Gliffy diagram: system overview]
lnet_msg is a structure used to keep information on the data that will be transmitted over the wire. It does not itself go over the wire. lnet_msg is passed to the LND for transmission. Before it is passed to the LND it is placed on an active list, msc_active. The diagram below describes the data structures.
[Gliffy diagram: message container data structures]
The CPT is determined by the lnet_cpt_of_nid_locked() function. lnet_send(), running in the context of the calling threads, places a message on msc_active just before sending it to the LND. lnet_parse() places messages on msc_active when it receives them from the LND. msc_active represents all messages which are currently being processed by LNet.
lnet_finalize(), running in the context of the calling threads (likely the LND scheduler threads), will determine if a message needs to be resent and place it on the resend list. The resend list is a list of all messages which are currently awaiting a resend.
A monitor thread monitors and ensures that messages which have expired are finalized. This processing is detailed in later sections.
Resiliency vs. Reliability
There are two concepts that need to stay separate. Reliability of RPC messages and LNet Resiliency. This feature attempts to add LNet Resiliency against local and immediate next hop interface failure. End-to-end reliability is to ensure that upper layer messages, namely RPC messages, are received and processed by the final destination, and take appropriate action in case this does not happen. End-to-end reliability is the responsibility of the application that uses LNet, in this case ptlrpc. Ptlrpc already has a mechanism to ensure this.
To clarify the terminology further, LNET MESSAGE should be used to describe one of the following messages:
- LNET_MSG_PUT
- LNET_MSG_GET
- LNET_MSG_ACK
- LNET_MSG_REPLY
- LNET_MSG_HELLO
LNET TRANSACTION should be used to describe
- LNET_MSG_PUT, LNET_MSG_ACK sequence
- LNET_MSG_GET, LNET_MSG_REPLY sequence
NEXT-HOP should describe a peer that is exactly one hop away.
The role of LNet is to ensure that an LNET MESSAGE arrives at the NEXT-HOP, and to flag when a transaction fails to complete.
Upper layers should ensure that the transactions they request to initiate complete successfully, and take appropriate action otherwise.
Failure Areas
There are three areas of failures that LNet needs to deal with:
- Local Interface failure
- Remote Interface failure
- Timeouts
- LND detected Timeout
- LNet detected Timeout
Timeout values will be provided by the ULP in the LNetPut() and LNetGet() APIs.
Health Value Updates
Two values will be added:
- health_value: Each NI (local and remote) will have a health value.
  - The health_value will be initialized to 1000.
  - 1000 is chosen in order to allow granular selection between interfaces based on the value; otherwise it is arbitrary.
  - When a transient error is detected on an interface, such as a timeout, the health_value is decremented by health_sensitivity.
- health_sensitivity: This is a global configuration parameter. It determines how long an NI takes to recover or how sensitive a system is to message send failure.
- An NI's health_value is decremented by health_sensitivity on a transient error.
- An NI is then placed on a queue to recover.
- An NI is pinged, or pings out, once a second.
- Every successful ping would increment the NIs health_value by 1.
- It takes health_sensitivity pings to bring the interface back to its original health status.
- If a ping fails during the recovery process the health_value is decremented further by health_sensitivity.
- This will ensure that an unstable NI which has frequent errors, will be preferred less.
- The health_sensitivity can be set to 0 to turn off health evaluation.
- That means that an interface will remain healthy no matter what happens.
- Basically turn off NI selection based on health.
Each NI will have a health_value associated with it. Each NI's health value is initialized to 1000.
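The health rules listed above can be sketched as two small update functions. This is an illustrative model of the described behavior (error costs health_sensitivity, each successful recovery ping restores one point, a failed recovery ping costs health_sensitivity again); the function names are assumptions.

```c
#include <assert.h>

#define NI_HEALTH_INIT 1000  /* initial health value per the design */

/* A transient error (e.g. a timeout) costs health_sensitivity points;
 * the value is floored at zero for this sketch. */
static void ni_transient_error(int *health, int sensitivity)
{
    *health -= sensitivity;
    if (*health < 0)
        *health = 0;
}

/* One recovery ping per second: success restores a single point, so it
 * takes health_sensitivity successful pings to fully recover; a failure
 * during recovery pushes the NI further down, making an unstable NI
 * progressively less preferred. */
static void ni_recovery_ping(int *health, int ping_ok, int sensitivity)
{
    if (ping_ok) {
        if (*health < NI_HEALTH_INIT)
            *health += 1;
    } else {
        ni_transient_error(health, sensitivity);
    }
}
```

With health_sensitivity = 100, one error drops an NI to 900 and it needs 100 consecutive successful pings (about 100 seconds) to return to full health, which is the recovery-time knob the parameter is meant to provide.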
There are two types of errors that could occur on an NI:
- Hard failures: These are failures communicated by the underlying device driver to the LND and in turn the LND propagates it up to LNet
- Transient failures: These are failures such as timeouts on the system.
Hard Failures
Hard failures only apply to local interfaces, since there is no way to know if a remote interface has encountered one.
It's possible that the local interface might get into a hard failure scenario by receiving one of these events from the o2iblnd. socklnd needs to be investigated to determine if there are similar cases:
...
A new LNet/LND API will be created to pass these events from the LND to LNet.
Timeouts
LND Detected Timeouts
Upper layers request that LNet send a GET or a PUT via the LNetGet() and LNetPut() APIs. LNet then calls into the LND to complete the operation. The LND encapsulates the LNet message into an LND-specific message with its own message type; for example, in the o2iblnd it is kib_msg_t.
When the LND transmits the LND message it sets a tx_deadline for that particular transmit. This tx_deadline remains active until the remote has confirmed receipt of the message. Receipt of the message at the remote is when LNet is informed that a message has been received by the LND, done via lnet_parse(); then LNet calls back into the LND layer to receive the message.
Therefore if a tx_deadline is hit, it is safe to assume that the remote end has not received the message. The reasons are described further below. By handling the tx_deadline properly we are able to account for almost all next-hop failures: LNet would have done its best to ensure that a message has arrived at the immediate next hop.
The tx_deadline is LND-specific, and derived from the timeout (or sock_timeout) module parameter of the LND.
LNet Detected Timeouts
As mentioned above at the LNet layer LNET_MSG_PUT can be told to expect LNET_MSG_ACK to confirm that the LNET_MSG_PUT has been processed by the destination. Similarly LNET_MSG_GET expects an LNET_MSG_REPLY to confirm that the LNET_MSG_GET has been successfully processed by the destination.
The pair LNET_MSG_PUT+LNET_MSG_ACK and LNET_MSG_GET+LNET_MSG_REPLY is not covered by the tx_deadline in the LND. If the upper layer does not take precautions it could wait forever on the LNET_MSG_ACK or LNET_MSG_REPLY. Therefore it is reasonable to expect that LNET provides a TIMEOUT event if either of these messages are not received within the expected timeout.
The question is whether LNet should resend the LNET_MSG_PUT or LNET_MSG_GET if it doesn't receive the corresponding response.
Consider the case where there are multiple LNet routers between two nodes, N1 and N2. These routers can possibly be routing between different hardware, for example OPA and MLX. N1, via the LND, can reliably determine the health of the next-hop's interfaces. It cannot, however, reliably determine the health of further hops in the chain. Each node can determine the health of its immediate next-hops. Therefore, each node in the path can be trusted to ensure that the message has arrived at its immediate next hop.
If there is a failure along the path and N1 does not receive the expected LNET_MSG_ACK or LNET_MSG_REPLY, and it knows that the message has been received by its next-hop, it has no way to determine where the failure happened. If it decides to resend the message, then there is no way to reliably select a reasonable peer_ni, especially considering that the message has in fact been received properly by the next-hop. We could then say that we will simply try all the peer_nis of the destination. But in fact this will already be done by the node in the chain which is encountering a problem completing the message with its next-hop. So the net effect is the same; if both are implemented, then duplication of messages is a certainty.
Furthermore, the responsibility of end-to-end reliability falls on the shoulders of the layers using LNet. Ptlrpc's design clearly takes the end-to-end reliability of RPCs into consideration. By adding an LNET_ACK_TIMEOUT and LNET_REPLY_TIMEOUT (or adding an error status to the current events), ptlrpc can react to the error status appropriately.
The argument against this approach is mixed clusters, where not all nodes are MR capable. In this case we cannot rely on intermediary nodes to try all the interfaces of their next-hop. However, as is assumed in the Multi-Rail design, if not all nodes are MR capable, then not all Multi-Rail features are expected to work.
This approach would add the LNet resiliency required and avoid the many corner cases that would need to be addressed when receiving messages which have already been processed.
Relationship Between Different Timeouts
There are multiple timeouts kept at different layers of the code. It is important to set the timeout defaults such that it works best, and to give guidance on how the different timeouts interact together.
Looking at timeouts from a bottom up approach:
- IB/TCP/GNI re-send timeout
- LND transmit timeout
  - The timeout to wait for before a transmit fails and lnet_finalize() is called with an appropriate error code. This will result in a resend.
- Transaction timeout
  - The timeout after which LNet sends a timeout event for a missing REPLY/ACK.
- Message timeout
  - The timeout after which LNet abandons resending a message.
- Resend interval
  - The interval between each (re)send procedure.
- RPC timeout
  - INITIAL_CONNECT_TIMEOUT is set to 5 sec.
  - ldlm_timeout and obd_timeout are tunables and default to LDLM_TIMEOUT_DEFAULT and OBD_TIMEOUT_DEFAULT.
IB/TCP/GNI re-send timeout < LND transmit timeout < RPC timeout.
A retry count can be specified; that is the number of times to resend after the LND transmit timeout expires. The timeout value before failing an LNET_MSG_[PUT | GET] will be:
message timeout = (retry count * LND transmit timeout) + (resend interval * retry count)
where
retry count = min(retry count, 5)
message timeout <= transaction timeout
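The formula above, with the retry-count cap of 5, can be sketched directly. The function name is illustrative; the caller is still responsible for checking the result against the transaction timeout.

```c
#include <assert.h>

/* message timeout = retries * LND transmit timeout
 *                 + retries * resend interval,
 * with the retry count capped at 5 as specified above. */
static int capped_message_timeout(int retry_count, int lnd_tx_timeout,
                                  int resend_interval)
{
    int retries = retry_count < 5 ? retry_count : 5;
    return retries * lnd_tx_timeout + resend_interval * retries;
}
```

With the current 50-second LND transmit timeout and a 1-second resend interval, 5 retries already yield a 255-second message timeout, which illustrates the concern raised below about the 50-second default.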
It has been observed that mount could hang for a long time if a discovery ping is not responded to. This could happen if an OST is down while a client mounts the file system. In this case it does not make sense to hold up the mount procedure while discovery is taking place. For some cases, like discovery, the algorithm would specify a timeout different from what is configured.
Other cases where a timeout can be specified that overrides the configured timeout are router ping and manual ping.
One issue to consider is that the LND transmit timeout currently defaults to 50s. So if we retry up to five times we could be held up for 250s, which would be unacceptable.
The question to answer is: does it make sense for the LND transmit timeout to be set to 50s? Even though the IB/TCP/GNI timeout can be long, it might make more sense to pre-empt that communication stack and attempt to resend the message from the LNet layer on a different interface, or even reuse the same interface if only one is available.
Reasons for timeout
The discussion here refers to the LND Transmit timeout.
Timeouts could occur due to several reasons:
- The message is on the sender's queue and is not posted within the timeout
- This indicates that the local interface is too busy and is unable to process the messages on its queue.
- The message is posted but the transmit is never completed
- An actual culprit can not be determined in this scenario. It could be a sender issue, a receiver issue or a network issue.
- The message is posted, the transmit is completed, but the remote never acknowledges.
- In the IBLND, there are explicit acknowledgements in most cases when the message is received and forwarded to the LNet layer. Look below for more details.
- If an LND message is in waiting state and it didn't receive the expected response, then this indicates an issue at the remote's LND, either at the lower protocol, IB/TCP, or the notification at the LNet layer is not being processed in a timely fashion.
Each of these scenarios can be handled differently.
Desired Behavior
The desired behavior is listed for each of the above scenarios:
Scenario 1 - Message not posted
- Connection is closed
- The local interface health is updated
- Failure statistics incremented
- A resend is issued on a different local interface if there is one available.
- If no other local interface is present, or all are in failed mode, then the send fails.
Scenario 2 - Transmit not completed
- Connection is closed
- The local and remote interface health is updated
- Failure statistics incremented on both local and remote
- A resend is issued on a different path all together if there is one available.
- If no other path is present then the send fails.
Scenario 3 - No acknowledgement by remote
- Connection is closed
- The remote interface health is updated
- Failure statistics incremented
- A resend is issued on a different remote interface if there is one available.
- If no other remote interface is present then the send fails.
Note that the behavior outlined is consistent with the explicit error cases identified in the previous section. Only Scenario 2 diverges, as a different path is selected altogether, but still the same code structure is used.
Implementation Specifics
All of these cases should end up calling the lnet_finalize() API with the proper return code. lnet_finalize() will be the funnel where all these events shall be processed in a consistent manner. When the message is completed via lnet_complete_msg_locked(), the error is checked and the proper behavior as described above is executed.
Peer_timeout
In the cases when a GET or a PUT transaction is initiated an associated deadline needs to be tagged to the corresponding transaction. This deadline indicates how long LNet should wait for a REPLY or an ACK before it times out the entire transaction.
A new thread is required to check if a transaction deadline has expired. OW: Can a timer do this? Or is one timer per message too resource-intensive? If a queue is used, then ideally new messages can simply be added to the tail, with their deadline always >= the current tail. With the queue sorted by deadline, the checker thread can look at the deadline of the message at the head of the queue to determine how long it sleeps.
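The queue idea in the note above reduces the checker thread's scheduling decision to a single comparison against the head element. A minimal sketch, with the deadline array standing in for the sorted queue:

```c
#include <assert.h>

/* With the queue sorted by deadline (head = earliest), only the head
 * element is needed to decide how long the checker thread sleeps.
 * Returns -1 for "nothing queued: sleep until woken". */
static long checker_sleep_time(const long *deadlines, int n, long now)
{
    if (n == 0)
        return -1;               /* empty queue */
    if (deadlines[0] <= now)
        return 0;                /* head expired: time out that message */
    return deadlines[0] - now;   /* sleep until the head's deadline */
}
```

Because new messages always carry a deadline >= the current tail, appending keeps the queue sorted with no per-message timer, which is the resource argument made in the note.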
When a transaction deadline expires an appropriate event is generated towards PTLRPC.
When the REPLY or the ACK is received, the message is removed from the thread's check queue and a success event is generated towards PTLRPC.
Within a transaction deadline, if there is a determination that the GET or PUT message failed to be sent to the next-hop then the GET or PUT can be resent.
OW: How is this deadline determined? Naming this section peer_timeout suggests you want to use that? Conceptually we can distinguish between an LNet transaction timeout and an LNet peer timeout.
Resend Window
Resends are terminated when the peer_timeout for a message expires.
Resends should also terminate if all local_nis and/or peer_nis are in bad health. New messages can still use paths that have less than optimal health.
A message is resent after the LND transmit deadline expires, or on a failure return code. Both of these paths are handled in the same manner, since a transmit deadline triggers a call to lnet_finalize(). Both inline and asynchronous errors also end up in lnet_finalize().
Therefore the minimum number of transmits = peer_timeout / LND transmit deadline.
Depending on the frequency of errors, LNet may do more re-transmits. LNet will stop re-transmitting and declare a peer dead if the peer_timeout expires or all the different paths have been tried with no success.
In the default case, where the LND transmit timeout is set to 50 seconds and the peer_timeout is set to 180 seconds, LNet will re-transmit 3 times before it declares the peer dead.
peer_timeout can be increased to fit in more re-transmits, or the LND transmit timeout can be decreased.
Alexey Lyashkov made a presentation at LAD 16 that outlines the best values for all Lustre timeouts.
Locking
The MD is always protected by the lnet_res_lock, which is CPT specific. Other data structures, such as the_lnet.ln_msg_containers, peer_ni, local ni, etc., are protected by the lnet_net_lock.
The MD should be kept intact during the resend procedure. If there is a failure to resend then the MD should be released and message memory freed.
Selection Algorithm with Health
Algorithm Parameters
Parameter | Values |
SRC NID | Specified (A) | Not specified (B)
DST NID | local (1) | not local (2)
DST NID | MR ( C ) | NMR (D)
Note that when communicating with an NMR peer we need to ensure that the source NI is always the same: there are a few places where the upper layers use the src nid from the message header to determine its originating node, as opposed to using something like a UUID embedded in the message. This means when sending to an NMR node we need to pick a NI and then stick with that going forward.
Note: When sending to a router that scenario boils down to considering the router as the next-hop peer. The final destination peer NIs are no longer considered in the selection. The next-hop can then be MR or non-MR and the code will deal with it accordingly.
A1C - src specified, local dst, mr dst
- find the local ni given src_nid
- if no local ni found fail
- if local ni found is down, then fail
- find peer identified by the dst_nid
- select the best peer_ni for that peer
- take into account the health of the peer_ni (if we just demerit the peer_ni it can still be the best of the bunch. So we need to keep track of the peer_nis/local_nis a message was sent over, so we don't revisit the same ones again. This should be part of the message)
- If this is a resend and the resend peer_ni is specified, then select this peer_ni if it is healthy, otherwise continue with the algorithm.
- if this is a resend, do not select the same peer_ni again unless no other peer_nis are available and that peer_ni is not in a HARD_ERROR state.
A2C - src specified, route to dst, mr dst
- find local ni given src_nid
- if no local ni found fail
- if local ni found is down, then fail
- find router to dst_nid
- If no router present then fail.
- find best peer_ni (for the router) to send to
- take into account the health of the peer_ni
- If this is a resend and the resend peer_ni is specified, then select this peer_ni if it is healthy, otherwise continue with the algorithm.
- If this is a resend and the peer_ni is not specified, do not select the same peer_ni again. The original destination NID can be found in the message.
- Keep trying to send to the peer_ni even if it has been used before, as long as it is not in a HARD_ERROR state.
A1D - src specified, local dst, nmr dst
- find local ni given src nid
- if no local_ni found fail
- if local ni found is down, then fail
- find peer_ni using dst_nid
- send to that peer_ni
- If this is a resend retry the send on the peer_ni unless that peer_ni is in a HARD_ERROR state, then fail.
A2D - src specified, route to dst, nmr dst
- find local_ni given the src_nid
- if no local_ni found fail
- if local ni found is down, then fail
- find router to go through to that peer_ni
- send to the NID of that router.
- If this is a resend retry the send on the peer_ni unless that peer_ni is in a HARD_ERROR state, then fail.
B1C - src any, local dst, mr dst
- select the best_ni to send from, by going through all the local_nis that can reach any of the networks the peer is on
- consider local_ni health in the selection by selecting the local_ni with the best health value.
- If this is a resend do not select a local_ni that has already been used.
- select the best_peer_ni that can be reached by the best_ni selected in the previous step
- If this is a resend and the resend peer_ni is specified, then select this peer_ni if it is healthy, otherwise continue with the algorithm.
- If this is a resend and the resend peer_ni is not specified do not consider a peer_ni that has already been used for sending as long as there are other peer_nis available for selection. Loop around and re-use peer-nis in round robin.
- peer_nis that are selected cannot be in HARD_ERROR state.
- send the message over that path.
B2C - src any, route to dst, mr dst
- find the router that can reach the dst_nid
- find the peer for that router. (The peer is MR)
- go to B1C
B1D - src any, local dst, nmr dst
- find peer_ni using dst_nid (non-MR, so this is the only peer_ni candidate)
  - no issue if peer_ni is healthy
  - try this peer_ni even if it is unhealthy if this is the 1st attempt to send this message
  - fail if resending to an unhealthy peer_ni
- pick the preferred local_NI for this peer_ni if set
  - If the preferred local_NI is not healthy, fail sending the message and let the upper layers deal with recovery.
- otherwise if preferred local_NI is not set, then pick a healthy local NI and make it the preferred NI for this peer_ni
- send over this path
B2D - src any, route dst, nmr dst
- find route to dst_nid
- find peer_ni of router
  - no issue if peer_ni is healthy
  - try this peer_ni even if it is unhealthy if this is the 1st attempt to send this message
  - fail if resending to an unhealthy peer_ni
- pick the preferred local_NI for the dst_nid if set
  - If the preferred local_NI is not healthy, fail sending the message and let the upper layers deal with recovery.
- otherwise if preferred local_NI is not set, then pick a healthy local NI and make it the preferred NI for this peer_ni
- send over this path
Resend Behavior
LNet will keep attempting to resend a message across different local/remote NIs as long as the interfaces are only in a "soft" failure state. Interfaces are demerited when we fail to send over them due to a timeout. This is in contrast to a hard failure, which is reported by the underlying hardware and indicates that the interface can no longer be used for sending and receiving.
LNet will terminate resends of a message when one of the following conditions is met:
- Peer timeout expires
- No interfaces are available that can be used.
- A message is sent successfully.
For hard failures there needs to be a method to recover these interfaces. This can be done through a ping of the interface whether it is local or remote, since that ping will tell us if an interface is up or down.
The router checker infrastructure currently does this exact job for routers. This infrastructure can be expanded to also query the local or remote NIs which are down.
Selection of the local_ni or peer_ni will be dependent on the following criteria:
- Has the best health value
- skip interfaces in HARD_ERROR state
- closest NUMA (for local interfaces)
- most available credits
- Round Robin.
The interfaces which have soft failures will be demerited, so they will naturally be selected only as a last option.
Work Items
- refactor lnet_select_pathway() as described above.
- Health Value Maintenance/Demerit system
- Selection based on Health Value and not resending over already used interfaces unless none are available.
- Handling the new events in IBLND and passing them to LNet
- Handling the new events in SOCKLND and passing them to LNet
- Adding LNet level transaction timeout (or reuse the peer timeout) and cancelling a resend on timeout
- Handling timeout case in ptlrpc
Patches
...
Transient failures
Transient interface failures will be detected in one of two ways:
- Synchronously, as a failure returned from the call to lnd_send()
- Asynchronously, as an event that could be detected at a later point.
  - These asynchronous events can be a result of a send operation
Synchronous Send Failures
lnet_select_pathway() can fail for the following reasons:
1. Shutdown in progress
2. Out of memory
3. Interrupt signal received
4. Discovery error
5. An MD bind failure (-EINVAL, -EHOSTUNREACH)
6. Invalid information given
7. Message dropped
8. Aborting message
9. No route found
10. Internal failure
1, 2, 5, 6 and 10 are resource errors, and it does not make sense to resend the message, as any resend will likely run into the same problem.
Asynchronous Send Failures
LNet should resend the message:
- On LND transmit timeout
- On LND connection failure
- On LND send failure
Resend Handling
When there is a message send failure due to the reasons outlined above, the behavior should be as follows:
- The local or remote interface health is decremented
- Failure statistics incremented
- A resend is issued on a different local interface if there is one available. If none is available, attempt the same interface again.
- The message will continuously be resent until one of the following criteria is fulfilled:
- Message is completed successfully.
- Retry-count is reached
- Transaction timeout expires
Two new fields will be added to lnet_msg:
- msg_status - bit field that indicates the type of failure which requires a resend
- msg_deadline - the deadline for the message, calculated as send time + transaction timeout
Code Block
struct lnet_msg {
...
__u32 msg_status;
	ktime_t msg_deadline;
...
}
When a message encounters one of the errors above, the LND will update the msg_status field appropriately and call lnet_finalize(). lnet_finalize() will check if the message has timed out or if it needs to be resent, and will take action accordingly. lnet_finalize() currently calls lnet_complete_msg_locked() to continue the processing. If the message has not been sent successfully, then lnet_finalize() should call another function to resend it, lnet_resend_msg_locked().
lnet_resend_msg_locked() shall queue the message on a resend queue and wake up a thread responsible for resending messages: the monitor thread portrayed in the above diagram.
When a message is initially sent, it is tagged with a deadline and placed on the active queue. If the message is not completed within that deadline, it is finalized and removed from the active queue, and a timeout event is passed to the ULP.
If the LND times out and LNet attempts to resend, it places the message on the resend queue. A message can be on both the active and resend queues.
As shown in the diagram above, both lnet_send() and lnet_parse() put messages on the active queue. lnet_finalize() consumes messages off the active queue when it is time to decommit them.
When the LND calls lnet_finalize() on a timed out message, lnet_finalize() will put the message on the resend queue and wake up the monitor thread.
The Monitor Thread
The router checker thread will be refactored to fulfill the following responsibilities:
- Check the active queue once per second for expired messages
- The monitor thread will wake up every second and check the top of the active queue, i.e. the oldest message on the list. If that message has expired, it updates its status to TIMEDOUT and finalizes it. Finalizing the message includes removing it from both the active queue and the resend queue. It then moves on to the next message on the list and stops once it finds a message that has not expired.
- Check if there are any messages to resend on the resend_queue
  - If there are any messages queued, it'll call lnet_send() on each one.
- Check if there are any peers on the local_ni recovery queue.
- local_nis are a bit tricky to recover. How do you determine whether a local NI is good again? Do we ping a random peer NI on the same network as the local NI? If so, what if this local NI has a problem? We could be introducing other failure handling not associated with the local NI recovery during its recovery process.
- Best approach at this time is to ping itself.
- Pinging itself will force the ping message to travel down the entire stack, LND, Verbs/TCP and IB/HFI/IP. This should be sufficient to determine if the interface has recovered from the transient error encountered.
- The time delay to recover the interface will also allow for the LNDs queue to empty out under congestion.
- Check if there are any peers on the remote_ni recovery queue
- ping the remote ni
- Unfortunately, that could result in using an unhealthy local NI, but there is no way around that.
- In that case we will manage the health_value of the local NI and remote NI as described above.
The assumption is that under normal circumstances the number of re-sends should be low, so the thread will not add any logic to pace out the resend rate, such as what lnet_finalize() does.
In case of immediate failures, for example route failure, the message will not make it on the network. There is a risk that immediate failure could trigger a burst of resends for the message. This could be exaggerated if there is only one interface in the system.
This will be mitigated by having a maximum retry count. This is a configured value and will cap the number of resends in this case.
Setting the retry count to 0 will turn off retries completely; the message will fail and the failure will be propagated up on the first failure encountered.
It is possible for a message to be on the resend queue when it either completes or times out. In both of these cases it will be removed from the resend queue as well as the active queue and finalized.
Remote triggered timeout
A PUT not receiving an ACK or a GET not receiving a REPLY is considered a remote timeout. This could happen if the peer is too busy to respond in a timely manner, if the peer is going through a reboot, or due to other unforeseen circumstances.
The other case which falls into this category is an LND timeout because of a missing LND acknowledgment, e.g. IBLND_MSG_PUT_ACK.
In these cases LNet cannot safely issue a resend: the message could have already been received and processed on the remote end. To resend safely we would have to add the ability to handle duplicate requests, which is outside the scope of this project.
Therefore, handling this category of failures will be delegated to the ULP. LNet will pass up a timeout event to the ULP.
Protection
The message will continue to be protected by the LNet net CPT lock to ensure mutually exclusive access.
When the message is committed, in lnet_msg_commit(), the message CPT is assigned. This CPT value is then used to protect the message in subsequent usages. Relevant to this discussion is when the message is examined in lnet_finalize() and in the monitor thread, and either removed from the active queue or placed on the resend queue.
Timeout Value
This section discusses how LNet shall calculate its timeouts
There are two options:
- ULP provided timeout
- LNet configurable timeout
ULP Provided Timeout
The ULP, e.g. ptlrpc, will set the timeout in the lnet_libmd structure. This timeout indicates how long ptlrpc is willing to wait for an RPC response.
LNet can then set its own timeouts based on that timeout.
The drawback of this approach is that the timeout set is specific to the ULP. For example, in ptlrpc this would be the time to wait for an RPC reply.
As shown in the diagram below, the timeout provided by ptlrpc covers the RPC response sent by the peer ptlrpc, while LNet needs to ensure that a single LNET_MSG_PUT makes it to the peer LNet.
Therefore, the LNet transaction timeout is related to the ULP timeout only in the sense that it must be smaller; a reasonable LNet transaction timeout cannot otherwise be derived from the ULP timeout.
This might be enough of a reason to pass the ULP timeout down and have the following logic:
Code Block
transaction_timeout = min(ULP_timeout / num_retries, global_transaction_timeout / num_retries)
This way the LNet transaction timeout can be set much lower than the ULP timeout, considering that the ULP can vary its timeout based on an adaptive backoff algorithm.
Gliffy Diagram (not rendered)
This trickle-down approach has the advantage of simplifying the configuration of the LNet Resiliency feature, as well as making the timeouts consistent throughout the system, instead of configuring the LND timeout to be much larger than the ptlrpc timeout as it is now.
The timeout parameter will be passed down to LNet by setting it in the struct lnet_libmd
Code Block
struct lnet_libmd {
struct list_head md_list;
struct lnet_libhandle md_lh;
struct lnet_me *md_me;
char *md_start;
unsigned int md_offset;
unsigned int md_length;
unsigned int md_max_size;
int md_threshold;
int md_refcount;
unsigned int md_options;
unsigned int md_flags;
unsigned int md_niov; /* # frags at end of struct */
void *md_user_ptr;
struct lnet_eq *md_eq;
struct lnet_handle_md md_bulk_handle;
/*
* timeout to wait for this MD completion
*/
	ktime_t md_timeout;
	union {
		struct kvec iov[LNET_MAX_IOV];
		lnet_kiov_t kiov[LNET_MAX_IOV];
	} md_iov;
};
LNet Configurable timeout
A transaction timeout value will be configured in LNet and used as described above. This approach avoids passing down the ULP timeout and relies on the sys admin to set the timeout in LNet to a reasonable value, based on the network requirements of the system.
That timeout value will still be used to calculate the LND timeout as in the above approach.
Conclusion on Approach
The second approach will be taken for the first phase, as it reduces the scope of the work and allows the project to meet the deadline for 2.12.
Selection Algorithm
The selection algorithm will be modified to take health into account and will operate according to the following logic:
Code Block
for every peer_net in peer {
local_net = peer_net
if peer_net is not local
select a router
local_net = router->net
for every local_ni on local_net
check if local_ni has best health_value
check if local_ni is nearest MD NUMA
check if local_ni has the most available credits
check if we need to use round robin selection
If above criteria is satisfied
best_ni = local_ni
for every peer_ni on best_ni->net
check if peer_ni has best health value
check if peer_ni has the most available credits
check if we need to use round robin selection
If above criteria is satisfied
best_peer_ni = peer_ni
		send(best_ni, best_peer_ni)
}
The above algorithm will always prefer the NIs that are most healthy. This is important because dropping even one message will likely result in client evictions, so it is important to always ensure we're using the best path possible.
LND Interface
LNet shall calculate the message timeout as follows:
message timeout = transaction timeout / retry count
The message timeout will be stored in the lnet_msg structure and passed down to the LND via lnd_send().
LND Transmits (o2iblnd specific discussion)
The ULP requests that LNet send a GET or a PUT via the LNetGet() and LNetPut() APIs. LNet then calls into the LND to complete the operation. The LND can complete the LNet PUT/GET via a set of LND messages, as shown in the diagrams below.
When the LND transmits the LND message it sets a tx_deadline for that particular transmit. This tx_deadline remains active until the remote has confirmed receipt of the message if an acknowledgment is expected, or until the tx is completed if no acknowledgment is expected. Receipt of the message at the remote is when LNet is informed that a message has been received by the LND, via lnet_parse(); LNet then calls back into the LND layer to receive the message.
By handling the tx_deadline properly we are able to account for almost all next-hop failures. LNet would've done its best to ensure that a message has arrived at the immediate next hop.
The tx_deadline is LND-specific, and derived from the timeout (or sock_timeout) module parameter of the LND.
Gliffy Diagram (not rendered)
LND timeout
PUT
Gliffy Diagram (not rendered)
GET
Gliffy Diagram (not rendered)
A third type of message that the LND sends is the IBLND_MSG_IMMEDIATE. The data is embedded in the message and posted. There is no handshake in this case.
For the PUT case described in the sequence diagram, the initiator sends two messages:
- IBLND_MSG_PUT_REQ
- IBLND_MSG_PUT_DONE
Both of these messages are sent using the same tx structure. The tx is allocated and placed on a waiting queue. When the IBLND_MSG_PUT_ACK is received the waiting tx is looked up and used to send the IBLND_MSG_PUT_DONE.
When kiblnd_queue_tx_locked() is called for IBLND_MSG_PUT_REQ, it sets the tx_deadline as follows:
Code Block
timeout_ns = *kiblnd_tunables.kib_timeout * NSEC_PER_SEC;
tx->tx_deadline = ktime_add_ns(ktime_get(), timeout_ns);
When kiblnd_queue_tx_locked() is called for IBLND_MSG_PUT_DONE, it resets the tx_deadline again.
This presents an obstacle for the LNet Resiliency feature. LNet provides a timeout for the LND as described above. From LNet's perspective this deadline is for the LNet PUT message. However, if we simply use that value for the timeout_ns calculation, then in essence we will be waiting 2 * LND timeout for the completion of the LNet PUT message. This will mean fewer re-transmits.
Therefore the LND, since it has knowledge of its own protocols, will need to divide the timeout provided by LNet by the number of transmits it needs to complete the LNet-level message:
- LNET_MSG_GET: Requires only IBLND_MSG_GET_REQ. Use the LNet provided timeout as is.
- LNET_MSG_PUT: Requires IBLND_MSG_PUT_REQ and IBLND_MSG_PUT_DONE to complete the LNET_MSG_PUT. Use the LNet provided timeout / 2
- LNET_MSG_GET/LNET_MSG_PUT with < 4K payload: Requires IBLND_MSG_IMMEDIATE. Use LNet provided timeout as is.
System Timeouts
There are multiple timeouts kept at different layers of the code. The LNet Resiliency feature will attempt to reduce the complexity and ambiguity of setting timeouts in the system.
This will be done by using a trickle down approach as mentioned before. The top level transaction timeout will be provided to LNet for each PUT/GET send request. If one is not provided LNet will use a configurable default.
LNet will calculate the following timeouts from the transaction timeout:
- Message timeout = Transaction timeout / retry count
- LND timeout = Message timeout / number of LND messages used to complete an LNet PUT/GET
Implementation Specifics
Reasons for timeout
The discussion here refers to the LND Transmit timeout.
Timeouts could occur due to several reasons:
- The message is on the sender's queue and is not posted within the timeout
- This indicates that the local interface is too busy and is unable to process the messages on its queue.
- The message is posted but the transmit is never completed
- An actual culprit cannot be determined in this scenario. It could be a sender issue, a receiver issue, or a network issue.
- The message is posted, the transmit is completed, but the remote never acknowledges.
- In the IBLND, there are explicit acknowledgements in most cases when the message is received and forwarded to the LNet layer. Look below for more details.
- If an LND message is in waiting state and it didn't receive the expected response, then this indicates an issue at the remote's LND, either at the lower protocol, IB/TCP, or the notification at the LNet layer is not being processed in a timely fashion.
Each of these scenarios can be handled differently.
Desired Behavior
The desired behavior is listed for each of the above scenarios:
Scenario 1 - Message not posted
- Connection is closed
- The local interface health is decremented
- Failure statistics incremented
- A resend is issued.
- The selection algorithm will deprioritize the unhealthy NI
Scenario 2 - Transmit not completed
- Connection is closed
- The local and remote interface health is updated
- Failure statistics incremented on both local and remote
- The selection algorithm will deprioritize the unhealthy NIs
Scenario 3 - No acknowledgement by remote
- Connection is closed
- The remote interface health is updated
- Failure statistics incremented
- The selection algorithm will deprioritize the unhealthy NIs
Progress
Code Block
LNet Health
Refactor lnet_select_pathway() - DONE
add health value per ni - DONE
add lnet_health_range - DONE
handle local timeouts - DONE
When re-sending a message we don't need to ensure we send to the same peer_ni as the original send. There are two cases to consider:
MR peer: we can just use the current selection algorithm to resend a message
Non-MR peer: there will only be one peer_ni anyway (or the preferred NI will be set) and we'll need to use the same local NI when sending to a Non-MR peer.
Modify the LNDs to set the appropriate error code on timeout
handle tx timeout due to being stuck on the queues for too long
Due to local problem.
At this point we should be able to handle trying different interfaces if there is an interface timeout
o2iblnd
socklnd
Introduce retry_count
Only resend up to the retry_count
This should be user configurable
Should have a max value of 5 retries
Rate limit resend rate
Introduce resend_interval
Make sure to pace out the resends by that interval
We need to guard against situations where there is an immediate failure which triggers an immediate resend, causing a resend tight loop
Refactor the router pinger thread to handle resending.
lnet_finalize() queues those messages on a queue and wakes up the router pinger thread
router pinger wakes up every second (or when woken up manually) and goes through the queue: it times out and fails any messages that have passed their deadline, checks that a message to be resent is not resent before its resend interval has elapsed, and resends any messages that need to be resent.
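The router pinger's per-message pass described above can be sketched as a small check; the `msg` struct and time handling here are simplified assumptions, not the real lnet_msg structure.

```c
#include <stdbool.h>

/* Illustrative stand-in for a queued message awaiting resend/timeout. */
struct msg {
	long deadline;		/* absolute time by which the message must complete */
	long next_resend;	/* earliest time this message may be resent again */
	bool failed;
	int  resends;
};

/* One pinger pass over a queued message: fail it if past its deadline,
 * otherwise resend it only if its resend interval has elapsed.  Returns
 * true if a resend was issued on this pass. */
bool pinger_check(struct msg *m, long now, long resend_interval)
{
	if (now >= m->deadline) {
		m->failed = true;	/* time out and fail the message */
		return false;
	}
	if (now < m->next_resend)
		return false;		/* pace resends by the resend interval */
	m->resends++;
	m->next_resend = now + resend_interval;
	return true;
}
```

This also captures the pacing guard above: a resend is skipped, not issued, when it arrives before the resend interval has elapsed.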
Introduce an LND API to read the retransmit timeout.
Calculate the message timeout as follows:
message timeout = (retry count * LND transmit timeout) + (resend interval * retry count)
Message timeout is the timeout by which LNet abandons retransmits
This implies that LNet has detected some sort of a failure while sending a message
use the message timeout instead of the peer timeout as the deadline for the message
If the message times out, a failure event is propagated to the top layer.
o2iblnd
socklnd
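The message-timeout formula above reduces to a simple product-and-sum; a minimal sketch, with units (seconds) and example values being illustrative only:

```c
/* Message timeout per the formula above:
 *   message timeout = (retry count * LND transmit timeout)
 *                   + (resend interval * retry count)
 * This is the deadline by which LNet abandons retransmits. */
long lnet_message_timeout(long retry_count, long lnd_tx_timeout,
			  long resend_interval)
{
	return retry_count * lnd_tx_timeout + resend_interval * retry_count;
}
```

For example, with the maximum of 5 retries, a 10-second LND transmit timeout, and a 1-second resend interval, the message timeout would be 5*10 + 1*5 = 55 seconds.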
handle local NIs down events from the LND.
NIs are flagged as down and are not considered as part of the selection process.
Can only come up by another event from the LND.
o2iblnd
socklnd
Move the peer timeout from the LND to the LNet.
It should still be per NI.
Add userspace support for setting retry count
Add userspace support for setting retransmit interval
Add peer_ni_healthvalue
This value will reflect the health of the peer_ni and should initially be set to the peer credits.
Modify the selection algorithm to select the peer_ni based on the average of the health value and the credits
Adjust the peer_ni health value due to failure/success
On Success the health value should be incremented if it's not at its maximum value.
On Failure the health value should be decremented (stays >= 0)
Failures will either be due to remote tx timeout or network error
Modify the LNDs to set the appropriate error code on tx timeout
o2iblnd
socklnd
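The health-value adjustment described above (increment on success up to a maximum, decrement on failure but never below zero) can be sketched as follows; passing the maximum in as a parameter is an assumption for illustration, reflecting the initial peer-credits value.

```c
#include <stdbool.h>

/* Sketch of the peer_ni health adjustment described above: incremented on
 * success if not at its maximum, decremented on failure but kept >= 0. */
int adjust_health(int health, bool success, int max_health)
{
	if (success)
		return health < max_health ? health + 1 : health;
	return health > 0 ? health - 1 : 0;
}
```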
Handle transaction timeout
Transaction timeout is the deadline by which LNet knows that a PUT or a GET did not receive the ACK or REPLY respectively.
When a PUT or a GET is sent successfully,
it is then put on a queue if it expects an ACK or a REPLY.
router pinger will wake up every second and check whether these messages have received the expected response within the specified timeout. If not, they are timed out.
Provide a mechanism to override the transaction timeout.
When sending a message the caller of LNetGet()/LNetPut() should specify a timeout for the transaction. If not provided then it defaults to the global transaction timeout.
Add a transaction timeout event to be sent to the upper layer.
Handle transaction timeout in the upper layer (ptlrpc)
Add userspace support for maximum transaction timeout
This was added in 2.11 to solve the blocked mount
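The caller-override behaviour described above amounts to a simple defaulting rule; a sketch only, where the zero-means-not-provided convention is an assumption for illustration, not the actual LNetGet()/LNetPut() interface.

```c
/* Per-transaction timeout defaulting as described above: a timeout
 * supplied by the caller of LNetGet()/LNetPut() wins; otherwise fall
 * back to the global transaction timeout.  Zero meaning "not provided"
 * is an illustrative convention. */
long effective_transaction_timeout(long caller_timeout, long global_timeout)
{
	return caller_timeout > 0 ? caller_timeout : global_timeout;
}
```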
Add the following statistics
The number of resends due to local tx timeout per local NI
The number of resends due to the remote tx timeout per peer NI
The number of resends due to a network timeout per local and peer NI
The number of local tx timeouts
The number of remote tx timeouts
The number of network timeouts
The number of local interface down events
The number of local interface up events.
The average time it takes to successfully send a message per peer NI
The average time it takes to successfully complete a transaction per peer NI
...
O2IBLND Detailed Discussion
...
Timeout Handling
LND TX Timeout
PUT
(Gliffy diagram: LND TX timeout handling for the PUT case)
...
- TX timeout can be triggered because the TX remains on one of the outgoing queues for too long. This would indicate that there is something wrong with the local NI. It's either too congested or otherwise problematic. This should result in us trying to resend using a different local NI but possibly to the same peer_ni.
- TX timeout can be triggered because the TX is posted via ib_post_send() but has not completed. In this case we can safely conclude that the peer_ni is either congested or otherwise down. This should result in us trying to resend to a different peer_ni, but potentially using the same local NI.
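The two timeout causes above lead to opposite retry decisions, which can be summarized in a small sketch; the enum and struct names here are illustrative assumptions, not identifiers from the o2iblnd code.

```c
#include <stdbool.h>

/* Two causes of an LND TX timeout, per the discussion above. */
enum tx_timeout_cause {
	TX_TIMEOUT_ON_QUEUE,		/* TX sat too long on an outgoing queue */
	TX_TIMEOUT_NO_COMPLETION,	/* TX posted via ib_post_send() but never completed */
};

struct retry_hint {
	bool try_different_local_ni;
	bool try_different_peer_ni;
};

/* A queue timeout implicates the local NI, so prefer a different local NI
 * (possibly the same peer_ni); a completion timeout implicates the
 * peer_ni, so prefer a different peer_ni (possibly the same local NI). */
struct retry_hint tx_timeout_hint(enum tx_timeout_cause cause)
{
	struct retry_hint h = { false, false };

	if (cause == TX_TIMEOUT_ON_QUEUE)
		h.try_different_local_ni = true;
	else
		h.try_different_peer_ni = true;
	return h;
}
```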
GET
(Gliffy diagram: LND TX timeout handling for the GET case)
...