The credit flow is as follows:

1. conn→ibc_credits is set to the negotiated queue depth in kiblnd_passive_connect() and kiblnd_check_connreply().
2. kiblnd_post_tx_locked() is passed a credit value. IBLND_MSG_PUT_NAK, IBLND_MSG_PUT_ACK, IBLND_MSG_PUT_DONE, IBLND_MSG_GET_DONE and IBLND_MSG_NOOP (for the v2 protocol) require no credits.
3. msg→ibm_credits is set to conn→ibc_outstanding_credits (conn→ibc_outstanding_credits is described below).
4. In kiblnd_handle_rx(), depending on the message, we increment either conn→ibc_outstanding_credits or conn→ibc_reserved_credits by 1 after we call ib_post_recv() to repost the receive buffer.
5. ibc_reserved_credits is used to move messages from the ibc_tx_queue_rsrvd queue to the ibc_tx_queue for sending.
6. ibc_outstanding_credits is assigned to msg→ibm_credits and sent to the peer in the next transmit. That takes us back to (2).
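To make the flow above concrete, here is a minimal userspace sketch of the accounting. The struct and field names mirror the ones discussed, but the locking, queues and verbs calls of the real kiblnd code are elided; ibm_needs_credit and the *_sketch names are invented for illustration:

```c
#include <stdbool.h>

struct kib_conn_sketch {
	int ibc_credits;             /* send credits left on this connection */
	int ibc_outstanding_credits; /* credits owed back to the peer */
	int ibc_reserved_credits;    /* credits for the ibc_tx_queue_rsrvd queue */
};

struct kib_msg_sketch {
	int  ibm_credits;      /* credits being returned to the peer */
	bool ibm_needs_credit; /* false for PUT_NAK/PUT_ACK/PUT_DONE/GET_DONE/NOOP */
};

/* Steps (2)-(3): posting a tx consumes a credit (unless the message type
 * is credit-free) and piggybacks any credits we owe the peer. */
static bool post_tx_sketch(struct kib_conn_sketch *conn,
			   struct kib_msg_sketch *msg)
{
	if (msg->ibm_needs_credit) {
		if (conn->ibc_credits == 0)
			return false; /* stalled: tx stays queued until credits return */
		conn->ibc_credits--;
	}
	msg->ibm_credits = conn->ibc_outstanding_credits; /* step (6) */
	conn->ibc_outstanding_credits = 0;
	return true;
}

/* Step (4): on receive, bank the credits the peer returned, then owe the
 * peer one credit (or reserve it for the rsrvd queue) after the receive
 * buffer has been reposted with ib_post_recv(). */
static void handle_rx_sketch(struct kib_conn_sketch *conn,
			     struct kib_msg_sketch *msg, bool reserve)
{
	conn->ibc_credits += msg->ibm_credits;
	if (reserve)
		conn->ibc_reserved_credits++;
	else
		conn->ibc_outstanding_credits++;
}
```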
The underlying assumption in the connection management algorithm is that both sides are exchanging messages. If the call flow changes such that one side only sends events and the other side does not respond with IMMEDIATE messages, the initiating side will run out of credits and become stuck, since none of its credits are being returned.
It might be better to return the credit for an IMMEDIATE message once the tx completes: when receiving an IMMEDIATE message, do not increment ibc_outstanding_credits.
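One possible reading of that suggestion, as a variant of the rx handler sketched earlier (is_immediate and the surrounding names are illustrative, not the actual kernel logic):

```c
/* Hypothetical variant of handle_rx_sketch(): an IMMEDIATE message does
 * not add to ibc_outstanding_credits; its credit would instead be given
 * back when the corresponding tx completes (e.g. via an explicit NOOP). */
static void handle_rx_sketch_v2(struct kib_conn_sketch *conn,
				struct kib_msg_sketch *msg,
				bool is_immediate, bool reserve)
{
	conn->ibc_credits += msg->ibm_credits;
	if (is_immediate)
		return; /* credit returned on tx completion instead */
	if (reserve)
		conn->ibc_reserved_credits++;
	else
		conn->ibc_outstanding_credits++;
}
```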
Queue depth is negotiated as follows:
Patch https://review.whamcloud.com/#/c/28850/5 leverages this algorithm by decreasing the active side's queue depth when attempting to create the QP, then sending the adjusted queue depth to the passive side. The passive side creates its connection structure and reduces the queue depth further if necessary, then sends it back to the active side, which uses the returned value as the connection's queue depth and credit count.
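A compressed sketch of that handshake; the function names are illustrative and create_qp_with_depth() is a hypothetical stand-in for the driver's QP creation failing at large depths:

```c
/* Hypothetical stand-in for QP creation rejecting too-large depths. */
static int create_qp_with_depth(int qd)
{
	return qd > 4096 ? -1 : 0; /* pretend the HCA caps us at 4096 */
}

/* Active side: shrink the depth until the QP can be created, then
 * advertise the depth that worked in the connect request. */
static int active_negotiate_depth(int requested)
{
	int qd;

	for (qd = requested; qd > 0; qd--)
		if (create_qp_with_depth(qd) == 0)
			break;
	return qd;
}

/* Passive side: reduce further if needed and echo the result in the
 * connect reply; the active then adopts the reply as the connection's
 * queue depth and credit count. */
static int passive_negotiate_depth(int from_active, int local_limit)
{
	return from_active < local_limit ? from_active : local_limit;
}
```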
There are some MLX5 limitations when calculating max_send_wr, which relies on the negotiated queue depth.
Here is the reply I got from a Mellanox engineer:

"Hi, I sent in the past an explanation to this list and I am going to repeat it. The number reported for max_qp_wr is the maximum value the HCA supports. But it is not guaranteed that this maximum is supported for every configuration of a QP. For example, the number of send SGEs and the transport service can affect this max value. From the spec:

11.2.1.2 QUERY HCA
Description: Returns the attributes for the specified HCA. The maximum values defined in this section are guaranteed not-to-exceed values. It is possible for an implementation to allocate some HCA resources from the same space. In that case, the maximum values returned are not guaranteed for all of those resources simultaneously.

mlx5-supported devices work as described above. mlx4-supported devices have some flexibility allowing them to use larger work queues, which is why you can define 16K WRs for mlx4 while for mlx5 you can only do 8K (in your specific case)."
Snippet from a discussion with a Mellanox engineer:

"max_qp_wr: Maximum number of outstanding WRs on any work queue. I checked max_qp_wr on the cards I have: ConnectX-3 and ConnectX-3 Pro (mlx4): 16351; Connect-IB, ConnectX-4 and ConnectX-5 (mlx5): 32768. Amir, it looks like max_send_wr and max_recv_wr are limited by ib_device_attr.max_qp_wr, but I see a lot of other checks at QP creation in drivers/infiniband/hw/mlx5/qp.c:create_kernel_qp() and drivers/infiniband/hw/mlx5/qp.c:calc_sq_size(). I also see the following note: 'There may be RDMA devices that for specific transport types may support less outstanding Work Requests than the maximum reported value.'"
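The practical consequence is that max_qp_wr can only be treated as a ceiling, and QP creation must be retried with smaller queues when the driver rejects the requested size. A userspace libibverbs sketch of that pattern (the halving strategy and function name are illustrative, not what ko2iblnd does):

```c
#include <infiniband/verbs.h>
#include <stddef.h>

static struct ibv_qp *create_qp_clamped(struct ibv_pd *pd, struct ibv_cq *cq,
					uint32_t wanted_wr)
{
	struct ibv_device_attr attr;
	struct ibv_qp_init_attr init = {
		.send_cq = cq,
		.recv_cq = cq,
		.qp_type = IBV_QPT_RC,
		.cap     = { .max_send_sge = 1, .max_recv_sge = 1 },
	};

	if (ibv_query_device(pd->context, &attr))
		return NULL;
	if (wanted_wr > (uint32_t)attr.max_qp_wr)
		wanted_wr = attr.max_qp_wr; /* advertised not-to-exceed value */

	/* ...but mlx5 may still reject it, so back off and retry */
	for (; wanted_wr > 0; wanted_wr /= 2) {
		struct ibv_qp *qp;

		init.cap.max_send_wr = wanted_wr;
		init.cap.max_recv_wr = wanted_wr;
		qp = ibv_create_qp(pd, &init);
		if (qp)
			return qp;
	}
	return NULL;
}
```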
Concurrent sends were intended to limit the maximum number of in-flight transfers for the entire system. However, max_send_wr was being multiplied by concurrent sends, which treats the limit as per-connection rather than system-wide, and that is not the intent.
It would be better to remove the concurrent sends tunable completely, which will simplify the code, and instead rely on the queue depth to limit the in-flight transfers per connection.
The jury is still out on this change. It needs to be tested in the field to see whether it has a negative impact on performance.
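For concreteness, a toy calculation of the sizing bug described above (the numbers and the exact formula are illustrative; the real computation in the driver involves more terms):

```c
#include <stdio.h>

int main(void)
{
	int queue_depth      = 128; /* negotiated per connection */
	int concurrent_sends = 64;  /* intended as a system-wide cap */

	/* old sizing: the multiplication makes the cap per-connection,
	 * and the product can exceed what the HCA accepts */
	int old_max_send_wr = queue_depth * concurrent_sends;

	/* proposed sizing: queue depth alone bounds the in-flight
	 * transfers on each connection */
	int new_max_send_wr = queue_depth;

	printf("old max_send_wr = %d, new max_send_wr = %d\n",
	       old_max_send_wr, new_max_send_wr);
	return 0;
}
```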