<!--\r
div.Section1\r
{page:Section1;}\r
+span.GramE\r
+ {}\r
-->\r
</style>\r
</head>\r
<h1 align="center">User's Manual</h1>\r
<h2 align="center">Release 2.3</h2>\r
<h3 align="center">\r
-<!--webbot bot="Timestamp" S-Type="EDITED" S-Format="%m/%d/%Y" startspan -->03/29/2010<!--webbot bot="Timestamp" endspan i-checksum="12643" --></h3>\r
+<!--webbot bot="Timestamp" S-Type="EDITED" S-Format="%m/%d/%Y" startspan -->04/23/2010<!--webbot bot="Timestamp" endspan i-checksum="12549" --></h3>\r
<h2 align="left"><u>Overview</u></h2>\r
<p align="left"><span style="FONT-SIZE: 12pt; FONT-FAMILY: 'Times New Roman'">\r
The OpenFabrics Enterprise Distribution for Windows package is composed of software modules intended \r
<li>\r
<h4 align="left"><a href="#DAPLTEST">DAPLtest</a></h4></li>\r
<li>\r
- <h4 align="left"><a href="#DAPLtest-examples">DAPLtest Examples</a></h4>\r
- </li>\r
+ <h4 align="left"><a href="#DAPLtest-examples">DAPLtest Examples</a></h4></li>\r
<li>\r
<p align="left"><b><a href="#DAT_App_Build">DAT Application Build</a></b><br>\r
<br> </li>\r
<h3 align="left"><u><font color="#0000FF"><a href="#QLOGICVNIC">QLogic VNIC Driver</a></font></u></h3></li>\r
<li>\r
<h3 align="left"><u><a href="#InfiniBand_Software_Development_Kit">\r
- InfiniBand Software Development Kit</a></u></h3></li>\r
+ OFED Software Development Kit</a></u></h3>\r
+<ul>\r
+ <li>\r
+ <p align="left"><a href="#OFED_InfiniBand_Verbs">OFED InfiniBand Verbs</a><br>\r
+ </li>\r
+ <li>\r
+ <p align="left"><a href="#RDMA_CM_-_Communications_Manager">RDMA CM - Communications Manager</a><br>\r
+ </li>\r
+</ul>\r
+</li>\r
<li>\r
<h3 align="left"><a href="#WinVerbs">WinVerbs</a></h3></li>\r
</ul>\r
<br>\r
-g get multicast group info<br>\r
<br>\r
--m get multicast member info. If a group is specified, limit the<br>\r
-output to the group specified and print one line containing only<br>\r
-the GUID and node description for each entry. Example: saquery<br>\r
+-m get multicast member info. If a group is specified, limit the\r
+output to the group specified and print one line containing only\r
+the GUID and node description for each entry. Example: saquery\r
-m 0xc000<br>\r
<br>\r
-x get LinkRecord info<br>\r
<br>\r
--src-to-dst<br>\r
-get a PathRecord for <src:dst> where src and dst are either node<br>\r
+get a PathRecord for &lt;src:dst&gt; where src and dst are either node\r
names or LIDs<br>\r
<br>\r
--sgid-to-dgid<br>\r
-get a PathRecord for sgid to dgid where both GIDs are in an IPv6<br>\r
-format acceptable to inet_pton(3).<br>\r
+get a PathRecord for sgid to dgid where both GIDs are in an IPv6 format \r
+acceptable to inet_pton.<br>\r
<br>\r
-C <ca_name><br>\r
use the specified ca_name.<br>\r
<br>\r
-a set activity count<br>\r
<br>\r
-<br>\r
COMMON OPTIONS<br>\r
+<br>\r
Most OFED diagnostics take the following common flags. The exact list<br>\r
of supported flags per utility can be found in the usage message and<br>\r
can be shown using the util_name -h syntax.<br>\r
target ioc guid parameters as input.<br> </span></p>\r
<p class="MsoPlainText" style="MARGIN: 0in 0in 0pt">\r
<span style="font-size: 12pt; font-family: Times New Roman">\r
-Usage:--\r
+Usage:\r
<br><br>\r
-<span style="text-decoration: underline;">To create child vnic devices</span>\r
+<span style="text-decoration: underline;">Create child vnic devices</span>\r
<br><br>\r
qlgcvnic_config -c {caguid} {iocguid} {instanceid} {interface description}\r
<br><br>\r
<h3> <a href="#TOP"><font color="#000000"><return-to-top></font></a></h3>\r
<p> </p>\r
<BLOCKQUOTE></BLOCKQUOTE>\r
-<h2><a name="InfiniBand_Software_Development_Kit">InfiniBand Software \r
+<h2><a name="InfiniBand_Software_Development_Kit">OFED Software \r
Development Kit</a></h2>\r
<hr>\r
-<p>If selected during a OFED install, the IB Software Development Kit will \r
-be installed as '%SystemDrive%\IBSDK'. Underneath the IBSDK\ folder you will find an \r
-include folder 'Inc\', library definition files 'Lib\' along with a \r
-'Samples' folder.</p>\r
+<p>If selected during install, the OFED Software Development Kit will \r
+be installed as '%SystemDrive%\OFED_SDK'. Underneath the OFED_SDK\ folder you will find \r
+the following folders:</p>\r
+<ul>\r
+ <li>'Inc\' - include files</li>\r
+ <li>'Lib\' - library definition files</li>\r
+ <li>'Samples\' - example code demonstrating how to build and use OFED \r
+ interfaces.</li>\r
+</ul>\r
<h4>Compilation:</h4>\r
<blockquote>\r
- <p>Add the additional include path '%SystemDrive%\IBSDK\Inc'; resource files \r
+ <p>Add the additional include path '%SystemDrive%\OFED_SDK\Inc'; resource files \r
may also use this path.</p>\r
</blockquote>\r
<h4>Linking:</h4>\r
<blockquote>\r
- <p>Add the additional library search path '%SystemDrive%\IBSDK\Lib'.</p>\r
+ <p>Add the additional library search path '%SystemDrive%\OFED_SDK\Lib'.</p>\r
<p>Include dependent libraries: ibal.lib and complib.lib, or ibal32.lib and \r
complib32.lib for Win32 applications on 64-bit platforms.</p>\r
</blockquote>\r
<h4>Samples:</h4>\r
<ul>\r
- <li>DDK\ demonstrates how to build an IB application in the <b>Windows \r
- Server 2003 SP1 DDK </b> (Driver Development Kit) environment.<br>\r
+ <li>WDK\ demonstrates how to build an OFED application in the <b>Windows \r
+ Driver Kit (WDK)</b> environment.<br>\r
+ Consult the README.txt file for details.<br>\r
+ See <a href="http://www.microsoft.com/whdc/Devtools/wdk/default.mspx">\r
+ http://www.microsoft.com/whdc/Devtools/wdk/default.mspx</a> for WDK details.<br>\r
+ </li>\r
+ <li>rdma_bw\ demonstrates how to build an OFED IB verbs \r
+ application in the Visual Studio environment.<br>\r
+ Consult the README.txt file for details.<br>\r
+ </li>\r
+ <li>cmtest\ demonstrates how to build an IB (ibal) application in the \r
+ Visual Studio environment.<br>\r
Consult the README.txt file for details.<br>\r
- See <a href="http://www.microsoft.com/whdc/devtools/ddk/default.mspx">\r
- http://www.microsoft.com/whdc/devtools/ddk/default.mspx</a> for DDK \r
- installation details.<br>\r
</li>\r
- <li>VS\ demonstrates how to build an IB application in the <b>Windows Server \r
- 2003 R2 SP1 </b>Visual Studio 2005 environment.<br>\r
- Consult the README.txt file for details.</li>\r
</ul>\r
\r
+<p align="left"><a href="#TOP"><font color="#000000"><<b>return-to-top</b>></font></a></p>\r
+<p align="left"> </p>\r
+<h2 align="left"><a name="OFED_InfiniBand_Verbs">OFED InfiniBand Verbs</a></h2>\r
+<hr>\r
+</span>\r
+<span style="font-size: 12pt; ">\r
+<p align="left"><b>NAME</b><br>\r
+<br> \r
+libibverbs.lib - OpenFabrics Enterprise Distribution (OFED) InfiniBand verbs library <br><br>\r
+<b>SYNOPSIS<br>\r
+</b><br> \r
+#include <infiniband/verbs.h><br>\r
+<br><b>DESCRIPTION</b></p>\r
+<blockquote>\r
+ <p align="left">This library is an implementation of the verbs defined in chapter 11 of the InfiniBand \r
+Architecture Specification, volume 1, release 1.2. It handles the control path of creating, \r
+modifying, querying and destroying resources such as Protection Domains (PD), \r
+Completion Queues (CQ), Queue-Pairs (QP), Shared Receive Queues (SRQ), Address \r
+Handles (AH), and Memory Regions (MR). It also handles sending and receiving data \r
+posted to QPs and SRQs, and retrieving completions from CQs using polling and \r
+completion events.<br><br>The control path is implemented through system calls to the uverbs kernel module, \r
+which in turn calls the low-level hardware driver. The data path is implemented through \r
+calls made to the low-level hardware library, which in most cases interacts directly with \r
+the hardware, providing kernel and network-stack bypass (saving context/mode switches) \r
+along with zero copy and an asynchronous I/O model.<br><br>Typically, in network and RDMA programming, there are operations which \r
+involve interaction with remote peers (such as address resolution and connection \r
+establishment) and remote entities (such as route resolution and joining a \r
+multicast group under IB), where a resource managed through IB verbs, such as a QP \r
+or an AH, is eventually created or affected by this interaction. In such \r
+cases, applications whose addressing semantics are based on IP can use librdmacm \r
+(see rdma_cm), which works in conjunction with libibverbs.<br><br>This library is thread-safe, and verbs can be called from any thread in \r
+the process (the same resource can even be handled from different threads; for \r
+example, ibv_poll_cq can be called from more than one thread).<br><br>However, it is up to the user to stop using a resource after it has been \r
+destroyed (whether by the same thread or by another); failing to do so may result in a \r
+segmentation fault.</p>\r
+ <p align="left">The following shall be declared as functions and may also be defined as\r
+macros.</p>\r
+</blockquote>\r
+<blockquote>\r
+ <p align="left">Function prototypes are provided in \r
+<span style="font-size: 12pt; ">\r
+ %SystemDrive%</span>\OFED_SDK\inc\infiniband\verbs.h.<br>\r
+ <br>Link to \r
+ %SystemDrive%\OFED_SDK\lib\libibverbs.lib</p>\r
+</blockquote>\r
+<p align="left"><b>Device functions</b></p>\r
+<blockquote>\r
+ <p align="left">struct ibv_device **<a href="#IBV_GET_DEVICE_LIST">ibv_get_device_list</a>(int *num_devices);</p>\r
+ <p align="left">void <a href="#IBV_FREE_DEVICE_LIST">ibv_free_device_list</a>(struct ibv_device **list);</p>\r
+ <p align="left">const char *<a href="#IBV_GET_DEVICE_NAME">ibv_get_device_name</a>(struct ibv_device *device);</p>\r
+ <p align="left">uint64_t <a href="#IBV_GET_DEVICE_GUID">ibv_get_device_guid</a>(struct ibv_device *device);</p>\r
+</blockquote>\r
+<p align="left"><b>Context functions</b></p>\r
+<blockquote>\r
+ <p align="left">struct ibv_context *<a href="#IBV_OPEN_DEVICE">ibv_open_device</a>(struct ibv_device *device);</p>\r
+ <p align="left">int <a href="#IBV_CLOSE_DEVICE">ibv_close_device</a>(struct ibv_context *context);</p>\r
+</blockquote>\r
+<p align="left"><b>Queries</b></p>\r
+<blockquote>\r
+ <p align="left">int <a href="#IBV_QUERY_DEVICE">ibv_query_device</a>(struct ibv_context *context,\r
+struct ibv_device_attr *device_attr);</p>\r
+ <p align="left">int <a href="#IBV_QUERY_PORT">ibv_query_port</a>(struct ibv_context *context, uint8_t port_num,\r
+struct ibv_port_attr *port_attr);</p>\r
+ <p align="left">int <a href="#IBV_QUERY_PKEY">ibv_query_pkey</a>(struct ibv_context *context, uint8_t port_num,\r
+int index, uint16_t *pkey);</p>\r
+ <p align="left">int <a href="#IBV_QUERY_GID">ibv_query_gid</a>(struct ibv_context *context, uint8_t port_num,\r
+int index, union ibv_gid *gid);</p>\r
+</blockquote>\r
+<p align="left"><b>Asynchronous events</b></p>\r
+<blockquote>\r
+ <p align="left">int <a href="#IBV_GET_ASYNC_EVENT">ibv_get_async_event</a>(struct ibv_context *context,\r
+struct ibv_async_event *event);</p>\r
+ <p align="left">void <a href="#IBV_ACK_ASYNC_EVENT">ibv_ack_async_event</a>(struct ibv_async_event *event);</p>\r
+</blockquote>\r
+<p align="left"><b>Protection Domains</b></p>\r
+<blockquote>\r
+ <p align="left">struct ibv_pd *<a href="#IBV_ALLOC_PD">ibv_alloc_pd</a>(struct ibv_context *context);</p>\r
+ <p align="left">int <a href="#IBV_DEALLOC_PD">ibv_dealloc_pd</a>(struct ibv_pd *pd);</p>\r
+</blockquote>\r
+<p align="left"><b>Memory Regions</b></p>\r
+<blockquote>\r
+ <p align="left">struct ibv_mr *<a href="#IBV_REG_MR">ibv_reg_mr</a>(struct ibv_pd *pd, void *addr,\r
+size_t length, enum ibv_access_flags access);</p>\r
+ <p align="left">int <a href="#IBV_DEREG_MR">ibv_dereg_mr</a>(struct ibv_mr *mr);</p>\r
+</blockquote>\r
+<p align="left"><b>Address Handles</b></p>\r
+<blockquote>\r
+ <p align="left">struct ibv_ah *<a href="#IBV_CREATE_AH">ibv_create_ah</a>(struct ibv_pd *pd, struct ibv_ah_attr *attr);<br><br>int\r
+ <a href="#IBV_INIT_AH_FROM_WC">ibv_init_ah_from_wc</a>(struct ibv_context *context, uint8_t port_num,\r
+struct ibv_wc *wc, struct ibv_grh *grh,\r
+struct ibv_ah_attr *ah_attr);<br><br>struct ibv_ah *<a href="#IBV_CREATE_AH_FROM_WC">ibv_create_ah_from_wc</a>(struct ibv_pd *pd, struct ibv_wc *wc,\r
+struct ibv_grh *grh, uint8_t port_num);<br><br>int <a href="#IBV_DESTROY_AH">ibv_destroy_ah</a>(struct ibv_ah *ah);</p>\r
+</blockquote>\r
+<p align="left"><b>Completion event channels</b></p>\r
+<blockquote>\r
+ <p align="left">struct ibv_comp_channel *<a href="#IBV_CREATE_COMP_CHANNEL">ibv_create_comp_channel</a>(struct ibv_context \r
+ *context);</p>\r
+</blockquote>\r
+<blockquote>\r
+ <p align="left">int <a href="#IBV_DESTROY_COMP_CHANNEL">ibv_destroy_comp_channel</a>(struct ibv_comp_channel *channel);</p>\r
+</blockquote>\r
+<p align="left"><b>Completion Queues Control</b></p>\r
+<blockquote>\r
+ <p align="left">struct ibv_cq *<a href="#IBV_CREATE_CQ">ibv_create_cq</a>(struct ibv_context *context, int cqe,\r
+void *cq_context,\r
+struct ibv_comp_channel *channel,\r
+int comp_vector);<br><br>int <a href="#IBV_DESTROY_CQ">ibv_destroy_cq</a>(struct ibv_cq *cq);<br><br>int\r
+ <a href="#IBV_RESIZE_CQ">ibv_resize_cq</a>(struct ibv_cq *cq, int cqe);</p>\r
+</blockquote>\r
+<p align="left"><b>Reading Completions from CQ</b></p>\r
+<blockquote>\r
+ <p align="left">int <a href="#IBV_POLL_CQ">ibv_poll_cq</a>(struct ibv_cq *cq, int num_entries, struct ibv_wc *wc);</p>\r
+</blockquote>\r
+<p align="left"><b>Requesting / Managing CQ events</b></p>\r
+<blockquote>\r
+ <p align="left">int <a href="#IBV_REQ_NOTIFY_CQ">ibv_req_notify_cq</a>(struct ibv_cq *cq, int solicited_only);</p>\r
+ <p align="left">int <a href="#IBV_GET_CQ_EVENT">ibv_get_cq_event</a>(struct ibv_comp_channel *channel,\r
+struct ibv_cq **cq, void **cq_context);</p>\r
+ <p align="left">void <a href="#IBV_ACK_CQ_EVENTS">ibv_ack_cq_events</a>(struct ibv_cq *cq, unsigned int nevents);</p>\r
+</blockquote>\r
+<p align="left"><b>Shared Receive Queue control</b></p>\r
+<blockquote>\r
+ <p align="left">struct ibv_srq *<a href="#IBV_CREATE_SRQ">ibv_create_srq</a>(struct ibv_pd *pd, struct ibv_srq_init_attr *srq_init_attr);<br>\r
+ <br>int <a href="#IBV_DESTROY_SRQ">ibv_destroy_srq</a>(struct ibv_srq *srq);<br><br>int\r
+ <a href="#IBV_MODIFY_SRQ">ibv_modify_srq</a>(struct ibv_srq *srq, struct ibv_srq_attr *srq_attr, enum ibv_srq_attr_mask srq_attr_mask);<br>\r
+ <br>int <a href="#IBV_QUERY_SRQ">ibv_query_srq</a>(struct ibv_srq *srq, struct ibv_srq_attr *srq_attr);</p>\r
+</blockquote>\r
+<p align="left"><b>eXtended Reliable Connection control</b></p>\r
+<blockquote>\r
+ <p align="left">struct ibv_xrc_domain *<a href="#IBV_OPEN_XRC_DOMAIN">ibv_open_xrc_domain</a>(struct ibv_context *context, int fd, int oflag);<br>\r
+ <br>int <a href="#IBV_CLOSE_XRC_DOMAIN">ibv_close_xrc_domain</a>(struct ibv_xrc_domain *d);<br><br>struct ibv_srq *<a href="#IBV_CREATE_XRC_SRQ">ibv_create_xrc_srq</a>(struct ibv_pd *pd, struct ibv_xrc_domain *xrc_domain, struct ibv_cq *xrc_cq, struct ibv_srq_init_attr *srq_init_attr);<br>\r
+ <br>int <a href="#IBV_CREATE_XRC_RCV_QP">ibv_create_xrc_rcv_qp</a>(struct ibv_qp_init_attr *init_attr, uint32_t *xrc_rcv_qpn);<br>\r
+ <br>int <a href="#IBV_MODIFY_XRC_RCV_QP">ibv_modify_xrc_rcv_qp</a>(struct ibv_xrc_domain *xrc_domain, uint32_t xrc_qp_num, struct ibv_qp_attr *attr, int attr_mask);<br>\r
+ <br>int <a href="#IBV_QUERY_XRC_RCV_QP">ibv_query_xrc_rcv_qp</a>(struct ibv_xrc_domain *xrc_domain, uint32_t xrc_qp_num, struct ibv_qp_attr *attr, int attr_mask, struct ibv_qp_init_attr *init_attr);<br>\r
+ <br>int <a href="#IBV_REG_XRC_RCV_QP">ibv_reg_xrc_rcv_qp</a>(struct ibv_xrc_domain *xrc_domain, uint32_t xrc_qp_num);<br>\r
+ <br>int <a href="#IBV_UNREG_XRC_RCV_QP">ibv_unreg_xrc_rcv_qp</a>(struct ibv_xrc_domain *xrc_domain, uint32_t xrc_qp_num);</p>\r
+</blockquote>\r
+<p align="left"><b>Queue Pair control</b></p>\r
+<blockquote>\r
+ <p align="left">struct ibv_qp *<a href="#IBV_CREATE_QP">ibv_create_qp</a>(struct ibv_pd *pd, struct ibv_qp_init_attr *qp_init_attr);<br>\r
+ <br>int <a href="#IBV_DESTROY_QP">ibv_destroy_qp</a>(struct ibv_qp *qp);<br><br>int\r
+ <a href="#IBV_MODIFY_QP">ibv_modify_qp</a>(struct ibv_qp *qp, struct ibv_qp_attr *attr, enum ibv_qp_attr_mask attr_mask);<br>\r
+ <br>int <a href="#IBV_QUERY_QP">ibv_query_qp</a>(struct ibv_qp *qp, struct ibv_qp_attr *attr, enum ibv_qp_attr_mask attr_mask, struct ibv_qp_init_attr *init_attr);</p>\r
+</blockquote>\r
+<p align="left"><b>Posting Work Requests to QPs/SRQs</b></p>\r
+<blockquote>\r
+ <p align="left">int <a href="#IBV_POST_SEND">ibv_post_send</a>(struct ibv_qp *qp, struct ibv_send_wr *wr, struct ibv_send_wr **bad_wr);<br>\r
+ <br>int <a href="#IBV_POST_RECV">ibv_post_recv</a>(struct ibv_qp *qp, struct ibv_recv_wr *wr, struct ibv_recv_wr **bad_wr);<br>\r
+ <br>int <a href="#IBV_POST_SRQ_RECV">ibv_post_srq_recv</a>(struct ibv_srq *srq, struct ibv_recv_wr *recv_wr, struct ibv_recv_wr **bad_recv_wr);</p>\r
+</blockquote>\r
+<p align="left"><b>Multicast group</b></p>\r
+<blockquote>\r
+ <p align="left">int <a href="#IBV_ATTACH_MCAST">ibv_attach_mcast</a>(struct ibv_qp *qp, union ibv_gid *gid, uint16_t lid);</p>\r
+ <p align="left">int <a href="#IBV_DETACH_MCAST">ibv_detach_mcast</a>(struct ibv_qp *qp, union ibv_gid *gid, uint16_t lid);</p>\r
+</blockquote>\r
+<p align="left"><b>General functions</b></p>\r
+<blockquote>\r
+ <p align="left">int <a href="#IBV_RATE_TO_MULT">ibv_rate_to_mult</a>(enum ibv_rate rate);<br><br>enum ibv_rate\r
+ <a href="#IBV_MULT_TO_RATE">mult_to_ibv_rate</a>(int mult);<br> </p>\r
+</blockquote>\r
+<p align="left"><b>SEE ALSO</b></p>\r
+<blockquote>\r
+ <p align="left"><a href="#IBV_GET_DEVICE_LIST">ibv_get_device_list</a>, \r
+ <a href="#IBV_FREE_DEVICE_LIST">ibv_free_device_list</a>,<br>\r
+ <a href="#IBV_GET_DEVICE_NAME">ibv_get_device_name</a>, \r
+ <a href="#IBV_GET_DEVICE_GUID">ibv_get_device_guid</a>, \r
+ <a href="#IBV_OPEN_DEVICE">ibv_open_device</a>,<br>\r
+ <a href="#IBV_CLOSE_DEVICE">ibv_close_device</a>, \r
+ <a href="#IBV_QUERY_DEVICE">ibv_query_device</a>, \r
+ <a href="#IBV_QUERY_PORT">ibv_query_port</a>,<br>\r
+ <a href="#IBV_QUERY_PKEY">ibv_query_pkey</a>, <a href="#IBV_QUERY_GID">ibv_query_gid</a>, \r
+ <a href="#IBV_GET_ASYNC_EVENT">ibv_get_async_event</a>,<br>\r
+ <a href="#IBV_GET_ASYNC_EVENT">ibv_ack_async_event</a>,\r
+ <a href="#IBV_ALLOC_PD">ibv_alloc_pd</a>, <a href="#IBV_DEALLOC_PD">ibv_dealloc_pd</a>,\r
+ <a href="#IBV_REG_MR">ibv_reg_mr</a>,<br><a href="#IBV_DEREG_MR">ibv_dereg_mr</a>, \r
+ <a href="#IBV_CREATE_AH">ibv_create_ah</a>, <a href="#IBV_INIT_AH_FROM_WC">ibv_init_ah_from_wc</a>,\r
+ <a href="#IBV_CREATE_AH_FROM_WC">ibv_create_ah_from_wc</a>,<br>\r
+ <a href="#IBV_DESTROY_AH">ibv_destroy_ah</a>,\r
+ <a href="#IBV_CREATE_COMP_CHANNEL">ibv_create_comp_channel</a>,<br>\r
+ <a href="#IBV_DESTROY_COMP_CHANNEL">ibv_destroy_comp_channel</a>,\r
+ <a href="#IBV_CREATE_CQ">ibv_create_cq</a>, <a href="#IBV_DESTROY_CQ">ibv_destroy_cq</a>,<br>\r
+ <a href="#IBV_RESIZE_CQ">ibv_resize_cq</a>, <a href="#IBV_POLL_CQ">ibv_poll_cq</a>,\r
+ <a href="#IBV_REQ_NOTIFY_CQ">ibv_req_notify_cq</a>,<br>\r
+ <a href="#IBV_GET_CQ_EVENT">ibv_get_cq_event</a>,\r
+ <a href="#IBV_ACK_CQ_EVENTS">ibv_ack_cq_events</a>,\r
+ <a href="#IBV_CREATE_SRQ">ibv_create_srq</a>,<br><a href="#IBV_DESTROY_SRQ">ibv_destroy_srq</a>,\r
+ <a href="#IBV_MODIFY_SRQ">ibv_modify_srq</a>, <a href="#IBV_QUERY_SRQ">ibv_query_srq</a>,<br>\r
+ <a href="#IBV_OPEN_XRC_DOMAIN">ibv_open_xrc_domain</a>,\r
+ <a href="#IBV_CLOSE_XRC_DOMAIN">ibv_close_xrc_domain</a>,\r
+ <a href="#IBV_CREATE_XRC_SRQ">ibv_create_xrc_srq</a>,<br>\r
+ <a href="#IBV_CREATE_XRC_RCV_QP">ibv_create_xrc_rcv_qp</a>,\r
+ <a href="#IBV_MODIFY_XRC_RCV_QP">ibv_modify_xrc_rcv_qp</a>,<br>\r
+ <a href="#IBV_QUERY_XRC_RCV_QP">ibv_query_xrc_rcv_qp</a>,\r
+ <a href="#IBV_REG_XRC_RCV_QP">ibv_reg_xrc_rcv_qp</a>,\r
+ <a href="#IBV_UNREG_XRC_RCV_QP">ibv_unreg_xrc_rcv_qp</a>,<br>\r
+ <a href="#IBV_POST_SRQ_RECV">ibv_post_srq_recv</a>, <a href="#IBV_CREATE_QP">ibv_create_qp</a>,\r
+ <a href="#IBV_DESTROY_QP">ibv_destroy_qp</a>, <a href="#IBV_MODIFY_QP">ibv_modify_qp</a>,<br>\r
+ <a href="#IBV_QUERY_QP">ibv_query_qp</a>, <a href="#IBV_POST_SEND">ibv_post_send</a>,\r
+ <a href="#IBV_POST_RECV">ibv_post_recv</a>,<br><a href="#IBV_ATTACH_MCAST">ibv_attach_mcast</a>,\r
+ <a href="#IBV_DETACH_MCAST">ibv_detach_mcast</a>,\r
+ <a href="#IBV_RATE_TO_MULT">ibv_rate_to_mult</a>,\r
+ <a href="#IBV_MULT_TO_RATE">mult_to_ibv_rate</a></p>\r
+</blockquote>\r
+<p align="left"><br><b>AUTHORS</b></p>\r
+<blockquote>\r
+ <p align="left">Dotan Barak <dotanb@mellanox.co.il><br>Or Gerlitz <ogerlitz@voltaire.com><br>Stan Smith <<a href="mailto:stan.smith@intel.com">stan.smith@intel.com</a>></p>\r
+</blockquote>\r
+<p align="left"><a href="#TOP"><font color="#000000">&lt;<b>return-to-top</b>&gt;</font></a></p>\r
+<p align="left"> </p>\r
+<h3><a name="IBV_GET_DEVICE_LIST">IBV_GET_DEVICE_LIST</a></h3>\r
+<h3><a name="IBV_FREE_DEVICE_LIST">IBV_FREE_DEVICE_LIST</a></h3>\r
+<hr>\r
+<h4>NAME</h4>\r
+ibv_get_device_list, ibv_free_device_list - get and release list of available \r
+RDMA devices<h4>SYNOPSIS</h4>\r
+<pre><b>#include <infiniband/verbs.h></b>\r
+\r
+<b>struct ibv_device **ibv_get_device_list(int </b><i>*num_devices</i><b>);</b>\r
+\r
+<b>void ibv_free_device_list(struct ibv_device </b><i>**list</i><b>);</b></pre>\r
+<h4>DESCRIPTION</h4>\r
+<b>ibv_get_device_list()</b> returns a NULL-terminated array of RDMA devices \r
+currently available. The argument <i>num_devices</i> is optional; if not NULL, \r
+it is set to the number of devices returned in the array.
+<p><b>ibv_free_device_list()</b> frees the array of devices <i>list</i> returned \r
+by <b>ibv_get_device_list()</b>.</p>\r
+<h4>RETURN VALUE</h4>\r
+<b>ibv_get_device_list()</b> returns the array of available RDMA devices, or \r
+sets <i>errno</i> and returns NULL if the request fails. If no devices are found \r
+then <i>num_devices</i> is set to 0, and non-NULL is returned.
+<p><b>ibv_free_device_list()</b> returns no value.</p>\r
+<h4>ERRORS</h4>\r
+<dl COMPACT>\r
+ <dt><b>EPERM</b> </dt>\r
+ <dd>Permission denied.
+ </dd>\r
+ <dt><b>ENOSYS</b> </dt>\r
+ <dd>No kernel support for RDMA.
+ </dd>\r
+ <dt><b>ENOMEM</b> </dt>\r
+ <dd>Insufficient memory to complete the operation.</dd>\r
+</dl>\r
+<h4>NOTES</h4>\r
+Client code should open all the devices it intends to use with <b>\r
+ibv_open_device()</b> before calling <b>ibv_free_device_list()</b>. Once it \r
+frees the array with <b>ibv_free_device_list()</b>, it will be able to use only \r
+the open devices; pointers to unopened devices will no longer be valid.\r
+<a NAME="lbAH"> </a>
+<h4>SEE ALSO</h4>\r
+<b><a href="#IBV_GET_DEVICE_NAME">ibv_get_device_name</a></b>, <b>\r
+<a href="#IBV_GET_DEVICE_GUID">ibv_get_device_guid</a></b>, <b>\r
+<a href="#IBV_OPEN_DEVICE">ibv_open_device</a></b><p> </p>\r
+<h3><a name="IBV_GET_DEVICE_GUID">IBV_GET_DEVICE_GUID</a></h3>\r
+<hr>\r
+<h4>NAME</h4>\r
+ibv_get_device_guid - get an RDMA device's GUID
+<h4>SYNOPSIS</h4>\r
+<pre><b>#include <infiniband/verbs.h></b>\r
+\r
+<b>uint64_t ibv_get_device_guid(struct ibv_device </b><i>*device</i><b>);</b> </pre>\r
+<h4>DESCRIPTION</h4>\r
+<b>ibv_get_device_guid()</b> returns the Global Unique IDentifier (GUID) of the \r
+RDMA device <i>device</i>.<h4>RETURN VALUE</h4>\r
+<b>ibv_get_device_guid()</b> returns the GUID of the device in network byte \r
+order. <a NAME="lbAF"> </a>
+<h4>SEE ALSO</h4>\r
+<b><a href="#IBV_GET_DEVICE_LIST">ibv_get_device_list</a></b>, <b>\r
+<a href="#IBV_GET_DEVICE_NAME">ibv_get_device_name</a></b>, <b>\r
+<a href="#IBV_OPEN_DEVICE">ibv_open_device</a></b><p> </p>\r
+<h3><br>\r
+<a name="IBV_GET_DEVICE_NAME">IBV_GET_DEVICE_NAME</a></h3>\r
+<hr>\r
+<h4>NAME</h4>\r
+ibv_get_device_name - get an RDMA device's name<h4>SYNOPSIS</h4>\r
+<pre><b>#include <infiniband/verbs.h></b>\r
+\r
+<b>const char *ibv_get_device_name(struct ibv_device </b><i>*device</i><b>);</b></pre>\r
+<h4>DESCRIPTION</h4>\r
+<b>ibv_get_device_name()</b> returns a human-readable name associated with the \r
+RDMA device <i>device</i>.<h4>RETURN VALUE</h4>\r
+<b>ibv_get_device_name()</b> returns a pointer to the device name, or NULL if \r
+the request fails.<h4>SEE ALSO</h4>\r
+<b><a href="#IBV_GET_DEVICE_LIST">ibv_get_device_list</a></b>, <b>\r
+<a href="#IBV_GET_DEVICE_GUID">ibv_get_device_guid</a></b>, <b>\r
+<a href="#IBV_OPEN_DEVICE">ibv_open_device</a></b><h3>\r
+<br>\r
+<br>\r
+<a name="IBV_OPEN_DEVICE">IBV_OPEN_DEVICE</a></h3>\r
+<h3><a name="IBV_CLOSE_DEVICE">IBV_CLOSE_DEVICE</a></h3>\r
+<hr>\r
+<h4>NAME</h4>\r
+ibv_open_device, ibv_close_device - open and close an RDMA device context
+<h4>SYNOPSIS</h4>\r
+<pre><b>#include <infiniband/verbs.h></b>\r
+\r
+<b>struct ibv_context *ibv_open_device(struct ibv_device </b><i>*device</i><b>);</b>\r
+\r
+<b>int ibv_close_device(struct ibv_context </b><i>*context</i><b>);</b>\r
+</pre>\r
+<h4>DESCRIPTION</h4>\r
+<b>ibv_open_device()</b> opens the device <i>device</i> and creates a context \r
+for further use.
+<p><b>ibv_close_device()</b> closes the device context <i>context</i>.</p>\r
+<h4>RETURN VALUE</h4>\r
+<b>ibv_open_device()</b> returns a pointer to the allocated device context, or \r
+NULL if the request fails.
+<p><b>ibv_close_device()</b> returns 0 on success, -1 on failure.</p>\r
+<h4>NOTES</h4>\r
+<b>ibv_close_device()</b> does not release all the resources allocated using \r
+context <i>context</i>. To avoid resource leaks, the user should release all \r
+associated resources before closing a context.
+<h4>SEE ALSO</h4>\r
+<b><a href="#IBV_GET_DEVICE_LIST">ibv_get_device_list</a></b>, <b>\r
+<a href="#IBV_QUERY_DEVICE">ibv_query_device</a></b>, <b>\r
+<a href="#IBV_QUERY_PORT">ibv_query_port</a></b>,\r
+<b><a href="#IBV_QUERY_GID">ibv_query_gid</a></b>, <b><a href="#IBV_QUERY_PKEY">ibv_query_pkey</a></b><p> </p>\r
+<p> </p>\r
+<h3><br>\r
+<a name="IBV_GET_ASYNC_EVENT">IBV_GET_ASYNC_EVENT</a></h3>\r
+<h3>\r
+<br>\r
+<br>\r
+<a name="IBV_ACK_ASYNC_EVENT">IBV_ACK_ASYNC_EVENT</a></h3>\r
+<hr>\r
+<h4>NAME</h4>\r
+ibv_get_async_event, ibv_ack_async_event - get or acknowledge asynchronous \r
+events <a NAME="lbAC"> </a>
+<h4>SYNOPSIS</h4>\r
+<pre><b>#include <infiniband/verbs.h></b>\r
+\r
+<b>int ibv_get_async_event(struct ibv_context </b><i>*context</i><b>,</b>\r
+<b> struct ibv_async_event </b><i>*event</i><b>);</b>\r
+\r
+<b>void ibv_ack_async_event(struct ibv_async_event </b><i>*event</i><b>);</b></pre>\r
+<h4>DESCRIPTION</h4>\r
+<b>ibv_get_async_event()</b> waits for the next async event of the RDMA device \r
+context <i>context</i> and returns it through the pointer <i>event</i>, which is \r
+an ibv_async_event struct, as defined in <infiniband/verbs.h>.
+<p></p>\r
+<pre>struct ibv_async_event {\r
+        union {\r
+                struct ibv_cq  *cq;       /* CQ that got the event */\r
+                struct ibv_qp  *qp;       /* QP that got the event */\r
+                struct ibv_srq *srq;      /* SRQ that got the event */\r
+                int             port_num; /* port number that got the event */\r
+        } element;\r
+        enum ibv_event_type event_type;   /* type of the event */\r
+};\r
+</pre>\r
+<p>One member of the element union will be valid, depending on the event_type \r
+member of the structure. event_type will be one of the following events: </p>\r
+<p><i>QP events:</i> </p>\r
+<dl COMPACT>\r
+ <dt><b>IBV_EVENT_QP_FATAL </b>Error occurred on a QP and it transitioned to \r
+ error state </dt>\r
+ <dd></dd>\r
+ <dt><b>IBV_EVENT_QP_REQ_ERR </b>Invalid Request Local Work Queue Error </dt>\r
+ <dd></dd>\r
+ <dt><b>IBV_EVENT_QP_ACCESS_ERR </b>Local access violation error </dt>\r
+ <dd></dd>\r
+ <dt><b>IBV_EVENT_COMM_EST </b>Communication was established on a QP </dt>\r
+ <dd></dd>\r
+ <dt><b>IBV_EVENT_SQ_DRAINED </b>Send Queue was drained of outstanding \r
+ messages in progress </dt>\r
+ <dd></dd>\r
+ <dt><b>IBV_EVENT_PATH_MIG </b>A connection has migrated to the alternate \r
+ path </dt>\r
+ <dd></dd>\r
+ <dt><b>IBV_EVENT_PATH_MIG_ERR </b>A connection failed to migrate to the \r
+ alternate path </dt>\r
+ <dd></dd>\r
+ <dt><b>IBV_EVENT_QP_LAST_WQE_REACHED </b>Last WQE Reached on a QP associated \r
+ with an SRQ </dt>\r
+ <dd></dd>\r
+</dl>\r
+<p><i>CQ events:</i> </p>\r
+<dl COMPACT>\r
+ <dt><b>IBV_EVENT_CQ_ERR </b>CQ is in error (CQ overrun) </dt>\r
+ <dd></dd>\r
+</dl>\r
+<p><i>SRQ events:</i> </p>\r
+<dl COMPACT>\r
+ <dt><b>IBV_EVENT_SRQ_ERR </b>Error occurred on an SRQ </dt>\r
+ <dd></dd>\r
+ <dt><b>IBV_EVENT_SRQ_LIMIT_REACHED </b>SRQ limit was reached </dt>\r
+ <dd></dd>\r
+</dl>\r
+<p><i>Port events:</i> </p>\r
+<dl COMPACT>\r
+ <dt><b>IBV_EVENT_PORT_ACTIVE </b>Link became active on a port </dt>\r
+ <dd></dd>\r
+ <dt><b>IBV_EVENT_PORT_ERR </b>Link became unavailable on a port </dt>\r
+ <dd></dd>\r
+ <dt><b>IBV_EVENT_LID_CHANGE </b>LID was changed on a port </dt>\r
+ <dd></dd>\r
+ <dt><b>IBV_EVENT_PKEY_CHANGE </b>P_Key table was changed on a port </dt>\r
+ <dd></dd>\r
+ <dt><b>IBV_EVENT_SM_CHANGE </b>SM was changed on a port </dt>\r
+ <dd></dd>\r
+ <dt><b>IBV_EVENT_CLIENT_REREGISTER </b>SM sent a CLIENT_REREGISTER request \r
+ to a port </dt>\r
+ <dd></dd>\r
+</dl>\r
+<p><i>CA events:</i> </p>\r
+<dl COMPACT>\r
+ <dt><b>IBV_EVENT_DEVICE_FATAL </b>CA is in FATAL state </dt>\r
+ <dd></dd>\r
+</dl>\r
+<p><b>ibv_ack_async_event()</b> acknowledges the async event <i>event</i>. </p>\r
+<h4>RETURN VALUE</h4>\r
+<b>ibv_get_async_event()</b> returns 0 on success, and -1 on error.
+<p><b>ibv_ack_async_event()</b> returns no value. </p>\r
+<h4>NOTES</h4>\r
+All async events that <b>ibv_get_async_event()</b> returns must be acknowledged \r
+using <b>ibv_ack_async_event()</b>. To avoid races, destroying an object (CQ, \r
+SRQ or QP) will wait for all affiliated events for the object to be \r
+acknowledged; this avoids an application retrieving an affiliated event after \r
+the corresponding object has already been destroyed.
+<p><b>ibv_get_async_event()</b> is a blocking function. If multiple threads call \r
+this function simultaneously, then when an async event occurs, only one thread \r
+will receive it, and it is not possible to predict which thread will receive it.\r
+</p>\r
+<h4>EXAMPLES</h4>\r
+The following code example demonstrates one possible way to work with async \r
+events in non-blocking mode. It performs the following steps:
+<p>1. Set the async event queue's work mode to non-blocking <br>\r
+2. Poll the queue until it has an async event <br>\r
+3. Get the async event and acknowledge it </p>\r
+<p></p>\r
+<pre>/* change the blocking mode of the async event queue */\r
+flags = fcntl(ctx->async_fd, F_GETFL);\r
+rc = fcntl(ctx->async_fd, F_SETFL, flags | O_NONBLOCK);\r
+if (rc < 0) {\r
+ fprintf(stderr, "Failed to change file descriptor of async event queue\n");\r
+ return 1;\r
+}\r
+\r
+/*\r
+ * poll the queue until it has an event, sleeping ms_timeout\r
+ * milliseconds between iterations\r
+ */\r
+my_pollfd.fd = ctx->async_fd;\r
+my_pollfd.events = POLLIN;\r
+my_pollfd.revents = 0;\r
+\r
+do {\r
+ rc = poll(&my_pollfd, 1, ms_timeout);\r
+} while (rc == 0);\r
+if (rc < 0) {\r
+ fprintf(stderr, "poll failed\n");\r
+ return 1;\r
+}\r
+\r
+/* Get the async event */\r
+if (ibv_get_async_event(ctx, &async_event)) {\r
+ fprintf(stderr, "Failed to get async_event\n");\r
+ return 1;\r
+}\r
+\r
+/* Ack the event */\r
+ibv_ack_async_event(&async_event);\r
+\r
+</pre>\r
+<h4>SEE ALSO</h4>\r
+<b><a href="#IBV_OPEN_DEVICE">ibv_open_device</a></b>
+<p> </p>\r
+<h3><br>\r
+<a name="IBV_QUERY_DEVICE">IBV_QUERY_DEVICE</a></h3>\r
+<hr>\r
+<h4>NAME</h4>\r
+ibv_query_device - query an RDMA device's attributes <a NAME="lbAC"> </a>
+<h4>SYNOPSIS</h4>\r
+<pre><b>#include <infiniband/verbs.h></b>\r
+\r
+<b>int ibv_query_device(struct ibv_context </b><i>*context,</i>\r
+<b> struct ibv_device_attr </b><i>*device_attr</i><b>);</b></pre>\r
+<h4>DESCRIPTION</h4>\r
+<b>ibv_query_device()</b> returns the attributes of the device with context <i>\r
+context</i>. The argument <i>device_attr</i> is a pointer to an ibv_device_attr \r
+struct, as defined in <infiniband/verbs.h>.
+<p></p>\r
+<pre>struct ibv_device_attr {\r
+char fw_ver[64]; /* FW version */\r
+uint64_t node_guid; /* Node GUID (in network byte order) */\r
+uint64_t sys_image_guid; /* System image GUID (in network byte order) */\r
+uint64_t max_mr_size; /* Largest contiguous block that can be registered */\r
+uint64_t page_size_cap; /* Supported memory shift sizes */\r
+uint32_t vendor_id; /* Vendor ID, per IEEE */\r
+uint32_t vendor_part_id; /* Vendor supplied part ID */\r
+uint32_t hw_ver; /* Hardware version */\r
+int max_qp; /* Maximum number of supported QPs */\r
+int max_qp_wr; /* Maximum number of outstanding WR on any work queue */\r
+int device_cap_flags; /* HCA capabilities mask */\r
+int max_sge; /* Maximum number of s/g per WR for non-RD QPs */\r
+int max_sge_rd; /* Maximum number of s/g per WR for RD QPs */\r
+int max_cq; /* Maximum number of supported CQs */\r
+int max_cqe;                  /* Maximum CQE capacity per CQ */\r
+int max_mr; /* Maximum number of supported MRs */\r
+int max_pd; /* Maximum number of supported PDs */\r
+int max_qp_rd_atom; /* Maximum number of RDMA Read & Atomic operations that can be outstanding per QP */\r
+int max_ee_rd_atom; /* Maximum number of RDMA Read & Atomic operations that can be outstanding per EEC */\r
+int max_res_rd_atom; /* Maximum number of resources used for RDMA Read & Atomic operations by this HCA as the Target */\r
+int max_qp_init_rd_atom; /* Maximum depth per QP for initiation of RDMA Read & Atomic operations */ \r
+int max_ee_init_rd_atom; /* Maximum depth per EEC for initiation of RDMA Read & Atomic operations */\r
+enum ibv_atomic_cap atomic_cap; /* Atomic operations support level */\r
+int max_ee; /* Maximum number of supported EE contexts */\r
+int max_rdd; /* Maximum number of supported RD domains */\r
+int max_mw; /* Maximum number of supported MWs */\r
+int max_raw_ipv6_qp; /* Maximum number of supported raw IPv6 datagram QPs */\r
+int max_raw_ethy_qp; /* Maximum number of supported Ethertype datagram QPs */\r
+int max_mcast_grp; /* Maximum number of supported multicast groups */\r
+int max_mcast_qp_attach; /* Maximum number of QPs per multicast group which can be attached */\r
+int max_total_mcast_qp_attach;/* Maximum number of QPs which can be attached to multicast groups */\r
+int max_ah; /* Maximum number of supported address handles */\r
+int max_fmr; /* Maximum number of supported FMRs */\r
+int max_map_per_fmr; /* Maximum number of (re)maps per FMR before an unmap operation is required */\r
+int max_srq; /* Maximum number of supported SRQs */\r
+int max_srq_wr; /* Maximum number of WRs per SRQ */\r
+int max_srq_sge; /* Maximum number of s/g per SRQ */\r
+uint16_t max_pkeys; /* Maximum number of partitions */\r
+uint8_t local_ca_ack_delay; /* Local CA ack delay */\r
+uint8_t phys_port_cnt; /* Number of physical ports */\r
+};</pre>\r
+<h4>RETURN VALUE</h4>\r
+<b>ibv_query_device()</b> returns 0 on success, or the value of errno on failure \r
+(which indicates the failure reason).
+<h4>NOTES</h4>\r
+The maximum values returned by this function are the upper limits of supported \r
+resources by the device. However, it may not be possible to use these maximum \r
+values, since the actual number of any resource that can be created may be \r
+limited by the machine configuration, the amount of host memory, user \r
+permissions, and the amount of resources already in use by other \r
+users/processes.<h4>SEE ALSO</h4>\r
+<b><a href="#IBV_OPEN_DEVICE">ibv_open_device</a></b>, <b>\r
+<a href="#IBV_QUERY_PORT">ibv_query_port</a></b>, <b><a href="#IBV_QUERY_PKEY">ibv_query_pkey</a></b>,\r
+<b><a href="#IBV_QUERY_GID">ibv_query_gid</a></b>
+<p> </p>\r
+<h3><br>\r
+<a name="IBV_QUERY_GID">IBV_QUERY_GID</a></h3>\r
+<hr>\r
+<h4>NAME</h4>\r
+ibv_query_gid - query an InfiniBand port's GID table<h4>SYNOPSIS</h4>\r
+<pre><b>#include <infiniband/verbs.h></b>\r
+\r
+<b>int ibv_query_gid(struct ibv_context </b><i>*context</i><b>, uint8_t </b><i>port_num</i><b>,</b>\r
+<b> int </b><i>index</i><b>, union ibv_gid </b><i>*gid</i><b>);</b></pre>\r
+<h4>DESCRIPTION</h4>\r
+<b>ibv_query_gid()</b> returns the GID value in entry <i>index</i> of port <i>\r
+port_num</i> for device context <i>context</i> through the pointer <i>gid</i>.<h4>\r
+RETURN VALUE</h4>\r
+<b>ibv_query_gid()</b> returns 0 on success, and -1 on error.<h4>SEE ALSO</h4>\r
+<b><a href="#IBV_OPEN_DEVICE">ibv_open_device</a></b>, <b>\r
+<a href="#IBV_QUERY_DEVICE">ibv_query_device</a></b>, <b>\r
+<a href="#IBV_QUERY_PORT">ibv_query_port</a></b>,\r
+<b><a href="#IBV_QUERY_PKEY">ibv_query_pkey</a></b>
+<p> </p>\r
+<h3><br>\r
+<a name="IBV_QUERY_PKEY">IBV_QUERY_PKEY</a></h3>\r
+<hr>\r
+<h4>NAME</h4>\r
+ibv_query_pkey - query an InfiniBand port's P_Key table
+<h4>SYNOPSIS</h4>\r
+<pre><b>#include <infiniband/verbs.h></b>\r
+\r
+<b>int ibv_query_pkey(struct ibv_context </b><i>*context</i><b>, uint8_t </b><i>port_num</i><b>,</b>\r
+<b> int </b><i>index</i><b>, uint16_t </b><i>*pkey</i><b>);</b></pre>\r
+<h4>DESCRIPTION</h4>\r
+<b>ibv_query_pkey()</b> returns the P_Key value (in network byte order) in entry\r
+<i>index</i> of port <i>port_num</i> for device context <i>context</i> through \r
+the pointer <i>pkey</i>.<h4>RETURN VALUE</h4>\r
+<b>ibv_query_pkey()</b> returns 0 on success, and -1 on error.
+<h4>SEE ALSO</h4>\r
+<b><a href="#IBV_OPEN_DEVICE">ibv_open_device</a></b>, <b>\r
+<a href="#IBV_QUERY_DEVICE">ibv_query_device</a></b>, <b>\r
+<a href="#IBV_QUERY_PORT">ibv_query_port</a></b>,\r
+<b><a href="#IBV_QUERY_GID">ibv_query_gid</a></b><p> </p>\r
+<h3><br>\r
+<a name="IBV_QUERY_PORT">IBV_QUERY_PORT</a></h3>\r
+<hr>\r
+<h4>NAME</h4>\r
+ibv_query_port - query an RDMA port's attributes
+<h4>SYNOPSIS</h4>\r
+<pre><b>#include <infiniband/verbs.h></b>\r
+\r
+<b>int ibv_query_port(struct ibv_context </b><i>*context</i><b>, uint8_t </b><i>port_num</i><b>,</b>\r
+<b> struct ibv_port_attr </b><i>*port_attr</i><b>);</b> </pre>\r
+<h4>DESCRIPTION</h4>\r
+<b>ibv_query_port()</b> returns the attributes of port <i>port_num</i> for \r
+device context <i>context</i> through the pointer <i>port_attr</i>. The argument\r
+<i>port_attr</i> is an ibv_port_attr struct, as defined in <infiniband/verbs.h>.
+<p></p>\r
+<pre>struct ibv_port_attr {\r
+enum ibv_port_state state; /* Logical port state */\r
+enum ibv_mtu max_mtu; /* Max MTU supported by port */\r
+enum ibv_mtu active_mtu; /* Actual MTU */\r
+int gid_tbl_len; /* Length of source GID table */\r
+uint32_t port_cap_flags; /* Port capabilities */\r
+uint32_t max_msg_sz; /* Maximum message size */\r
+uint32_t bad_pkey_cntr; /* Bad P_Key counter */\r
+uint32_t qkey_viol_cntr; /* Q_Key violation counter */\r
+uint16_t pkey_tbl_len; /* Length of partition table */\r
+uint16_t lid; /* Base port LID */\r
+uint16_t sm_lid; /* SM LID */\r
+uint8_t lmc; /* LMC of LID */\r
+uint8_t max_vl_num; /* Maximum number of VLs */\r
+uint8_t sm_sl; /* SM service level */\r
+uint8_t subnet_timeout; /* Subnet propagation delay */\r
+uint8_t init_type_reply;/* Type of initialization performed by SM */\r
+uint8_t active_width; /* Currently active link width */\r
+uint8_t active_speed; /* Currently active link speed */\r
+uint8_t phys_state; /* Physical port state */\r
+};</pre>\r
+<h4>RETURN VALUE</h4>\r
+<b>ibv_query_port()</b> returns 0 on success, or the value of errno on failure \r
+(which indicates the failure reason).
+<h4>SEE ALSO</h4>\r
+<b><a href="#IBV_CREATE_QP">ibv_create_qp</a></b>, <b><a href="#IBV_DESTROY_QP">ibv_destroy_qp</a></b>, <b>\r
+<a href="#IBV_QUERY_QP">ibv_query_qp</a></b>, <b>\r
+<a href="#IBV_CREATE_AH">ibv_create_ah</a></b><p> </p>\r
+<p> </p>\r
+<h3><a name="IBV_ALLOC_PD">IBV_ALLOC_PD</a></h3>\r
+<h3><a name="IBV_DEALLOC_PD">IBV_DEALLOC_PD</a></h3>\r
+<hr>\r
+<h4>NAME</h4>\r
+ibv_alloc_pd, ibv_dealloc_pd - allocate or deallocate a protection domain (PD)<h4>\r
+SYNOPSIS</h4>\r
+<pre><b>#include <infiniband/verbs.h></b>\r
+\r
+<b>struct ibv_pd *ibv_alloc_pd(struct ibv_context </b><i>*context</i><b>);</b>\r
+\r
+<b>int ibv_dealloc_pd(struct ibv_pd </b><i>*pd</i><b>);</b></pre>\r
+<h4>DESCRIPTION</h4>\r
+<b>ibv_alloc_pd()</b> allocates a PD for the RDMA device context <i>context</i>.
+
+<p><b>ibv_dealloc_pd()</b> deallocates the PD <i>pd</i>.</p>\r
+<h4>RETURN VALUE</h4>\r
+<b>ibv_alloc_pd()</b> returns a pointer to the allocated PD, or NULL if the \r
+request fails.
+<p><b>ibv_dealloc_pd()</b> returns 0 on success, or the value of errno on \r
+failure (which indicates the failure reason). </p>\r
+<h4>NOTES</h4>\r
+<b>ibv_dealloc_pd()</b> may fail if any other resource is still associated with \r
+the PD being freed.
+<h4>SEE ALSO</h4>\r
+<b><a href="#IBV_REG_MR">ibv_reg_mr</a></b>, <b><a href="#IBV_CREATE_SRQ">ibv_create_srq</a></b>, <b>\r
+<a href="#IBV_CREATE_QP">ibv_create_qp</a></b>, <b>\r
+<a href="#IBV_CREATE_AH">ibv_create_ah</a></b>, <b>\r
+<a href="#IBV_CREATE_AH_FROM_WC">ibv_create_ah_from_wc</a></b><p> </p>\r
+<p> </p>\r
+<h3><a name="IBV_REG_MR">IBV_REG_MR</a></h3>\r
+<h3><a name="IBV_DEREG_MR">IBV_DEREG_MR</a></h3>\r
+<hr>\r
+<h4>NAME</h4>\r
+ibv_reg_mr, ibv_dereg_mr - register or deregister a memory region (MR)\r
+<h4>SYNOPSIS</h4>\r
+<pre><b>#include <infiniband/verbs.h></b>\r
+\r
+<b>struct ibv_mr *ibv_reg_mr(struct ibv_pd </b><i>*pd</i><b>, void </b><i>*addr</i><b>,</b>\r
+<b> size_t </b><i>length</i><b>, int </b><i>access</i><b>);</b>\r
+\r
+<b>int ibv_dereg_mr(struct ibv_mr </b><i>*mr</i><b>);</b></pre>\r
+<h4>DESCRIPTION</h4>\r
+<b>ibv_reg_mr()</b> registers a memory region (MR) associated with the \r
+protection domain <i>pd</i>. The MR's starting address is <i>addr</i> and its \r
+size is <i>length</i>. The argument <i>access</i> describes the desired memory \r
+protection attributes; it is either 0 or the bitwise OR of one or more of the \r
+following flags:
+<p></p>\r
+<dl COMPACT>\r
+ <dt><b>IBV_ACCESS_LOCAL_WRITE </b>Enable Local Write Access </dt>\r
+ <dd></dd>\r
+ <dt><b>IBV_ACCESS_REMOTE_WRITE </b>Enable Remote Write Access </dt>\r
+ <dd></dd>\r
+ <dt><b>IBV_ACCESS_REMOTE_READ</b> Enable Remote Read Access </dt>\r
+ <dd></dd>\r
+ <dt><b>IBV_ACCESS_REMOTE_ATOMIC</b> Enable Remote Atomic Operation Access \r
+ (if supported) </dt>\r
+ <dd></dd>\r
+ <dt><b>IBV_ACCESS_MW_BIND</b> Enable Memory Window Binding </dt>\r
+ <dd></dd>\r
+</dl>\r
+<p>If <b>IBV_ACCESS_REMOTE_WRITE</b> or <b>IBV_ACCESS_REMOTE_ATOMIC</b> is set, \r
+then <b>IBV_ACCESS_LOCAL_WRITE</b> must be set too. </p>\r
+<p>Local read access is always enabled for the MR. </p>\r
+<p><b>ibv_dereg_mr()</b> deregisters the MR <i>mr</i>.</p>\r
+<h4>RETURN VALUE</h4>\r
+<b>ibv_reg_mr()</b> returns a pointer to the registered MR, or NULL if the \r
+request fails. The local key (<b>L_Key</b>) field <b>lkey</b> is used as the \r
+lkey field of struct ibv_sge when posting buffers with ibv_post_* verbs, and \r
+the remote key (<b>R_Key</b>) field <b>rkey</b> is used by remote processes to \r
+perform Atomic and RDMA operations. The remote process places this <b>rkey</b> \r
+as the rkey field of struct ibv_send_wr passed to the ibv_post_send function.
+<p><b>ibv_dereg_mr()</b> returns 0 on success, or the value of errno on failure \r
+(which indicates the failure reason).</p>\r
+<h4>NOTES</h4>\r
+<b>ibv_dereg_mr()</b> fails if any memory window is still bound to this MR.<h4>\r
+SEE ALSO</h4>\r
+<b><a href="#IBV_ALLOC_PD">ibv_alloc_pd</a></b>, <b><a href="#IBV_POST_SEND">ibv_post_send</a></b>, <b>\r
+<a href="#IBV_POST_RECV">ibv_post_recv</a></b>, <b>\r
+<a href="#IBV_POST_SRQ_RECV">ibv_post_srq_recv</a></b>
+<p> </p>\r
+<h3><br>\r
+<a name="IBV_CREATE_AH">IBV_CREATE_AH</a></h3>\r
+<h3><br>\r
+<a name="IBV_DESTROY_AH">IBV_DESTROY_AH</a></h3>\r
+<hr>\r
+<h4>NAME</h4>\r
+ibv_create_ah, ibv_destroy_ah - create or destroy an address handle (AH)<h4>\r
+SYNOPSIS</h4>\r
+<pre><b>#include <infiniband/verbs.h></b>\r
+\r
+<b>struct ibv_ah *ibv_create_ah(struct ibv_pd </b><i>*pd</i><b>,</b>\r
+<b> struct ibv_ah_attr </b><i>*attr</i><b>);</b>\r
+\r
+<b>int ibv_destroy_ah(struct ibv_ah </b><i>*ah</i><b>);</b> </pre>\r
+<h4>DESCRIPTION</h4>\r
+<b>ibv_create_ah()</b> creates an address handle (AH) associated with the \r
+protection domain <i>pd</i>. The argument <i>attr</i> is an ibv_ah_attr struct, \r
+as defined in <infiniband/verbs.h>.
+<p></p>\r
+<pre>struct ibv_ah_attr {\r
+struct ibv_global_route grh; /* Global Routing Header (GRH) attributes */\r
+uint16_t dlid; /* Destination LID */\r
+uint8_t sl; /* Service Level */\r
+uint8_t src_path_bits; /* Source path bits */\r
+uint8_t static_rate; /* Maximum static rate */\r
+uint8_t is_global; /* GRH attributes are valid */\r
+uint8_t port_num; /* Physical port number */\r
+};\r
+\r
+struct ibv_global_route {\r
+union ibv_gid dgid; /* Destination GID or MGID */\r
+uint32_t flow_label; /* Flow label */\r
+uint8_t sgid_index; /* Source GID index */\r
+uint8_t hop_limit; /* Hop limit */\r
+uint8_t traffic_class; /* Traffic class */\r
+};\r
+</pre>\r
+<p></p>\r
+<p><b>ibv_destroy_ah()</b> destroys the AH <i>ah</i>. </p>\r
+<h4>RETURN VALUE</h4>\r
+<b>ibv_create_ah()</b> returns a pointer to the created AH, or NULL if the \r
+request fails.
+<p><b>ibv_destroy_ah()</b> returns 0 on success, or the value of errno on \r
+failure (which indicates the failure reason).</p>\r
+<h4>SEE ALSO</h4>\r
+<b><a href="#IBV_ALLOC_PD">ibv_alloc_pd</a></b>, <b>\r
+<a href="#IBV_INIT_AH_FROM_WC">ibv_init_ah_from_wc</a></b>, <b>\r
+<a href="#IBV_CREATE_AH_FROM_WC">ibv_create_ah_from_wc</a></b>
+<p align="left"> </p>\r
+<h3><br>\r
+<a name="IBV_CREATE_AH_FROM_WC">IBV_CREATE_AH_FROM_WC</a></h3>\r
+<h3><br>\r
+<a name="IBV_INIT_AH_FROM_WC">IBV_INIT_AH_FROM_WC</a></h3>\r
+<hr>\r
+<h4>NAME</h4>\r
+ibv_init_ah_from_wc, ibv_create_ah_from_wc - initialize or create an address \r
+handle (AH) from a work completion
+<h4>SYNOPSIS</h4>\r
+<pre><b>#include <infiniband/verbs.h></b>\r
+\r
+<b>int ibv_init_ah_from_wc(struct ibv_context </b><i>*context</i><b>, uint8_t </b><i>port_num</i><b>,</b>\r
+<b> struct ibv_wc </b><i>*wc</i><b>, struct ibv_grh </b><i>*grh</i><b>,</b>\r
+<b> struct ibv_ah_attr </b><i>*ah_attr</i><b>);</b>\r
+\r
+<b>struct ibv_ah *ibv_create_ah_from_wc(struct ibv_pd </b><i>*pd</i><b>,</b>\r
+<b> struct ibv_wc </b><i>*wc</i><b>,</b>\r
+<b> struct ibv_grh </b><i>*grh</i><b>,</b>\r
+<b> uint8_t </b><i>port_num</i><b>);</b>\r
+</pre>\r
+<h4>DESCRIPTION</h4>\r
+<b>ibv_init_ah_from_wc()</b> initializes the address handle (AH) attribute \r
+structure <i>ah_attr</i> for the RDMA device context <i>context</i> using the \r
+port number <i>port_num</i>, using attributes from the work completion <i>wc</i> \r
+and the Global Routing Header (GRH) structure <i>grh</i>.
+
+<p><b>ibv_create_ah_from_wc()</b> creates an AH associated with the protection \r
+domain <i>pd</i> using the port number <i>port_num</i>, using attributes from \r
+the work completion <i>wc</i> and the Global Routing Header (GRH) structure <i>\r
+grh</i>. </p>\r
+<h4>RETURN VALUE</h4>\r
+<b>ibv_init_ah_from_wc()</b> returns 0 on success, and -1 on error.
+<p><b>ibv_create_ah_from_wc()</b> returns a pointer to the created AH, or NULL \r
+if the request fails. </p>\r
+<h4>NOTES</h4>\r
+The filled structure <i>ah_attr</i> returned from <b>ibv_init_ah_from_wc()</b> \r
+can be used to create a new AH using <b>ibv_create_ah()</b>.
+<h4>SEE ALSO</h4>\r
+<b><a href="#IBV_OPEN_DEVICE">ibv_open_device</a></b>, <b>\r
+<a href="#IBV_ALLOC_PD">ibv_alloc_pd</a></b>, <b><a href="#IBV_CREATE_AH">ibv_create_ah</a></b>, <b>\r
+<a href="#IBV_DESTROY_AH">ibv_destroy_ah</a></b>, <b><a href="#IBV_POLL_CQ">ibv_poll_cq</a></b><p> </p>\r
+<h3><a name="IBV_CREATE_COMP_CHANNEL">IBV_CREATE_COMP_CHANNEL</a></h3>\r
+<h3><a name="IBV_DESTROY_COMP_CHANNEL">IBV_DESTROY_COMP_CHANNEL</a></h3>\r
+<hr>\r
+<h4>NAME</h4>\r
+ibv_create_comp_channel, ibv_destroy_comp_channel - create or destroy a \r
+completion event channel<h4>SYNOPSIS</h4>\r
+<pre><b>#include <infiniband/verbs.h></b>\r
+\r
+<b>struct ibv_comp_channel *ibv_create_comp_channel(struct ibv_context</b>\r
+<b> </b><i>*context</i><b>);</b>\r
+\r
+<b>int ibv_destroy_comp_channel(struct ibv_comp_channel </b><i>*channel</i><b>);</b></pre>\r
+<h4>DESCRIPTION</h4>\r
+<b>ibv_create_comp_channel()</b> creates a completion event channel for the RDMA \r
+device context <i>context</i>.
+
+<p><b>ibv_destroy_comp_channel()</b> destroys the completion event channel <i>\r
+channel</i>. </p>\r
+<h4>RETURN VALUE</h4>\r
+<b>ibv_create_comp_channel()</b> returns a pointer to the created completion \r
+event channel, or NULL if the request fails.
+<p><b>ibv_destroy_comp_channel()</b> returns 0 on success, or the value of errno \r
+on failure (which indicates the failure reason).</p>\r
+<h4>NOTES</h4>\r
+A "completion channel" is an abstraction introduced by libibverbs that does not \r
+exist in the InfiniBand Architecture verbs specification or RDMA Protocol Verbs \r
+Specification. A completion channel is essentially a file descriptor used to \r
+deliver completion notifications to a userspace process. When a completion \r
+event is generated for a completion queue (CQ), the event is delivered via the \r
+completion channel attached to that CQ. Multiple completion channels can be \r
+used to steer completion events to different threads.
+<p><b>ibv_destroy_comp_channel()</b> fails if any CQs are still associated with \r
+the completion event channel being destroyed.</p>\r
+<h4>SEE ALSO</h4>\r
+<b><a href="#IBV_OPEN_DEVICE">ibv_open_device</a></b>, <b>\r
+<a href="#IBV_CREATE_CQ">ibv_create_cq</a></b>, <b><a href="#IBV_GET_CQ_EVENT">ibv_get_cq_event</a></b><p> </p>\r
+<h3><a name="IBV_CREATE_CQ">IBV_CREATE_CQ</a></h3>\r
+<h3><a name="IBV_DESTROY_CQ">IBV_DESTROY_CQ</a></h3>\r
+<hr>\r
+<h4>NAME</h4>\r
+ibv_create_cq, ibv_destroy_cq - create or destroy a completion queue (CQ)\r
+<h4>SYNOPSIS</h4>\r
+<pre><b>#include <infiniband/verbs.h></b>\r
+\r
+<b>struct ibv_cq *ibv_create_cq(struct ibv_context </b><i>*context</i><b>, int </b><i>cqe</i><b>,</b>\r
+<b> void </b><i>*cq_context</i><b>,</b>\r
+<b> struct ibv_comp_channel </b><i>*channel</i><b>,</b>\r
+<b> int </b><i>comp_vector</i><b>);</b>\r
+\r
+<b>int ibv_destroy_cq(struct ibv_cq </b><i>*cq</i><b>);</b></pre>\r
+<h4>DESCRIPTION</h4>\r
+<b>ibv_create_cq()</b> creates a completion queue (CQ) with at least <i>cqe</i> \r
+entries for the RDMA device context <i>context</i>. The pointer <i>cq_context</i> \r
+will be used to set user context pointer of the CQ structure. The argument <i>\r
+channel</i> is optional; if not NULL, the completion channel <i>channel</i> will \r
+be used to return completion events. The CQ will use the completion vector <i>\r
+comp_vector</i> for signaling completion events; it must be at least zero and \r
+less than <i>context</i>->num_comp_vectors.
+
+<p><b>ibv_destroy_cq()</b> destroys the CQ <i>cq</i>.</p>\r
+<h4>RETURN VALUE</h4>\r
+<b>ibv_create_cq()</b> returns a pointer to the CQ, or NULL if the request \r
+fails.
+<p><b>ibv_destroy_cq()</b> returns 0 on success, or the value of errno on \r
+failure (which indicates the failure reason).</p>\r
+<h4>NOTES</h4>\r
+<b>ibv_create_cq()</b> may create a CQ with size greater than or equal to the \r
+requested size. Check the cqe attribute in the returned CQ for the actual size.
+<p><b>ibv_destroy_cq()</b> fails if any queue pair is still associated with this \r
+CQ. </p>\r
+<h4>SEE ALSO</h4>\r
+<b><a href="#IBV_RESIZE_CQ">ibv_resize_cq</a></b>, <b>\r
+<a href="#IBV_REQ_NOTIFY_CQ">ibv_req_notify_cq</a></b>, <b>\r
+<a href="#IBV_ACK_CQ_EVENTS">ibv_ack_cq_events</a></b>,\r
+<b><a href="#IBV_CREATE_QP">ibv_create_qp</a></b><p> </p>\r
+<h3><a name="IBV_POLL_CQ">IBV_POLL_CQ</a></h3>\r
+<hr>\r
+<h4>NAME</h4>\r
+ibv_poll_cq - poll a completion queue (CQ)
+<h4>SYNOPSIS</h4>\r
+<pre><b>#include <infiniband/verbs.h></b>\r
+\r
+<b>int ibv_poll_cq(struct ibv_cq </b><i>*cq</i><b>, int </b><i>num_entries</i><b>,</b>\r
+<b> struct ibv_wc </b><i>*wc</i><b>);</b></pre>\r
+<h4>DESCRIPTION</h4>\r
+<b>ibv_poll_cq()</b> polls the CQ <i>cq</i> for work completions and returns the \r
+first <i>num_entries</i> (or all available completions if the CQ contains fewer \r
+than this number) in the array <i>wc</i>. The argument <i>wc</i> is a pointer to \r
+an array of ibv_wc structs, as defined in <infiniband/verbs.h>.
+<p></p>\r
+<pre>struct ibv_wc {\r
+uint64_t wr_id; /* ID of the completed Work Request (WR) */\r
+enum ibv_wc_status status; /* Status of the operation */\r
+enum ibv_wc_opcode opcode; /* Operation type specified in the completed WR */\r
+uint32_t vendor_err; /* Vendor error syndrome */\r
+uint32_t byte_len; /* Number of bytes transferred */\r
+uint32_t imm_data; /* Immediate data (in network byte order) */\r
+uint32_t qp_num; /* Local QP number of completed WR */\r
+uint32_t src_qp; /* Source QP number (remote QP number) of completed WR (valid only for UD QPs) */\r
+int wc_flags; /* Flags of the completed WR */\r
+uint16_t pkey_index; /* P_Key index (valid only for GSI QPs) */\r
+uint16_t slid; /* Source LID */\r
+uint8_t sl; /* Service Level */\r
+uint8_t dlid_path_bits; /* DLID path bits (not applicable for multicast messages) */\r
+};\r
+\r
+</pre>\r
+<p>The attribute wc_flags describes the properties of the work completion. It is \r
+either 0 or the bitwise OR of one or more of the following flags: </p>\r
+<p></p>\r
+<dl COMPACT>\r
+ <dt><b>IBV_WC_GRH </b>GRH is present (valid only for UD QPs) </dt>\r
+ <dd></dd>\r
+ <dt><b>IBV_WC_WITH_IMM </b>Immediate data value is valid </dt>\r
+ <dd></dd>\r
+</dl>\r
+<p>Not all <i>wc</i> attributes are always valid. If the completion status is \r
+other than <b>IBV_WC_SUCCESS</b>, only the following attributes are valid: wr_id, \r
+status, qp_num, and vendor_err.</p>\r
+<h4>RETURN VALUE</h4>\r
+On success, <b>ibv_poll_cq()</b> returns a non-negative value equal to the \r
+number of completions found. On failure, a negative value is returned.<h4>NOTES</h4>\r
+<p>Each polled completion is removed from the CQ and cannot be returned to it.\r
+</p>\r
+<p>The user should consume work completions at a rate that prevents CQ overrun \r
+from occurring. In case of a CQ overrun, the async event <b>IBV_EVENT_CQ_ERR</b> \r
+will be triggered, and the CQ cannot be used. </p>\r
+<h4>SEE ALSO</h4>\r
+<b><a href="#IBV_POST_SEND">ibv_post_send</a></b>, <b><a href="#IBV_POST_RECV">ibv_post_recv</a></b><p> </p>\r
+<h3><a name="IBV_RESIZE_CQ">IBV_RESIZE_CQ</a></h3>\r
+<hr>\r
+<h4>NAME</h4>\r
+ibv_resize_cq - resize a completion queue (CQ)<h4>SYNOPSIS</h4>\r
+<pre><b>#include <infiniband/verbs.h></b>\r
+\r
+<b>int ibv_resize_cq(struct ibv_cq </b><i>*cq</i><b>, int </b><i>cqe</i><b>);</b></pre>\r
+<h4>DESCRIPTION</h4>\r
+<b>ibv_resize_cq()</b> resizes the completion queue (CQ) <i>cq</i> to have at \r
+least <i>cqe</i> entries. <i>cqe</i> must be at least the number of unpolled \r
+entries in the CQ <i>cq</i>. If <i>cqe</i> is a valid value less than the \r
+current CQ size, <b>ibv_resize_cq()</b> may not do anything, since this function \r
+is only guaranteed to resize the CQ to a size at least as big as the requested \r
+size.<h4>RETURN VALUE</h4>\r
+<b>ibv_resize_cq()</b> returns 0 on success, or the value of errno on failure \r
+(which indicates the failure reason).<h4>NOTES</h4>\r
+<b>ibv_resize_cq()</b> may assign a CQ size greater than or equal to the \r
+requested size. The cqe member of <i>cq</i> will be updated to the actual size.<h4>\r
+SEE ALSO</h4>\r
+<a href="#IBV_CREATE_CQ">\r
+<b>ibv_create_cq</b> </a> <b><a href="#IBV_DESTROY_CQ">ibv_destroy_cq</a></b>
+<p> </p>\r
+<h3><br>\r
+<a name="IBV_GET_CQ_EVENT">IBV_GET_CQ_EVENT</a></h3>\r
+<h3><br>\r
+<a name="IBV_ACK_CQ_EVENTS">IBV_ACK_CQ_EVENTS</a></h3>\r
+<hr>\r
+<h4>NAME</h4>\r
+ibv_get_cq_event, ibv_ack_cq_events - get and acknowledge completion queue (CQ) \r
+events
+<h4>SYNOPSIS</h4>\r
+<pre><b>#include <infiniband/verbs.h></b>\r
+\r
+<b>int ibv_get_cq_event(struct ibv_comp_channel </b><i>*channel</i><b>,</b>\r
+<b> struct ibv_cq </b><i>**cq</i><b>, void </b><i>**cq_context</i><b>);</b>\r
+\r
+<b>void ibv_ack_cq_events(struct ibv_cq </b><i>*cq</i><b>, unsigned int </b><i>nevents</i><b>);</b></pre>\r
+<h4>DESCRIPTION</h4>\r
+<b>ibv_get_cq_event()</b> waits for the next completion event in the completion \r
+event channel <i>channel</i>. It fills the argument <i>cq</i> with the CQ that \r
+got the event and <i>cq_context</i> with that CQ's context.
+<p><b>ibv_ack_cq_events()</b> acknowledges <i>nevents</i> events on the CQ <i>cq</i>.</p>\r
+<h4>RETURN VALUE</h4>\r
+<b>ibv_get_cq_event()</b> returns 0 on success, and -1 on error.
+<p><b>ibv_ack_cq_events()</b> returns no value. </p>\r
+<h4>NOTES</h4>\r
+All completion events that <b>ibv_get_cq_event()</b> returns must be \r
+acknowledged using <b>ibv_ack_cq_events()</b>. To avoid races, destroying a CQ \r
+will wait for all completion events to be acknowledged; this guarantees a \r
+one-to-one correspondence between acks and successful gets.
+<p>Calling <b>ibv_ack_cq_events()</b> may be relatively expensive in the \r
+datapath, since it must take a mutex. Therefore it may be better to amortize \r
+this cost by keeping a count of the number of events needing acknowledgement and \r
+acking several completion events in one call to <b>ibv_ack_cq_events()</b>.</p>\r
+<h4>EXAMPLES</h4>\r
+The following code example demonstrates one possible way to work with completion \r
+events. It performs the following steps:
+<p>Stage I: Preparation <br>\r
+1. Create a CQ <br>\r
+2. Request notification upon a new (first) completion event </p>\r
+<p>Stage II: Completion Handling Routine <br>\r
+3. Wait for the completion event and acknowledge it <br>\r
+4. Request notification upon the next completion event <br>\r
+5. Empty the CQ </p>\r
+<p>Note that an extra event may be triggered without having a corresponding \r
+completion entry in the CQ. This occurs if a completion entry is added to the CQ \r
+between Step 4 and Step 5, and the CQ is then emptied (polled) in Step 5. </p>\r
+<p></p>\r
+<pre>cq = ibv_create_cq(ctx, 1, ev_ctx, channel, 0);\r
+if (!cq) {\r
+ fprintf(stderr, "Failed to create CQ\n");\r
+ return 1;\r
+}\r
+\r
+/* Request notification before any completion can be created */\r
+if (ibv_req_notify_cq(cq, 0)) {\r
+ fprintf(stderr, "Couldn't request CQ notification\n");\r
+ return 1;\r
+}\r
+\r
+.\r
+.\r
+.\r
+\r
+/* Wait for the completion event */\r
+if (ibv_get_cq_event(channel, &ev_cq, &ev_ctx)) {\r
+ fprintf(stderr, "Failed to get cq_event\n");\r
+ return 1;\r
+}\r
+\r
+/* Ack the event */\r
+ibv_ack_cq_events(ev_cq, 1);\r
+\r
+/* Request notification upon the next completion event */\r
+if (ibv_req_notify_cq(ev_cq, 0)) {\r
+ fprintf(stderr, "Couldn't request CQ notification\n");\r
+ return 1;\r
+}\r
+\r
+/* Empty the CQ: poll all of the completions from the CQ (if any exist) */\r
+do {\r
+ ne = ibv_poll_cq(cq, 1, &wc);\r
+ if (ne < 0) {\r
+ fprintf(stderr, "Failed to poll completions from the CQ\n");\r
+ return 1;\r
+ }\r
+\r
+ /* there may be an extra event with no completion in the CQ */\r
+ if (ne == 0)\r
+ continue;\r
+\r
+ if (wc.status != IBV_WC_SUCCESS) {\r
+ fprintf(stderr, "Completion with status 0x%x was found\n", wc.status);\r
+ return 1;\r
+ }\r
+} while (ne);\r
+</pre>\r
+<p>The following code example demonstrates one possible way to work with \r
+completion events in non-blocking mode. It performs the following steps: </p>\r
+<p>1. Set the completion event channel to non-blocking mode <br>\r
+2. Poll the channel until it has a completion event <br>\r
+3. Get the completion event and acknowledge it </p>\r
+<p></p>\r
+<pre>/* change the blocking mode of the completion channel */\r
+flags = fcntl(channel->fd, F_GETFL);\r
+rc = fcntl(channel->fd, F_SETFL, flags | O_NONBLOCK);\r
+if (rc < 0) {\r
+ fprintf(stderr, "Failed to change file descriptor of completion event channel\n");\r
+ return 1;\r
+}\r
+\r
+\r
+/*\r
+ * poll the channel until it has an event and sleep ms_timeout\r
+ * milliseconds between any iteration\r
+ */\r
+my_pollfd.fd = channel->fd;\r
+my_pollfd.events = POLLIN;\r
+my_pollfd.revents = 0;\r
+\r
+do {\r
+ rc = poll(&my_pollfd, 1, ms_timeout);\r
+} while (rc == 0);\r
+if (rc < 0) {\r
+ fprintf(stderr, "poll failed\n");\r
+ return 1;\r
+}\r
+ev_cq = cq;\r
+\r
+/* Wait for the completion event */\r
+if (ibv_get_cq_event(channel, &ev_cq, &ev_ctx)) {\r
+ fprintf(stderr, "Failed to get cq_event\n");\r
+ return 1;\r
+}\r
+\r
+/* Ack the event */\r
+ibv_ack_cq_events(ev_cq, 1);</pre>\r
+<h4>SEE ALSO</h4>\r
+<b><a href="#IBV_CREATE_COMP_CHANNEL">ibv_create_comp_channel</a></b>, <b>\r
+<a href="#IBV_CREATE_CQ">ibv_create_cq</a></b>, <b><a href="#IBV_REQ_NOTIFY_CQ">ibv_req_notify_cq</a></b>,\r
+<b><a href="#IBV_POLL_CQ">ibv_poll_cq</a></b><p> </p>\r
+<h3><br>\r
+<a name="IBV_REQ_NOTIFY_CQ">IBV_REQ_NOTIFY_CQ</a></h3>\r
+<hr>\r
+<h4>NAME</h4>\r
+ibv_req_notify_cq - request completion notification on a completion queue (CQ)<h4>\r
+SYNOPSIS</h4>\r
+<pre><b>#include <infiniband/verbs.h></b>\r
+\r
+<b>int ibv_req_notify_cq(struct ibv_cq </b><i>*cq</i><b>, int </b><i>solicited_only</i><b>);</b></pre>\r
+<h4>DESCRIPTION</h4>\r
+<b>ibv_req_notify_cq()</b> requests a completion notification on the completion \r
+queue (CQ) <i>cq</i>.
+
+<p>Upon the addition of a new CQ entry (CQE) to <i>cq</i>, a completion event \r
+will be added to the completion channel associated with the CQ. If the argument\r
+<i>solicited_only</i> is zero, a completion event is generated for any new CQE. \r
+If <i>solicited_only</i> is non-zero, an event is generated only for a new CQE \r
+that is considered "solicited." A CQE is solicited if it is a receive \r
+completion for a message with the Solicited Event header bit set, or if the \r
+status is not successful. All other successful receive completions, and all \r
+successful send completions, are unsolicited. </p>\r
+<h4>RETURN VALUE</h4>\r
+<b>ibv_req_notify_cq()</b> returns 0 on success, or the value of errno on \r
+failure (which indicates the failure reason).<h4>NOTES</h4>\r
+The request for notification is "one shot." Only one completion event will be \r
+generated for each call to <b>ibv_req_notify_cq()</b>.
+<h4>SEE ALSO</h4>\r
+<b><a href="#IBV_CREATE_COMP_CHANNEL">ibv_create_comp_channel</a></b>, <b>\r
+<a href="#IBV_CREATE_CQ">ibv_create_cq</a></b>, <b><a href="#IBV_GET_CQ_EVENT">ibv_get_cq_event</a></b><p> </p>\r
+<p align="left"> </p>\r
+<h3><br>\r
+<a name="IBV_CREATE_SRQ">IBV_CREATE_SRQ</a></h3>\r
+<h3><br>\r
+<a name="IBV_CREATE_XRC_SRQ">IBV_CREATE_XRC_SRQ</a></h3>\r
+<h3><br>\r
+<a name="IBV_DESTROY_SRQ">IBV_DESTROY_SRQ</a></h3>\r
+<hr>\r
+<h4>NAME</h4>\r
+ibv_create_srq, ibv_destroy_srq - create or destroy a shared receive queue (SRQ)\r
+<h4>SYNOPSIS</h4>\r
+<pre><b>#include <infiniband/verbs.h></b>\r
+\r
+<b>struct ibv_srq *ibv_create_srq(struct ibv_pd </b><i>*pd</i><b>, struct </b>\r
+<b> ibv_srq_init_attr </b><i>*srq_init_attr</i><b>);</b>\r
+\r
+<b>struct ibv_srq *ibv_create_xrc_srq(struct ibv_pd </b><i>*pd</i><b>,</b>\r
+<b> struct ibv_xrc_domain </b><i>*xrc_domain</i><b>,</b>\r
+<b> struct ibv_cq </b><i>*xrc_cq</i><b>,</b>\r
+<b> struct ibv_srq_init_attr </b><i>*srq_init_attr</i><b>);</b>\r
+\r
+<b>int ibv_destroy_srq(struct ibv_srq </b><i>*srq</i><b>);</b></pre>\r
+<h4>DESCRIPTION</h4>\r
+<b>ibv_create_srq()</b> creates a shared receive queue (SRQ) associated with the \r
+protection domain <i>pd</i>.
+
+<p><b>ibv_create_xrc_srq()</b> creates an XRC shared receive queue (SRQ) \r
+associated with the protection domain <i>pd</i>, the XRC domain <i>xrc_domain</i> \r
+and the CQ which will hold the XRC completion <i>xrc_cq</i>. </p>\r
+<p>The argument <i>srq_init_attr</i> is an ibv_srq_init_attr struct, as defined \r
+in <infiniband/verbs.h>. </p>\r
+<p></p>\r
+<pre>struct ibv_srq_init_attr {\r
+void *srq_context; /* Associated context of the SRQ */\r
+struct ibv_srq_attr attr; /* SRQ attributes */\r
+};\r
+\r
+struct ibv_srq_attr {\r
+uint32_t max_wr; /* Requested max number of outstanding work requests (WRs) in the SRQ */\r
+uint32_t max_sge; /* Requested max number of scatter elements per WR */\r
+uint32_t srq_limit; /* The limit value of the SRQ (irrelevant for ibv_create_srq) */\r
+};\r
+</pre>\r
+<p>The function <b>ibv_create_srq()</b> will update the <i>srq_init_attr</i> \r
+struct with the actual values of the SRQ that was created; the values of \r
+max_wr and max_sge will be greater than or equal to the values requested. </p>\r
+<p><b>ibv_destroy_srq()</b> destroys the SRQ <i>srq</i>.</p>\r
+<h4>RETURN VALUE</h4>\r
+<b>ibv_create_srq()</b> returns a pointer to the created SRQ, or NULL if the \r
+request fails.
+<p><b>ibv_destroy_srq()</b> returns 0 on success, or the value of errno on \r
+failure (which indicates the failure reason). </p>\r
+<h4>NOTES</h4>\r
+<b>ibv_destroy_srq()</b> fails if any queue pair is still associated with this \r
+SRQ.<h4>SEE ALSO</h4>\r
+<b><a href="#IBV_ALLOC_PD">ibv_alloc_pd</a></b>, <b><a href="#IBV_MODIFY_SRQ">ibv_modify_srq</a></b>, <b>\r
+<a href="#IBV_QUERY_SRQ">ibv_query_srq</a></b><p> </p>\r
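+<p><i>Example (illustrative sketch):</i> a minimal SRQ create/destroy cycle on \r
+the first device found. The sizes requested are arbitrary, error handling is \r
+minimal, and the program needs an InfiniBand device to run.</p>\r

```c
#include <stdio.h>
#include <string.h>
#include <infiniband/verbs.h>

int main(void)
{
    struct ibv_device **dev_list = ibv_get_device_list(NULL);
    if (!dev_list || !dev_list[0])
        return 1;                          /* no RDMA device present */

    struct ibv_context *ctx = ibv_open_device(dev_list[0]);
    struct ibv_pd *pd = ibv_alloc_pd(ctx);

    struct ibv_srq_init_attr attr;
    memset(&attr, 0, sizeof attr);
    attr.attr.max_wr  = 128;               /* requested; may be rounded up */
    attr.attr.max_sge = 1;

    struct ibv_srq *srq = ibv_create_srq(pd, &attr);
    if (!srq) {
        fprintf(stderr, "ibv_create_srq failed\n");
        return 1;
    }
    /* attr now holds the actual values of the created SRQ */
    printf("SRQ created: max_wr=%u max_sge=%u\n",
           attr.attr.max_wr, attr.attr.max_sge);

    ibv_destroy_srq(srq);
    ibv_dealloc_pd(pd);
    ibv_close_device(ctx);
    ibv_free_device_list(dev_list);
    return 0;
}
```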
+<h3><br>\r
+<a name="IBV_MODIFY_SRQ">IBV_MODIFY_SRQ</a></h3>\r
+<hr>\r
+<h4>NAME</h4>\r
+ibv_modify_srq - modify attributes of a shared receive queue (SRQ)<h4>SYNOPSIS</h4>\r
+<pre><b>#include <infiniband/verbs.h></b>\r
+\r
+<b>int ibv_modify_srq(struct ibv_srq </b><i>*srq</i><b>,</b>\r
+<b> struct ibv_srq_attr </b><i>*srq_attr</i><b>,</b>\r
+<b> int </b><i>srq_attr_mask</i><b>);</b></pre>\r
+<h4>DESCRIPTION</h4>\r
+<b>ibv_modify_srq()</b> modifies the attributes of SRQ <i>srq</i> with the \r
+attributes in <i>srq_attr</i> according to the mask <i>srq_attr_mask</i>. The \r
+argument <i>srq_attr</i> is an ibv_srq_attr struct, as defined in <infiniband/verbs.h>.
+<p></p>\r
+<pre>struct ibv_srq_attr {\r
+uint32_t max_wr; /* maximum number of outstanding work requests (WRs) in the SRQ */\r
+uint32_t max_sge; /* number of scatter elements per WR (irrelevant for ibv_modify_srq) */\r
+uint32_t srq_limit; /* the limit value of the SRQ */\r
+};\r
+</pre>\r
+<p>The argument <i>srq_attr_mask</i> specifies the SRQ attributes to be \r
+modified. The argument is either 0 or the bitwise OR of one or more of the \r
+following flags: </p>\r
+<p></p>\r
+<dl COMPACT>\r
+ <dt><b>IBV_SRQ_MAX_WR </b>Resize the SRQ </dt>\r
+ <dd></dd>\r
+ <dt><b>IBV_SRQ_LIMIT </b>Set the SRQ limit </dt>\r
+ <dd></dd>\r
+</dl>\r
+<h4>RETURN VALUE</h4>\r
+<b>ibv_modify_srq()</b> returns 0 on success, or the value of errno on failure \r
+(which indicates the failure reason).<h4>NOTES</h4>\r
+If any of the modify attributes is invalid, none of the attributes will be \r
+modified.
+<p>Not all devices support resizing SRQs. To check if a device supports it, \r
+check if the <b>IBV_DEVICE_SRQ_RESIZE</b> bit is set in the device capabilities \r
+flags. </p>\r
+<p>Modifying the srq_limit arms the SRQ to produce an <b>\r
+IBV_EVENT_SRQ_LIMIT_REACHED</b> "low watermark" asynchronous event once the \r
+number of WRs in the SRQ drops below srq_limit. </p>\r
+<h4>SEE ALSO</h4>\r
+<b><a href="#IBV_QUERY_DEVICE">ibv_query_device</a></b>, <b>\r
+<a href="#IBV_CREATE_SRQ">ibv_create_srq</a></b>, <b><a href="#IBV_DESTROY_SRQ">ibv_destroy_srq</a></b>,\r
+<b><a href="#IBV_QUERY_SRQ">ibv_query_srq</a></b><p> </p>\r
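+<p><i>Example (illustrative sketch):</i> arming the SRQ limit is a common use of \r
+<b>ibv_modify_srq()</b>. The helper below assumes an existing SRQ <i>srq</i>.</p>\r

```c
#include <string.h>
#include <infiniband/verbs.h>

/* Arm the SRQ "low watermark": IBV_EVENT_SRQ_LIMIT_REACHED will be generated
 * once the number of outstanding WRs in the SRQ drops below `limit`.
 * Returns 0 on success, or the errno value reported by ibv_modify_srq(). */
static int arm_srq_limit(struct ibv_srq *srq, uint32_t limit)
{
    struct ibv_srq_attr attr;

    memset(&attr, 0, sizeof attr);
    attr.srq_limit = limit;
    return ibv_modify_srq(srq, &attr, IBV_SRQ_LIMIT);
}
```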
+<h3><br>\r
+<a name="IBV_QUERY_SRQ">IBV_QUERY_SRQ</a></h3>\r
+<hr>\r
+<h4>NAME</h4>\r
+ibv_query_srq - get the attributes of a shared receive queue (SRQ)<h4>SYNOPSIS</h4>\r
+<pre><b>#include <infiniband/verbs.h></b>\r
+\r
+<b>int ibv_query_srq(struct ibv_srq </b><i>*srq</i><b>, struct ibv_srq_attr </b><i>*srq_attr</i><b>);</b></pre>\r
+<h4>DESCRIPTION</h4>\r
+<b>ibv_query_srq()</b> gets the attributes of the SRQ <i>srq</i> and returns \r
+them through the pointer <i>srq_attr</i>. The argument <i>srq_attr</i> is an \r
+ibv_srq_attr struct, as defined in <infiniband/verbs.h>.
+<p></p>\r
+<pre>struct ibv_srq_attr {\r
+uint32_t max_wr; /* maximum number of outstanding work requests (WRs) in the SRQ */\r
+uint32_t max_sge; /* maximum number of scatter elements per WR */\r
+uint32_t srq_limit; /* the limit value of the SRQ */\r
+}; </pre>\r
+<h4>RETURN VALUE</h4>\r
+<b>ibv_query_srq()</b> returns 0 on success, or the value of errno on failure \r
+(which indicates the failure reason).<h4>NOTES</h4>\r
+If the value returned for srq_limit is 0, then the SRQ limit reached ("low \r
+watermark") event is not (or no longer) armed, and no asynchronous events will \r
+be generated until the event is rearmed. <a NAME="lbAG"> </a>
+<h4>SEE ALSO</h4>\r
+<b><a href="#IBV_CREATE_SRQ">ibv_create_srq</a></b>, <b>\r
+<a href="#IBV_DESTROY_SRQ">ibv_destroy_srq</a></b>, <b>\r
+<a href="#IBV_MODIFY_SRQ">ibv_modify_srq</a></b><p> </p>\r
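+<p><i>Example (illustrative sketch):</i> a small helper that uses \r
+<b>ibv_query_srq()</b> to check whether the limit event is still armed, relying \r
+on the fact that a returned srq_limit of 0 means the event is not armed.</p>\r

```c
#include <infiniband/verbs.h>

/* Returns 1 if the SRQ limit event is armed, 0 if not, -1 on query failure. */
static int srq_limit_armed(struct ibv_srq *srq)
{
    struct ibv_srq_attr attr;

    if (ibv_query_srq(srq, &attr))
        return -1;
    return attr.srq_limit != 0;   /* 0 means "not (or no longer) armed" */
}
```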
<p align="left"> </p>\r
+<h3><a name="IBV_CREATE_XRC_RCV_QP">IBV_CREATE_XRC_RCV_QP</a></h3>\r
+<hr>\r
+<h4>NAME</h4>\r
+ibv_create_xrc_rcv_qp - create an XRC queue pair (QP) for serving as a \r
+receive-side only QP<h4>SYNOPSIS</h4>\r
+<pre><b>#include <infiniband/verbs.h></b>\r
+\r
+<b>int ibv_create_xrc_rcv_qp(struct ibv_qp_init_attr </b><i>*init_attr</i><b>,</b>\r
+<b> uint32_t </b><i>*xrc_rcv_qpn</i><b>);</b> </pre>\r
+<h4>DESCRIPTION</h4>\r
+<b>ibv_create_xrc_rcv_qp()</b> creates an XRC queue pair (QP) for serving as a \r
+receive-side only QP and returns its number through the pointer <i>xrc_rcv_qpn</i>. \r
+This QP number should be passed to the remote node (sender). The remote node \r
+will use <i>xrc_rcv_qpn</i> in <b>ibv_post_send()</b> when sending to an XRC SRQ \r
+on this host in the same xrc domain as the XRC receive QP. This QP is created in \r
+kernel space, and persists until the last process registered for the QP calls <b>\r
+ibv_unreg_xrc_rcv_qp()</b> (at which time the QP is destroyed).
+<p>The process which creates this QP is automatically registered for it, and \r
+should also call <b>ibv_unreg_xrc_rcv_qp()</b> at some point, to unregister. </p>\r
+<p>Processes which wish to receive on an XRC SRQ via this QP should call <b>\r
+ibv_reg_xrc_rcv_qp()</b> for this QP, to guarantee that the QP will not be \r
+destroyed while they are still using it for receiving on the XRC SRQ. </p>\r
+<p>The argument <i>qp_init_attr</i> is an ibv_qp_init_attr struct, as defined in \r
+<infiniband/verbs.h>. </p>\r
+<p></p>\r
+<pre>struct ibv_qp_init_attr {\r
+void *qp_context; /* value is being ignored */\r
+struct ibv_cq *send_cq; /* value is being ignored */ \r
+struct ibv_cq *recv_cq; /* value is being ignored */\r
+struct ibv_srq *srq; /* value is being ignored */\r
+struct ibv_qp_cap cap; /* value is being ignored */\r
+enum ibv_qp_type qp_type; /* value is being ignored */\r
+int sq_sig_all; /* value is being ignored */\r
+struct ibv_xrc_domain *xrc_domain; /* XRC domain the QP will be associated with */\r
+};\r
+</pre>\r
+<p>Most of the attributes in <i>qp_init_attr</i> are ignored because this \r
+QP is a receive-only QP and all receive requests (RRs) are posted to an SRQ.</p>\r
+<h4>RETURN VALUE</h4>\r
+<b>ibv_create_xrc_rcv_qp()</b> returns 0 on success, or the value of errno on \r
+failure (which indicates the failure reason).<h4>SEE ALSO</h4>\r
+<b><a href="#IBV_OPEN_XRC_DOMAIN">ibv_open_xrc_domain</a></b>, <b>\r
+<a href="#IBV_MODIFY_XRC_RCV_QP">ibv_modify_xrc_rcv_qp</a></b>, <b>\r
+<a href="#IBV_QUERY_XRC_RCV_QP">ibv_query_xrc_rcv_qp</a></b>, <b>\r
+<a href="#IBV_REG_XRC_RCV_QP">ibv_reg_xrc_rcv_qp</a></b>, <b>\r
+<a href="#IBV_UNREG_XRC_RCV_QP">ibv_unreg_xrc_rcv_qp</a></b>, <b>\r
+<a href="#IBV_POST_SEND">ibv_post_send</a></b><p> </p>\r
+<h3><a name="IBV_MODIFY_XRC_RCV_QP">IBV_MODIFY_XRC_RCV_QP</a></h3>\r
+<hr>\r
+<h4>NAME</h4>\r
+ibv_modify_xrc_rcv_qp - modify the attributes of an XRC receive queue pair (QP)<h4>\r
+SYNOPSIS</h4>\r
+<pre><b>#include <infiniband/verbs.h></b>\r
+\r
+<b>int ibv_modify_xrc_rcv_qp(struct ibv_xrc_domain </b><i>*xrc_domain</i><b>, uint32_t </b><i>xrc_qp_num</i><b>,</b>\r
+<b> struct ibv_qp_attr </b><i>*attr</i><b>, int </b><i>attr_mask</i><b>);</b></pre>\r
+<h4>DESCRIPTION</h4>\r
+<b>ibv_modify_xrc_rcv_qp()</b> modifies the attributes of the XRC receive QP \r
+with the number <i>xrc_qp_num</i>, which is associated with the XRC domain <i>\r
+xrc_domain</i>, using the attributes in <i>attr</i> according to the mask <i>\r
+attr_mask</i>, and moves the QP state through the following transitions: \r
+Reset -> Init -> RTR. <i>attr_mask</i> should indicate all of the attributes \r
+to be used in each QP transition; at least the following masks should be set:
+<p></p>\r
+<pre>Next state Required attributes\r
+---------- ----------------------------------------\r
+Init <b> IBV_QP_STATE, IBV_QP_PKEY_INDEX, IBV_QP_PORT, </b>\r
+ <b> IBV_QP_ACCESS_FLAGS </b>\r
+RTR <b> IBV_QP_STATE, IBV_QP_AV, IBV_QP_PATH_MTU, </b>\r
+ <b> IBV_QP_DEST_QPN, IBV_QP_RQ_PSN, </b>\r
+ <b> IBV_QP_MAX_DEST_RD_ATOMIC, IBV_QP_MIN_RNR_TIMER </b>\r
+</pre>\r
+<p>The user can add optional attributes as well. </p>\r
+<p>The argument <i>attr</i> is an ibv_qp_attr struct, as defined in \r
+<infiniband/verbs.h>. </p>\r
+<p></p>\r
+<pre>struct ibv_qp_attr {\r
+enum ibv_qp_state qp_state; /* Move the QP to this state */\r
+enum ibv_qp_state cur_qp_state; /* Assume this is the current QP state */\r
+enum ibv_mtu path_mtu; /* Path MTU (valid only for RC/UC QPs) */\r
+enum ibv_mig_state path_mig_state; /* Path migration state (valid if HCA supports APM) */\r
+uint32_t qkey; /* Q_Key for the QP (valid only for UD QPs) */\r
+uint32_t rq_psn; /* PSN for receive queue (valid only for RC/UC QPs) */\r
+uint32_t sq_psn; /* PSN for send queue (valid only for RC/UC QPs) */\r
+uint32_t dest_qp_num; /* Destination QP number (valid only for RC/UC QPs) */\r
+int qp_access_flags; /* Mask of enabled remote access operations (valid only for RC/UC QPs) */\r
+struct ibv_qp_cap cap; /* QP capabilities (valid if HCA supports QP resizing) */\r
+struct ibv_ah_attr ah_attr; /* Primary path address vector (valid only for RC/UC QPs) */\r
+struct ibv_ah_attr alt_ah_attr; /* Alternate path address vector (valid only for RC/UC QPs) */\r
+uint16_t pkey_index; /* Primary P_Key index */\r
+uint16_t alt_pkey_index; /* Alternate P_Key index */\r
+uint8_t en_sqd_async_notify; /* Enable SQD.drained async notification (Valid only if qp_state is SQD) */\r
+uint8_t sq_draining; /* Is the QP draining? Irrelevant for ibv_modify_qp() */\r
+uint8_t max_rd_atomic; /* Number of outstanding RDMA reads & atomic operations on the destination QP (valid only for RC QPs) */\r
+uint8_t max_dest_rd_atomic; /* Number of responder resources for handling incoming RDMA reads & atomic operations (valid only for RC QPs) */\r
+uint8_t min_rnr_timer; /* Minimum RNR NAK timer (valid only for RC QPs) */\r
+uint8_t port_num; /* Primary port number */\r
+uint8_t timeout; /* Local ack timeout for primary path (valid only for RC QPs) */\r
+uint8_t retry_cnt; /* Retry count (valid only for RC QPs) */\r
+uint8_t rnr_retry; /* RNR retry (valid only for RC QPs) */\r
+uint8_t alt_port_num; /* Alternate port number */\r
+uint8_t alt_timeout; /* Local ack timeout for alternate path (valid only for RC QPs) */\r
+};\r
+</pre>\r
+<p>For details on struct ibv_qp_cap see the description of <b>ibv_create_qp()</b>. \r
+For details on struct ibv_ah_attr see the description of <b>ibv_create_ah()</b>.\r
+</p>\r
+<p>The argument <i>attr_mask</i> specifies the QP attributes to be modified. The \r
+argument is either 0 or the bitwise OR of one or more of the following flags:\r
+</p>\r
+<p></p>\r
+<dl COMPACT>\r
+ <dt><b>IBV_QP_STATE </b>Modify qp_state </dt>\r
+ <dd></dd>\r
+ <dt><b>IBV_QP_CUR_STATE </b>Set cur_qp_state </dt>\r
+ <dd></dd>\r
+ <dt><b>IBV_QP_EN_SQD_ASYNC_NOTIFY </b>Set en_sqd_async_notify </dt>\r
+ <dd></dd>\r
+ <dt><b>IBV_QP_ACCESS_FLAGS </b>Set qp_access_flags </dt>\r
+ <dd></dd>\r
+ <dt><b>IBV_QP_PKEY_INDEX </b>Set pkey_index </dt>\r
+ <dd></dd>\r
+ <dt><b>IBV_QP_PORT </b>Set port_num </dt>\r
+ <dd></dd>\r
+ <dt><b>IBV_QP_QKEY </b>Set qkey </dt>\r
+ <dd></dd>\r
+ <dt><b>IBV_QP_AV </b>Set ah_attr </dt>\r
+ <dd></dd>\r
+ <dt><b>IBV_QP_PATH_MTU </b>Set path_mtu </dt>\r
+ <dd></dd>\r
+ <dt><b>IBV_QP_TIMEOUT </b>Set timeout </dt>\r
+ <dd></dd>\r
+ <dt><b>IBV_QP_RETRY_CNT </b>Set retry_cnt </dt>\r
+ <dd></dd>\r
+ <dt><b>IBV_QP_RNR_RETRY </b>Set rnr_retry </dt>\r
+ <dd></dd>\r
+ <dt><b>IBV_QP_RQ_PSN </b>Set rq_psn </dt>\r
+ <dd></dd>\r
+ <dt><b>IBV_QP_MAX_QP_RD_ATOMIC </b>Set max_rd_atomic </dt>\r
+ <dd></dd>\r
+ <dt><b>IBV_QP_ALT_PATH </b>Set the alternative path via: alt_ah_attr, \r
+ alt_pkey_index, alt_port_num, alt_timeout </dt>\r
+ <dd></dd>\r
+ <dt><b>IBV_QP_MIN_RNR_TIMER </b>Set min_rnr_timer </dt>\r
+ <dd></dd>\r
+ <dt><b>IBV_QP_SQ_PSN </b>Set sq_psn </dt>\r
+ <dd></dd>\r
+ <dt><b>IBV_QP_MAX_DEST_RD_ATOMIC </b>Set max_dest_rd_atomic </dt>\r
+ <dd></dd>\r
+ <dt><b>IBV_QP_PATH_MIG_STATE </b>Set path_mig_state </dt>\r
+ <dd></dd>\r
+ <dt><b>IBV_QP_CAP </b>Set cap </dt>\r
+ <dd></dd>\r
+ <dt><b>IBV_QP_DEST_QPN </b>Set dest_qp_num </dt>\r
+ <dd></dd>\r
+</dl>\r
+<h4>RETURN VALUE</h4>\r
+<b>ibv_modify_xrc_rcv_qp()</b> returns 0 on success, or the value of errno on \r
+failure (which indicates the failure reason).
+<h4>NOTES</h4>\r
+If any of the modify attributes or the modify mask are invalid, none of the \r
+attributes will be modified (including the QP state).
+<p>Not all devices support alternate paths. To check if a device supports it, \r
+check if the <b>IBV_DEVICE_AUTO_PATH_MIG</b> bit is set in the device \r
+capabilities flags.</p>\r
+<h4>SEE ALSO</h4>\r
+<b><a href="#IBV_OPEN_XRC_DOMAIN">ibv_open_xrc_domain</a></b>, <b>\r
+<a href="#IBV_CREATE_XRC_RCV_QP">ibv_create_xrc_rcv_qp</a></b>, <b>\r
+<a href="#IBV_QUERY_XRC_RCV_QP">ibv_query_xrc_rcv_qp</a></b>
+<h3> </h3>\r
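+<p><i>Example (illustrative sketch):</i> bringing an XRC receive QP from Reset \r
+to RTR with exactly the required masks from the table above (legacy OFED XRC \r
+API). The domain <i>dom</i>, QP number <i>rcv_qpn</i>, port, DLID and remote QP \r
+number are assumed to come from the caller; the MTU and timer values are \r
+arbitrary examples.</p>\r

```c
#include <string.h>
#include <infiniband/verbs.h>

/* Reset -> Init -> RTR for an XRC receive QP; returns 0 on success. */
static int xrc_rcv_qp_to_rtr(struct ibv_xrc_domain *dom, uint32_t rcv_qpn,
                             uint8_t port, uint16_t dlid, uint32_t dest_qpn)
{
    struct ibv_qp_attr attr;

    /* Reset -> Init */
    memset(&attr, 0, sizeof attr);
    attr.qp_state        = IBV_QPS_INIT;
    attr.pkey_index      = 0;
    attr.port_num        = port;
    attr.qp_access_flags = 0;
    if (ibv_modify_xrc_rcv_qp(dom, rcv_qpn, &attr,
                              IBV_QP_STATE | IBV_QP_PKEY_INDEX |
                              IBV_QP_PORT | IBV_QP_ACCESS_FLAGS))
        return -1;

    /* Init -> RTR */
    memset(&attr, 0, sizeof attr);
    attr.qp_state           = IBV_QPS_RTR;
    attr.path_mtu           = IBV_MTU_1024;
    attr.dest_qp_num        = dest_qpn;
    attr.rq_psn             = 0;
    attr.max_dest_rd_atomic = 1;
    attr.min_rnr_timer      = 12;
    attr.ah_attr.dlid       = dlid;
    attr.ah_attr.port_num   = port;
    return ibv_modify_xrc_rcv_qp(dom, rcv_qpn, &attr,
                                 IBV_QP_STATE | IBV_QP_AV | IBV_QP_PATH_MTU |
                                 IBV_QP_DEST_QPN | IBV_QP_RQ_PSN |
                                 IBV_QP_MAX_DEST_RD_ATOMIC |
                                 IBV_QP_MIN_RNR_TIMER);
}
```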
+<h3><br>\r
+<a name="IBV_OPEN_XRC_DOMAIN">IBV_OPEN_XRC_DOMAIN</a></h3>\r
+<h3><br>\r
+<a name="IBV_CLOSE_XRC_DOMAIN">IBV_CLOSE_XRC_DOMAIN</a></h3>\r
+<hr>\r
+<h4>NAME</h4>\r
+ibv_open_xrc_domain, ibv_close_xrc_domain - open or close an eXtended Reliable \r
+Connection (XRC) domain
+<h4>SYNOPSIS</h4>\r
+<pre><b>#include <fcntl.h></b>\r
+<b>#include <infiniband/verbs.h></b>\r
+\r
+<b>struct ibv_xrc_domain *ibv_open_xrc_domain(struct ibv_context </b><i>*context</i><b>,</b>\r
+<b> int </b><i>fd</i><b>, int </b><i>oflag</i><b>);</b>\r
+<b>int ibv_close_xrc_domain(struct ibv_xrc_domain </b><i>*d</i><b>);</b></pre>\r
+<h4>DESCRIPTION</h4>\r
+<b>ibv_open_xrc_domain()</b> opens an XRC domain for the InfiniBand device \r
+context <i>context</i>, or returns a reference to an already-opened one. <i>fd</i> is the \r
+file descriptor to be associated with the XRC domain. The argument <i>oflag</i> \r
+describes the desired file creation attributes; it is either 0 or the bitwise OR \r
+of one or more of the following flags:
+<p></p>\r
+<dl COMPACT>\r
+ <dt><b>O_CREAT</b> </dt>\r
+ <dd>If a domain belonging to the device named by <i>context</i> is already \r
+ associated with the inode, this flag has no effect, except as noted under \r
+ <b>O_EXCL</b> below. Otherwise, a new XRC domain is created and is \r
+ associated with the inode specified by <i>fd</i>.
+
+ </dd>\r
+ <dt><b>O_EXCL</b> </dt>\r
+ <dd>If <b>O_EXCL</b> and <b>O_CREAT</b> are set, open will fail if a domain \r
+ associated with the inode exists. The check for the existence of the domain \r
+ and creation of the domain if it does not exist is atomic with respect to \r
+ other processes executing open with <i>fd</i> naming the same inode.
+ </dd>\r
+</dl>\r
+<p>If <i>fd</i> equals -1, no inode is associated with the domain, and the \r
+only valid value for <i>oflag</i> is <b>O_CREAT</b>. </p>\r
+<p><b>ibv_close_xrc_domain()</b> closes the XRC domain <i>d</i>. If this is the \r
+last reference, the XRC domain will be destroyed. </p>\r
+<h4>RETURN VALUE</h4>\r
+<b>ibv_open_xrc_domain()</b> returns a pointer to the opened XRC domain, or \r
+NULL if the request fails.
+<p><b>ibv_close_xrc_domain()</b> returns 0 on success, or the value of errno on \r
+failure (which indicates the failure reason).</p>\r
+<h4>NOTES</h4>\r
+Not all devices support XRC. To check if a device supports it, check if the <b>\r
+IBV_DEVICE_XRC</b> bit is set in the device capabilities flags.
+<p><b>ibv_close_xrc_domain()</b> may fail if any QP or SRQ are still associated \r
+with the XRC domain being closed.</p>\r
+<h4>SEE ALSO</h4>\r
+<b><a href="#IBV_CREATE_XRC_SRQ">ibv_create_xrc_srq</a></b>, <b>\r
+<a href="#IBV_CREATE_QP">ibv_create_qp</a></b>, <b>\r
+<a href="#IBV_CREATE_XRC_RCV_QP">ibv_create_xrc_rcv_qp</a></b>,\r
+<b><a href="#IBV_MODIFY_XRC_RCV_QP">ibv_modify_xrc_rcv_qp</a></b>, <b>\r
+<a href="#IBV_QUERY_XRC_RCV_QP">ibv_query_xrc_rcv_qp</a></b>, <b>\r
+<a href="#IBV_REG_XRC_RCV_QP">ibv_reg_xrc_rcv_qp</a></b><p> </p>\r
+<h3><br>\r
+<a name="IBV_QUERY_XRC_RCV_QP">IBV_QUERY_XRC_RCV_QP</a></h3>\r
+<hr>\r
+<h4>NAME</h4>\r
+ibv_query_xrc_rcv_qp - get the attributes of an XRC receive queue pair (QP)
+<h4>SYNOPSIS</h4>\r
+<pre><b>#include <infiniband/verbs.h></b>\r
+\r
+<b>int ibv_query_xrc_rcv_qp(struct ibv_xrc_domain </b><i>*xrc_domain</i><b>, uint32_t </b><i>xrc_qp_num</i><b>,</b>\r
+<b> struct ibv_qp_attr </b><i>*attr</i><b>, int </b><i>attr_mask</i><b>,</b>\r
+<b> struct ibv_qp_init_attr </b><i>*init_attr</i><b>);</b></pre>\r
+<h4>DESCRIPTION</h4>\r
+<b>ibv_query_xrc_rcv_qp()</b> gets the attributes specified in <i>attr_mask</i> \r
+for the XRC receive QP with the number <i>xrc_qp_num</i> which is associated \r
+with the XRC domain <i>xrc_domain</i> and returns them through the pointers <i>\r
+attr</i> and <i>init_attr</i>. The argument <i>attr</i> is an ibv_qp_attr \r
+struct, as defined in <infiniband/verbs.h>.
+<p></p>\r
+<pre>struct ibv_qp_attr {\r
+enum ibv_qp_state qp_state; /* Current QP state */\r
+enum ibv_qp_state cur_qp_state; /* Current QP state - irrelevant for ibv_query_qp */\r
+enum ibv_mtu path_mtu; /* Path MTU (valid only for RC/UC QPs) */\r
+enum ibv_mig_state path_mig_state; /* Path migration state (valid if HCA supports APM) */\r
+uint32_t qkey; /* Q_Key of the QP (valid only for UD QPs) */\r
+uint32_t rq_psn; /* PSN for receive queue (valid only for RC/UC QPs) */\r
+uint32_t sq_psn; /* PSN for send queue (valid only for RC/UC QPs) */\r
+uint32_t dest_qp_num; /* Destination QP number (valid only for RC/UC QPs) */\r
+int qp_access_flags; /* Mask of enabled remote access operations (valid only for RC/UC QPs) */\r
+struct ibv_qp_cap cap; /* QP capabilities */\r
+struct ibv_ah_attr ah_attr; /* Primary path address vector (valid only for RC/UC QPs) */\r
+struct ibv_ah_attr alt_ah_attr; /* Alternate path address vector (valid only for RC/UC QPs) */\r
+uint16_t pkey_index; /* Primary P_Key index */\r
+uint16_t alt_pkey_index; /* Alternate P_Key index */\r
+uint8_t en_sqd_async_notify; /* Enable SQD.drained async notification - irrelevant for ibv_query_qp */\r
+uint8_t sq_draining; /* Is the QP draining? (Valid only if qp_state is SQD) */\r
+uint8_t max_rd_atomic; /* Number of outstanding RDMA reads & atomic operations on the destination QP (valid only for RC QPs) */\r
+uint8_t max_dest_rd_atomic; /* Number of responder resources for handling incoming RDMA reads & atomic operations (valid only for RC QPs) */\r
+uint8_t min_rnr_timer; /* Minimum RNR NAK timer (valid only for RC QPs) */\r
+uint8_t port_num; /* Primary port number */\r
+uint8_t timeout; /* Local ack timeout for primary path (valid only for RC QPs) */\r
+uint8_t retry_cnt; /* Retry count (valid only for RC QPs) */\r
+uint8_t rnr_retry; /* RNR retry (valid only for RC QPs) */\r
+uint8_t alt_port_num; /* Alternate port number */\r
+uint8_t alt_timeout; /* Local ack timeout for alternate path (valid only for RC QPs) */\r
+};\r
+</pre>\r
+<p>For details on struct ibv_qp_cap see the description of <b>ibv_create_qp()</b>. \r
+For details on struct ibv_ah_attr see the description of <b>ibv_create_ah()</b>.</p>\r
+<h4>RETURN VALUE</h4>\r
+<b>ibv_query_xrc_rcv_qp()</b> returns 0 on success, or the value of errno on \r
+failure (which indicates the failure reason).<h4>NOTES</h4>\r
+The argument <i>attr_mask</i> is a hint that specifies the minimum list of \r
+attributes to retrieve. Some InfiniBand devices may return extra attributes not \r
+requested, for example if the value can be returned cheaply.
+<p>Attribute values are valid if they have been set using <b>\r
+ibv_modify_xrc_rcv_qp()</b>. The exact list of valid attributes depends on the \r
+QP state. </p>\r
+<p>Multiple calls to <b>ibv_query_xrc_rcv_qp()</b> may yield some differences in \r
+the values returned for the following attributes: qp_state, path_mig_state, \r
+sq_draining, ah_attr (if APM is enabled).</p>\r
+<h4>SEE ALSO</h4>\r
+<b><a href="#IBV_OPEN_XRC_DOMAIN">ibv_open_xrc_domain</a></b>, <b>\r
+<a href="#IBV_CREATE_XRC_RCV_QP">ibv_create_xrc_rcv_qp</a></b>, <b>\r
+<a href="#IBV_MODIFY_XRC_RCV_QP">ibv_modify_xrc_rcv_qp</a></b><p> </p>\r
+<h3><br>\r
+<a name="IBV_REG_XRC_RCV_QP">IBV_REG_XRC_RCV_QP</a></h3>\r
+<h3><br>\r
+<a name="IBV_UNREG_XRC_RCV_QP">IBV_UNREG_XRC_RCV_QP</a></h3>\r
+<hr>\r
+<h4>NAME</h4>\r
+ibv_reg_xrc_rcv_qp, ibv_unreg_xrc_rcv_qp - register and unregister a user \r
+process with an XRC receive queue pair (QP) <a NAME="lbAC"> </a>
+<h4>SYNOPSIS</h4>\r
+<pre><b>#include <infiniband/verbs.h></b>\r
+\r
+<b>int ibv_reg_xrc_rcv_qp(struct ibv_xrc_domain </b><i>*xrc_domain</i><b>, uint32_t </b><i>xrc_qp_num</i><b>);</b>\r
+<b>int ibv_unreg_xrc_rcv_qp(struct ibv_xrc_domain </b><i>*xrc_domain</i><b>, uint32_t </b><i>xrc_qp_num</i><b>);</b> </pre>\r
+<h4>DESCRIPTION</h4>\r
+<b>ibv_reg_xrc_rcv_qp()</b> registers a user process with the XRC receive QP \r
+(created via <b>ibv_create_xrc_rcv_qp()</b> ) whose number is <i>xrc_qp_num</i>, \r
+and which is associated with the XRC domain <i>xrc_domain</i>.
+
+<p><b>ibv_unreg_xrc_rcv_qp()</b> unregisters a user process from the XRC receive \r
+QP number <i>xrc_qp_num</i>, which is associated with the XRC domain <i>\r
+xrc_domain</i>. When the number of user processes registered with this XRC \r
+receive QP drops to zero, the QP is destroyed.</p>\r
+<h4>RETURN VALUE</h4>\r
+<b>ibv_reg_xrc_rcv_qp()</b> and <b>ibv_unreg_xrc_rcv_qp()</b> return 0 on \r
+success, or the value of errno on failure (which indicates the failure reason).<h4>\r
+NOTES</h4>\r
+<b>ibv_reg_xrc_rcv_qp()</b> and <b>ibv_unreg_xrc_rcv_qp()</b> may fail if the \r
+number <i>xrc_qp_num</i> is not a number of a valid XRC receive QP (the QP is \r
+not allocated or it is the number of a non-XRC QP), or the XRC receive QP was \r
+created with an XRC domain other than <i>xrc_domain</i>.
+
+<p>If a process is still registered with any XRC RCV QPs belonging to some \r
+domain, <b>ibv_close_xrc_domain()</b> will return failure if called for that \r
+domain in that process. </p>\r
+<p><b>ibv_create_xrc_rcv_qp()</b> performs an implicit registration for the \r
+creating process; when that process is finished with the XRC RCV QP, it should \r
+call <b>ibv_unreg_xrc_rcv_qp()</b> for that QP. Note that if no other processes \r
+are registered with the QP at this time, its registration count will drop to \r
+zero and it will be destroyed. <a NAME="lbAG"> </a> </p>\r
+<h4>SEE ALSO</h4>\r
+<b><a href="#IBV_OPEN_XRC_DOMAIN">ibv_open_xrc_domain</a></b>, <b>\r
+<a href="#IBV_CREATE_XRC_RCV_QP">ibv_create_xrc_rcv_qp</a></b><p> </p>\r
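+<p><i>Example (illustrative sketch):</i> the receive-side lifecycle of an XRC \r
+receive QP using the legacy OFED XRC API. <i>ctx</i> is an already-opened \r
+device context; error handling is omitted for brevity.</p>\r

```c
#include <fcntl.h>
#include <string.h>
#include <infiniband/verbs.h>

/* Open a domain, create an XRC receive QP, then tear both down. */
static void xrc_rcv_qp_lifecycle(struct ibv_context *ctx)
{
    struct ibv_xrc_domain *dom;
    struct ibv_qp_init_attr init_attr;
    uint32_t rcv_qpn;

    dom = ibv_open_xrc_domain(ctx, -1, O_CREAT);   /* fd == -1: no inode */

    memset(&init_attr, 0, sizeof init_attr);
    init_attr.xrc_domain = dom;                    /* only field that matters */
    ibv_create_xrc_rcv_qp(&init_attr, &rcv_qpn);   /* creator auto-registered */

    /* ... pass rcv_qpn to the sender; receive via an XRC SRQ ... */

    ibv_unreg_xrc_rcv_qp(dom, rcv_qpn);  /* last unregister destroys the QP */
    ibv_close_xrc_domain(dom);           /* last reference destroys the domain */
}
```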
+<p align="left"> </p>\r
+<h3><br>\r
+<a name="IBV_CREATE_QP">IBV_CREATE_QP</a></h3>\r
+<h3><br>\r
+<a name="IBV_DESTROY_QP">IBV_DESTROY_QP</a></h3>\r
+<hr>\r
+<h4>NAME</h4>\r
+ibv_create_qp, ibv_destroy_qp - create or destroy a queue pair (QP)<h4>SYNOPSIS</h4>\r
+<pre><b>#include <infiniband/verbs.h></b>\r
+\r
+<b>struct ibv_qp *ibv_create_qp(struct ibv_pd </b><i>*pd</i><b>,</b>\r
+<b> struct ibv_qp_init_attr </b><i>*qp_init_attr</i><b>);</b>\r
+\r
+<b>int ibv_destroy_qp(struct ibv_qp </b><i>*qp</i><b>);</b></pre>\r
+<h4>DESCRIPTION</h4>\r
+<b>ibv_create_qp()</b> creates a queue pair (QP) associated with the protection \r
+domain <i>pd</i>. The argument <i>qp_init_attr</i> is an ibv_qp_init_attr \r
+struct, as defined in <infiniband/verbs.h>.
+<p></p>\r
+<pre>struct ibv_qp_init_attr {\r
+void *qp_context; /* Associated context of the QP */\r
+struct ibv_cq *send_cq; /* CQ to be associated with the Send Queue (SQ) */ \r
+struct ibv_cq *recv_cq; /* CQ to be associated with the Receive Queue (RQ) */\r
+struct ibv_srq *srq; /* SRQ handle if QP is to be associated with an SRQ, otherwise NULL */\r
+struct ibv_qp_cap cap; /* QP capabilities */\r
+enum ibv_qp_type qp_type; /* QP Transport Service Type: IBV_QPT_RC, IBV_QPT_UC, IBV_QPT_UD or IBV_QPT_XRC */\r
+int sq_sig_all; /* If set, each Work Request (WR) submitted to the SQ generates a completion entry */\r
+struct ibv_xrc_domain *xrc_domain; /* XRC domain the QP will be associated with (valid only for IBV_QPT_XRC QP), otherwise NULL */\r
+};\r
+\r
+struct ibv_qp_cap {\r
+uint32_t max_send_wr; /* Requested max number of outstanding WRs in the SQ */\r
+uint32_t max_recv_wr; /* Requested max number of outstanding WRs in the RQ */\r
+uint32_t max_send_sge; /* Requested max number of scatter/gather (s/g) elements in a WR in the SQ */\r
+uint32_t max_recv_sge; /* Requested max number of s/g elements in a WR in the RQ */\r
+uint32_t max_inline_data;/* Requested max number of data (bytes) that can be posted inline to the SQ, otherwise 0 */\r
+};\r
+</pre>\r
+<p>The function <b>ibv_create_qp()</b> will update the <i>qp_init_attr</i>->cap \r
+struct with the actual values of the QP that was created; the values will be \r
+greater than or equal to the values requested. </p>\r
+<p><b>ibv_destroy_qp()</b> destroys the QP <i>qp</i>.</p>\r
+<h4>RETURN VALUE</h4>\r
+<b>ibv_create_qp()</b> returns a pointer to the created QP, or NULL if the \r
+request fails. The QP number is available in the <b>qp_num</b> field of the returned QP.
+<p><b>ibv_destroy_qp()</b> returns 0 on success, or the value of errno on \r
+failure (which indicates the failure reason).</p>\r
+<h4>NOTES</h4>\r
+<b>ibv_create_qp()</b> will fail if it is asked to create a QP of a type other \r
+than <b>IBV_QPT_RC</b> or <b>IBV_QPT_UD</b> that is associated with an SRQ.
+<p>The attributes max_recv_wr and max_recv_sge are ignored by <b>ibv_create_qp()</b> \r
+if the QP is to be associated with an SRQ. </p>\r
+<p><b>ibv_destroy_qp()</b> fails if the QP is attached to a multicast group.</p>\r
+<h4>SEE ALSO</h4>\r
+<b><a href="#IBV_ALLOC_PD">ibv_alloc_pd</a></b>, <b><a href="#IBV_MODIFY_QP">ibv_modify_qp</a></b>, <b>\r
+<a href="#IBV_QUERY_QP">ibv_query_qp</a></b>
+<p> </p>\r
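+<p><i>Example (illustrative sketch):</i> creating an RC QP on an existing \r
+protection domain and CQ. <i>pd</i> and <i>cq</i> are assumed to have been \r
+created already; the capability sizes are arbitrary requests that the device \r
+may round up.</p>\r

```c
#include <string.h>
#include <stdio.h>
#include <infiniband/verbs.h>

/* Create an RC QP; returns the QP pointer or NULL on failure. */
static struct ibv_qp *create_rc_qp(struct ibv_pd *pd, struct ibv_cq *cq)
{
    struct ibv_qp_init_attr init_attr;

    memset(&init_attr, 0, sizeof init_attr);
    init_attr.send_cq          = cq;
    init_attr.recv_cq          = cq;         /* SQ and RQ may share a CQ */
    init_attr.qp_type          = IBV_QPT_RC;
    init_attr.cap.max_send_wr  = 32;         /* requested; may be rounded up */
    init_attr.cap.max_recv_wr  = 32;
    init_attr.cap.max_send_sge = 1;
    init_attr.cap.max_recv_sge = 1;

    struct ibv_qp *qp = ibv_create_qp(pd, &init_attr);
    if (qp)
        printf("QP 0x%x created, actual max_send_wr=%u\n",
               qp->qp_num, init_attr.cap.max_send_wr);
    return qp;
}
```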
+<h3><br>\r
+<a name="IBV_MODIFY_QP">IBV_MODIFY_QP</a></h3>\r
+<hr>\r
+<h4>NAME</h4>\r
+ibv_modify_qp - modify the attributes of a queue pair (QP)
+<h4>SYNOPSIS</h4>\r
+<pre><b>#include <infiniband/verbs.h></b>\r
+\r
+<b>int ibv_modify_qp(struct ibv_qp </b><i>*qp</i><b>, struct ibv_qp_attr </b><i>*attr</i><b>,</b>\r
+<b> int </b><i>attr_mask</i><b>);</b> </pre>\r
+<h4>DESCRIPTION</h4>\r
+<b>ibv_modify_qp()</b> modifies the attributes of QP <i>qp</i> with the \r
+attributes in <i>attr</i> according to the mask <i>attr_mask</i>. The argument\r
+<i>attr</i> is an ibv_qp_attr struct, as defined in <infiniband/verbs.h>.
+<p></p>\r
+<pre>struct ibv_qp_attr {\r
+enum ibv_qp_state qp_state; /* Move the QP to this state */\r
+enum ibv_qp_state cur_qp_state; /* Assume this is the current QP state */\r
+enum ibv_mtu path_mtu; /* Path MTU (valid only for RC/UC QPs) */\r
+enum ibv_mig_state path_mig_state; /* Path migration state (valid if HCA supports APM) */\r
+uint32_t qkey; /* Q_Key for the QP (valid only for UD QPs) */\r
+uint32_t rq_psn; /* PSN for receive queue (valid only for RC/UC QPs) */\r
+uint32_t sq_psn; /* PSN for send queue (valid only for RC/UC QPs) */\r
+uint32_t dest_qp_num; /* Destination QP number (valid only for RC/UC QPs) */\r
+int qp_access_flags; /* Mask of enabled remote access operations (valid only for RC/UC QPs) */\r
+struct ibv_qp_cap cap; /* QP capabilities (valid if HCA supports QP resizing) */\r
+struct ibv_ah_attr ah_attr; /* Primary path address vector (valid only for RC/UC QPs) */\r
+struct ibv_ah_attr alt_ah_attr; /* Alternate path address vector (valid only for RC/UC QPs) */\r
+uint16_t pkey_index; /* Primary P_Key index */\r
+uint16_t alt_pkey_index; /* Alternate P_Key index */\r
+uint8_t en_sqd_async_notify; /* Enable SQD.drained async notification (Valid only if qp_state is SQD) */\r
+uint8_t sq_draining; /* Is the QP draining? Irrelevant for ibv_modify_qp() */\r
+uint8_t max_rd_atomic; /* Number of outstanding RDMA reads & atomic operations on the destination QP (valid only for RC QPs) */\r
+uint8_t max_dest_rd_atomic; /* Number of responder resources for handling incoming RDMA reads & atomic operations (valid only for RC QPs) */\r
+uint8_t min_rnr_timer; /* Minimum RNR NAK timer (valid only for RC QPs) */\r
+uint8_t port_num; /* Primary port number */\r
+uint8_t timeout; /* Local ack timeout for primary path (valid only for RC QPs) */\r
+uint8_t retry_cnt; /* Retry count (valid only for RC QPs) */\r
+uint8_t rnr_retry; /* RNR retry (valid only for RC QPs) */\r
+uint8_t alt_port_num; /* Alternate port number */\r
+uint8_t alt_timeout; /* Local ack timeout for alternate path (valid only for RC QPs) */\r
+};\r
+</pre>\r
+<p>For details on struct ibv_qp_cap see the description of <b>ibv_create_qp()</b>. \r
+For details on struct ibv_ah_attr see the description of <b>ibv_create_ah()</b>.\r
+</p>\r
+<p>The argument <i>attr_mask</i> specifies the QP attributes to be modified. The \r
+argument is either 0 or the bitwise OR of one or more of the following flags:\r
+</p>\r
+<p></p>\r
+<dl COMPACT>\r
+ <dt><b>IBV_QP_STATE </b>Modify qp_state </dt>\r
+ <dd></dd>\r
+ <dt><b>IBV_QP_CUR_STATE </b>Set cur_qp_state </dt>\r
+ <dd></dd>\r
+ <dt><b>IBV_QP_EN_SQD_ASYNC_NOTIFY </b>Set en_sqd_async_notify </dt>\r
+ <dd></dd>\r
+ <dt><b>IBV_QP_ACCESS_FLAGS </b>Set qp_access_flags </dt>\r
+ <dd></dd>\r
+ <dt><b>IBV_QP_PKEY_INDEX </b>Set pkey_index </dt>\r
+ <dd></dd>\r
+ <dt><b>IBV_QP_PORT </b>Set port_num </dt>\r
+ <dd></dd>\r
+ <dt><b>IBV_QP_QKEY </b>Set qkey </dt>\r
+ <dd></dd>\r
+ <dt><b>IBV_QP_AV </b>Set ah_attr </dt>\r
+ <dd></dd>\r
+ <dt><b>IBV_QP_PATH_MTU </b>Set path_mtu </dt>\r
+ <dd></dd>\r
+ <dt><b>IBV_QP_TIMEOUT </b>Set timeout </dt>\r
+ <dd></dd>\r
+ <dt><b>IBV_QP_RETRY_CNT </b>Set retry_cnt </dt>\r
+ <dd></dd>\r
+ <dt><b>IBV_QP_RNR_RETRY </b>Set rnr_retry </dt>\r
+ <dd></dd>\r
+ <dt><b>IBV_QP_RQ_PSN </b>Set rq_psn </dt>\r
+ <dd></dd>\r
+ <dt><b>IBV_QP_MAX_QP_RD_ATOMIC </b>Set max_rd_atomic </dt>\r
+ <dd></dd>\r
+ <dt><b>IBV_QP_ALT_PATH </b>Set the alternative path via: alt_ah_attr, \r
+ alt_pkey_index, alt_port_num, alt_timeout </dt>\r
+ <dd></dd>\r
+ <dt><b>IBV_QP_MIN_RNR_TIMER </b>Set min_rnr_timer </dt>\r
+ <dd></dd>\r
+ <dt><b>IBV_QP_SQ_PSN </b>Set sq_psn </dt>\r
+ <dd></dd>\r
+ <dt><b>IBV_QP_MAX_DEST_RD_ATOMIC </b>Set max_dest_rd_atomic </dt>\r
+ <dd></dd>\r
+ <dt><b>IBV_QP_PATH_MIG_STATE </b>Set path_mig_state </dt>\r
+ <dd></dd>\r
+ <dt><b>IBV_QP_CAP </b>Set cap </dt>\r
+ <dd></dd>\r
+ <dt><b>IBV_QP_DEST_QPN </b>Set dest_qp_num </dt>\r
+ <dd></dd>\r
+</dl>\r
+<h4>RETURN VALUE</h4>\r
+<b>ibv_modify_qp()</b> returns 0 on success, or the value of errno on failure \r
+(which indicates the failure reason).<h4>NOTES</h4>\r
+If any of the modify attributes or the modify mask are invalid, none of the \r
+attributes will be modified (including the QP state).
+<p>Not all devices support resizing QPs. To check if a device supports it, check \r
+if the <b>IBV_DEVICE_RESIZE_MAX_WR</b> bit is set in the device capabilities \r
+flags. </p>\r
+<p>Not all devices support alternate paths. To check if a device supports it, \r
+check if the <b>IBV_DEVICE_AUTO_PATH_MIG</b> bit is set in the device \r
+capabilities flags. </p>\r
+<p>The following tables indicate for each QP Transport Service Type, the minimum \r
+list of attributes that must be changed upon transitioning QP state from: Reset \r
+--> Init --> RTR --> RTS. </p>\r
+<p></p>\r
+<pre>For QP Transport Service Type <b> IBV_QPT_UD</b>:\r
+\r
+Next state Required attributes\r
+---------- ----------------------------------------\r
+Init <b> IBV_QP_STATE, IBV_QP_PKEY_INDEX, IBV_QP_PORT, </b>\r
+ <b> IBV_QP_QKEY </b>\r
+RTR <b> IBV_QP_STATE </b>\r
+RTS <b> IBV_QP_STATE, IBV_QP_SQ_PSN </b>\r
+</pre>\r
+<p></p>\r
+<pre>For QP Transport Service Type <b> IBV_QPT_UC</b>:\r
+\r
+Next state Required attributes\r
+---------- ----------------------------------------\r
+Init <b> IBV_QP_STATE, IBV_QP_PKEY_INDEX, IBV_QP_PORT, </b>\r
+ <b> IBV_QP_ACCESS_FLAGS </b>\r
+RTR <b> IBV_QP_STATE, IBV_QP_AV, IBV_QP_PATH_MTU, </b>\r
+ <b> IBV_QP_DEST_QPN, IBV_QP_RQ_PSN </b>\r
+RTS <b> IBV_QP_STATE, IBV_QP_SQ_PSN </b>\r
+</pre>\r
+<p></p>\r
+<pre>For QP Transport Service Type <b> IBV_QPT_RC</b>:\r
+\r
+Next state Required attributes\r
+---------- ----------------------------------------\r
+Init <b> IBV_QP_STATE, IBV_QP_PKEY_INDEX, IBV_QP_PORT, </b>\r
+ <b> IBV_QP_ACCESS_FLAGS </b>\r
+RTR <b> IBV_QP_STATE, IBV_QP_AV, IBV_QP_PATH_MTU, </b>\r
+ <b> IBV_QP_DEST_QPN, IBV_QP_RQ_PSN, </b>\r
+ <b> IBV_QP_MAX_DEST_RD_ATOMIC, IBV_QP_MIN_RNR_TIMER </b>\r
+RTS <b> IBV_QP_STATE, IBV_QP_SQ_PSN, IBV_QP_MAX_QP_RD_ATOMIC, </b>\r
+ <b> IBV_QP_RETRY_CNT, IBV_QP_RNR_RETRY, IBV_QP_TIMEOUT</b></pre>\r
+<h4>SEE ALSO</h4>\r
+<b><a href="#IBV_CREATE_QP">ibv_create_qp</a></b>, <b><a href="#IBV_DESTROY_QP">ibv_destroy_qp</a></b>, <b>\r
+<a href="#IBV_QUERY_QP">ibv_query_qp</a></b>, <b>\r
+<a href="#IBV_CREATE_AH">ibv_create_ah</a></b><p> </p>\r
+<p> </p>\r
+<h3><br>\r
+<a name="IBV_POST_RECV">IBV_POST_RECV</a></h3>\r
+<hr>\r
+<h4>NAME</h4>\r
+ibv_post_recv - post a list of work requests (WRs) to a receive queue<h4>\r
+SYNOPSIS</h4>\r
+<pre><b>#include <infiniband/verbs.h></b>\r
+\r
+<b>int ibv_post_recv(struct ibv_qp </b><i>*qp</i><b>, struct ibv_recv_wr </b><i>*wr</i><b>,</b>\r
+<b> struct ibv_recv_wr </b><i>**bad_wr</i><b>);</b></pre>\r
+<h4>DESCRIPTION</h4>\r
+<b>ibv_post_recv()</b> posts the linked list of work requests (WRs) starting \r
+with <i>wr</i> to the receive queue of the queue pair <i>qp</i>. It stops \r
+processing WRs from this list at the first failure (that can be detected \r
+immediately while requests are being posted), and returns this failing WR \r
+through <i>bad_wr</i>.
+
+<p>The argument <i>wr</i> is an ibv_recv_wr struct, as defined in <infiniband/verbs.h>.\r
+</p>\r
+<p></p>\r
+<pre>struct ibv_recv_wr {\r
+uint64_t wr_id; /* User defined WR ID */\r
+struct ibv_recv_wr *next; /* Pointer to next WR in list, NULL if last WR */\r
+struct ibv_sge *sg_list; /* Pointer to the s/g array */\r
+int num_sge; /* Size of the s/g array */\r
+};\r
+\r
+struct ibv_sge {\r
+uint64_t addr; /* Start address of the local memory buffer */\r
+uint32_t length; /* Length of the buffer */\r
+uint32_t lkey; /* Key of the local Memory Region */\r
+};</pre>\r
+<h4>RETURN VALUE</h4>\r
+<b>ibv_post_recv()</b> returns 0 on success, or the value of errno on failure \r
+(which indicates the failure reason).
+<h4>NOTES</h4>\r
+The buffers used by a WR can only be safely reused after the WR is fully \r
+executed and a work completion has been retrieved from the corresponding \r
+completion queue (CQ).
+<p>If the QP <i>qp</i> is associated with a shared receive queue, you must use \r
+the function <b>ibv_post_srq_recv()</b>, and not <b>ibv_post_recv()</b>, since \r
+the QP's own receive queue will not be used. </p>\r
+<p>If a WR is being posted to a UD QP, the Global Routing Header (GRH) of the \r
+incoming message will be placed in the first 40 bytes of the buffer(s) in the \r
+scatter list. If no GRH is present in the incoming message, then the first bytes \r
+will be undefined. This means that in all cases, the actual data of the incoming \r
+message will start at an offset of 40 bytes into the buffer(s) in the scatter \r
+list. </p>\r
+<h4>SEE ALSO</h4>\r
+<b><a href="#IBV_CREATE_QP">ibv_create_qp</a></b>, <b><a href="#IBV_POST_SEND">ibv_post_send</a></b>, <b>\r
+<a href="#IBV_POST_SRQ_RECV">ibv_post_srq_recv</a></b>,\r
+<b><a href="#IBV_POLL_CQ">ibv_poll_cq</a></b>
+<p> </p>\r
+<p> </p>\r
+<h3><br>\r
+<a name="IBV_POST_SEND">IBV_POST_SEND</a></h3>\r
+<hr>\r
+<h4>NAME</h4>\r
+ibv_post_send - post a list of work requests (WRs) to a send queue
+<h4>SYNOPSIS</h4>\r
+<pre><b>#include <infiniband/verbs.h></b>\r
+\r
+<b>int ibv_post_send(struct ibv_qp </b><i>*qp</i><b>, struct ibv_send_wr </b><i>*wr</i><b>,</b>\r
+<b> struct ibv_send_wr </b><i>**bad_wr</i><b>);</b> </pre>\r
+<h4>DESCRIPTION</h4>\r
+<b>ibv_post_send()</b> posts the linked list of work requests (WRs) starting \r
+with <i>wr</i> to the send queue of the queue pair <i>qp</i>. It stops \r
+processing WRs from this list at the first failure (that can be detected \r
+immediately while requests are being posted), and returns this failing WR \r
+through <i>bad_wr</i>.
+
+<p>The argument <i>wr</i> is an ibv_send_wr struct, as defined in <infiniband/verbs.h>.\r
+</p>\r
+<p></p>\r
+<pre>struct ibv_send_wr {\r
+uint64_t wr_id; /* User defined WR ID */\r
+struct ibv_send_wr *next; /* Pointer to next WR in list, NULL if last WR */\r
+struct ibv_sge *sg_list; /* Pointer to the s/g array */\r
+int num_sge; /* Size of the s/g array */\r
+enum ibv_wr_opcode opcode; /* Operation type */\r
+int send_flags; /* Flags of the WR properties */\r
+uint32_t imm_data; /* Immediate data (in network byte order) */\r
+union {\r
+struct {\r
+uint64_t remote_addr; /* Start address of remote memory buffer */\r
+uint32_t rkey; /* Key of the remote Memory Region */\r
+} rdma;\r
+struct {\r
+uint64_t remote_addr; /* Start address of remote memory buffer */ \r
+uint64_t compare_add; /* Compare operand */\r
+uint64_t swap; /* Swap operand */\r
+uint32_t rkey; /* Key of the remote Memory Region */\r
+} atomic;\r
+struct {\r
+struct ibv_ah *ah; /* Address handle (AH) for the remote node address */\r
+uint32_t remote_qpn; /* QP number of the destination QP */\r
+uint32_t remote_qkey; /* Q_Key number of the destination QP */\r
+} ud;\r
+} wr;\r
+uint32_t xrc_remote_srq_num; /* SRQ number of the destination XRC */\r
+};\r
+\r
+struct ibv_sge {\r
+uint64_t addr; /* Start address of the local memory buffer */\r
+uint32_t length; /* Length of the buffer */\r
+uint32_t lkey; /* Key of the local Memory Region */\r
+};\r
+</pre>\r
+<p>Each QP Transport Service Type supports a specific set of opcodes, as shown \r
+in the following table: </p>\r
+<p></p>\r
+<pre>OPCODE | IBV_QPT_UD | IBV_QPT_UC | IBV_QPT_RC | IBV_QPT_XRC\r
+----------------------------+------------+------------+------------+------------\r
+IBV_WR_SEND | X | X | X | X\r
+IBV_WR_SEND_WITH_IMM | X | X | X | X\r
+IBV_WR_RDMA_WRITE | | X | X | X\r
+IBV_WR_RDMA_WRITE_WITH_IMM | | X | X | X\r
+IBV_WR_RDMA_READ | | | X | X\r
+IBV_WR_ATOMIC_CMP_AND_SWP | | | X | X\r
+IBV_WR_ATOMIC_FETCH_AND_ADD | | | X | X\r
+</pre>\r
+<p>The attribute send_flags describes the properties of the <font SIZE="-1">WR</font>. \r
+It is either 0 or the bitwise <font SIZE="-1">OR</font> of one or more of the \r
+following flags: </p>\r
+<p></p>\r
+<dl COMPACT>\r
+ <dt><b>IBV_SEND_FENCE </b>Set the fence indicator. Valid only for QPs with \r
+ Transport Service Type <b>IBV_QPT_RC</b> </dt>\r
+ <dd></dd>\r
+ <dt><b>IBV_SEND_SIGNALED </b>Set the completion notification indicator. \r
+ Relevant only if QP was created with sq_sig_all=0 </dt>\r
+ <dd></dd>\r
+ <dt><b>IBV_SEND_SOLICITED </b>Set the solicited event indicator. Valid only \r
+ for Send and RDMA Write with immediate </dt>\r
+ <dd></dd>\r
+  <dt><b>IBV_SEND_INLINE </b>Send data in given gather list as inline data \r
+  in a send WQE. Valid only for Send and RDMA Write. The L_Key will not be \r
+  checked. </dt>\r
+  <dd></dd>\r
+</dl>\r
+<h4>RETURN VALUE</h4>\r
+<b>ibv_post_send()</b> returns 0 on success, or the value of errno on failure \r
+(which indicates the failure reason).
+<h4>NOTES</h4>\r
+To avoid unexpected behavior, the user should not alter or destroy AHs \r
+associated with WRs until the request is fully executed and a work completion \r
+has been retrieved from the corresponding completion queue (CQ).
+<p>The buffers used by a WR can only be safely reused after the WR is \r
+fully executed and a work completion has been retrieved from the corresponding \r
+completion queue (CQ). However, if the IBV_SEND_INLINE flag was set, the buffer \r
+can be reused immediately after the call returns.</p>\r
+<h4>SEE ALSO</h4>\r
+<b><a href="#IBV_CREATE_QP">ibv_create_qp</a></b>, <b>\r
+<a href="#IBV_CREATE_XRC_RCV_QP">ibv_create_xrc_rcv_qp</a></b>, <b>\r
+<a href="#IBV_CREATE_AH">ibv_create_ah</a></b>,\r
+<b><a href="#IBV_POST_RECV">ibv_post_recv</a></b>, <b>\r
+<a href="#IBV_POST_SRQ_RECV">ibv_post_srq_recv</a></b>, <b>\r
+<a href="#IBV_POLL_CQ">ibv_poll_cq</a></b><p> </p>\r
+<p> </p>\r
+<h3><br>\r
+<a name="IBV_POST_SRQ_RECV">IBV_POST_SRQ_RECV</a></h3>\r
+<hr>\r
+<h4>NAME</h4>\r
+ibv_post_srq_recv - post a list of work requests (WRs) to a shared receive queue \r
+(SRQ)<h4>SYNOPSIS</h4>\r
+<pre><b>#include <infiniband/verbs.h></b>\r
+\r
+<b>int ibv_post_srq_recv(struct ibv_srq </b><i>*srq</i><b>, struct ibv_recv_wr </b><i>*wr</i><b>,</b>\r
+<b> struct ibv_recv_wr </b><i>**bad_wr</i><b>);</b></pre>\r
+<h4>DESCRIPTION</h4>\r
+<b>ibv_post_srq_recv()</b> posts the linked list of work requests (WRs) starting \r
+with <i>wr</i> to the shared receive queue (SRQ) <i>srq</i>. It stops processing \r
+WRs from this list at the first failure (that can be detected immediately while \r
+requests are being posted), and returns this failing WR through <i>bad_wr</i>.
+
+<p>The argument <i>wr</i> is an ibv_recv_wr struct, as defined in <infiniband/verbs.h>.\r
+</p>\r
+<p></p>\r
+<pre>struct ibv_recv_wr {\r
+uint64_t wr_id; /* User defined WR ID */\r
+struct ibv_recv_wr *next; /* Pointer to next WR in list, NULL if last WR */\r
+struct ibv_sge *sg_list; /* Pointer to the s/g array */\r
+int num_sge; /* Size of the s/g array */\r
+};\r
+\r
+struct ibv_sge {\r
+uint64_t addr; /* Start address of the local memory buffer */\r
+uint32_t length; /* Length of the buffer */\r
+uint32_t lkey; /* Key of the local Memory Region */\r
+};</pre>\r
+<h4>RETURN VALUE</h4>\r
+<b>ibv_post_srq_recv()</b> returns 0 on success, or the value of errno on \r
+failure (which indicates the failure reason).<h4>NOTES</h4>\r
+The buffers used by a WR can only be safely reused after the WR is fully \r
+executed and a work completion has been retrieved from the corresponding \r
+completion queue (CQ).
+<p>If a WR is being posted to a UD QP, the Global Routing Header (GRH) of the \r
+incoming message will be placed in the first 40 bytes of the buffer(s) in the \r
+scatter list. If no GRH is present in the incoming message, then the first bytes \r
+will be undefined. This means that in all cases, the actual data of the incoming \r
+message will start at an offset of 40 bytes into the buffer(s) in the scatter \r
+list.</p>\r
+<h4>SEE ALSO</h4>\r
+<b><a href="#IBV_CREATE_QP">ibv_create_qp</a></b>, <b><a href="#IBV_POST_SEND">ibv_post_send</a></b>, <b>\r
+<a href="#IBV_POST_RECV">ibv_post_recv</a></b>, <b>\r
+<a href="#IBV_POLL_CQ">ibv_poll_cq</a></b>
+<p> </p>\r
+<p> </p>\r
+<h3><br>\r
+<a name="IBV_QUERY_QP">IBV_QUERY_QP</a></h3>\r
+<hr>\r
+<h4>NAME</h4>\r
+ibv_query_qp - get the attributes of a queue pair (QP)<h4>SYNOPSIS</h4>\r
+<pre><b>#include <infiniband/verbs.h></b>\r
+\r
+<b>int ibv_query_qp(struct ibv_qp </b><i>*qp</i><b>, struct ibv_qp_attr </b><i>*attr</i><b>,</b>\r
+<b> int </b><i>attr_mask</i><b>,</b>\r
+<b> struct ibv_qp_init_attr </b><i>*init_attr</i><b>);</b></pre>\r
+<h4>DESCRIPTION</h4>\r
+<b>ibv_query_qp()</b> gets the attributes specified in <i>attr_mask</i> for the \r
+QP <i>qp</i> and returns them through the pointers <i>attr</i> and <i>init_attr</i>. \r
+The argument <i>attr</i> is an ibv_qp_attr struct, as defined in <infiniband/verbs.h>.
+<p></p>\r
+<pre>struct ibv_qp_attr {\r
+enum ibv_qp_state qp_state; /* Current QP state */\r
+enum ibv_qp_state cur_qp_state; /* Current QP state - irrelevant for ibv_query_qp */\r
+enum ibv_mtu path_mtu; /* Path MTU (valid only for RC/UC QPs) */\r
+enum ibv_mig_state path_mig_state; /* Path migration state (valid if HCA supports APM) */\r
+uint32_t qkey; /* Q_Key of the QP (valid only for UD QPs) */\r
+uint32_t rq_psn; /* PSN for receive queue (valid only for RC/UC QPs) */\r
+uint32_t sq_psn; /* PSN for send queue (valid only for RC/UC QPs) */\r
+uint32_t dest_qp_num; /* Destination QP number (valid only for RC/UC QPs) */\r
+int qp_access_flags; /* Mask of enabled remote access operations (valid only for RC/UC QPs) */\r
+struct ibv_qp_cap cap; /* QP capabilities */\r
+struct ibv_ah_attr ah_attr; /* Primary path address vector (valid only for RC/UC QPs) */\r
+struct ibv_ah_attr alt_ah_attr; /* Alternate path address vector (valid only for RC/UC QPs) */\r
+uint16_t pkey_index; /* Primary P_Key index */\r
+uint16_t alt_pkey_index; /* Alternate P_Key index */\r
+uint8_t en_sqd_async_notify; /* Enable SQD.drained async notification - irrelevant for ibv_query_qp */\r
+uint8_t sq_draining; /* Is the QP draining? (Valid only if qp_state is SQD) */\r
+uint8_t max_rd_atomic; /* Number of outstanding RDMA reads & atomic operations on the destination QP (valid only for RC QPs) */\r
+uint8_t max_dest_rd_atomic; /* Number of responder resources for handling incoming RDMA reads & atomic operations (valid only for RC QPs) */\r
+uint8_t min_rnr_timer; /* Minimum RNR NAK timer (valid only for RC QPs) */\r
+uint8_t port_num; /* Primary port number */\r
+uint8_t timeout; /* Local ack timeout for primary path (valid only for RC QPs) */\r
+uint8_t retry_cnt; /* Retry count (valid only for RC QPs) */\r
+uint8_t rnr_retry; /* RNR retry (valid only for RC QPs) */\r
+uint8_t alt_port_num; /* Alternate port number */\r
+uint8_t alt_timeout; /* Local ack timeout for alternate path (valid only for RC QPs) */\r
+};\r
+</pre>\r
+<p>For details on struct ibv_qp_cap see the description of <b>ibv_create_qp()</b>. \r
+For details on struct ibv_ah_attr see the description of <b>ibv_create_ah()</b>.</p>\r
+<h4>RETURN VALUE</h4>\r
+<b>ibv_query_qp()</b> returns 0 on success, or the value of errno on failure \r
+(which indicates the failure reason).<h4>NOTES</h4>\r
+The argument <i>attr_mask</i> is a hint that specifies the minimum list of \r
+attributes to retrieve. Some RDMA devices may return extra attributes not \r
+requested, for example if the value can be returned cheaply. This has the same \r
+form as in <b>ibv_modify_qp()</b>.
+
+<p>Attribute values are valid if they have been set using <b>ibv_modify_qp()</b>. \r
+The exact list of valid attributes depends on the QP state. </p>\r
+<p>Multiple calls to <b>ibv_query_qp()</b> may yield some differences in the \r
+values returned for the following attributes: qp_state, path_mig_state, \r
+sq_draining, ah_attr (if APM is enabled).</p>\r
+<h4>SEE ALSO</h4>\r
+<b><a href="#IBV_CREATE_QP">ibv_create_qp</a></b>, <b><a href="#IBV_DESTROY_QP">ibv_destroy_qp</a></b>, <b>\r
+<a href="#IBV_MODIFY_QP">ibv_modify_qp</a></b>, <b>\r
+<a href="#IBV_CREATE_AH">ibv_create_ah</a></b><p> </p>\r
+<p align="left"> </p>\r
+<h3><br>\r
+<a name="IBV_ATTACH_MCAST">IBV_ATTACH_MCAST</a></h3>\r
+<h3><br>\r
+<a name="IBV_DETACH_MCAST">IBV_DETACH_MCAST</a></h3>\r
+<hr>\r
+<h4>NAME</h4>\r
+ibv_attach_mcast, ibv_detach_mcast - attach and detach a queue pair (QP) \r
+to/from a multicast group<h4>SYNOPSIS</h4>\r
+<pre><b>#include <infiniband/verbs.h></b>\r
+\r
+<b>int ibv_attach_mcast(struct ibv_qp </b><i>*qp</i><b>, const union ibv_gid </b><i>*gid</i><b>,</b> <b>uint16_t </b><i>lid</i><b>);</b>\r
+\r
+<b>int ibv_detach_mcast(struct ibv_qp </b><i>*qp</i><b>, const union ibv_gid </b><i>*gid</i><b>,</b> <b>uint16_t </b><i>lid</i><b>);</b></pre>\r
+<h4>DESCRIPTION</h4>\r
+<b>ibv_attach_mcast()</b> attaches the QP <i>qp</i> to the multicast group \r
+having MGID <i>gid</i> and MLID <i>lid</i>.
+
+<p><b>ibv_detach_mcast()</b> detaches the QP <i>qp</i> from the multicast group \r
+having MGID <i>gid</i> and MLID <i>lid</i>.</p>\r
+<h4>RETURN VALUE</h4>\r
+<b>ibv_attach_mcast()</b> and <b>ibv_detach_mcast()</b> return 0 on success, or \r
+the value of errno on failure (which indicates the failure reason).<h4>NOTES</h4>\r
+Only QPs of Transport Service Type <b>IBV_QPT_UD</b> may be attached to \r
+multicast groups.
+<p>If a QP is attached to the same multicast group multiple times, the QP will \r
+still receive a single copy of a multicast message. </p>\r
+<p>In order to receive multicast messages, a join request for the multicast \r
+group must be sent to the subnet administrator (SA), so that the fabric's \r
+multicast routing is configured to deliver messages to the local port.</p>\r
+<h4>SEE ALSO</h4>\r
+<b><a href="#IBV_CREATE_QP">ibv_create_qp</a></b><p> </p>\r
+<h3><br>\r
+<a name="IBV_RATE_TO_MULT">IBV_RATE_TO_MULT</a></h3>\r
+<h3><br>\r
+<a name="IBV_MULT_TO_RATE">IBV_MULT_TO_RATE</a></h3>\r
+<hr>\r
+<h4>NAME</h4>\r
+<p>ibv_rate_to_mult - convert IB rate enumeration to multiplier of 2.5 Gbit/sec<br>\r
+<br>\r
+mult_to_ibv_rate - convert multiplier of 2.5 Gbit/sec to an IB rate enumeration</p>\r
+<h4>SYNOPSIS</h4>\r
+<pre><b>#include <infiniband/verbs.h></b>\r
+\r
+<b>int ibv_rate_to_mult(enum ibv_rate </b><i>rate</i><b>);</b>\r
+\r
+<b>enum ibv_rate mult_to_ibv_rate(int </b><i>mult</i><b>);</b></pre>\r
+<h4>DESCRIPTION</h4>\r
+<b>ibv_rate_to_mult()</b> converts the IB transmission rate enumeration <i>rate</i> \r
+to a multiple of 2.5 Gbit/sec (the base rate). For example, if <i>rate</i> is <b>\r
+IBV_RATE_5_GBPS</b>, the value 2 will be returned (5 Gbit/sec = 2 * 2.5 Gbit/sec).
+<p><b>mult_to_ibv_rate()</b> converts the multiplier value (of 2.5 Gbit/sec) <i>\r
+mult</i> to an IB transmission rate enumeration. For example, if <i>mult</i> is \r
+2, the rate enumeration <b>IBV_RATE_5_GBPS</b> will be returned.</p>\r
+<h4>RETURN VALUE</h4>\r
+<b>ibv_rate_to_mult()</b> returns the multiplier of the base rate 2.5 Gbit/sec.
+<p><b>mult_to_ibv_rate()</b> returns the enumeration representing the IB \r
+transmission rate.</p>\r
+<h4>SEE ALSO</h4>\r
+<b><a href="#IBV_QUERY_PORT">ibv_query_port</a></b>
+</span>\r
+<span style="font-size: 12pt; font-family: Times New Roman">\r
<p align="left"><a href="#TOP"><font color="#000000"><<b>return-to-top</b>></font></a></p>\r
<p align="left"> </p>\r
-<BLOCKQUOTE></BLOCKQUOTE>\r
-<h2><a name="WinVerbs">WinVerbs</a></h2><hr>\r
+<h2 align="left"><a name="RDMA_CM_-_Communications_Manager">RDMA CM - Communications Manager</a></h2>\r
+<hr>\r
+</span>\r
+<span style="font-size: 12pt; ">\r
+<div class="Section1">\r
+ <h4>NAME</h4>\r
+ <blockquote>\r
+	<p>librdmacm.lib - RDMA communication manager.</p>\r
+ </blockquote>\r
+ <h4>SYNOPSIS</h4>\r
+ <blockquote>\r
+ <p>#include <rdma/rdma_cma.h></p>\r
+ </blockquote>\r
+ <h4>DESCRIPTION</h4>\r
+ <blockquote>\r
+	<p>The librdmacm library is used to establish communication endpoints over \r
+	RDMA transports.</p>\r
+ </blockquote>\r
+ <h4>NOTES</h4>\r
+ <blockquote>\r
+	<p>The RDMA CM is a communication manager used to set up reliable, connected, \r
+	and unreliable datagram data transfers. It provides an RDMA transport-neutral \r
+	interface for establishing connections. The API is based on sockets, but \r
+	adapted for queue pair (QP) based semantics: communication must be over a \r
+	specific RDMA device, and data transfers are message based.</p>\r
+	<p>The RDMA CM only provides the communication management (connection setup \r
+	and teardown) portion of an RDMA API. It works in conjunction with the verbs \r
+	API defined by the libibverbs library, which provides the interfaces needed \r
+	to send and receive data.</p>\r
+ </blockquote>\r
+ <h4>CLIENT OPERATION</h4>\r
+	<p>This section provides a general overview of the basic operation for the \r
+	active, or client, side of communication. A general connection flow would \r
+	be:</p>\r
+	<dl COMPACT>\r
+	<dt><a href="#rdma_create_event_channel">rdma_create_event_channel</a></dt>\r
+	<dd>create channel to receive events </dd>\r
+	<dt><a href="#RDMA_CREATE_ID">rdma_create_id</a></dt>\r
+	<dd>allocate an rdma_cm_id; this is conceptually similar to a socket </dd>\r
+	<dt><a href="#RDMA_RESOLVE_ADDR">rdma_resolve_addr</a></dt>\r
+	<dd>obtain a local RDMA device to reach the remote address </dd>\r
+	<dt><a href="#RDMA_GET_CM_EVENT">rdma_get_cm_event</a></dt>\r
+	<dd>wait for RDMA_CM_EVENT_ADDR_RESOLVED event </dd>\r
+	<dt><a href="#RDMA_ACK_CM_EVENT">rdma_ack_cm_event</a></dt>\r
+	<dd>ack event </dd>\r
+	<dt><a href="#RDMA_CREATE_QP">rdma_create_qp</a></dt>\r
+	<dd>allocate a QP for the communication </dd>\r
+	<dt><a href="#RDMA_RESOLVE_ROUTE">rdma_resolve_route</a></dt>\r
+	<dd>determine the route to the remote address </dd>\r
+	<dt><a href="#RDMA_GET_CM_EVENT">rdma_get_cm_event</a></dt>\r
+	<dd>wait for RDMA_CM_EVENT_ROUTE_RESOLVED event </dd>\r
+	<dt><a href="#RDMA_ACK_CM_EVENT">rdma_ack_cm_event</a></dt>\r
+	<dd>ack event </dd>\r
+	<dt><a href="#RDMA_CONNECT">rdma_connect</a></dt>\r
+	<dd>connect to the remote server </dd>\r
+	<dt><a href="#RDMA_GET_CM_EVENT">rdma_get_cm_event</a></dt>\r
+	<dd>wait for RDMA_CM_EVENT_ESTABLISHED event </dd>\r
+	<dt><a href="#RDMA_ACK_CM_EVENT">rdma_ack_cm_event</a></dt>\r
+	<dd>ack event </dd>\r
+	</dl>\r
+	<p>Perform data transfers over the connection, then tear it down:</p>\r
+	<dl COMPACT>\r
+	<dt><a href="#RDMA_DISCONNECT">rdma_disconnect</a></dt>\r
+	<dd>tear down connection </dd>\r
+	<dt><a href="#RDMA_GET_CM_EVENT">rdma_get_cm_event</a></dt>\r
+	<dd>wait for RDMA_CM_EVENT_DISCONNECTED event </dd>\r
+	<dt><a href="#RDMA_ACK_CM_EVENT">rdma_ack_cm_event</a></dt>\r
+	<dd>ack event </dd>\r
+	<dt><a href="#RDMA_DESTROY_QP">rdma_destroy_qp</a></dt>\r
+	<dd>destroy the QP </dd>\r
+	<dt><a href="#RDMA_DESTROY_ID">rdma_destroy_id</a></dt>\r
+	<dd>release the rdma_cm_id </dd>\r
+	<dt><a href="#rdma_destroy_event_channel_">rdma_destroy_event_channel</a></dt>\r
+	<dd>release the event channel </dd>\r
+	</dl>\r
+</div>\r
+<blockquote>\r
+	<p>An almost identical process is used to set up unreliable datagram (UD) \r
+	communication between nodes. No actual connection is formed between QPs, \r
+	however, so disconnection is not needed. Although this example shows the \r
+	client initiating the disconnect, either side of a connection may initiate \r
+	it.</p>\r
+</blockquote>\r
+<div class="Section1">\r
+ <h4>SERVER OPERATION</h4>\r
+	<p>This section provides a general overview of the basic operation for the \r
+	passive, or server, side of communication. A general connection flow would \r
+	be:</p>\r
+	<dl COMPACT>\r
+	<dt><a href="#rdma_create_event_channel">rdma_create_event_channel</a></dt>\r
+	<dd>create channel to receive events </dd>\r
+	<dt><a href="#RDMA_CREATE_ID">rdma_create_id</a></dt>\r
+	<dd>allocate an rdma_cm_id; this is conceptually similar to a socket </dd>\r
+	<dt><a href="#RDMA_BIND_ADDR">rdma_bind_addr</a></dt>\r
+	<dd>set the local port number to listen on </dd>\r
+	<dt><a href="#RDMA_LISTEN">rdma_listen</a></dt>\r
+	<dd>begin listening for connection requests </dd>\r
+	<dt><a href="#RDMA_GET_CM_EVENT">rdma_get_cm_event</a></dt>\r
+	<dd>wait for RDMA_CM_EVENT_CONNECT_REQUEST event with a new rdma_cm_id </dd>\r
+	<dt><a href="#RDMA_CREATE_QP">rdma_create_qp</a></dt>\r
+	<dd>allocate a QP for the communication on the new rdma_cm_id </dd>\r
+	<dt><a href="#RDMA_ACCEPT">rdma_accept</a></dt>\r
+	<dd>accept the connection request </dd>\r
+	<dt><a href="#RDMA_ACK_CM_EVENT">rdma_ack_cm_event</a></dt>\r
+	<dd>ack event </dd>\r
+	<dt><a href="#RDMA_GET_CM_EVENT">rdma_get_cm_event</a></dt>\r
+	<dd>wait for RDMA_CM_EVENT_ESTABLISHED event </dd>\r
+	<dt><a href="#RDMA_ACK_CM_EVENT">rdma_ack_cm_event</a></dt>\r
+	<dd>ack event </dd>\r
+	</dl>\r
+	<p>Perform data transfers over the connection, then tear it down:</p>\r
+	<dl COMPACT>\r
+	<dt><a href="#RDMA_GET_CM_EVENT">rdma_get_cm_event</a></dt>\r
+	<dd>wait for RDMA_CM_EVENT_DISCONNECTED event </dd>\r
+	<dt><a href="#RDMA_ACK_CM_EVENT">rdma_ack_cm_event</a></dt>\r
+	<dd>ack event </dd>\r
+	<dt><a href="#RDMA_DISCONNECT">rdma_disconnect</a></dt>\r
+	<dd>tear down connection </dd>\r
+	<dt><a href="#RDMA_DESTROY_QP">rdma_destroy_qp</a></dt>\r
+	<dd>destroy the QP </dd>\r
+	<dt><a href="#RDMA_DESTROY_ID">rdma_destroy_id</a></dt>\r
+	<dd>release the connected rdma_cm_id </dd>\r
+	<dt><a href="#RDMA_DESTROY_ID">rdma_destroy_id</a></dt>\r
+	<dd>release the listening rdma_cm_id </dd>\r
+	<dt><a href="#rdma_destroy_event_channel_">rdma_destroy_event_channel</a></dt>\r
+	<dd>release the event channel </dd>\r
+	</dl>\r
+ <h4>RETURN CODES</h4>\r
+	<p>= 0 success</p>\r
+	<p>= -1 error; see errno for more details</p>\r
+	<blockquote>\r
+	<p>Librdmacm functions return 0 to indicate success, and -1 to indicate \r
+	failure.</p>\r
+	<p>If a function operates asynchronously, a return value of 0 means that the \r
+	operation was successfully started. The operation could still complete in \r
+	error; users should check the status of the related event.</p>\r
+	<p>If the return value is -1, then errno will contain additional information \r
+	regarding the reason for the failure. Prior versions of the library would \r
+	return -errno and not set errno for some cases related to the ENOMEM, \r
+	ENODEV, ENODATA, EINVAL, and EADDRNOTAVAIL codes. Applications that want to \r
+	check these codes and remain compatible with prior library versions must \r
+	manually set errno to the negative of the return code if it is &lt; -1.</p>\r
+	</blockquote>\r
+ <h4>SEE ALSO</h4>\r
+ <blockquote>\r
+	<p><a href="#rdma_create_event_channel">rdma_create_event_channel</a>, \r
+	<a href="#RDMA_GET_CM_EVENT">rdma_get_cm_event</a>, <a href="#RDMA_CREATE_ID">rdma_create_id</a>,<br>\r
+	<a href="#RDMA_RESOLVE_ADDR">rdma_resolve_addr</a>, \r
+	<a href="#RDMA_BIND_ADDR">rdma_bind_addr</a>, <a href="#RDMA_CREATE_QP">rdma_create_qp</a>,<br>\r
+	<a href="#RDMA_RESOLVE_ROUTE">rdma_resolve_route</a>, \r
+	<a href="#RDMA_CONNECT">rdma_connect</a>, <a href="#RDMA_LISTEN">rdma_listen</a>, \r
+	<a href="#RDMA_ACCEPT">rdma_accept</a>,<br>\r
+	<a href="#RDMA_REJECT">rdma_reject</a>, \r
+	<a href="#RDMA_JOIN_MULTICAST">rdma_join_multicast</a>, <a href="#RDMA_LEAVE_MULTICAST">rdma_leave_multicast</a>,<br>\r
+	<a href="#RDMA_NOTIFY">rdma_notify</a>, <a href="#RDMA_ACK_CM_EVENT">rdma_ack_cm_event</a>, <a href="#RDMA_DISCONNECT">rdma_disconnect</a>,<br>\r
+	<a href="#RDMA_DESTROY_QP">rdma_destroy_qp</a>, \r
+	<a href="#RDMA_DESTROY_ID">rdma_destroy_id</a>, <a href="#rdma_destroy_event_channel_">rdma_destroy_event_channel</a>,<br>\r
+	<a href="#RDMA_GET_DEVICES">rdma_get_devices</a>, <a href="#RDMA_FREE_DEVICES">rdma_free_devices</a>, <a href="#RDMA_GET_PEER_ADDR">rdma_get_peer_addr</a>,<br>\r
+	<a href="#RDMA_GET_LOCAL_ADDR">rdma_get_local_addr</a>, <a href="#RDMA_GET_DST_PORT">rdma_get_dst_port</a>, <a href="#RDMA_GET_SRC_PORT">rdma_get_src_port</a>,<br>\r
+	<a href="#RDMA_SET_OPTION">rdma_set_option</a></p>\r
+ </blockquote>\r
+</div>\r
+<p align="left"><a href="#TOP"><font color="#000000"><<b>return-to-top</b>></font></a></p>\r
+</span>\r
+<span style="font-size: 12pt; font-family: Times New Roman">\r
+<h3> </h3>\r
+<h3><br>\r
+<a name="RDMA_CREATE_ID">RDMA_CREATE_ID</a></h3>\r
+<hr>\r
+<h4>NAME</h4>\r
+rdma_create_id - Allocate a communication identifier. <a NAME="lbAC">&nbsp;</a>\r
+<h4>SYNOPSIS</h4>\r
+<b>#include <rdma/rdma_cma.h></b><p> <b>int rdma_create_id</b> <b>(struct \r
+rdma_event_channel *</b><i>channel</i><b>,</b> <b>struct rdma_cm_id **</b><i>id</i><b>,</b>\r
+<b>void *</b><i>context</i><b>,</b> <b>enum rdma_port_space </b><i>ps</i><b>);</b></p>\r
+<h4>ARGUMENTS</h4>\r
+<dl COMPACT>\r
+ <dt>channel</dt>\r
+ <dd>The communication channel that events associated with the allocated \r
+ rdma_cm_id will be reported on. </dd>\r
+ <dt>id</dt>\r
+ <dd>A reference where the allocated communication identifier will be \r
+ returned. </dd>\r
+ <dt>context</dt>\r
+ <dd>User specified context associated with the rdma_cm_id. </dd>\r
+ <dt>ps</dt>\r
+ <dd>RDMA port space. </dd>\r
+</dl>\r
+<h4>DESCRIPTION</h4>\r
+Creates an identifier that is used to track communication information.\r
+<a NAME="lbAF"> </a>\r
+<h4>NOTES</h4>\r
+An rdma_cm_id is conceptually equivalent to a socket for RDMA communication. The \r
+difference is that RDMA communication requires explicitly binding to a specified \r
+RDMA device before communication can occur, and most operations are asynchronous \r
+in nature. Communication events on an rdma_cm_id are reported through the \r
+associated event channel. Users must release the rdma_cm_id by calling \r
+rdma_destroy_id. <a NAME="lbAG"> </a>\r
+<h4>PORT SPACE</h4>\r
+Details of the services provided by the different port spaces are outlined \r
+below.\r
+<dl COMPACT>\r
+ <dt>RDMA_PS_TCP</dt>\r
+ <dd>Provides reliable, connection-oriented QP communication. Unlike TCP, the \r
+ RDMA port space provides message, not stream, based communication. </dd>\r
+ <dt>RDMA_PS_UDP</dt>\r
+ <dd>Provides unreliable, connectionless QP communication. Supports both \r
+ datagram and multicast communication. </dd>\r
+</dl>\r
+<h4>SEE ALSO</h4>\r
+<a href="#RDMA_CM_-_Communications_Manager">rdma_cm</a>,\r
+<a href="#RDMA_CREATE_EVENT_CHANNEL">rdma_create_event_channel</a>,\r
+<a href="#RDMA_DESTROY_ID">rdma_destroy_id</a>, <a href="#RDMA_GET_DEVICES">rdma_get_devices</a>,\r
+<a href="#RDMA_BIND_ADDR">rdma_bind_addr</a>, <a href="#RDMA_RESOLVE_ADDR">rdma_resolve_addr</a>,\r
+<a href="#RDMA_CONNECT">rdma_connect</a>, <a href="#RDMA_LISTEN">rdma_listen</a>,\r
+<a href="#RDMA_SET_OPTION">rdma_set_option</a><p align="left"> </p>\r
+<h3><br>\r
+<a name="RDMA_DESTROY_ID">RDMA_DESTROY_ID</a></h3>\r
+<hr>\r
+<h4>NAME</h4>
+rdma_destroy_id - Release a communication identifier.
+<h4>SYNOPSIS</h4>
+<b>#include <rdma/rdma_cma.h></b><p> <b>int rdma_destroy_id</b> <b>(struct 
+rdma_cm_id *</b><i>id</i><b>);</b></p>
+<h4>ARGUMENTS</h4>
+<dl COMPACT>\r
+ <dt>id</dt>\r
+ <dd>The communication identifier to destroy.\r
+ </dd>\r
+</dl>\r
+<h4>DESCRIPTION</h4>\r
+Destroys the specified rdma_cm_id and cancels any outstanding asynchronous 
+operations.<h4>NOTES</h4>
+Users must free any QP associated with the rdma_cm_id before calling this 
+routine and acknowledge any related events.<h4>SEE ALSO</h4>
+<a href="#RDMA_CREATE_ID">rdma_create_id</a>, <a href="#RDMA_DESTROY_QP">rdma_destroy_qp</a>,\r
+<a href="#RDMA_ACK_CM_EVENT">rdma_ack_cm_event</a>\r
+<h3> </h3>\r
+<h3><br>\r
+<a name="RDMA_CREATE_EVENT_CHANNEL">RDMA_CREATE_EVENT_CHANNEL</a></h3>\r
+<hr>\r
+<h4>NAME</h4>\r
+rdma_create_event_channel - Open a channel used to report communication events.<h4>\r
+SYNOPSIS</h4>\r
+<b>#include <rdma/rdma_cma.h></b><p> <b>struct rdma_event_channel * \r
+rdma_create_event_channel</b> <b>(</b><i>void</i><b>);</b></p>\r
+<h4>ARGUMENTS</h4>\r
+<dl COMPACT>\r
+ <dt>void</dt>\r
+ <dd>no arguments\r
+ </dd>\r
+</dl>\r
+<h4>DESCRIPTION</h4>\r
+Asynchronous events are reported to users through event channels.
+<h4>NOTES</h4>\r
+Event channels are used to direct all events on an rdma_cm_id. For many clients, \r
+a single event channel may be sufficient; however, when managing a large number 
+of connections or cm_id's, users may find it useful to direct events for \r
+different cm_id's to different channels for processing. All created event \r
+channels must be destroyed by calling rdma_destroy_event_channel. Users should \r
+call rdma_get_cm_event to retrieve events on an event channel. Each event \r
+channel is mapped to a file descriptor. The associated file descriptor can be \r
+used and manipulated like any other fd to change its behavior. Users may make \r
+the fd non-blocking, poll or select the fd, etc.
+<h4>SEE ALSO</h4>\r
+<a href="#RDMA_CM_-_Communications_Manager">rdma_cm</a>,\r
+<a href="#RDMA_GET_CM_EVENT">rdma_get_cm_event</a>, <a href="#RDMA_DESTROY_EVENT_CHANNEL">rdma_destroy_event_channel</a>\r
+<p> </p>\r
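+<p>Since the channel maps to a file descriptor, the default blocking behavior 
+can be changed like any other fd. A brief sketch (POSIX fcntl shown; on 
+Windows the equivalent handle manipulation differs) follows:</p>
+```c
+/* Sketch: make an event channel's file descriptor non-blocking so that
+ * rdma_get_cm_event returns immediately with EAGAIN when no event is
+ * pending, instead of blocking. */
+#include <fcntl.h>
+#include <rdma/rdma_cma.h>
+
+int set_channel_nonblocking(struct rdma_event_channel *channel)
+{
+    int flags = fcntl(channel->fd, F_GETFL);
+    if (flags < 0)
+        return -1;
+    return fcntl(channel->fd, F_SETFL, flags | O_NONBLOCK);
+}
+```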
+<h3><br>\r
+<a name="RDMA_DESTROY_EVENT_CHANNEL">RDMA_DESTROY_EVENT_CHANNEL</a></h3>\r
+<hr>\r
+<h4>NAME</h4>\r
+rdma_destroy_event_channel - Close an event communication channel.\r
+<h4>SYNOPSIS</h4>\r
+<b>#include <rdma/rdma_cma.h></b><p> <b>void rdma_destroy_event_channel (struct \r
+rdma_event_channel *</b><i>channel</i><b>);</b> </p>
+<h4>ARGUMENTS</h4>\r
+<dl COMPACT>\r
+ <dt>channel</dt>\r
+ <dd>The communication channel to destroy.\r
+ </dd>\r
+</dl>\r
+<h4>DESCRIPTION</h4>\r
+Releases all resources associated with an event channel and closes the 
+associated file descriptor.
+<h4>NOTES</h4>\r
+All rdma_cm_id's associated with the event channel must be destroyed, and all \r
+returned events must be acknowledged before calling this function.
+<h4>SEE ALSO</h4>\r
+<a href="#RDMA_CREATE_EVENT_CHANNEL">rdma_create_event_channel</a>,\r
+<a href="#RDMA_GET_CM_EVENT">rdma_get_cm_event</a>, <a href="#RDMA_ACK_CM_EVENT">rdma_ack_cm_event</a>\r
+<p align="left"> </p>\r
+<h3><br>\r
+<a name="RDMA_RESOLVE_ADDR">RDMA_RESOLVE_ADDR</a></h3>\r
+<hr>\r
+<h4>NAME</h4>\r
+rdma_resolve_addr - Resolve destination and optional source addresses.
+<h4>SYNOPSIS</h4>\r
+<b>#include <rdma/rdma_cma.h></b><p> <b>int rdma_resolve_addr</b> <b>(struct \r
+rdma_cm_id *</b><i>id</i><b>,</b> <b>struct sockaddr *</b><i>src_addr</i><b>,</b>\r
+<b>struct sockaddr *</b><i>dst_addr</i><b>,</b> <b>int </b><i>timeout_ms</i><b>);</b></p>\r
+<h4>ARGUMENTS</h4>\r
+<dl COMPACT>\r
+ <dt>id</dt>\r
+ <dd>RDMA identifier.
+ </dd>\r
+ <dt>src_addr</dt>\r
+ <dd>Source address information. This parameter may be NULL.
+ </dd>\r
+ <dt>dst_addr</dt>\r
+ <dd>Destination address information.
+ </dd>\r
+ <dt>timeout_ms</dt>\r
+ <dd>Time to wait for resolution to complete.
+ </dd>\r
+</dl>\r
+<h4>DESCRIPTION</h4>\r
+Resolve destination and optional source addresses from IP addresses to an RDMA \r
+address. If successful, the specified rdma_cm_id will be bound to a local \r
+device.
+<h4>NOTES</h4>\r
+This call is used to map a given destination IP address to a usable RDMA \r
+address. The IP to RDMA address mapping is done using the local routing tables, \r
+or via ARP. If a source address is given, the rdma_cm_id is bound to that \r
+address, the same as if rdma_bind_addr were called. If no source address is \r
+given, and the rdma_cm_id has not yet been bound to a device, then the \r
+rdma_cm_id will be bound to a source address based on the local routing tables. \r
+After this call, the rdma_cm_id will be bound to an RDMA device. This call is \r
+typically made from the active side of a connection before calling \r
+rdma_resolve_route and rdma_connect.
+<h4>INFINIBAND SPECIFIC</h4>\r
+This call maps the destination and, if given, source IP addresses to GIDs. In \r
+order to perform the mapping, IPoIB must be running on both the local and remote \r
+nodes.
+<h4>SEE ALSO</h4>\r
+<a href="#RDMA_CREATE_ID">rdma_create_id</a>, <a href="#RDMA_RESOLVE_ROUTE">rdma_resolve_route</a>,\r
+<a href="#RDMA_CONNECT">rdma_connect</a>, <a href="#RDMA_CREATE_QP">rdma_create_qp</a>,\r
+<a href="#RDMA_GET_CM_EVENT">rdma_get_cm_event</a>, <a href="#RDMA_BIND_ADDR">rdma_bind_addr</a>,\r
+<a href="#RDMA_GET_SRC_PORT">rdma_get_src_port</a>, <a href="#RDMA_GET_DST_PORT">rdma_get_dst_port</a>,\r
+<a href="#RDMA_GET_LOCAL_ADDR">rdma_get_local_addr</a>,\r
+<a href="#RDMA_GET_PEER_ADDR">rdma_get_peer_addr</a>
+<p> </p>\r
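+<p>A sketch of the active-side call (the address and port below are 
+illustrative only, and POSIX socket headers are assumed):</p>
+```c
+/* Sketch: start resolving a destination IP to an RDMA address. Passing
+ * a NULL source lets the rdma_cm bind the id to a device chosen from
+ * the local routing tables; completion arrives as an
+ * RDMA_CM_EVENT_ADDR_RESOLVED event on the id's event channel. */
+#include <string.h>
+#include <netinet/in.h>
+#include <arpa/inet.h>
+#include <rdma/rdma_cma.h>
+
+int start_addr_resolution(struct rdma_cm_id *id)
+{
+    struct sockaddr_in dst;
+
+    memset(&dst, 0, sizeof dst);
+    dst.sin_family = AF_INET;
+    dst.sin_port = htons(7471);                        /* example port */
+    inet_pton(AF_INET, "192.168.1.10", &dst.sin_addr); /* example address */
+
+    return rdma_resolve_addr(id, NULL, (struct sockaddr *) &dst, 2000);
+}
+```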
+<h3><br>\r
+<a name="RDMA_GET_CM_EVENT">RDMA_GET_CM_EVENT</a></h3>\r
+<hr>\r
+<h4>NAME</h4>\r
+rdma_get_cm_event - Retrieve the next pending communication event.
+<h4>SYNOPSIS</h4>\r
+<b>#include <rdma/rdma_cma.h></b><p> <b>int rdma_get_cm_event</b> <b>(struct \r
+rdma_event_channel *</b><i>channel</i><b>,</b> <b>struct rdma_cm_event **</b><i>event</i><b>);</b>\r
+</p>\r
+<h4>ARGUMENTS</h4>\r
+<dl COMPACT>\r
+ <dt>channel</dt>\r
+ <dd>Event channel to check for events.
+ </dd>\r
+ <dt>event</dt>\r
+ <dd>Allocated information about the next communication event.
+ </dd>\r
+</dl>\r
+<h4>DESCRIPTION</h4>\r
+Retrieves a communication event. If no events are pending, by default, the call \r
+will block until an event is received.
+<h4>NOTES</h4>\r
+The default synchronous behavior of this routine can be changed by modifying the \r
+file descriptor associated with the given channel. All events that are reported \r
+must be acknowledged by calling rdma_ack_cm_event. Destruction of an rdma_cm_id \r
+will block until related events have been acknowledged.<h4>EVENT DATA</h4>\r
+Communication event details are returned in the rdma_cm_event structure. This \r
+structure is allocated by the rdma_cm and released by the rdma_ack_cm_event \r
+routine. Details of the rdma_cm_event structure are given below.
+<dl COMPACT>\r
+ <dt>id</dt>\r
+ <dd>The rdma_cm identifier associated with the event. If the event type is \r
+ RDMA_CM_EVENT_CONNECT_REQUEST, then this references a new id for that \r
+ communication.
+ </dd>\r
+ <dt>listen_id</dt>\r
+ <dd>For RDMA_CM_EVENT_CONNECT_REQUEST event types, this references the \r
+ corresponding listening request identifier.
+ </dd>\r
+ <dt>event</dt>\r
+ <dd>Specifies the type of communication event which occurred. See EVENT \r
+ TYPES below.
+ </dd>\r
+ <dt>status</dt>\r
+ <dd>Returns any asynchronous error information associated with an event. The \r
+ status is zero unless the corresponding operation failed.
+ </dd>\r
+ <dt>param</dt>\r
+ <dd>Provides additional details based on the type of event. Users should \r
+ select the conn or ud subfields based on the rdma_port_space of the \r
+ rdma_cm_id associated with the event. See UD EVENT DATA and CONN EVENT DATA \r
+ below.
+ </dd>\r
+</dl>\r
+<h4>UD EVENT DATA</h4>\r
+Event parameters related to unreliable datagram (UD) services: RDMA_PS_UDP and \r
+RDMA_PS_IPOIB. The UD event data is valid for RDMA_CM_EVENT_ESTABLISHED and \r
+RDMA_CM_EVENT_MULTICAST_JOIN events, unless stated otherwise.
+<dl COMPACT>\r
+ <dt>private_data</dt>\r
+ <dd>References any user-specified data associated with \r
+ RDMA_CM_EVENT_CONNECT_REQUEST or RDMA_CM_EVENT_ESTABLISHED events. The data \r
+ referenced by this field matches that specified by the remote side when \r
+ calling rdma_connect or rdma_accept. This field is NULL if the event does \r
+ not include private data. The buffer referenced by this pointer is \r
+ deallocated when calling rdma_ack_cm_event.
+ </dd>\r
+ <dt>private_data_len</dt>\r
+ <dd>The size of the private data buffer. Users should note that the size of \r
+ the private data buffer may be larger than the amount of private data sent \r
+ by the remote side. Any additional space in the buffer will be zeroed out.
+ </dd>\r
+ <dt>ah_attr</dt>\r
+ <dd>Address information needed to send data to the remote endpoint(s). Users \r
+ should use this structure when allocating their address handle.
+ </dd>\r
+ <dt>qp_num</dt>\r
+ <dd>QP number of the remote endpoint or multicast group.
+ </dd>\r
+ <dt>qkey</dt>\r
+ <dd>QKey needed to send data to the remote endpoint(s).<br>\r
+ </dd>\r
+</dl>\r
+<h4>CONN EVENT DATA</h4>\r
+Event parameters related to connected QP services: RDMA_PS_TCP. The connection \r
+related event data is valid for RDMA_CM_EVENT_CONNECT_REQUEST and \r
+RDMA_CM_EVENT_ESTABLISHED events, unless stated otherwise.
+<dl COMPACT>\r
+ <dt>private_data</dt>\r
+ <dd>References any user-specified data associated with the event. The data \r
+ referenced by this field matches that specified by the remote side when \r
+ calling rdma_connect or rdma_accept. This field is NULL if the event does \r
+ not include private data. The buffer referenced by this pointer is \r
+ deallocated when calling rdma_ack_cm_event.
+ </dd>\r
+ <dt>private_data_len</dt>\r
+ <dd>The size of the private data buffer. Users should note that the size of \r
+ the private data buffer may be larger than the amount of private data sent \r
+ by the remote side. Any additional space in the buffer will be zeroed out.
+ </dd>\r
+ <dt>responder_resources</dt>\r
+ <dd>The number of responder resources requested of the recipient. This field \r
+ matches the initiator depth specified by the remote node when calling \r
+ rdma_connect and rdma_accept.
+ </dd>\r
+ <dt>initiator_depth</dt>\r
+ <dd>The maximum number of outstanding RDMA read/atomic operations that the \r
+ recipient may have outstanding. This field matches the responder resources \r
+ specified by the remote node when calling rdma_connect and rdma_accept.
+ </dd>\r
+ <dt>flow_control</dt>\r
+ <dd>Indicates if hardware level flow control is provided by the sender.
+ </dd>\r
+ <dt>retry_count</dt>\r
+ <dd>For RDMA_CM_EVENT_CONNECT_REQUEST events only, indicates the number of \r
+ times that the recipient should retry send operations.
+ </dd>\r
+ <dt>rnr_retry_count</dt>\r
+ <dd>The number of times that the recipient should retry receiver not ready \r
+ (RNR) NACK errors.
+ </dd>\r
+ <dt>srq</dt>\r
+ <dd>Specifies if the sender is using a shared-receive queue.
+ </dd>\r
+ <dt>qp_num</dt>\r
+ <dd>Indicates the remote QP number for the connection.
+ </dd>\r
+</dl>\r
+<h4>EVENT TYPES</h4>\r
+The following types of communication events may be reported.
+<dl COMPACT>\r
+ <dt>RDMA_CM_EVENT_ADDR_RESOLVED</dt>\r
+ <dd>Address resolution (rdma_resolve_addr) completed successfully.
+ </dd>\r
+ <dt>RDMA_CM_EVENT_ADDR_ERROR</dt>\r
+ <dd>Address resolution (rdma_resolve_addr) failed.
+ </dd>\r
+ <dt>RDMA_CM_EVENT_ROUTE_RESOLVED</dt>\r
+ <dd>Route resolution (rdma_resolve_route) completed successfully.
+ </dd>\r
+ <dt>RDMA_CM_EVENT_ROUTE_ERROR</dt>\r
+ <dd>Route resolution (rdma_resolve_route) failed.
+ </dd>\r
+ <dt>RDMA_CM_EVENT_CONNECT_REQUEST</dt>\r
+ <dd>Generated on the passive side to notify the user of a new connection \r
+ request.
+ </dd>\r
+ <dt>RDMA_CM_EVENT_CONNECT_RESPONSE</dt>\r
+ <dd>Generated on the active side to notify the user of a successful response \r
+ to a connection request. It is only generated on rdma_cm_id's that do not \r
+ have a QP associated with them.
+ </dd>\r
+ <dt>RDMA_CM_EVENT_CONNECT_ERROR</dt>\r
+  <dd>Indicates that an error occurred while trying to establish a 
+  connection. May be generated on the active or passive side of a connection.
+ </dd>\r
+ <dt>RDMA_CM_EVENT_UNREACHABLE</dt>\r
+ <dd>Generated on the active side to notify the user that the remote server \r
+ is not reachable or unable to respond to a connection request.
+ </dd>\r
+ <dt>RDMA_CM_EVENT_REJECTED</dt>\r
+ <dd>Indicates that a connection request or response was rejected by the \r
+ remote end point.
+ </dd>\r
+ <dt>RDMA_CM_EVENT_ESTABLISHED</dt>\r
+ <dd>Indicates that a connection has been established with the remote end \r
+ point.
+ </dd>\r
+ <dt>RDMA_CM_EVENT_DISCONNECTED</dt>\r
+ <dd>The connection has been disconnected.
+ </dd>\r
+ <dt>RDMA_CM_EVENT_DEVICE_REMOVAL</dt>\r
+ <dd>The local RDMA device associated with the rdma_cm_id has been removed. \r
+ Upon receiving this event, the user must destroy the related rdma_cm_id.
+ </dd>\r
+ <dt>RDMA_CM_EVENT_MULTICAST_JOIN</dt>\r
+ <dd>The multicast join operation (rdma_join_multicast) completed \r
+ successfully.
+ </dd>\r
+ <dt>RDMA_CM_EVENT_MULTICAST_ERROR</dt>\r
+  <dd>An error occurred either while joining a multicast group or, if the 
+  group had already been joined, on an existing group. The specified multicast 
+  group is no longer accessible and should be rejoined, if desired.
+ </dd>\r
+ <dt>RDMA_CM_EVENT_ADDR_CHANGE</dt>\r
+  <dd>The network device associated with this ID through address resolution 
+  changed its HW address, e.g., following a bonding failover. This event can 
+  serve as a hint for applications that want the links used for their RDMA 
+  sessions to align with the network stack.
+ </dd>\r
+ <dt>RDMA_CM_EVENT_TIMEWAIT_EXIT</dt>\r
+ <dd>The QP associated with a connection has exited its timewait state and is \r
+ now ready to be re-used. After a QP has been disconnected, it is maintained \r
+ in a timewait state to allow any in flight packets to exit the network. \r
+ After the timewait state has completed, the rdma_cm will report this event.
+ </dd>\r
+</dl>\r
+<h4>SEE ALSO</h4>\r
+<a href="#RDMA_ACK_CM_EVENT">rdma_ack_cm_event</a>,\r
+<a href="#RDMA_CREATE_EVENT_CHANNEL">rdma_create_event_channel</a>,\r
+<a href="#RDMA_RESOLVE_ADDR">rdma_resolve_addr</a>,\r
+<a href="#RDMA_RESOLVE_ROUTE">rdma_resolve_route</a>, <a href="#RDMA_CONNECT">rdma_connect</a>,\r
+<a href="#RDMA_LISTEN">rdma_listen</a>, <a href="#RDMA_JOIN_MULTICAST">rdma_join_multicast</a>,\r
+<a href="#RDMA_DESTROY_ID">rdma_destroy_id</a>, <a href="#RDMA_EVENT_STR">rdma_event_str</a><p> </p>\r
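+<p>The retrieve-dispatch-acknowledge pattern described above can be sketched 
+as a simple event loop (illustrative only; librdmacm is assumed):</p>
+```c
+/* Sketch: pull events from a channel, dispatch on the event type, and
+ * acknowledge every retrieved event with rdma_ack_cm_event. */
+#include <stdio.h>
+#include <rdma/rdma_cma.h>
+
+int process_events(struct rdma_event_channel *channel)
+{
+    struct rdma_cm_event *event;
+
+    while (rdma_get_cm_event(channel, &event) == 0) {
+        switch (event->event) {
+        case RDMA_CM_EVENT_ADDR_RESOLVED:
+            /* next step would be rdma_resolve_route(event->id, ...) */
+            break;
+        case RDMA_CM_EVENT_ESTABLISHED:
+            printf("connected, status %d\n", event->status);
+            break;
+        case RDMA_CM_EVENT_DISCONNECTED:
+            rdma_ack_cm_event(event);
+            return 0;
+        default:
+            break;
+        }
+        /* Every retrieved event must be acknowledged. */
+        rdma_ack_cm_event(event);
+    }
+    return -1;
+}
+```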
+<h3><br>\r
+<a name="RDMA_ACK_CM_EVENT">RDMA_ACK_CM_EVENT</a></h3>\r
+<hr>\r
+<h4>NAME</h4>\r
+</span>
+<span style="font-size: 12pt; ">
+rdma_ack_cm_event - Free a communication event.
+<h4>SYNOPSIS</h4>
+<b>#include <rdma/rdma_cma.h></b><p> <b>int rdma_ack_cm_event</b> <b>(struct 
+rdma_cm_event *</b><i>event</i><b>);</b></p>
+<h4>ARGUMENTS</h4>
+<dl COMPACT>
+  <dt>event</dt>
+  <dd>Event to be released. </dd>
+</dl>
+<h4>DESCRIPTION</h4>
+All events that are allocated by rdma_get_cm_event must be released; there 
+should be a one-to-one correspondence between successful gets and acks. This 
+call frees the event structure and any memory that it references.
+<h4>SEE ALSO</h4>
+<a href="#RDMA_GET_CM_EVENT">rdma_get_cm_event</a>, <a href="#RDMA_DESTROY_ID">rdma_destroy_id</a>
+<p> </p>
+<h3><br>\r
+<a name="RDMA_CREATE_QP">RDMA_CREATE_QP</a></h3>\r
+<hr>\r
+<h4>NAME</h4>\r
+rdma_create_qp - Allocate a QP.
+<h4>SYNOPSIS</h4>\r
+<b>#include <rdma/rdma_cma.h></b><p> <b>int rdma_create_qp</b> <b>(struct \r
+rdma_cm_id *</b><i>id</i><b>,</b> <b>struct ibv_pd *</b><i>pd</i><b>,</b> <b>\r
+struct ibv_qp_init_attr *</b><i>qp_init_attr</i><b>);</b> </p>\r
+<h4>ARGUMENTS</h4>\r
+<dl COMPACT>\r
+ <dt>id</dt>\r
+ <dd>RDMA identifier.\r
+ </dd>\r
+ <dt>pd</dt>\r
+  <dd>Protection domain for the QP.
+ </dd>\r
+ <dt>qp_init_attr</dt>\r
+  <dd>Initial QP attributes.
+ </dd>\r
+</dl>\r
+<h4>DESCRIPTION</h4>\r
+Allocates a QP associated with the specified rdma_cm_id and transitions it for 
+sending and receiving.
+<h4>NOTES</h4>\r
+The rdma_cm_id must be bound to a local RDMA device before calling this \r
+function, and the protection domain must be for that same device. QPs allocated \r
+to an rdma_cm_id are automatically transitioned by the librdmacm through their \r
+states. After being allocated, the QP will be ready to handle posting of \r
+receives. If the QP is unconnected, it will be ready to post sends.<h4>SEE ALSO</h4>\r
+<a href="#RDMA_BIND_ADDR">rdma_bind_addr</a>, <a href="#RDMA_RESOLVE_ADDR">\r
+rdma_resolve_addr</a>, <a href="#RDMA_DESTROY_QP">rdma_destroy_qp</a>,\r
+<a href="#IBV_CREATE_QP">ibv_create_qp</a>, <a href="#IBV_MODIFY_QP">\r
+ibv_modify_qp</a>\r
+<p> </p>\r
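+<p>A sketch of QP allocation on a bound rdma_cm_id (the attribute values here 
+are illustrative, not recommendations):</p>
+```c
+/* Sketch: allocate an RC QP on an rdma_cm_id that is already bound to
+ * a device (e.g., after rdma_resolve_addr); librdmacm transitions the
+ * QP through its states automatically. */
+#include <string.h>
+#include <rdma/rdma_cma.h>
+
+int create_qp_for_id(struct rdma_cm_id *id, struct ibv_pd *pd)
+{
+    struct ibv_qp_init_attr attr;
+
+    memset(&attr, 0, sizeof attr);
+    attr.qp_type = IBV_QPT_RC;      /* reliable connection for RDMA_PS_TCP */
+    attr.cap.max_send_wr = 16;      /* example queue depths */
+    attr.cap.max_recv_wr = 16;
+    attr.cap.max_send_sge = 1;
+    attr.cap.max_recv_sge = 1;
+
+    return rdma_create_qp(id, pd, &attr);
+}
+```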
+<h3><br>\r
+<a name="RDMA_DESTROY_QP">RDMA_DESTROY_QP</a></h3>\r
+<hr>\r
+<h4>NAME</h4>\r
+rdma_destroy_qp - Deallocate a QP.<h4>SYNOPSIS</h4>\r
+<b>#include <rdma/rdma_cma.h></b><p> <b>void rdma_destroy_qp</b> <b>(struct \r
+rdma_cm_id *</b><i>id</i><b>);</b> </p>\r
+<h4>ARGUMENTS</h4>\r
+<dl COMPACT>\r
+ <dt>id</dt>\r
+ <dd>RDMA identifier.\r
+ </dd>\r
+</dl>\r
+<h4>DESCRIPTION</h4>\r
+Destroy a QP allocated on the rdma_cm_id.<h4>NOTES</h4>\r
+Users must destroy any QP associated with an rdma_cm_id before destroying the \r
+ID.<h4>SEE ALSO</h4>\r
+<a href="#RDMA_CREATE_QP">rdma_create_qp</a>, <a href="#RDMA_DESTROY_ID">\r
+rdma_destroy_id</a>, <a href="#IBV_DESTROY_QP">ibv_destroy_qp</a>\r
+<p> </p>\r
+<h3><br>\r
+<a name="RDMA_ACCEPT">RDMA_ACCEPT</a></h3>\r
+<hr>\r
+<h4>NAME</h4>\r
+rdma_accept - Accept a connection request.
+<h4>SYNOPSIS</h4>\r
+<b>#include <rdma/rdma_cma.h></b><p> <b>int rdma_accept</b> <b>(struct \r
+rdma_cm_id *</b><i>id</i><b>,</b> <b>struct rdma_conn_param *</b><i>conn_param</i><b>);</b>\r
+</p>
+<h4>ARGUMENTS</h4>\r
+<dl COMPACT>\r
+ <dt>id</dt>\r
+ <dd>Connection identifier associated with the request.\r
+ </dd>\r
+ <dt>conn_param</dt>\r
+ <dd>Information needed to establish the connection. See CONNECTION \r
+ PROPERTIES below for details.\r
+ </dd>\r
+</dl>\r
+<h4>DESCRIPTION</h4>\r
+Called from the listening side to accept a connection or datagram service lookup \r
+request.\r
+<h4>NOTES</h4>\r
+Unlike the socket accept routine, rdma_accept is not called on a listening \r
+rdma_cm_id. Instead, after calling rdma_listen, the user waits for an \r
+RDMA_CM_EVENT_CONNECT_REQUEST event to occur. Connection request events give the \r
+user a newly created rdma_cm_id, similar to a new socket, but the rdma_cm_id is \r
+bound to a specific RDMA device. rdma_accept is called on the new rdma_cm_id.<h4>\r
+CONNECTION PROPERTIES</h4>\r
+The following properties are used to configure the communication and are specified 
+by the conn_param parameter when accepting a connection or datagram \r
+communication request. Users should use the rdma_conn_param values reported in \r
+the connection request event to determine appropriate values for these fields \r
+when accepting. Users may reference the rdma_conn_param structure in the \r
+connection event directly, or can reference their own structure. If the \r
+rdma_conn_param structure from an event is referenced, the event must not be \r
+acked until after this call returns.\r
+<dl COMPACT>\r
+ <dt>private_data</dt>\r
+ <dd>References a user-controlled data buffer. The contents of the buffer are \r
+ copied and transparently passed to the remote side as part of the \r
+ communication request. May be NULL if private_data is not required.\r
+ </dd>\r
+ <dt>private_data_len</dt>\r
+ <dd>Specifies the size of the user-controlled data buffer. Note that the \r
+ actual amount of data transferred to the remote side is transport dependent \r
+ and may be larger than that requested.\r
+ </dd>\r
+ <dt>responder_resources</dt>\r
+ <dd>The maximum number of outstanding RDMA read and atomic operations that \r
+ the local side will accept from the remote side. Applies only to RDMA_PS_TCP. \r
+ This value must be less than or equal to the local RDMA device attribute \r
+ max_qp_rd_atom and the responder_resources value reported in the connect \r
+ request event.\r
+ </dd>\r
+ <dt>initiator_depth</dt>\r
+ <dd>The maximum number of outstanding RDMA read and atomic operations that \r
+ the local side will have to the remote side. Applies only to RDMA_PS_TCP. \r
+ This value must be less than or equal to the local RDMA device attribute \r
+ max_qp_init_rd_atom and the initiator_depth value reported in the connect \r
+ request event.\r
+ </dd>\r
+ <dt>flow_control</dt>\r
+ <dd>Specifies if hardware flow control is available. This value is exchanged \r
+ with the remote peer and is not used to configure the QP. Applies only to \r
+ RDMA_PS_TCP.\r
+ </dd>\r
+ <dt>retry_count</dt>\r
+ <dd>This value is ignored.\r
+ </dd>\r
+ <dt>rnr_retry_count</dt>\r
+ <dd>The maximum number of times that a send operation from the remote peer \r
+ should be retried on a connection after receiving a receiver not ready (RNR) \r
+ error. RNR errors are generated when a send request arrives before a buffer \r
+ has been posted to receive the incoming data. Applies only to RDMA_PS_TCP.\r
+ </dd>\r
+ <dt>srq</dt>\r
+ <dd>Specifies if the QP associated with the connection is using a shared \r
+ receive queue. This field is ignored by the library if a QP has been created \r
+ on the rdma_cm_id. Applies only to RDMA_PS_TCP.\r
+ </dd>\r
+ <dt>qp_num</dt>\r
+ <dd>Specifies the QP number associated with the connection. This field is \r
+ ignored by the library if a QP has been created on the rdma_cm_id.\r
+ </dd>\r
+</dl>\r
+<h4>INFINIBAND SPECIFIC</h4>\r
+In addition to the connection properties defined above, InfiniBand QPs are \r
+configured with minimum RNR NAK timer and local ACK timeout values. The minimum \r
+RNR NAK timer value is set to 0, for a delay of 655 ms. The local ACK timeout is \r
+calculated based on the packet lifetime and local HCA ACK delay. The packet \r
+lifetime is determined by the InfiniBand Subnet Administrator and is part of the \r
+route (path record) information obtained by the active side of the connection. \r
+The HCA ACK delay is a property of the locally used HCA. The RNR retry count is \r
+a 3-bit value.<h4>SEE ALSO</h4>\r
+<a href="#RDMA_LISTEN">rdma_listen</a>, <a href="#RDMA_REJECT">rdma_reject</a>,\r
+<a href="#RDMA_GET_CM_EVENT">rdma_get_cm_event</a>\r
+<p> </p>\r
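+<p>The passive-side flow above can be sketched as follows; note that 
+rdma_accept is called on the new id delivered with the connect-request event, 
+not on the listening id (illustrative code, librdmacm assumed):</p>
+```c
+/* Sketch: accept a connection request, echoing the rdma_conn_param
+ * values reported in the RDMA_CM_EVENT_CONNECT_REQUEST event, as the
+ * CONNECTION PROPERTIES text recommends. A QP would normally be
+ * created on event->id before accepting. */
+#include <rdma/rdma_cma.h>
+
+int accept_request(struct rdma_cm_event *event)
+{
+    /* Copy the requested values into our own structure so the event
+     * can be acknowledged independently of this call. */
+    struct rdma_conn_param param = event->param.conn;
+
+    return rdma_accept(event->id, &param);
+}
+```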
+<h3><br>\r
+<a name="RDMA_CONNECT">RDMA_CONNECT</a></h3>\r
+<hr>\r
+<h4>NAME</h4>\r
+rdma_connect - Initiate an active connection request.<h4>SYNOPSIS</h4>\r
+<b>#include <rdma/rdma_cma.h></b><p> <b>int rdma_connect</b> <b>(struct \r
+rdma_cm_id *</b><i>id</i><b>,</b> <b>struct rdma_conn_param *</b><i>conn_param</i><b>);</b>\r
+</p>\r
+<h4>ARGUMENTS</h4>\r
+<dl COMPACT>\r
+ <dt>id</dt>\r
+ <dd>RDMA identifier.\r
+ </dd>\r
+ <dt>conn_param</dt>\r
+ <dd>connection parameters. See CONNECTION PROPERTIES below for details.\r
+ </dd>\r
+</dl>\r
+<h4>DESCRIPTION</h4>\r
+For an rdma_cm_id of type RDMA_PS_TCP, this call initiates a connection request \r
+to a remote destination. For an rdma_cm_id of type RDMA_PS_UDP, it initiates a \r
+lookup of the remote QP providing the datagram service.<h4>NOTES</h4>\r
+Users must have resolved a route to the destination address by having called \r
+rdma_resolve_route before calling this routine.\r
+<h4>CONNECTION PROPERTIES</h4>\r
+The following properties are used to configure the communication and are specified 
+by the conn_param parameter when connecting or establishing datagram \r
+communication.\r
+<dl COMPACT>\r
+ <dt>private_data</dt>\r
+ <dd>References a user-controlled data buffer. The contents of the buffer are \r
+ copied and transparently passed to the remote side as part of the \r
+ communication request. May be NULL if private_data is not required.\r
+ </dd>\r
+ <dt>private_data_len</dt>\r
+ <dd>Specifies the size of the user-controlled data buffer. Note that the \r
+ actual amount of data transferred to the remote side is transport dependent \r
+ and may be larger than that requested.\r
+ </dd>\r
+ <dt>responder_resources</dt>\r
+ <dd>The maximum number of outstanding RDMA read and atomic operations that \r
+ the local side will accept from the remote side. Applies only to RDMA_PS_TCP. \r
+ This value must be less than or equal to the local RDMA device attribute \r
+ max_qp_rd_atom and remote RDMA device attribute max_qp_init_rd_atom. The \r
+ remote endpoint can adjust this value when accepting the connection.\r
+ </dd>\r
+ <dt>initiator_depth</dt>\r
+ <dd>The maximum number of outstanding RDMA read and atomic operations that \r
+ the local side will have to the remote side. Applies only to RDMA_PS_TCP. \r
+ This value must be less than or equal to the local RDMA device attribute \r
+ max_qp_init_rd_atom and remote RDMA device attribute max_qp_rd_atom. The \r
+ remote endpoint can adjust this value when accepting the connection.\r
+ </dd>\r
+ <dt>flow_control</dt>\r
+ <dd>Specifies if hardware flow control is available. This value is exchanged \r
+ with the remote peer and is not used to configure the QP. Applies only to \r
+ RDMA_PS_TCP.\r
+ </dd>\r
+ <dt>retry_count</dt>\r
+ <dd>The maximum number of times that a data transfer operation should be \r
+ retried on the connection when an error occurs. This setting controls the \r
+ number of times to retry send, RDMA, and atomic operations when timeouts \r
+ occur. Applies only to RDMA_PS_TCP.\r
+ </dd>\r
+ <dt>rnr_retry_count</dt>\r
+ <dd>The maximum number of times that a send operation from the remote peer \r
+ should be retried on a connection after receiving a receiver not ready (RNR) \r
+ error. RNR errors are generated when a send request arrives before a buffer \r
+ has been posted to receive the incoming data. Applies only to RDMA_PS_TCP.\r
+ </dd>\r
+ <dt>srq</dt>\r
+ <dd>Specifies if the QP associated with the connection is using a shared \r
+ receive queue. This field is ignored by the library if a QP has been created \r
+ on the rdma_cm_id. Applies only to RDMA_PS_TCP.\r
+ </dd>\r
+ <dt>qp_num</dt>\r
+ <dd>Specifies the QP number associated with the connection. This field is \r
+ ignored by the library if a QP has been created on the rdma_cm_id. Applies \r
+ only to RDMA_PS_TCP.\r
+ </dd>\r
+</dl>\r
+<h4>INFINIBAND SPECIFIC</h4>\r
+In addition to the connection properties defined above, InfiniBand QPs are \r
+configured with minimum RNR NAK timer and local ACK timeout values. The minimum \r
+RNR NAK timer value is set to 0, for a delay of 655 ms. The local ACK timeout is \r
+calculated based on the packet lifetime and local HCA ACK delay. The packet \r
+lifetime is determined by the InfiniBand Subnet Administrator and is part of the \r
+resolved route (path record) information. The HCA ACK delay is a property of the \r
+locally used HCA. Retry count and RNR retry count values are 3-bit values.<h4>\r
+IWARP SPECIFIC</h4>\r
+Connections established over iWARP RDMA devices currently require that the 
+active side of the connection send the first message.\r
+<h4>SEE ALSO</h4>\r
+<a href="#RDMA_CM_-_Communications_Manager">rdma_cm</a>,\r
+<a href="#RDMA_CREATE_ID">rdma_create_id</a>, <a href="#RDMA_RESOLVE_ROUTE">\r
+rdma_resolve_route</a>, <a href="#RDMA_DISCONNECT">rdma_disconnect</a>,\r
+<a href="#RDMA_LISTEN">rdma_listen</a>, <a href="#RDMA_GET_CM_EVENT">\r
+rdma_get_cm_event</a>\r
+<p> </p>\r
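+<p>A sketch of the active-side request, once rdma_resolve_route has completed 
+(parameter values are illustrative only):</p>
+```c
+/* Sketch: initiate an active connection request. Completion is
+ * reported as RDMA_CM_EVENT_ESTABLISHED (or an error event) on the
+ * id's event channel. */
+#include <string.h>
+#include <rdma/rdma_cma.h>
+
+int connect_to_server(struct rdma_cm_id *id)
+{
+    struct rdma_conn_param param;
+
+    memset(&param, 0, sizeof param);
+    param.responder_resources = 1;  /* inbound RDMA reads accepted */
+    param.initiator_depth = 1;      /* outbound RDMA reads issued */
+    param.retry_count = 7;          /* 3-bit field, range 0-7 */
+    param.rnr_retry_count = 7;      /* 7 = retry indefinitely on RNR */
+
+    return rdma_connect(id, &param);
+}
+```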
+<h3><br>\r
+<a name="RDMA_DISCONNECT">RDMA_DISCONNECT</a></h3>\r
+<hr>\r
+<h4>NAME</h4>\r
+rdma_disconnect - Disconnect a connection.
+<h4>SYNOPSIS</h4>\r
+<b>#include <rdma/rdma_cma.h></b><p> <b>int rdma_disconnect</b> <b>(struct \r
+rdma_cm_id *</b><i>id</i><b>);</b></p>\r
+<h4>ARGUMENTS</h4>\r
+<dl COMPACT>\r
+ <dt>id</dt>\r
+ <dd>RDMA identifier.\r
+ </dd>\r
+</dl>\r
+<h4>DESCRIPTION</h4>\r
+Disconnects a connection and transitions any associated QP to the error state, \r
+which will flush any posted work requests to the completion queue. This routine \r
+may be called by both the client and server side of a connection. After \r
+successfully disconnecting, an RDMA_CM_EVENT_DISCONNECTED event will be \r
+generated on both sides of the connection.<h4>SEE ALSO</h4>\r
+<a href="#RDMA_CONNECT">rdma_connect</a>, <a href="#RDMA_LISTEN">rdma_listen</a>,\r
+<a href="#RDMA_ACCEPT">rdma_accept</a>, <a href="#RDMA_GET_CM_EVENT">\r
+rdma_get_cm_event</a><p> </p>\r
+<h3><br>\r
+<a name="RDMA_RESOLVE_ROUTE">RDMA_RESOLVE_ROUTE</a></h3>\r
+<hr>\r
+<h4>NAME</h4>\r
+rdma_resolve_route - Resolve the route information needed to establish a \r
+connection.<h4>SYNOPSIS</h4>\r
+<b>#include <rdma/rdma_cma.h></b><p> <b>int rdma_resolve_route</b> <b>(struct \r
+rdma_cm_id *</b><i>id</i><b>,</b> <b>int </b><i>timeout_ms</i><b>);</b></p>\r
+<h4>ARGUMENTS</h4>\r
+<dl COMPACT>\r
+ <dt>id</dt>\r
+ <dd>RDMA identifier.\r
+ </dd>\r
+ <dt>timeout_ms</dt>\r
+ <dd>Time to wait for resolution to complete.\r
+ </dd>\r
+</dl>\r
+<h4>DESCRIPTION</h4>\r
+Resolves an RDMA route to the destination address in order to establish a \r
+connection. The destination address must have already been resolved by calling \r
+rdma_resolve_addr.\r
+<h4>NOTES</h4>\r
+This is called on the client side of a connection after calling \r
+rdma_resolve_addr, but before calling rdma_connect.<h4>INFINIBAND SPECIFIC</h4>\r
+This call obtains a path record that is used by the connection.\r
+<h4>SEE ALSO</h4>\r
+<a href="#RDMA_RESOLVE_ADDR">rdma_resolve_addr</a>, <a href="#RDMA_CONNECT">\r
+rdma_connect</a>, <a href="#RDMA_GET_CM_EVENT">rdma_get_cm_event</a>\r
+<p> </p>\r
+<h3><br>\r
+<a name="RDMA_BIND_ADDR">RDMA_BIND_ADDR</a></h3>\r
+<hr>\r
+<h4>NAME</h4>\r
+rdma_bind_addr - Bind an RDMA identifier to a source address.<h4>SYNOPSIS</h4>\r
+<b>#include <rdma/rdma_cma.h></b><p> <b>int rdma_bind_addr</b> <b>(struct \r
+rdma_cm_id *</b><i>id</i><b>,</b> <b>struct sockaddr *</b><i>addr</i><b>);</b>\r
+</p>\r
+<h4>ARGUMENTS</h4>\r
+<dl COMPACT>\r
+ <dt>id</dt>\r
+ <dd>RDMA identifier.\r
+ </dd>\r
+ <dt>addr</dt>\r
+ <dd>Local address information. Wildcard values are permitted.\r
+ </dd>\r
+</dl>\r
+<h4>DESCRIPTION</h4>\r
+Associates a source address with an rdma_cm_id. The address may be wildcarded. \r
+If binding to a specific local address, the rdma_cm_id will also be bound to a \r
+local RDMA device.<h4>NOTES</h4>\r
+Typically, this routine is called before calling rdma_listen to bind to a \r
+specific port number, but it may also be called on the active side of a \r
+connection before calling rdma_resolve_addr to bind to a specific address. If \r
+used to bind to port 0, the rdma_cm will select an available port, which can be \r
+retrieved with <a href="#RDMA_GET_SRC_PORT">rdma_get_src_port</a>.<h4>SEE ALSO</h4>\r
+<a href="#RDMA_CREATE_ID">rdma_create_id</a>, <a href="#RDMA_LISTEN">rdma_listen</a>,\r
+<a href="#RDMA_RESOLVE_ADDR">rdma_resolve_addr</a>, <a href="#RDMA_CREATE_QP">\r
+rdma_create_qp</a>, <a href="#RDMA_GET_LOCAL_ADDR">rdma_get_local_addr</a>,\r
+<a href="#RDMA_GET_SRC_PORT">rdma_get_src_port</a>\r
+<p> </p>\r
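+<p>A minimal sketch of binding to the IPv4 wildcard address with port 0 and \r
+then reading back the port that the rdma_cm selected. It assumes, as with \r
+sockets, that rdma_get_src_port returns the port in network byte order.</p>\r

```c
/* Sketch: bind to a wildcard address, let the rdma_cm pick the port. */
#include <rdma/rdma_cma.h>
#include <arpa/inet.h>   /* Windows builds would use winsock2.h instead */
#include <string.h>
#include <stdio.h>

static int bind_any(struct rdma_cm_id *id)
{
    struct sockaddr_in sin;

    memset(&sin, 0, sizeof sin);
    sin.sin_family = AF_INET;
    sin.sin_addr.s_addr = INADDR_ANY;   /* wildcard local address */
    sin.sin_port = 0;                   /* let the rdma_cm choose  */

    if (rdma_bind_addr(id, (struct sockaddr *) &sin))
        return -1;

    printf("bound to port %u\n", ntohs(rdma_get_src_port(id)));
    return 0;
}
```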
+<h3><br>\r
+<a name="RDMA_LISTEN">RDMA_LISTEN</a></h3>\r
+<hr>\r
+<h4>NAME</h4>\r
+rdma_listen - Listen for incoming connection requests.<h4>SYNOPSIS</h4>\r
+<b>#include <rdma/rdma_cma.h></b><p> <b>int rdma_listen</b> <b>(struct \r
+rdma_cm_id *</b><i>id</i><b>,</b> <b>int </b><i>backlog</i><b>);</b></p>\r
+<h4>ARGUMENTS</h4>\r
+<dl COMPACT>\r
+ <dt>id</dt>\r
+ <dd>RDMA identifier.\r
+ </dd>\r
+ <dt>backlog</dt>\r
+  <dd>Backlog of incoming connection requests.\r
+ </dd>\r
+</dl>\r
+<h4>DESCRIPTION</h4>\r
+Initiates a listen for incoming connection requests or datagram service lookup. \r
+The listen will be restricted to the locally bound source address.\r
+<h4>NOTES</h4>\r
+Users must have bound the rdma_cm_id to a local address by calling \r
+rdma_bind_addr before calling this routine. If the rdma_cm_id is bound to a \r
+specific IP address, the listen will be restricted to that address and the \r
+associated RDMA device. If the rdma_cm_id is bound to an RDMA port number only, \r
+the listen will occur across all RDMA devices.<h4>SEE ALSO</h4>\r
+<a href="#RDMA_CM_-_Communications_Manager">rdma_cm</a>,\r
+<a href="#RDMA_BIND_ADDR">rdma_bind_addr</a>, <a href="#RDMA_CONNECT">\r
+rdma_connect</a>, <a href="#RDMA_ACCEPT">rdma_accept</a>, <a href="#RDMA_REJECT">\r
+rdma_reject</a>, <a href="#RDMA_GET_CM_EVENT">rdma_get_cm_event</a>\r
+<p> </p>\r
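+<p>A sketch of the passive-side setup: create an identifier, bind it, and \r
+listen. The event channel is assumed to have been created with \r
+rdma_create_event_channel, and the backlog of 8 is an arbitrary example \r
+value.</p>\r

```c
/* Sketch: create, bind, and listen on an rdma_cm_id. */
#include <rdma/rdma_cma.h>

static struct rdma_cm_id *start_listener(struct rdma_event_channel *channel,
                                         struct sockaddr *addr)
{
    struct rdma_cm_id *listen_id;

    if (rdma_create_id(channel, &listen_id, NULL, RDMA_PS_TCP))
        return NULL;
    if (rdma_bind_addr(listen_id, addr) ||
        rdma_listen(listen_id, 8 /* backlog */)) {
        rdma_destroy_id(listen_id);
        return NULL;
    }
    /* Connection requests now arrive as RDMA_CM_EVENT_CONNECT_REQUEST
     * events on the channel. */
    return listen_id;
}
```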
+<h3><br>\r
+<a name="RDMA_REJECT">RDMA_REJECT</a></h3>\r
+<hr>\r
+<h4>NAME</h4>\r
+rdma_reject - Called to reject a connection request.<h4>SYNOPSIS</h4>\r
+<b>#include <rdma/rdma_cma.h></b><p> <b>int rdma_reject</b> <b>(struct \r
+rdma_cm_id *</b><i>id</i><b>,</b> <b>const void *</b><i>private_data</i><b>,</b>\r
+<b>uint8_t </b><i>private_data_len</i><b>);</b> </p>\r
+<h4>ARGUMENTS</h4>\r
+<dl COMPACT>\r
+ <dt>id</dt>\r
+ <dd>Connection identifier associated with the request.\r
+ </dd>\r
+ <dt>private_data</dt>\r
+ <dd>Optional private data to send with the reject message.\r
+ </dd>\r
+ <dt>private_data_len</dt>\r
+ <dd>Specifies the size of the user-controlled data buffer. Note that the \r
+ actual amount of data transferred to the remote side is transport dependent \r
+ and may be larger than that requested.\r
+ </dd>\r
+</dl>\r
+<h4>DESCRIPTION</h4>\r
+Called from the listening side to reject a connection or datagram service lookup \r
+request.<h4>NOTES</h4>\r
+After receiving a connection request event, a user may call rdma_reject to \r
+reject the request. If the underlying RDMA transport supports private data in \r
+the reject message, the specified data will be passed to the remote side.\r
+<h4>SEE ALSO</h4>\r
+<a href="#RDMA_LISTEN">rdma_listen</a>, <a href="#RDMA_ACCEPT">rdma_accept</a>,\r
+<a href="#RDMA_GET_CM_EVENT">rdma_get_cm_event</a>\r
+<p> </p>\r
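+<p>A sketch of rejecting a connection request received on the event channel. \r
+The "busy" payload is hypothetical private data; as noted above, the transport \r
+may pad it before delivery to the remote side.</p>\r

```c
/* Sketch: reject an incoming connection request with private data. */
#include <rdma/rdma_cma.h>

static void handle_request(struct rdma_cm_event *event)
{
    if (event->event == RDMA_CM_EVENT_CONNECT_REQUEST) {
        const char reason[] = "busy";
        /* event->id identifies the new connection, not the listen id */
        rdma_reject(event->id, reason, sizeof reason);
    }
    rdma_ack_cm_event(event);   /* events must always be acknowledged */
}
```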
+<h3><br>\r
+<a name="RDMA_GET_SRC_PORT">RDMA_GET_SRC_PORT</a></h3>\r
+<hr>\r
+<h4>NAME</h4>\r
+rdma_get_src_port - Returns the local port number of a bound rdma_cm_id.<h4>\r
+SYNOPSIS</h4>\r
+<b>#include <rdma/rdma_cma.h></b><p> <b>uint16_t rdma_get_src_port</b> <b>\r
+(struct rdma_cm_id *</b><i>id</i><b>);</b> <a NAME="lbAD"> </a> </p>\r
+<h4>ARGUMENTS</h4>\r
+<dl COMPACT>\r
+ <dt>id</dt>\r
+ <dd>RDMA identifier.\r
+ </dd>\r
+</dl>\r
+<h4>DESCRIPTION</h4>\r
+Returns the local port number for an rdma_cm_id that has been bound to a local \r
+address.<h4>SEE ALSO</h4>\r
+<a href="#RDMA_BIND_ADDR">rdma_bind_addr</a>, <a href="#RDMA_RESOLVE_ADDR">\r
+rdma_resolve_addr</a>, <a href="#RDMA_GET_DST_PORT">rdma_get_dst_port</a>,\r
+<a href="#RDMA_GET_LOCAL_ADDR">rdma_get_local_addr</a>,\r
+<a href="#RDMA_GET_PEER_ADDR">rdma_get_peer_addr</a>\r
+<p> </p>\r
+<h3><br>\r
+<a name="RDMA_GET_DST_PORT">RDMA_GET_DST_PORT</a></h3>\r
+<hr>\r
+<h4>NAME</h4>\r
+rdma_get_dst_port - Returns the remote port number of a bound rdma_cm_id.<h4>\r
+SYNOPSIS</h4>\r
+<b>#include <rdma/rdma_cma.h></b><p> <b>uint16_t rdma_get_dst_port</b> <b>\r
+(struct rdma_cm_id *</b><i>id</i><b>);</b> </p>\r
+<h4>ARGUMENTS</h4>\r
+<dl COMPACT>\r
+ <dt>id</dt>\r
+ <dd>RDMA identifier.</dd>\r
+</dl>\r
+<h4>DESCRIPTION</h4>\r
+Returns the remote port number for an rdma_cm_id that has been bound to a remote \r
+address.<h4>SEE ALSO</h4>\r
+<a href="#RDMA_CONNECT">rdma_connect</a>, <a href="#RDMA_ACCEPT">rdma_accept</a>,\r
+<a href="#RDMA_GET_CM_EVENT">rdma_get_cm_event</a>, <a href="#RDMA_GET_SRC_PORT">\r
+rdma_get_src_port</a>, <a href="#RDMA_GET_LOCAL_ADDR">rdma_get_local_addr</a>,\r
+<a href="#RDMA_GET_PEER_ADDR">rdma_get_peer_addr</a>
+<p> </p>\r
+<h3><br>\r
+<a name="RDMA_GET_LOCAL_ADDR">RDMA_GET_LOCAL_ADDR</a></h3>\r
+<hr>\r
+<h4>NAME</h4>\r
+rdma_get_local_addr - Returns the local IP address of a bound rdma_cm_id.\r
+<h4>SYNOPSIS</h4>\r
+<b>#include <rdma/rdma_cma.h></b><p> <b>struct sockaddr * rdma_get_local_addr</b>\r
+<b>(struct rdma_cm_id *</b><i>id</i><b>);</b> </p>\r
+<h4>ARGUMENTS</h4>\r
+<dl COMPACT>\r
+ <dt>id</dt>\r
+ <dd>RDMA identifier.\r
+ </dd>\r
+</dl>\r
+<h4>DESCRIPTION</h4>\r
+Returns the local IP address for an rdma_cm_id that has been bound to a local \r
+device.<h4>SEE ALSO</h4>\r
+<a href="#RDMA_BIND_ADDR">rdma_bind_addr</a>, <a href="#RDMA_RESOLVE_ADDR">\r
+rdma_resolve_addr</a>, <a href="#RDMA_GET_SRC_PORT">rdma_get_src_port</a>,\r
+<a href="#RDMA_GET_DST_PORT">rdma_get_dst_port</a>,\r
+<a href="#RDMA_GET_PEER_ADDR">rdma_get_peer_addr</a>\r
+<p> </p>\r
+<h3><br>\r
+<a name="RDMA_GET_PEER_ADDR">RDMA_GET_PEER_ADDR</a></h3>\r
+<hr>\r
+<h4>NAME</h4>\r
+rdma_get_peer_addr - Returns the remote IP address of a bound rdma_cm_id.<h4>\r
+SYNOPSIS</h4>\r
+<b>#include <rdma/rdma_cma.h></b><p> <b>struct sockaddr * rdma_get_peer_addr</b>\r
+<b>(struct rdma_cm_id *</b><i>id</i><b>);</b> </p>\r
+<h4>ARGUMENTS</h4>\r
+<dl COMPACT>\r
+ <dt>id</dt>\r
+ <dd>RDMA identifier.</dd>\r
+</dl>\r
+<h4>DESCRIPTION</h4>\r
+Returns the remote IP address associated with an rdma_cm_id.<h4>SEE ALSO</h4>\r
+<a href="#RDMA_RESOLVE_ADDR">rdma_resolve_addr</a>, <a href="#RDMA_GET_SRC_PORT">\r
+rdma_get_src_port</a>, <a href="#RDMA_GET_DST_PORT">rdma_get_dst_port</a>,\r
+<a href="#RDMA_GET_LOCAL_ADDR">rdma_get_local_addr</a><p> </p>\r
+<h3><br>\r
+<a name="RDMA_EVENT_STR">RDMA_EVENT_STR</a></h3>\r
+<hr>\r
+<h4>NAME</h4>\r
+rdma_event_str - Returns a string representation of an rdma cm event.<h4>\r
+SYNOPSIS</h4>\r
+<b>#include <rdma/rdma_cma.h></b><p> <b>char * rdma_event_str</b> <b>(enum \r
+rdma_cm_event_type </b><i>event</i><b>);</b></p>\r
+<h4>ARGUMENTS</h4>\r
+<dl COMPACT>\r
+ <dt>event</dt>\r
+ <dd>Asynchronous event.</dd>\r
+</dl>\r
+<h4>DESCRIPTION</h4>\r
+Returns a string representation of an asynchronous event.<h4>SEE ALSO</h4>\r
+<a href="#RDMA_GET_CM_EVENT">rdma_get_cm_event</a><p> </p>\r
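+<p>A sketch of a minimal event loop that logs each rdma_cm event by name using \r
+rdma_event_str before acknowledging it:</p>\r

```c
/* Sketch: log cm events by name as they arrive on the channel. */
#include <rdma/rdma_cma.h>
#include <stdio.h>

static void event_loop(struct rdma_event_channel *channel)
{
    struct rdma_cm_event *event;

    while (rdma_get_cm_event(channel, &event) == 0) {
        printf("event: %s, status %d\n",
               rdma_event_str(event->event), event->status);
        rdma_ack_cm_event(event);   /* release the event when done */
    }
}
```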
+<h3><br>\r
+<a name="RDMA_JOIN_MULTICAST">RDMA_JOIN_MULTICAST</a></h3>\r
+<hr>\r
+<h4>NAME</h4>\r
+rdma_join_multicast - Joins a multicast group.<h4>SYNOPSIS</h4>\r
+<b>#include <rdma/rdma_cma.h></b><p> <b>int rdma_join_multicast</b> <b>(struct \r
+rdma_cm_id *</b><i>id</i><b>,</b> <b>struct sockaddr *</b><i>addr</i><b>,</b> <b>\r
+void *</b><i>context</i><b>);</b></p>\r
+<h4>ARGUMENTS</h4>\r
+<dl COMPACT>\r
+ <dt>id</dt>\r
+ <dd>Communication identifier associated with the request.\r
+ </dd>\r
+ <dt>addr</dt>\r
+ <dd>Multicast address identifying the group to join.\r
+ </dd>\r
+ <dt>context</dt>\r
+ <dd>User-defined context associated with the join request.</dd>\r
+</dl>\r
+<h4>DESCRIPTION</h4>\r
+Joins a multicast group and attaches an associated QP to the group.<h4>NOTES</h4>\r
+Before joining a multicast group, the rdma_cm_id must be bound to an RDMA device \r
+by calling rdma_bind_addr or rdma_resolve_addr. Use of rdma_resolve_addr \r
+requires the local routing tables to resolve the multicast address to an RDMA \r
+device, unless a specific source address is provided. The user must call \r
+rdma_leave_multicast to leave the multicast group and release any multicast \r
+resources. After the join operation completes, any associated QP is \r
+automatically attached to the multicast group, and the join context is returned \r
+to the user through the private_data field in the rdma_cm_event.<h4>SEE ALSO</h4>\r
+<a href="#RDMA_LEAVE_MULTICAST">rdma_leave_multicast</a>,\r
+<a href="#RDMA_BIND_ADDR">rdma_bind_addr</a>, <a href="#RDMA_RESOLVE_ADDR">\r
+rdma_resolve_addr</a>, <a href="#RDMA_CREATE_QP">rdma_create_qp</a>,\r
+<a href="#RDMA_GET_CM_EVENT">rdma_get_cm_event</a>\r
+<p> </p>\r
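+<p>A sketch of joining a multicast group on an identifier already bound to a \r
+device. Here "mc_addr" is a hypothetical multicast sockaddr the caller has \r
+already resolved.</p>\r

```c
/* Sketch: join a multicast group; the id must already be bound to a
 * device via rdma_bind_addr or rdma_resolve_addr. */
#include <rdma/rdma_cma.h>

static int join_group(struct rdma_cm_id *id, struct sockaddr *mc_addr)
{
    if (rdma_join_multicast(id, mc_addr, NULL /* user context */))
        return -1;
    /* ...wait for RDMA_CM_EVENT_MULTICAST_JOIN; any QP associated with
     * the id is then attached to the group automatically... */
    return 0;
}
```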
+<h3><br>\r
+<a name="RDMA_LEAVE_MULTICAST">RDMA_LEAVE_MULTICAST</a></h3>\r
+<hr>\r
+<h4>NAME</h4>\r
+rdma_leave_multicast - Leaves a multicast group.
+<h4>SYNOPSIS</h4>\r
+<b>#include <rdma/rdma_cma.h></b><p> <b>int rdma_leave_multicast</b> <b>(struct \r
+rdma_cm_id *</b><i>id</i><b>,</b> <b>struct sockaddr *</b><i>addr</i><b>);</b></p>\r
+<h4>ARGUMENTS</h4>\r
+<dl COMPACT>\r
+ <dt>id</dt>\r
+ <dd>Communication identifier associated with the request.
+ </dd>\r
+ <dt>addr</dt>\r
+ <dd>Multicast address identifying the group to leave.</dd>\r
+</dl>\r
+<h4>DESCRIPTION</h4>\r
+Leaves a multicast group and detaches an associated QP from the group.<h4>NOTES</h4>\r
+Calling this function before a group has been fully joined cancels the join \r
+operation. Be aware that messages received from the multicast group may still \r
+be queued for completion processing immediately after leaving the group. \r
+Destroying an rdma_cm_id automatically leaves all multicast groups.<h4>SEE ALSO</h4>\r
+<a href="#RDMA_JOIN_MULTICAST">rdma_join_multicast</a>,\r
+<a href="#RDMA_DESTROY_QP">rdma_destroy_qp</a>
+<p> </p>\r
+<h3><br>\r
+<a name="RDMA_SET_OPTION">RDMA_SET_OPTION</a></h3>\r
+<hr>\r
+<h4>NAME</h4>\r
+rdma_set_option - Set communication options for an rdma_cm_id.<h4>SYNOPSIS</h4>\r
+<b>#include <rdma/rdma_cma.h></b><p> <b>int rdma_set_option</b> <b>(struct \r
+rdma_cm_id *</b><i>id</i><b>,</b> <b>int </b><i>level</i><b>,</b> <b>int </b><i>\r
+optname</i><b>,</b> <b>void *</b><i>optval</i><b>,</b> <b>size_t </b><i>optlen</i><b>);</b>\r
+</p>\r
+<h4>ARGUMENTS</h4>\r
+<dl COMPACT>\r
+ <dt>id</dt>\r
+ <dd>RDMA identifier.\r
+ </dd>\r
+ <dt>level</dt>\r
+ <dd>Protocol level of the option to set.\r
+ </dd>\r
+ <dt>optname</dt>\r
+ <dd>Name of the option, relative to the level, to set.\r
+ </dd>\r
+ <dt>optval</dt>\r
+ <dd>Reference to the option data. The data is dependent on the level and \r
+ optname.\r
+ </dd>\r
+ <dt>optlen</dt>\r
+  <dd>The size of the optval buffer.\r
+ </dd>\r
+</dl>\r
+<h4>DESCRIPTION</h4>\r
+Sets communication options for an rdma_cm_id. This call is used to override the \r
+default system settings.<h4>NOTES</h4>\r
+Option details may be found in the relevant header files.<h4>SEE ALSO</h4>\r
+<a href="#RDMA_CREATE_ID">rdma_create_id</a>\r
+<p> </p>\r
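+<p>As one illustration, a sketch of setting the type-of-service for an \r
+rdma_cm_id. The option names used here (RDMA_OPTION_ID and RDMA_OPTION_ID_TOS) \r
+follow the Linux librdmacm headers and may not be present in every \r
+implementation; consult the header files shipped with this package.</p>\r

```c
/* Sketch: set the type-of-service option, assuming the Linux-style
 * RDMA_OPTION_ID / RDMA_OPTION_ID_TOS option names are available. */
#include <rdma/rdma_cma.h>

static int set_tos(struct rdma_cm_id *id, uint8_t tos)
{
    return rdma_set_option(id, RDMA_OPTION_ID, RDMA_OPTION_ID_TOS,
                           &tos, sizeof tos);
}
```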
+<h3><br>\r
+<a name="RDMA_GET_DEVICES">RDMA_GET_DEVICES</a></h3>\r
+<hr>\r
+<h4>NAME</h4>\r
+rdma_get_devices - Get a list of RDMA devices currently available.<h4>SYNOPSIS</h4>\r
+<b>#include <rdma/rdma_cma.h></b><p> <b>struct ibv_context ** rdma_get_devices</b>\r
+<b>(int *</b><i>num_devices</i><b>);</b></p>\r
+<h4>ARGUMENTS</h4>\r
+<dl COMPACT>\r
+ <dt>num_devices</dt>\r
+ <dd>If non-NULL, set to the number of devices returned.\r
+ </dd>\r
+</dl>\r
+<h4>DESCRIPTION</h4>\r
+Returns a NULL-terminated array of opened RDMA devices. Callers can use this \r
+routine to allocate resources on specific RDMA devices that will be shared \r
+across multiple rdma_cm_id's.<h4>NOTES</h4>\r
+The returned array must be released by calling rdma_free_devices. Devices remain \r
+open while the librdmacm library is loaded.\r
+<h4>SEE ALSO</h4>\r
+<a href="#RDMA_FREE_DEVICES">rdma_free_devices</a>\r
+<p> </p>\r
+<h3><br>\r
+<a name="RDMA_FREE_DEVICES">RDMA_FREE_DEVICES</a></h3>\r
+<hr>\r
+<h4>NAME</h4>\r
+rdma_free_devices - Frees the list of devices returned by rdma_get_devices.<h4>\r
+SYNOPSIS</h4>\r
+<b>#include <rdma/rdma_cma.h></b><p> <b>void rdma_free_devices</b> <b>(struct \r
+ibv_context **</b><i>list</i><b>);</b> </p>\r
+<h4>ARGUMENTS</h4>\r
+<dl COMPACT>\r
+ <dt>list</dt>\r
+ <dd>List of devices returned from rdma_get_devices.</dd>\r
+</dl>\r
+<h4>DESCRIPTION</h4>\r
+Frees the device array returned by rdma_get_devices.<h4>SEE ALSO</h4>\r
+<a href="#RDMA_GET_DEVICES">rdma_get_devices</a>\r
+<p> </p>\r
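+<p>A sketch of enumerating the available RDMA devices and releasing the list \r
+afterwards with rdma_free_devices:</p>\r

```c
/* Sketch: list the opened RDMA devices by name, then free the array. */
#include <rdma/rdma_cma.h>
#include <stdio.h>

static void list_devices(void)
{
    int i, num_devices = 0;
    struct ibv_context **list = rdma_get_devices(&num_devices);

    if (!list)
        return;
    for (i = 0; i < num_devices; i++)
        printf("device: %s\n", ibv_get_device_name(list[i]->device));
    rdma_free_devices(list);   /* frees the array; devices stay open */
}
```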
+<h3><br>\r
+<a name="RDMA_NOTIFY">RDMA_NOTIFY</a></h3>\r
+<hr>\r
+<h4>NAME</h4>\r
+rdma_notify - Notifies the librdmacm of an asynchronous event.<h4>SYNOPSIS</h4>\r
+<b>#include <rdma/rdma_cma.h></b><p> <b>int rdma_notify</b> <b>(struct \r
+rdma_cm_id *</b><i>id</i><b>,</b> <b>enum ibv_event_type </b><i>event</i><b>);</b></p>\r
+<h4>ARGUMENTS</h4>\r
+<dl COMPACT>\r
+ <dt>id</dt>\r
+ <dd>RDMA identifier.\r
+ </dd>\r
+ <dt>event</dt>\r
+ <dd>Asynchronous event.</dd>\r
+</dl>\r
+<h4>DESCRIPTION</h4>\r
+Used to notify the librdmacm of asynchronous events that have occurred on a QP \r
+associated with the rdma_cm_id.<h4>NOTES</h4>\r
+Asynchronous events that occur on a QP are reported through the user's device \r
+event handler. This routine is used to notify the librdmacm of communication \r
+events. In most cases, use of this routine is not necessary; however, if \r
+connection establishment is done out of band (as it is over InfiniBand), it is \r
+possible to receive data on a QP that is not yet considered connected. In that \r
+case, this routine forces the connection into the established state, handling \r
+the rare situation where the connection never forms on its own. Events that \r
+should be reported to the CM are: IB_EVENT_COMM_EST.<h4>SEE ALSO</h4>\r
+<a href="#RDMA_CONNECT">rdma_connect</a>, <a href="#RDMA_ACCEPT">rdma_accept</a>,\r
+<a href="#RDMA_LISTEN">rdma_listen</a>\r
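+<p>A sketch of forwarding the communication-established event from the user's \r
+verbs async-event loop. It assumes the IBV_EVENT_COMM_EST constant from the \r
+verbs headers corresponds to the IB_EVENT_COMM_EST event named above, and that \r
+the caller can map the QP back to its rdma_cm_id.</p>\r

```c
/* Sketch: notify the librdmacm when the QP reports COMM_EST. */
#include <rdma/rdma_cma.h>
#include <infiniband/verbs.h>

static void on_async_event(struct rdma_cm_id *id,
                           struct ibv_async_event *event)
{
    if (event->event_type == IBV_EVENT_COMM_EST)
        rdma_notify(id, IBV_EVENT_COMM_EST);  /* force established state */
    ibv_ack_async_event(event);
}
```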
+<p>\r
+<span style="font-size: 12pt; ">\r
+<a href="#TOP"><font color="#000000"><</font></a></span><a href="#TOP"><font color="#000000"><b>return-to-top</b><span style="font-size: 12pt; ">></span></font></a></p>\r
+<p> </p>\r
+<p> </p>\r
+<h2><a name="WinVerbs">WinVerbs</a></h2></span>\r
+<span style="font-size: 12pt; font-family: Times New Roman">\r
+<hr>\r
+</span>\r
+<span style="font-size: 12pt; ">\r
<p>WinVerbs is a userspace verbs and communication management interface optimized \r
for the Windows operating system. Its lower interface is designed to support any \r
RDMA-based device, including InfiniBand and future RDMA devices. Its upper \r
interface provides a low-latency verbs interface and supports Microsoft's \r
NetworkDirect Interface, DAPL, and OFED \r
components: libibverbs, libibmad, rdma_cm interfaces and numerous OFED IB \r