Jack Morgenstein [Sun, 14 Dec 2008 16:14:20 +0000 (18:14 +0200)]
Set ownership bit correctly when copying over CQEs during CQ resize
When resizing a CQ, when copying over unpolled CQEs from the old CQE
buffer to the new buffer, the ownership bit must be set appropriately
for the new buffer, or the ownership bit in the new buffer gets
corrupted.
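A sketch of the copy, with an illustrative CQE layout and owner-bit position (not the exact mlx4 wire format); the owner bit must match the parity of the pass through the new ring at the destination index:

```c
#include <stdint.h>
#include <string.h>

#define CQE_OWNER_MASK 0x80  /* illustrative position of the ownership bit */

struct cqe {
    uint8_t payload[31];
    uint8_t owner_sr_opcode;
};

/* Copy one unpolled CQE into the resized buffer, recomputing the owner
 * bit for the destination ring: for a power-of-two ring of cqe_cnt
 * entries, the bit is set iff (index & cqe_cnt) is non-zero. */
static void copy_cqe_fix_owner(struct cqe *dst, const struct cqe *src,
                               uint32_t dst_index, uint32_t new_cqe_cnt)
{
    memcpy(dst, src, sizeof *dst);
    dst->owner_sr_opcode &= ~CQE_OWNER_MASK;
    if (dst_index & new_cqe_cnt)
        dst->owner_sr_opcode |= CQE_OWNER_MASK;
}
```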
Signed-off-by: Jack Morgenstein <jackm@dev.mellanox.co.il>
Signed-off-by: Roland Dreier <rolandd@cisco.com>
Jack Morgenstein [Tue, 25 Nov 2008 06:40:07 +0000 (08:40 +0200)]
Fix race between create QP and destroy QP
There is a race in libmlx4 because mlx4_create_qp() and
mlx4_destroy_qp() are not atomic WRT each other. If one thread is
destroying a QP while another is creating a QP, the following can
happen: the destroying thread can be scheduled out after it has
deleted the QP from kernel space, but before it has cleared it from
userspace store (mlx4_clear_qp()). If the other thread creates a QP
during this break, it gets the same QP base number and overwrites the
destroyed QP's entry with mlx4_store_qp(). When the destroying thread
resumes, it clears the new entry from the userspace store via
mlx4_clear_qp().
Fix this by expanding where qp_table_mutex is held to serialize the
full create and destroy operations against each other.
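The shape of the fix, sketched with a toy table; the names and layout are illustrative stand-ins for libmlx4's real qp table helpers:

```c
#include <pthread.h>
#include <stddef.h>

#define QP_TABLE_SIZE 16

static void *qp_table[QP_TABLE_SIZE];
static pthread_mutex_t qp_table_mutex = PTHREAD_MUTEX_INITIALIZER;

/* With the fix, the kernel call and the userspace table update happen
 * inside one critical section, so a concurrent destroy can never run
 * between a destroy's kernel step and its table update and observe a
 * reused QP number. */
static void create_qp_sketch(unsigned qpn, void *qp)
{
    pthread_mutex_lock(&qp_table_mutex);
    /* ... ibv_cmd_create_qp() would run here ... */
    qp_table[qpn % QP_TABLE_SIZE] = qp;       /* mlx4_store_qp() */
    pthread_mutex_unlock(&qp_table_mutex);
}

static void destroy_qp_sketch(unsigned qpn)
{
    pthread_mutex_lock(&qp_table_mutex);
    /* ... ibv_cmd_destroy_qp() would run here ... */
    qp_table[qpn % QP_TABLE_SIZE] = NULL;     /* mlx4_clear_qp() */
    pthread_mutex_unlock(&qp_table_mutex);
}
```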
Eli Cohen [Mon, 16 Jun 2008 08:09:18 +0000 (11:09 +0300)]
Optimize QP stamping
Optimize stamping by reading the value of the DS field just before we
stamp, which gives the effective size of the descriptor as used in the
previous post. Then we stamp only that area, since the rest of the
descriptor is already stamped.
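A simplified sketch of the idea; the control-segment layout, field name, and stamp value here are illustrative, and the real code need only touch part of each cacheline:

```c
#include <stdint.h>

#define STAMP_VAL 0x7fffffffu  /* illustrative invalidation pattern */

struct ctrl_seg_sketch {
    uint32_t owner_opcode;
    uint8_t  reserved[3];
    uint8_t  fence_size;  /* low 6 bits: descriptor size in 16-byte units */
};

/* Read DS before stamping, then stamp only the ds * 16 bytes the
 * previous post actually used; everything past that is still stamped
 * from before. */
static void stamp_wqe_sketch(uint32_t *wqe)
{
    int ds = ((struct ctrl_seg_sketch *) wqe)->fence_size & 0x3f;
    for (int i = 0; i < ds * 4; ++i)   /* 4 dwords per 16-byte unit */
        wqe[i] = STAMP_VAL;
}
```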
Signed-off-by: Eli Cohen <eli@mellanox.co.il>
Signed-off-by: Roland Dreier <rolandd@cisco.com>
Roland Dreier [Fri, 4 Apr 2008 19:14:57 +0000 (12:14 -0700)]
Fix CQ cleanup when QP is destroyed
The current mlx4_destroy_qp() code cleans completions from the QP
being destroyed out of CQs before calling into the kernel to actually
destroy the QP. This leaves a window where new completions could be
added and left in the CQ, which leads to problems when that completion
is polled. Fix this by cleaning the CQ and removing the QP from the
QP table after the QP is really gone.
Roland Dreier [Mon, 28 Jan 2008 04:30:03 +0000 (20:30 -0800)]
Spec file cleanups based on Fedora review
- Don't mark libmlx4.driver as a %config, since it is not user modifiable.
- Change the name of the -devel-static package to plain -devel, since
it would be empty without the static library.
Jack Morgenstein [Thu, 24 Jan 2008 23:53:26 +0000 (15:53 -0800)]
Don't use memcpy() to write blueflame sends
Some memcpy() implementations may use move-string-buffer assembly
instructions, which do not guarantee copy order into the blueflame
buffer. This causes problems when writing into a blueflame buffer, so
use our own copy function instead.
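A sketch of the replacement copy loop, which guarantees ascending 64-bit stores (the real copy routine differs in detail; this is a simplified stand-in):

```c
#include <stdint.h>

/* memcpy() may be implemented with string-move instructions that give
 * no ordering guarantee; for a write-combining BlueFlame page we need
 * strictly ascending stores, so copy by hand. */
static void bf_copy_sketch(volatile uint64_t *dst, const uint64_t *src,
                           unsigned bytecnt)
{
    while (bytecnt >= 8) {
        *dst++ = *src++;
        bytecnt -= 8;
    }
}
```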
Signed-off-by: Jack Morgenstein <jackm@mellanox.co.il>
Signed-off-by: Roland Dreier <rolandd@cisco.com>
Roland Dreier [Fri, 4 Jan 2008 03:59:05 +0000 (19:59 -0800)]
Micro-optimize mlx4_poll_one()
Rather than byte-swapping cqe->g_mlpath_rqpn each time we extract a
field from it, byte-swap it once into a temporary variable. This
results in smaller, better code.
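A sketch of the pattern; the bit positions assumed here (bit 31 = GRH flag, bits 24-30 = path bits, low 24 bits = remote QPN) follow how mlx4_poll_one() uses the field:

```c
#include <stdint.h>
#include <arpa/inet.h>

struct wc_sketch {
    uint32_t src_qp;
    uint8_t  dlid_path_bits;
    int      grh_present;
};

/* Swap once into a temporary, then extract all three fields from the
 * host-order copy instead of calling ntohl() per field. */
static void parse_g_mlpath_rqpn(uint32_t g_mlpath_rqpn_be,
                                struct wc_sketch *wc)
{
    uint32_t g_mlpath_rqpn = ntohl(g_mlpath_rqpn_be);

    wc->src_qp         = g_mlpath_rqpn & 0xffffff;
    wc->dlid_path_bits = (g_mlpath_rqpn >> 24) & 0x7f;
    wc->grh_present    = !!(g_mlpath_rqpn & 0x80000000u);
}
```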
Jack Morgenstein [Mon, 17 Dec 2007 08:19:21 +0000 (10:19 +0200)]
Clear context struct at allocation time
Future versions of libibverbs will add additional ops to the end of
struct ibv_context. This means that driver libraries should zero the
entire struct ibv_context at allocation time, so that any new ops will
be NULL by default.
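The straightforward way to get that is to allocate with calloc(); the context struct below is an illustrative stand-in:

```c
#include <stdlib.h>

/* Stand-in for the driver's context; a newer libibverbs may look at
 * function pointers past the ones this library knows about. */
struct context_sketch {
    void *ops[32];
    int   page_size;
};

static struct context_sketch *alloc_context_sketch(void)
{
    /* calloc() zeroes the whole struct, so any ops the driver never
     * sets are reliably NULL rather than heap junk. */
    return calloc(1, sizeof(struct context_sketch));
}
```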
Signed-off-by: Jack Morgenstein <jackm@dev.mellanox.co.il>
Signed-off-by: Roland Dreier <rolandd@cisco.com>
Roland Dreier [Thu, 29 Nov 2007 22:52:36 +0000 (14:52 -0800)]
Don't add an extra entry to CQs
With mlx4 hardware, there is no need to add an extra entry when
creating a CQ. This potentially saves a lot of memory if a consumer
asks for an exact power of 2 entries.
This change works without changing the kernel mlx4_ib driver's ABI by
subtracting 1 from the number of CQ entries before passing the value
to the kernel; the kernel will add 1 and end up with the same value
actually used by libmlx4.
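The arithmetic can be sketched as follows (helper names are illustrative):

```c
/* Round up to the next power of two. */
static int next_pow2(int n)
{
    int p = 1;
    while (p < n)
        p <<= 1;
    return p;
}

/* libmlx4 allocates next_pow2(requested) entries but reports one less
 * through the unchanged ABI; the kernel's existing "+ 1, round up"
 * logic then reconstructs exactly the same size. */
static int cq_entries_for_kernel(int requested)
{
    return next_pow2(requested) - 1;
}
```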
Based on work from Jack Morgenstein <jackm@dev.mellanox.co.il>.
Jack Morgenstein [Wed, 28 Nov 2007 10:44:20 +0000 (12:44 +0200)]
max_recv_wr must be > 0 for non-SRQ QPs
max_recv_wr must also be non-zero for QPs which are not associated
with an SRQ.
Without this patch, if the userspace caller specifies max_recv_wr == 0
for a non-SRQ QP, the creation will be rejected in kernel space in
file infiniband/hw/mlx4/qp.c, function set_rq_size():
Roland Dreier [Tue, 23 Oct 2007 18:44:24 +0000 (11:44 -0700)]
Change __always_inline to inline
__always_inline is a kernel macro, so we can't use it in userspace code.
The inline keyword seems to work just as well in the one place libmlx4
uses it, so just change __always_inline to inline.
Jack Morgenstein [Mon, 22 Oct 2007 13:30:39 +0000 (15:30 +0200)]
Fix thinko in headroom marking order commit
Fix a thinko bug in commit c45efd89 ("Fix data corruption triggered by
wrong headroom marking order"), which leaves s/g entries being written
in forward (rather than reverse) order.
Signed-off-by: Jack Morgenstein <jackm@dev.mellanox.co.il>
Signed-off-by: Roland Dreier <rolandd@cisco.com>
Jack Morgenstein [Thu, 20 Sep 2007 18:22:37 +0000 (11:22 -0700)]
Fix data corruption triggered by wrong headroom marking order
This is an addendum to commit 561da8d1 ("Handle new FW requirement for
send request prefetching"). We also need to handle prefetch marking
properly for S/G segments, or else the HCA may end up processing S/G
segments that are not fully written and end up sending the wrong data.
We write S/G segments in reverse order into the WQE, in order to
guarantee that the first dword of all cachelines containing S/G
segments is written last (overwriting the headroom invalidation
pattern). The entire cacheline will thus contain valid data when the
invalidation pattern is overwritten.
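A sketch of the reverse-order walk; the segment layout and the htonll helper are illustrative, not the exact mlx4 definitions:

```c
#include <stdint.h>
#include <arpa/inet.h>

struct sge_sketch      { uint64_t addr; uint32_t length; uint32_t lkey; };
struct data_seg_sketch { uint32_t byte_count; uint32_t lkey; uint64_t addr; };

static uint64_t htonll64(uint64_t x)
{
    return htonl(1) == 1 ? x
         : ((uint64_t) htonl((uint32_t) x) << 32)
           | htonl((uint32_t) (x >> 32));
}

/* Walk the scatter/gather list backwards, and within each entry write
 * byte_count (the first dword of the segment) last, so the headroom
 * invalidation pattern is only overwritten once the rest of the
 * cacheline already holds valid data. */
static void set_data_segs(struct data_seg_sketch *seg,
                          const struct sge_sketch *sg, int num)
{
    for (int i = num - 1; i >= 0; --i) {
        seg[i].lkey       = htonl(sg[i].lkey);
        seg[i].addr       = htonll64(sg[i].addr);
        seg[i].byte_count = htonl(sg[i].length);
    }
}
```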
Signed-off-by: Jack Morgenstein <jackm@dev.mellanox.co.il>
Signed-off-by: Roland Dreier <rolandd@cisco.com>
Roland Dreier [Wed, 19 Sep 2007 03:41:11 +0000 (20:41 -0700)]
Factor out setting WQE segment entries
Clean up setting WQE segment entries by moving code out of the main
work request posting functions into inline functions. This also lets
the compiler do a better job of optimizing.
A work request with IBV_SEND_INLINE set and more than one gather entry
does not have its data copied into the WQE correctly, because the
offset is not updated properly. Add the missing update of off when a
gather entry does not fill an inline segment exactly.
Signed-off-by: Gleb Natapov <glebn@voltaire.com>
Signed-off-by: Roland Dreier <rolandd@cisco.com>
Roland Dreier [Tue, 3 Jul 2007 18:55:03 +0000 (11:55 -0700)]
Fix Valgrind annotations so they can actually be built
The AC_CHECK_HEADER() test for <valgrind/memcheck.h> will never result
in HAVE_VALGRIND_MEMCHECK_H being defined, so ibverbs.h will never
include <valgrind/memcheck.h> and Valgrind annotations will never actually
get built. Fix this by adding an AC_DEFINE() of HAVE_VALGRIND_MEMCHECK_H
if the header is found.
Roland Dreier [Tue, 3 Jul 2007 18:48:14 +0000 (11:48 -0700)]
Clean up NVALGRIND comment in config.h.in
Update configure.in so that the comment generated by autoheader for
NVALGRIND in config.h.in is a complete sentence to match the style of
the rest of the file.
Roland Dreier [Tue, 19 Jun 2007 02:17:54 +0000 (19:17 -0700)]
Remove private implementation of ibv_read_sysfs_file()
The release of libibverbs 1.0.3 (which introduced
ibv_read_sysfs_file()) was more than a year ago, so it seems safe for
libmlx4 to depend on it. In fact libmlx4 relies on the recent fix to
libibverbs to set the state of newly created QPs, so libmlx4 wouldn't
have a chance at working with libibverbs 1.0.2 or older anyway. So
remove libmlx4's private implementation of ibv_read_sysfs_file() and
just fail the build if libibverbs doesn't supply the function.
Jack Morgenstein [Mon, 18 Jun 2007 16:27:45 +0000 (09:27 -0700)]
Add a memory barrier before setting an inline data segment's byte count
We need a memory barrier before setting an inline segment byte count
to make sure that all the inline data for a cacheline has been written
before changing the cacheline's byte-count from 0xffffffff to
something valid.
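The pattern, sketched; the inline-flag bit position is an assumption, and __sync_synchronize() stands in for libmlx4's barrier macro:

```c
#include <stdint.h>
#include <string.h>
#include <arpa/inet.h>

#define INLINE_SEG_FLAG 0x80000000u  /* assumed inline marker bit */

/* Fill the payload first, then issue a full barrier, then publish the
 * real byte count; until then the count reads 0xffffffff, which marks
 * the segment as not yet valid. */
static void set_inline_seg(uint32_t *seg, const void *data, uint32_t len)
{
    seg[0] = 0xffffffffu;
    memcpy(seg + 1, data, len);
    __sync_synchronize();   /* all inline data visible before the count */
    seg[0] = htonl(len | INLINE_SEG_FLAG);
}
```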
Signed-off-by: Ishai Rabinovitz <ishai@mellanox.co.il>
Signed-off-by: Jack Morgenstein <jackm@dev.mellanox.co.il>
Signed-off-by: Roland Dreier <rolandd@cisco.com>
Roland Dreier [Sat, 16 Jun 2007 21:27:38 +0000 (14:27 -0700)]
Fix returned max_inline_data QP cap
Set the value of max_inline_data that is returned in the QP caps from
mlx4_create_qp() after we calculate the real value, rather than just
returning whatever uninitialized junk is in qp->max_inline_data before
it is set.
Roland Dreier [Thu, 14 Jun 2007 20:23:33 +0000 (13:23 -0700)]
Make sure inline segments in send WQEs don't cross 64 byte boundaries
Hardware requires that inline data segments do not cross a 64 byte
boundary. Make sure that send work requests satisfy this by using
multiple inline data segments when needed.
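A sketch of the splitting arithmetic, assuming a 4-byte header per inline segment; the helper and offsets are illustrative, not libmlx4's actual code:

```c
/* Count how many inline segments `len` bytes need if a segment may not
 * cross a 64-byte line. `off` is the byte offset inside the WQE; each
 * segment spends 4 bytes on its header before any data. */
static int count_inline_segs(int off, int len)
{
    int nseg = 0;

    while (len > 0) {
        int room = 64 - (off & 63) - 4;  /* data bytes left in this line */
        if (room <= 0) {                 /* no room after the header */
            off = (off + 63) & ~63;      /* start at the next line */
            continue;
        }
        int chunk = len < room ? len : room;
        off += 4 + chunk;
        len -= chunk;
        ++nseg;
    }
    return nseg;
}
```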
Based on a patch from Jack Morgenstein <jackm@dev.mellanox.co.il>.
Jack Morgenstein [Wed, 13 Jun 2007 20:34:30 +0000 (13:34 -0700)]
Handle buffer wraparound in mlx4_cq_clean()
When compacting CQ entries, we need to set the correct value of the
ownership bit in case the value is different between the index we copy
the CQE from and the index we copy it to.
Also correct a wrong placement of parentheses when checking the QP
number: the "& 0xffffff" must be applied to the result of ntohl(), not
to its argument.
Found by Ronni Zimmerman of Mellanox.
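The parenthesization point in isolation (field name is illustrative):

```c
#include <stdint.h>
#include <arpa/inet.h>

/* The QPN occupies the low 24 bits of the *host-order* value, so the
 * mask must apply to the result of ntohl(). */
static uint32_t cqe_qpn(uint32_t my_qpn_be)
{
    return ntohl(my_qpn_be) & 0xffffff;          /* correct */
    /* buggy form: ntohl(my_qpn_be & 0xffffff) masks the wrong bytes
     * on little-endian hosts */
}
```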
Signed-off-by: Jack Morgenstein <jackm@dev.mellanox.co.il>
Signed-off-by: Roland Dreier <rolandd@cisco.com>
Roland Dreier [Wed, 13 Jun 2007 17:31:16 +0000 (10:31 -0700)]
Handle new FW requirement for send request prefetching
New ConnectX firmware introduces FW command interface revision 2,
which requires that for each QP, a chunk of send queue entries (the
"headroom") is kept marked as invalid, so that the HCA doesn't get
confused if it prefetches entries that haven't been posted yet. Add
code to libmlx4 to do this.
Also, handle the new kernel ABI that adds the sq_no_prefetch parameter
to the create QP operation. We just hard-code sq_no_prefetch to 0 and
always provide the full SQ headroom for now.
Based on a patch from Jack Morgenstein <jackm@dev.mellanox.co.il>.
Jack Morgenstein [Mon, 11 Jun 2007 15:09:50 +0000 (18:09 +0300)]
Fix problem with inline WQE in post_send error flow
Suppose a consumer posts a list of two WQEs, with the second WQE in
the list being an inline send that is too long. In this case, post_send
jumps to "out" with: nreq = 1, inl positive, and size in the range
allowing blueflame. All the blueflame test conditions are met.
However, the cntl pointer now points to the invalid wqe, and this will
be "blueflamed".
Fix this by setting inl to 0 before jumping out of the loop.
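A control-flow sketch of the fix; the names and the BlueFlame predicate are simplified stand-ins:

```c
/* Returns the number of WQEs actually posted; *use_bf reports whether
 * a BlueFlame write of the last built WQE would be safe. */
static int post_list_sketch(const int *inline_len, int n, int max_inline,
                            int *use_bf)
{
    int nreq, inl = 0;

    for (nreq = 0; nreq < n; ++nreq) {
        inl = inline_len[nreq];
        if (inl > max_inline) {
            inl = 0;    /* the fix: a rejected WQE must never be
                         * blueflamed, so disqualify it here */
            goto out;
        }
        /* ... build the WQE; ctrl now points at it ... */
    }
out:
    /* simplified stand-in for the real BlueFlame conditions */
    *use_bf = (nreq > 0 && inl > 0);
    return nreq;
}
```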
Signed-off-by: Jack Morgenstein <jackm@dev.mellanox.co.il>
Signed-off-by: Roland Dreier <rolandd@cisco.com>
Eli Cohen [Mon, 11 Jun 2007 21:43:26 +0000 (14:43 -0700)]
Fix handling of wq->tail for send completions
Cast the increment added to wq->tail when send completions are
processed to uint16_t to avoid using wrong values caused by standard
integer promotions.
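The promotion trap, sketched (field widths here are illustrative):

```c
#include <stdint.h>

/* The queue tail is a full-width counter, but the hardware reports a
 * 16-bit WQE counter, so the increment must be computed modulo 2^16.
 * Without the uint16_t cast, integer promotion turns a wrapped counter
 * into a huge (effectively negative) increment. */
static unsigned int advance_tail(unsigned int tail, uint16_t wqe_ctr)
{
    tail += (uint16_t) (wqe_ctr - (uint16_t) tail);
    return tail;
}
```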
Signed-off-by: Eli Cohen <eli@mellanox.co.il>
Signed-off-by: Roland Dreier <rolandd@cisco.com>
Roland Dreier [Thu, 7 Jun 2007 21:11:02 +0000 (14:11 -0700)]
Make sure RQ allocation is always valid
QPs attached to an SRQ must never have their own RQ, and QPs not
attached to SRQs must have an RQ with at least 1 entry. Enforce all
of this in set_rq_size().
Also simplify how we round up queue sizes. There's no need to pass the
context into align_queue_size(), since that parameter is completely
unused, and we don't really need two functions for rounding up to the
next power of two.
Eli Cohen [Mon, 4 Jun 2007 14:16:35 +0000 (17:16 +0300)]
Fix word size in doorbell allocator bitmaps
Use an explicitly long constant 1UL identical to the type of the
variable holding the bit mask. This avoids using the same bit twice,
because on 64 bit architectures, 1 << 32 == 0.
Found by Dotan Barak at Mellanox.
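The fix in miniature (helper name is illustrative):

```c
#include <stdint.h>

#define BITS_PER_LONG (8 * sizeof(unsigned long))

/* The bitmap words are unsigned long, so the shifted constant must be
 * long as well (1UL); with a plain int 1, shifts of 32 and up wrap on
 * 64-bit targets and high bits alias low ones. */
static void set_db_bit(unsigned long *bitmap, unsigned bit)
{
    bitmap[bit / BITS_PER_LONG] |= 1UL << (bit % BITS_PER_LONG);
}
```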
Signed-off-by: Eli Cohen <eli@mellanox.co.il>
Signed-off-by: Roland Dreier <rolandd@cisco.com>
Roland Dreier [Tue, 29 May 2007 18:31:04 +0000 (11:31 -0700)]
Fix max_send_sge and max_inline_data returned from create QP
Fix the calculation of max_inline_data and max_send_sge returned to the
user. Without this fix, the size of the SQ WQEs may increase every
time create QP is called using values returned from a previous call.
For example, here is a quote from the output of the test showing the
problem with a UD QP:
Roland Dreier [Thu, 24 May 2007 20:58:20 +0000 (13:58 -0700)]
Initialize send queue entry ownership bits
We need to initialize the owner bit of send queue WQEs to hardware
ownership whenever the QP is modified from reset to init, not just
when the QP is first allocated. This avoids having the hardware
process stale WQEs when the QP is moved to reset but not destroyed and
then modified to init again.
This is the same bug fixed in the kernel by Eli Cohen <eli@mellanox.co.il>.
Roland Dreier [Tue, 22 May 2007 21:13:15 +0000 (14:13 -0700)]
Handle freeing doorbell records
Actually implement mlx4_free_db(), which naively searches through
all doorbell pages. Also add a doorbell type parameter to the
function to avoid searching through all CQ doorbell pages when we
really want to find an RQ doorbell.
Roland Dreier [Mon, 21 May 2007 03:12:15 +0000 (20:12 -0700)]
Pass send queue sizes from userspace to kernel
Update to handle kernel mlx4 ABI version 2: pass log_2 of send queue
WQE basic block size and log_2 of number of send queue basic blocks to
the kernel to avoid bugs caused by the kernel calculating a different
send queue WQE size. This will also allow us to use multiple BBs per
WQE if we want to someday.
Roland Dreier [Sun, 20 May 2007 18:06:44 +0000 (11:06 -0700)]
Use wc_wmb() when posting BlueFlame send WQEs
Use wc_wmb() after copying WQE to BlueFlame register to avoid having
WQEs reach the device out of order if the BlueFlame page is mapped with
write combining.
Fix inline send posting when posting more than one request
Need to set inl parameter to zero for each request when posting a list
of requests, so that the value of inl is correct for each work
request, and is not cumulative.
Signed-off-by: Jack Morgenstein <jackm@dev.mellanox.co.il>
Signed-off-by: Roland Dreier <rolandd@cisco.com>
Roland Dreier [Fri, 13 Apr 2007 04:23:59 +0000 (21:23 -0700)]
Implement posting of RDMA and atomic operations
Clean up the definitions of remote address and atomic operations WQE
segments. Fill in the missing code that fills in these segments when
posting RDMA or atomic operations to a send queue.
Roland Dreier [Wed, 11 Apr 2007 06:16:59 +0000 (23:16 -0700)]
Multiple SRQ fixes
Several one-liner fixes to SRQ support:
- Scatter entry address is 64 bits, so use htonll() instead of
htonl() when filling in WQE.
- Minimum SRQ WQE size is 32 bytes, so use 5 as a minimum value of
wqe_shift.
- When initializing next_wqe_index values, use htons() to put indices
into big-endian byte order.
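The first item is the classic 64-bit swap; a portable htonll() sketch (libmlx4 has its own definition; this one is illustrative):

```c
#include <stdint.h>
#include <arpa/inet.h>

/* Byte-swap all eight bytes of a scatter address; htonl() alone would
 * truncate it to 32 bits. */
static uint64_t htonll_sketch(uint64_t x)
{
    return htonl(1) == 1 ? x   /* big-endian host: already network order */
         : ((uint64_t) htonl((uint32_t) x) << 32)
           | htonl((uint32_t) (x >> 32));
}
```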
Roland Dreier [Tue, 10 Apr 2007 17:33:48 +0000 (10:33 -0700)]
Don't set last byte of GID for non-global address vectors
Previous generation HCAs needed the last byte of the GID set to 2 for
non-global address vectors, but ConnectX just ignores the remote GID
field for non-global AVs, so remove the unnecessary code that sets it.
Roland Dreier [Tue, 10 Apr 2007 03:36:47 +0000 (20:36 -0700)]
Implement handling for completions with error
Convert status from HCA's hardware values to libibverbs enum for
completions with error in mlx4_handle_error_cqe(). Also, there's no
way mlx4_handle_error_cqe() can fail, so there's no reason for it to
return a value.
Roland Dreier [Tue, 10 Apr 2007 03:20:44 +0000 (20:20 -0700)]
Simplify completion with error handling
The out-of-line function to handle error CQEs doesn't need as many
parameters as the libmthca version did, so get rid of everything
except the CQE pointer and the WC pointer.