From: Hong Zhiguo
Date: Tue, 31 Jul 2012 23:41:35 +0000 (-0700)
Subject: vmalloc: walk vmap_areas by sorted list instead of rb_next()
X-Git-Url: https://openfabrics.org/gitweb/?a=commitdiff_plain;h=92ca922f0a19145f2dcc99d84fe656fa55b52c2e;p=~shefty%2Frdma-dev.git

vmalloc: walk vmap_areas by sorted list instead of rb_next()

There's a walk by repeating rb_next() to find a suitable hole.  It can
simply be replaced by a walk on the sorted vmap_area_list, which is
simpler and more efficient.

The list and the tree are only mutated in pairs, within
__insert_vmap_area() and __free_vmap_area(), under the protection of
vmap_area_lock.  The patched code also runs under vmap_area_lock, so the
list walk is safe and consistent with the tree walk.

Tested on SMP by repeating batches of vmalloc and vfree with random sizes
and rounds for hours.

Signed-off-by: Hong Zhiguo
Cc: Nick Piggin
Cc: Johannes Weiner
Signed-off-by: Andrew Morton
Signed-off-by: Linus Torvalds
---

diff --git a/mm/vmalloc.c b/mm/vmalloc.c
index e03f4c7307a..7e25ee3ce6e 100644
--- a/mm/vmalloc.c
+++ b/mm/vmalloc.c
@@ -413,11 +413,11 @@ nocache:
 		if (addr + size - 1 < addr)
 			goto overflow;
 
-		n = rb_next(&first->rb_node);
-		if (n)
-			first = rb_entry(n, struct vmap_area, rb_node);
-		else
+		if (list_is_last(&first->list, &vmap_area_list))
 			goto found;
+
+		first = list_entry(first->list.next,
+				struct vmap_area, list);
 	}
 
 found:
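
For readers outside the kernel tree, here is a minimal, self-contained userspace sketch of the list-walk pattern the patch adopts. The list_head, list_entry and list_is_last helpers below are simplified stand-ins for the kernel's <linux/list.h> macros, and struct area with its find_hole() loop is an illustrative reduction of struct vmap_area and the hole-finding loop, not the actual mm/vmalloc.c code.

/*
 * Sketch: walk a sorted, circular doubly-linked list to find the first
 * hole of at least "size" bytes above "vstart".  Simplified stand-ins
 * for the kernel's list helpers; not the real vmalloc code.
 */
#include <stddef.h>
#include <stdio.h>

struct list_head { struct list_head *next, *prev; };

#define container_of(ptr, type, member) \
	((type *)((char *)(ptr) - offsetof(type, member)))
#define list_entry(ptr, type, member)  container_of(ptr, type, member)

static int list_is_last(const struct list_head *node,
			const struct list_head *head)
{
	return node->next == head;
}

static void list_add_tail(struct list_head *new, struct list_head *head)
{
	new->prev = head->prev;
	new->next = head;
	head->prev->next = new;
	head->prev = new;
}

/* Stand-in for struct vmap_area: an allocated [va_start, va_end) range. */
struct area {
	unsigned long va_start, va_end;
	struct list_head list;
};

/* Address-sorted list of allocated areas, analogous to vmap_area_list. */
static struct list_head area_list = { &area_list, &area_list };

/*
 * Return the lowest address >= vstart where a hole of "size" bytes fits
 * below vend, walking the sorted list instead of calling rb_next()
 * repeatedly.  Returns 0 if no such hole exists.
 */
static unsigned long find_hole(unsigned long size, unsigned long vstart,
			       unsigned long vend)
{
	unsigned long addr = vstart;
	struct area *first = NULL;
	struct list_head *p;

	/* Start from the first area that ends above vstart. */
	for (p = area_list.next; p != &area_list; p = p->next) {
		first = list_entry(p, struct area, list);
		if (first->va_end > addr)
			break;
	}
	if (p == &area_list)			/* empty list, or all areas below */
		return addr + size <= vend ? addr : 0;

	/* Walk areas until the hole [addr, addr + size) no longer overlaps. */
	while (addr + size > first->va_start && addr + size <= vend) {
		addr = first->va_end;		/* skip past this area */
		if (addr + size - 1 < addr)	/* address overflow */
			return 0;
		if (list_is_last(&first->list, &area_list))
			break;			/* hole lies after the last area */
		first = list_entry(first->list.next, struct area, list);
	}
	return addr + size <= vend ? addr : 0;
}

int main(void)
{
	static struct area a = { 0x1000, 0x3000 }, b = { 0x3000, 0x8000 };

	list_add_tail(&a.list, &area_list);
	list_add_tail(&b.list, &area_list);

	/* Expect 0x8000: the first 0x2000-byte hole at or above 0x1000. */
	printf("hole at 0x%lx\n", find_hole(0x2000, 0x1000, 0x100000));
	return 0;
}

Because __insert_vmap_area() links each vmap_area into the sorted list at the same time it inserts it into the rb-tree, and __free_vmap_area() removes it from both, a list walk taken under vmap_area_lock visits the areas in the same address order that repeated rb_next() calls would, while following only one pointer per step.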