lz4: fix kernel decompression speed
This patch replaces all memcpy() calls with LZ4_memcpy(), which calls
__builtin_memcpy(), so the compiler can inline it.

LZ4 relies heavily on memcpy() with a constant size being inlined.  In x86
and i386 pre-boot environments, memcpy() cannot be inlined because it is
not defined as __builtin_memcpy().
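
For illustration, a minimal sketch of the effect outside the kernel
(hypothetical file, not part of this patch; copy_plain() and copy_builtin()
are made-up names, and it assumes GCC or Clang with -O2 -ffreestanding,
which implies -fno-builtin):

/*
 * Under -ffreestanding the compiler may not assume memcpy() is the
 * standard function, so copy_plain() keeps an out-of-line call even
 * though the size is a compile-time constant.  copy_builtin() uses
 * __builtin_memcpy(), which the compiler can lower to a single 8-byte
 * load/store pair.
 */
#include <stddef.h>

void *memcpy(void *dst, const void *src, size_t n); /* supplied by the environment */

void copy_plain(void *dst, const void *src)
{
	memcpy(dst, src, 8);           /* stays a call under -ffreestanding */
}

void copy_builtin(void *dst, const void *src)
{
	__builtin_memcpy(dst, src, 8); /* typically inlined */
}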

An equivalent patch has been applied upstream so that the next import
won't lose this change [1].

I've measured the kernel decompression speed using QEMU before and after
this patch for the x86_64 and i386 architectures.  The speed-up is about
10x as shown below.

Code	Arch	Kernel Size	Time	Speed
v5.8	x86_64	11504832 B	148 ms	 79 MB/s
patch	x86_64	11503872 B	 13 ms	885 MB/s
v5.8	i386	 9621216 B	 91 ms	106 MB/s
patch	i386	 9620224 B	 10 ms	962 MB/s
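
(Time ratios: 148 ms / 13 ms ≈ 11.4x on x86_64 and 91 ms / 10 ms ≈ 9.1x on
i386, hence "about 10x".)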

I also measured the time to decompress the initramfs on x86_64, i386, and
arm.  All three show the same decompression speed before and after, as
expected.
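
As a userspace analogue (the kernel numbers above came from QEMU pre-boot
runs, not from this harness), a rough timing sketch against the reference
liblz4 could look like the following; the file name bench.c, buffer size,
and run count are arbitrary choices, while LZ4_compress_default(),
LZ4_decompress_safe(), and LZ4_compressBound() are the standard liblz4
entry points:

/* bench.c -- rough decompression timing sketch (error handling elided).
 * Build with: cc -O2 bench.c -llz4
 */
#include <lz4.h>
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

static double now_sec(void)
{
	struct timespec ts;

	clock_gettime(CLOCK_MONOTONIC, &ts);
	return ts.tv_sec + ts.tv_nsec / 1e9;
}

int main(void)
{
	enum { SRC_SIZE = 1 << 20, RUNS = 100 };
	char *src = malloc(SRC_SIZE), *dst = malloc(SRC_SIZE);
	char *comp = malloc(LZ4_compressBound(SRC_SIZE));
	int i, csize, dsize = 0;

	/* Build repetitive, compressible input, then compress it so we
	 * have a valid LZ4 block to time.
	 */
	for (i = 0; i < SRC_SIZE; i++)
		src[i] = (char)(i % 251);
	csize = LZ4_compress_default(src, comp, SRC_SIZE,
				     LZ4_compressBound(SRC_SIZE));

	double t0 = now_sec();
	for (i = 0; i < RUNS; i++)
		dsize = LZ4_decompress_safe(comp, dst, csize, SRC_SIZE);
	double t1 = now_sec();

	printf("%d bytes in %.3f ms, %.0f MB/s\n", dsize,
	       (t1 - t0) / RUNS * 1e3, dsize / 1e6 / ((t1 - t0) / RUNS));
	return 0;
}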

[1] lz4/lz4#890

Signed-off-by: Nick Terrell <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Cc: Yann Collet <[email protected]>
Cc: Gao Xiang <[email protected]>
Cc: Sven Schmidt <[email protected]>
Cc: Greg Kroah-Hartman <[email protected]>
Cc: Ingo Molnar <[email protected]>
Cc: Arvind Sankar <[email protected]>
Link: http://lkml.kernel.org/r/[email protected]
Signed-off-by: Linus Torvalds <[email protected]>
Signed-off-by: Diab Neiroukh <[email protected]>
Signed-off-by: Lau <[email protected]>
terrelln authored and radcolor committed Oct 1, 2020
1 parent 7e2eb0e commit 22ff130
Showing 4 changed files with 19 additions and 9 deletions.
2 changes: 1 addition & 1 deletion lib/lz4/lz4_compress.c
@@ -446,7 +446,7 @@ static FORCE_INLINE int LZ4_compress_generic(
*op++ = (BYTE)(lastRun << ML_BITS);
}

-memcpy(op, anchor, lastRun);
+LZ4_memcpy(op, anchor, lastRun);

op += lastRun;
}
14 changes: 7 additions & 7 deletions lib/lz4/lz4_decompress.c
@@ -150,7 +150,7 @@ static FORCE_INLINE int LZ4_decompress_generic(
&& likely((endOnInput ? ip < shortiend : 1) &
(op <= shortoend))) {
/* Copy the literals */
-memcpy(op, ip, endOnInput ? 16 : 8);
+LZ4_memcpy(op, ip, endOnInput ? 16 : 8);
op += length; ip += length;

/*
@@ -169,9 +169,9 @@ static FORCE_INLINE int LZ4_decompress_generic(
(offset >= 8) &&
(dict == withPrefix64k || match >= lowPrefix)) {
/* Copy the match. */
-memcpy(op + 0, match + 0, 8);
-memcpy(op + 8, match + 8, 8);
-memcpy(op + 16, match + 16, 2);
+LZ4_memcpy(op + 0, match + 0, 8);
+LZ4_memcpy(op + 8, match + 8, 8);
+LZ4_memcpy(op + 16, match + 16, 2);
op += length + MINMATCH;
/* Both stages worked, load the next token. */
continue;
@@ -260,7 +260,7 @@ static FORCE_INLINE int LZ4_decompress_generic(
}
}

-memcpy(op, ip, length);
+LZ4_memcpy(op, ip, length);
ip += length;
op += length;

@@ -383,7 +383,7 @@ static FORCE_INLINE int LZ4_decompress_generic(
while (op < copyEnd)
*op++ = *match++;
} else {
-memcpy(op, match, mlen);
+LZ4_memcpy(op, match, mlen);
}
op = copyEnd;
if (op == oend)
@@ -397,7 +397,7 @@ static FORCE_INLINE int LZ4_decompress_generic(
op[2] = match[2];
op[3] = match[3];
match += inc32table[offset];
-memcpy(op + 4, match, 4);
+LZ4_memcpy(op + 4, match, 4);
match -= dec64table[offset];
} else {
LZ4_copy8(op, match);
10 changes: 10 additions & 0 deletions lib/lz4/lz4defs.h
@@ -137,6 +137,16 @@ static FORCE_INLINE void LZ4_writeLE16(void *memPtr, U16 value)
return put_unaligned_le16(value, memPtr);
}

+/*
+ * LZ4 relies on memcpy with a constant size being inlined. In freestanding
+ * environments, the compiler can't assume the implementation of memcpy() is
+ * standard compliant, so it can't apply its specialized memcpy() inlining
+ * logic. When possible, use __builtin_memcpy() to tell the compiler to
+ * analyze memcpy() as-if it were standard compliant, so it can inline it
+ * in freestanding environments. This is needed when decompressing the
+ * Linux Kernel, for example.
+ */
+#define LZ4_memcpy(dst, src, size) __builtin_memcpy(dst, src, size)

static FORCE_INLINE void LZ4_copy8(void *dst, const void *src)
{
#if LZ4_ARCH64
2 changes: 1 addition & 1 deletion lib/lz4/lz4hc_compress.c
@@ -570,7 +570,7 @@ static int LZ4HC_compress_generic(
*op++ = (BYTE) lastRun;
} else
*op++ = (BYTE)(lastRun<<ML_BITS);
-memcpy(op, anchor, iend - anchor);
+LZ4_memcpy(op, anchor, iend - anchor);
op += iend - anchor;
}

