With this compression ratio for text and this compression speed, the algorithm fits well into the Google Snappy / LZO / FastLZ group. Every database engine should use one of these; they operate at disk I/O speed.
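For anyone who hasn't used this family of compressors, here is a minimal sketch of the LZ4 C block API (one member of that group); the input string and buffer sizes are placeholders, not anything from the benchmark.

    /* Minimal LZ4 block round-trip sketch (link with -llz4).
       Buffer contents and sizes are illustrative only. */
    #include <lz4.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    int main(void) {
        const char *src = "some repetitive text some repetitive text some repetitive text";
        int src_size = (int)strlen(src) + 1;

        /* LZ4_compressBound() gives the worst-case compressed size. */
        int max_dst = LZ4_compressBound(src_size);
        char *dst = malloc(max_dst);

        int written = LZ4_compress_default(src, dst, src_size, max_dst);
        if (written <= 0) {
            fprintf(stderr, "compression failed\n");
            return 1;
        }
        printf("%d -> %d bytes\n", src_size, written);

        /* Round-trip to verify the data survives. */
        char *back = malloc(src_size);
        int restored = LZ4_decompress_safe(dst, back, written, src_size);
        printf("decompressed %d bytes, match=%d\n", restored,
               restored == src_size && memcmp(src, back, src_size) == 0);

        free(dst);
        free(back);
        return 0;
    }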
Only for certain kinds of data, like a document index, but not for numerical and many other types. RLE, delta encoding and many other simpler algorithms are a better match in many cases.
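As a rough illustration of why simpler transforms win on numerical data, here is a hypothetical delta-encoding sketch (not from the article): sorted values such as timestamps or row IDs collapse into small, repetitive deltas that any byte-oriented compressor or bit-packer then handles far better.

    /* Minimal delta-encoding sketch for sorted numeric data.
       The sample values are illustrative only. */
    #include <stdint.h>
    #include <stdio.h>

    /* Replace each element (after the first) with its difference
       from the previous element, in place. */
    static void delta_encode(int64_t *v, size_t n) {
        for (size_t i = n; i-- > 1; )
            v[i] -= v[i - 1];
    }

    /* Invert the transform with a running prefix sum. */
    static void delta_decode(int64_t *v, size_t n) {
        for (size_t i = 1; i < n; i++)
            v[i] += v[i - 1];
    }

    int main(void) {
        int64_t ts[] = {1700000000, 1700000003, 1700000007, 1700000012};
        size_t n = sizeof ts / sizeof ts[0];

        delta_encode(ts, n);   /* -> 1700000000, 3, 4, 5: small, repetitive values */
        for (size_t i = 0; i < n; i++)
            printf("%lld ", (long long)ts[i]);
        printf("\n");

        delta_decode(ts, n);   /* back to the original timestamps */
        return 0;
    }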
You should consider zlib; it's compatible with everything and has very nice options. It's also very low in memory usage (around 500KB vs. several MB). Check out how pigz implements it with pthreads (one thread per block).
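For comparison, a one-shot zlib call is only a few lines; the buffers and the level below are placeholders. pigz gets its speed by splitting the input into blocks and compressing each on its own worker thread, which this sketch deliberately leaves out.

    /* Minimal one-shot zlib compression sketch (link with -lz).
       Input data and compression level are illustrative only. */
    #include <zlib.h>
    #include <stdio.h>

    int main(void) {
        const unsigned char src[] = "hello hello hello hello hello hello";
        uLong src_len = (uLong)sizeof src;

        unsigned char dst[256];
        uLongf dst_len = sizeof dst;

        /* Level 1 trades ratio for speed, closer to the fast-compressor group. */
        int rc = compress2(dst, &dst_len, src, src_len, 1);
        if (rc != Z_OK) {
            fprintf(stderr, "compress2 failed: %d\n", rc);
            return 1;
        }
        printf("%lu -> %lu bytes\n", (unsigned long)src_len, (unsigned long)dst_len);
        return 0;
    }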
More relevant would be a comparison with gzip -1, lzop, snappy and rolz.
The test data is also mixed, which makes it hard to see where this algorithm shines.
Also note that the memory usage shoots up from 5m/2m for info-zip -1 to 46m/42m for the specific case of lz4 you picked.
EDIT: bzip2 also seems to be particularly bad on this specific dataset; other algorithms in that category get better compression ratios. Added pigz to the comparison (gzip/zlib with pthreads).
This level of compression is only good for very low-entropy data, like HTML found in the wild from template code. But you are probably better off running whitespace cleanup, CSS removal and plain minification; all of that is just as fast and needs no decompression.
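As a very rough sketch of what the whitespace-cleanup step could look like (naive, hypothetical code, not a real minifier: it ignores <pre>, inline scripts, attribute quoting, etc.):

    /* Naive whitespace-collapsing sketch for HTML-ish text.
       The sample input is illustrative only. */
    #include <ctype.h>
    #include <stdio.h>

    /* Copy src to dst, collapsing runs of whitespace into one space
       and dropping leading/trailing whitespace. dst must be at least
       as large as src. Returns the new length. */
    static size_t collapse_ws(const char *src, char *dst) {
        size_t out = 0;
        int pending_space = 0;
        for (; *src; src++) {
            if (isspace((unsigned char)*src)) {
                pending_space = 1;
            } else {
                if (pending_space && out > 0)
                    dst[out++] = ' ';
                pending_space = 0;
                dst[out++] = *src;
            }
        }
        dst[out] = '\0';
        return out;
    }

    int main(void) {
        const char *html = "<div>\n    <p>  hello   world  </p>\n</div>\n";
        char buf[128];
        size_t n = collapse_ws(html, buf);
        printf("%zu bytes: %s\n", n, buf);
        return 0;
    }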
These compressors are only useful for a handful of cases.