0.94 came only 2 months after the initial 0.90:
0.90 (10 Aug 1997): First public release of lzop
...
0.94 (15 Oct 1997): Header format change
function old new delta
do_lzo_decompress 411 404 -7
f_read8 24 - -24
f_read16 31 - -31
------------------------------------------------------------------------------
(add/remove: 0/2 grow/shrink: 0/1 up/down: 0/-62) Total: -62 bytes
Signed-off-by: Denys Vlasenko <vda.linux@googlemail.com>
Usage: tar c|x|t [-ZzJjahmvokO] [-f TARFILE] [-C DIR] [-T FILE] [-X FILE] [--exclude PATTERN]... [FILE]...
Create, extract, or list files from a tar file
Operation: <============== DELETED
c Create
x Extract
t List
-f FILE Name of TARFILE ('-' for stdin/out)
-C DIR Change to DIR before operation
-v Verbose
-O Extract to stdout
-m Don't restore mtime
-o Don't restore user:group
-k Don't replace existing files
-Z (De)compress using compress
-z (De)compress using gzip
-J (De)compress using xz
-j (De)compress using bzip2
-a (De)compress using lzma
-h Follow symlinks
-T FILE File with names to include
-X FILE File with glob patterns to exclude
--exclude PATTERN Glob pattern to exclude
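For illustration, a few typical invocations using the options above (the archive and directory names are made up):
tar czf project.tar.gz project     # create a gzip-compressed archive
tar tzf project.tar.gz             # list its contents
tar xzkf project.tar.gz -C /tmp    # extract into /tmp, keeping existing files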
Signed-off-by: Denys Vlasenko <vda.linux@googlemail.com>
Instead, simply don't send a Content-type header in this case.
On Mon, Apr 2, 2018 at 8:17 PM, xisd <xisd-dev@riseup.net> wrote:
> I had some trouble using busybox httpd to serve a static website and I
> thought the issue might be of interest.
>
> My problem is related to something that seems quite common for static
> site generators: the use of HTML files without the '.html' extension
> (this is called a 'clean url'...)
>
> Most web servers guess that these files are HTML and display them like
> any other .html file.
>
> From what I understood, the MIME type for files without an extension in
> busybox httpd's default settings is 'application/octet-stream', and
> because of that 'clean url' pages are not displayed.
>
> It is only a problem because I wanted to deploy my website on a freshly
> installed Linux system without editing any configuration.
>
> The default MIME setting makes sense to me as it is; I just thought this
> might be worth mentioning since 'clean url' pages seem to be common
> practice for static site generators (the one I use is called 'yellow':
> https://github.com/datenstrom/yellow).
>
> Here is a link to the related issue on GitHub:
> https://github.com/datenstrom/yellow/issues/317
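To see the behaviour described in the report, one can serve an extension-less HTML page and inspect the response headers; the port, directory name and use of GNU wget's -S option below are illustrative choices, not taken from the report:
mkdir -p public
printf '<html><body>about</body></html>\n' > public/about   # "clean url" page
busybox httpd -f -p 8080 -h "$PWD/public" &                 # serve it in the foreground
sleep 1
# -S prints the server's response headers; before this change the page was
# announced as "Content-type: application/octet-stream", now no Content-type
# header is sent for the unknown extension.
wget -S -O /dev/null http://127.0.0.1:8080/about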
function old new delta
send_headers 702 718 +16
send_headers_and_exit 23 20 -3
handle_incoming_and_exit 2794 2791 -3
send_file_and_exit 772 756 -16
------------------------------------------------------------------------------
(add/remove: 0/0 grow/shrink: 1/3 up/down: 16/-22) Total: -6 bytes
Signed-off-by: Denys Vlasenko <vda.linux@googlemail.com>
On sorting all kernel/linux/arch/ *.[ch] files,
this reduces memory usage by 6%.
yes | head -99999999 | sort
goes down from 1900 MB to 380 MB.
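The peak memory use of such a pipeline can be checked with GNU time, for example (the ./busybox path is an assumption):
# "Maximum resident set size" in the -v report is sort's peak RSS
yes | head -99999999 | /usr/bin/time -v ./busybox sort > /dev/null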
function old new delta
sort_main 862 1098 +236
Signed-off-by: Denys Vlasenko <vda.linux@googlemail.com>
Upstream email:
parser: Fix parameter expansion inside inner double quotes
The parsing of parameter expansion inside inner double quotes
breaks because we never look for ENDVAR while innerdq is true.
echo "${x#"${x+''}"''}
This patch fixes it by pushing the syntax stack if innerdq is
true and we enter a new parameter expansion.
This patch also fixes a corner case where a bad substitution error
occurs within arithmetic expansion.
Reported-by: Denys Vlasenko <vda.linux@googlemail.com>
Fixes: ab1cecb40478 ("parser: Add syntax stack for recursive...")
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
function old new delta
readtoken1 2880 2898 +18
Signed-off-by: Denys Vlasenko <vda.linux@googlemail.com>
Upstream email:
This is actually composed of two bugs. First of all, our tracking
of quotemark is wrong, so anything after "$@" becomes quoted. Once
we fix that, the problem is that the first space character
after "$@" is not recognised as an IFS.
This patch fixes both.
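The kind of construct affected looks like this (an illustration assumed here, not taken from the upstream patch):
set -- "a b" c
x=' z'
printf '<%s>\n' "$@"$x
# correct output is three fields: <a b>, <c>, <z>; with the bugs above, the
# text after "$@" was treated as quoted, so the leading space in $x was not
# honoured as an IFS separator and <c z> came out as a single field.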
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Signed-off-by: Denys Vlasenko <vda.linux@googlemail.com>
Upstream commit:
From: Herbert Xu <herbert@gondor.apana.org.au>
Date: Fri, 9 Mar 2018 23:07:53 +0800
parser: Fix single-quoted patterns in here-documents
The script
x=*
cat <<- EOF
${x#'*'}
EOF
prints * instead of nothing as it should. The problem is that
when we're in sqsyntax context in a here-document, we won't add
CTLESC as we should. This patch fixes it:
Reported-by: Harald van Dijk <harald@gigawatt.nl>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Signed-off-by: Denys Vlasenko <vda.linux@googlemail.com>
This closes 10821.
Upstream patch:
From: Herbert Xu <herbert@gondor.apana.org.au>
Date: Fri, 9 Mar 2018 00:14:02 +0800
parser: Add syntax stack for recursive parsing
Without a stack of syntaxes we cannot correctly parse these two cases
together:
"${a#'$$'}"
"${a#"${b-'$$'}"}"
A recursive parser also helps in some other corner cases such
as nested arithmetic expansion with parentheses.
This patch adds a syntax stack allocated from the stack using
alloca. As a side-effect this allows us to remove the naked
backslashes for patterns within double-quotes, which means that
EXP_QPAT also has to go.
This patch also removes any backslashes that precede right
braces when they are present within a parameter expansion context,
and backslashes that precede double quotes within inner double
quotes inside a parameter expansion in a here-document context.
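For instance, a backslash protecting a right brace inside a double-quoted parameter expansion (an illustrative case, not from the upstream patch):
x='}abc'
echo "${x#\}}"
# prints "abc": the backslash makes the first } part of the pattern instead
# of terminating the expansion, and is itself removed.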
The idea of a recursive parser is based on a patch by Harald van
Dijk.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
var_bash3, var_bash4 and var_bash6 tests are updated
with the output given by bash-4.3.43.
With this patch, the following tests now pass for ash:
dollar_repl_slash_bash2.tests
squote_in_varexp2.tests
squote_in_varexp.tests
var_bash4.tests
function old new delta
readtoken1 2615 2874 +259
synstack_push - 54 +54
evalvar 574 571 -3
rmescapes 330 310 -20
subevalvar 1279 1258 -21
argstr 1146 1107 -39
------------------------------------------------------------------------------
(add/remove: 1/0 grow/shrink: 1/4 up/down: 313/-83) Total: 230 bytes
Signed-off-by: Denys Vlasenko <vda.linux@googlemail.com>
This fragment did not work right:
temp = bb_get_chunk_from_file(fp, &len);
if (temp) {
/* len > 0 here, it's ok to do temp[len-1] */
char c = temp[len-1];
With "int len" _sign-extending_, temp[len-1] can refer to a wrong location
if len > 0x7fffffff.
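One way to construct such an over-long chunk for testing (the file name and exact size are arbitrary; this needs a bit over 2 GB of disk space):
# a single line of ~2.2 billion bytes, i.e. longer than 0x7fffffff
{ head -c 2200000000 /dev/zero | tr '\0' a; echo; } > huge_line.txt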
Signed-off-by: Quentin Rameau <quinq@fifth.space>
Signed-off-by: Denys Vlasenko <vda.linux@googlemail.com>
Upstream commit:
Author: Herbert Xu <herbert@gondor.apana.org.au>
Date: Thu Mar 15 18:27:30 2018 +0800
parser: Fix backquote support in here-document EOF mark
Currently using backquotes in a here-document EOF mark is broken
because dash tries to do command substitution on it. This patch
fixes it by checking whether we're looking for an EOF mark during
tokenisation.
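A minimal illustration of such an EOF mark (assumed here, not the upstream test case):
# the here-document delimiter is a single, quoted backquote character;
# a correct shell prints "hello" instead of attempting command substitution
cat <<\`
hello
`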
Reported-by: Harald van Dijk <harald@gigawatt.nl>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
With added fix for quoted-ness of the EOF mark.
Signed-off-by: Denys Vlasenko <vda.linux@googlemail.com>