This is the mail archive of the
libc-help@sourceware.org
mailing list for the glibc project.
Re: read() cant read files larger than 2.1 gig on a 64 bit system
- From: Florian Weimer <fw at deneb dot enyo dot de>
- To: Mike Frysinger <vapier at gentoo dot org>
- Cc: glide creme <glidecreme at gmail dot com>, libc-help at sourceware dot org
- Date: Sun, 31 Jan 2010 12:55:33 +0100
- Subject: Re: read() cant read files larger than 2.1 gig on a 64 bit system
- References: <431f8c1a0912251330l266dd23blae9d425b65e04162@mail.gmail.com> <200912261158.41895.vapier@gentoo.org> <431f8c1a0912261407p132c4ee1i1c859e71104b68c4@mail.gmail.com> <200912270300.26960.vapier@gentoo.org>
* Mike Frysinger:
> On Saturday 26 December 2009 17:07:35 glide creme wrote:
>> Instead of the strange " If count is greater than SSIZE_MAX, the
>> result is unspecified.", which implies that a value smaller than
>> SSIZE_MAX is supported.
>
> seems pretty clear to me. dont do it.
It's not that clear, because SSIZE_MAX is defined as LONG_MAX, so the
rule doesn't kick in here: a 2.1 GB count is well below SSIZE_MAX on a
64-bit system. (The reason for this rule is that a return value of
(ssize_t)-1 would be ambiguous: it could mean an error, or a byte
count too large to represent in ssize_t.)
It seems to me that libc is non-compliant here, but so are many other
systems. You really need to loop around calls to read: a single call
may return fewer bytes than requested. Even on compliant systems, not
looping may lead to bugs, for instance when your program is suspended
with ^Z and the call comes back short, and perhaps other fun. So it's
not worth fixing libc, IMHO.
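A minimal sketch of such a retry loop (the function name full_read is
mine, not a glibc API; it retries on EINTR and on short reads until
the buffer is full or EOF):

```c
#include <errno.h>
#include <unistd.h>

/* Read up to count bytes into buf, looping over short reads and
   EINTR.  Returns the number of bytes read (less than count only at
   end of file), or -1 on error with errno set by read(). */
static ssize_t full_read(int fd, void *buf, size_t count)
{
    size_t total = 0;
    while (total < count) {
        ssize_t n = read(fd, (char *)buf + total, count - total);
        if (n < 0) {
            if (errno == EINTR)
                continue;   /* interrupted by a signal: retry */
            return -1;      /* real error */
        }
        if (n == 0)
            break;          /* end of file */
        total += (size_t)n;
    }
    return (ssize_t)total;
}
```

With a loop like this, a 2 GB read that the kernel or libc splits into
smaller chunks still completes, and the original complaint disappears
without any change to libc.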