This is the mail archive of the cygwin mailing list for the Cygwin project.
Re: Cygwin Maximum Memory Documentation
- From: Rob Donovan <hikerman2005-542731 at yahoo dot com>
- To: Rob Donovan <hikerman2005-542731 at yahoo dot com>, cygwin <cygwin at cygwin dot com>
- Date: Wed, 28 Apr 2010 14:56:33 -0700
- Subject: Re: Cygwin Maximum Memory Documentation
- References: <4BD73BD7.4070202@yahoo.com>
[ Apologies for the duplicate post. I'm trying to get this
retraction/qualification to be added to the original post's thread list,
since my first attempt was not. I'm concerned someone might find the
original via Google and then waste their time. If this second attempt
fails and someone out there can get a URL pointer to this message into a
post on the original's thread list, please do! The original post is at
http://cygwin.com/ml/cygwin/2010-04/msg00983.html ]
Looks like I posted too soon.
I can run SOME Cygwin/gcc programs up to 2.83GB with the changes I
mentioned in my first post (yesterday), but not ALL programs (and not
anything useful...). The problem seems to be with Cygwin's
implementation of disk access functions like fscanf() and fgets(). The
following code, as written here and compiled under gcc/Cygwin (using the
gcc compile line in the comment at the top), WILL run up to a 2.83GB
image size. However, if the fscanf() line is uncommented, the resulting
program dies just above a 2GB image size with the error "*** fatal error
- cmalloc would have returned NULL". The file tmp contains "hello.\n".
If the program is compiled with fscanf() uncommented using cl (the
Visual Studio C++ compiler; the second compile line in the comments
below), the program will run up to 2.83GB before it runs out of memory.
fgets() has the same effect as fscanf(). I'm using Task Manager's Mem
Usage column to watch the memory grow.
I also tested fprintf(stdout,...), strlen(), sscanf(), calloc(),
sprintf(), fopen(), fabs(), fclose(), strtok(), strcmp(), strdup(),
strspn(), and free(), all of which work fine up to 2.83GB.
My conclusion is that there's something in Cygwin's implementation of
disk access functions (fscanf(), fgets(), ...?) that stops working when
the process image size goes over 2GB. Since the /3GB switch enables
user pointers above 0x7FFFFFFF, my guess would be an assumption made
somewhere about the most significant pointer bit.
/rob
#include <stdlib.h>
#include <stdio.h>

// to compile : gcc -g -Wall -Wpadded -Wl,--large-address-aware -o memory_eater2.x memory_eater2.c
// to compile : cl memory_eater2.c winmm.lib /link /largeaddressaware

int main (int argc, char *argv[])
{
    FILE *f;
    char *buf;
    long c = 0;

    while (1) {
        /* leak a small allocation each pass so the image keeps growing */
        if ((buf = (char *) calloc(24, sizeof(char))) == NULL) {
            fprintf(stderr, "Problem in %s line %d allocating memory\n",
                    __FILE__, __LINE__);
            return(1);
        }
        c++;
        if ((c % 5000) == 0) {
            if ((f = fopen("./tmp", "r")) == NULL) {
                fprintf(stderr, "Problem in %s line %d opening input file\n",
                        __FILE__, __LINE__);
                return(1);
            }
            // fscanf(f,"%s",buf);  /* uncomment to trigger the failure */
            fclose(f);
        }
    }
    return(0);
}
Rob Donovan wrote:
I've found the following to be true on my system and feel these
details could usefully be added to the Changing Cygwin's Maximum
Memory page in the User's Guide. My system is a Dell Inspiron 1520
laptop with 4GB of physical RAM running Windows XP Home Edition with
SP3. uname -v reports the Cygwin kernel version as 2010-04-07 11:02.
Some of these comments are specific to XP; I gather Vista does not use
a boot.ini file, for example. Perhaps others running other operating
systems could flesh it out to provide complete documentation across
operating systems.
1) When changing the maximum memory available to Cygwin using "regtool
-i set /HKLM/Software/Cygwin/heap_chunk_in_mb MMMM" the maximum useful
value of MMMM is 4095. Values of 4096 and above result in the heap
size reverting to its unset value (about 1000 on my system).
2) With this change in place, processes are still limited to 2GB of
memory on 32-bit Windows systems unless you set the /3GB switch in
your boot.ini file. This switch allows each process to grow to 3GB.
However, used alone it may have undesirable consequences (such as your
system hanging), which the /Userva=MMMM flag may prevent; MMMM from
2900 to 3030 is recommended. This switch caps user processes at MMMM MB.
The change might be from a boot.ini file of
[boot loader]
;timeout=30
default=multi(0)disk(0)rdisk(0)partition(2)\WINDOWS
[operating systems]
multi(0)disk(0)rdisk(0)partition(2)\WINDOWS="Microsoft Windows XP Home
Edition" /noexecute=optin /fastdetect
to
[boot loader]
;timeout=30
default=multi(0)disk(0)rdisk(0)partition(2)\WINDOWS
[operating systems]
multi(0)disk(0)rdisk(0)partition(2)\WINDOWS="Microsoft Windows XP Home
Edition" /noexecute=optin /fastdetect
multi(0)disk(0)rdisk(0)partition(2)\WINDOWS="Microsoft Windows XP Home
Edition 2.83GB" /3GB /Userva=2900 /noexecute=optin /fastdetect
You should then get a choice between OS configurations at boot. If
you use a 3rd party boot loader you may have to make the configuration
changes there instead of directly to the boot.ini file.
3) With both these changes in place your process will STILL be limited
to a 2GB process size unless you link it with the /LARGEADDRESSAWARE
linker flag. Under gcc this flag is specified thus: gcc
-Wl,--large-address-aware -o program.x program.c (that's -W followed by
a lowercase letter l, not the digit one).
With all these changes in place I can now run gcc compiled Cygwin
processes up to 2.83GB in size :)
/rob
--
Problem reports: http://cygwin.com/problems.html
FAQ: http://cygwin.com/faq/
Documentation: http://cygwin.com/docs.html
Unsubscribe info: http://cygwin.com/ml/#unsubscribe-simple