
Re: heap_chunk_in_mb default value (Was Re: perl - segfault on "free unused scalar")


Igor Pechtchanski wrote:

$ cat foo.c
#include <stdlib.h>
#include <string.h>     /* memset */
#include <unistd.h>     /* sleep */

int main(int argc, char * argv[]){
       int i;
       char * ptrs[1024];
       for(i = 0; i < atoi(argv[2]); ++i){
               ptrs[i] = malloc(1024 * 1024 * atoi(argv[1]));
               memset(ptrs[i], 'a', 1024 * 1024 * atoi(argv[1]));
       }

       sleep(10);
}

$ ./foo 200 5

$ ./foo 800 1

$ ./foo 2 500


Heh.  Are you sure it's working?  You don't check for malloc() returning
NULL above -- it could be silently failing, and you won't know (of course,
provided memset() ignores NULL arguments, or else you'd get a segfault).

I thought that memset would complain if called with NULL, but you're right, I should have checked it. I changed foo.c to:


#include <stdlib.h>
#include <stdio.h>
#include <string.h>     /* memset */
#include <unistd.h>     /* sleep */

int main(int argc, char * argv[]){
        int i;
        char * ptrs[1024];
        for(i = 0; i < atoi(argv[2]); ++i){
                ptrs[i] = malloc(1024 * 1024 * atoi(argv[1]));
                if(ptrs[i] == 0){
                        printf("Malloc failed in iteration %d\n", i);
                        exit(1);
                }
                memset(ptrs[i], 'a', 1024 * 1024 * atoi(argv[1]));
        }
        sleep(10);
	printf("All ok\n");
}

$ ./foo 200 5
Malloc failed in iteration 4

$ ./foo 100 9
All ok

$ ./foo 200 4
All ok

$ ./foo 800 1
All ok

$ ./foo 2 500
All ok

$ perl -e '$a="a"x(200*1024*1024); sleep 9'
Out of memory during "large" request for 268439552 bytes, total sbrk() is 268627968 bytes at -e line 1.


So C can handle up to 1 GB while Perl fails at 400 MB.
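The Perl message blames sbrk(), so a quick way to see where the Cygwin heap itself ends would be to grow the program break directly. This is just a sketch (the 1 MB step is arbitrary):

#include <stdio.h>
#include <unistd.h>

int main(void){
        unsigned long mb = 0;
        /* grow the program break 1 MB at a time until sbrk() refuses */
        while(sbrk(1024 * 1024) != (void *) -1)
                ++mb;
        printf("sbrk() gave out after %lu MB\n", mb);
        return 0;
}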

I've been using more than 384 MB in C and C++ on Cygwin for a long time.
Why would heap_chunk_in_mb affect Perl, but not C?

It affects every Cygwin program.

I have no reason not to believe you, but it seems it doesn't.
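For the record, if I'm reading the User's Guide correctly, the setting lives in the registry, so regtool should show whether it's set at all and let me raise it (the exact key may differ between installations):

$ regtool -v list '/HKLM/SOFTWARE/Cygnus Solutions/Cygwin'
$ regtool -i set '/HKLM/SOFTWARE/Cygnus Solutions/Cygwin/heap_chunk_in_mb' 1024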


Do you compile the above with -mno-cygwin, by any chance?

No.


$ make foo
gcc     foo.c   -o foo

$ gcc -v
Reading specs from /usr/lib/gcc/i686-pc-cygwin/3.4.4/specs
Configured with: /gcc/gcc-3.4.4/gcc-3.4.4-1/configure --verbose --prefix=/usr --exec-prefix=/usr --sysconfdir=/etc --libdir=/usr/lib --libexecdir=/usr/lib --mandir=/usr/share/man --infodir=/usr/share/info --enable-languages=c,ada,c++,d,f77,java,objc --enable-nls --without-included-gettext --enable-version-specific-runtime-libs --without-x --enable-libgcj --disable-java-awt --with-system-zlib --enable-interpreter --disable-libgcj-debug --enable-threads=posix --enable-java-gc=boehm --disable-win32-registry --enable-sjlj-exceptions --enable-hash-synchronization --enable-libstdcxx-debug : (reconfigured)
Thread model: posix
gcc version 3.4.4 (cygming special) (gdc 0.12, using dmd 0.125)


I've been using over 1 GB in C++ on Cygwin for a few years now. I would have noticed a low memory limit if any of my apps had failed.

Even pure Cygwin apps (I mean those built and packaged by the Cygwin developers) can use over 384 MB. Check this out:

$ cat foo.cpp
class A0 {};
class A1 : public A0{};
class A2 : public A1,A0{};
class A3 : public A2,A1,A0{};
class A4 : public A3,A2,A1,A0{};
class A5 : public A4,A3,A2,A1,A0{};
class A6 : public A5,A4,A3,A2,A1,A0{};
class A7 : public A6,A5,A4,A3,A2,A1,A0{};
class A8 : public A7,A6,A5,A4,A3,A2,A1,A0{};
class A9 : public A8,A7,A6,A5,A4,A3,A2,A1,A0{};
class A10 : public A9,A8,A7,A6,A5,A4,A3,A2,A1,A0{};
class A11 : public A10,A9,A8,A7,A6,A5,A4,A3,A2,A1,A0{};
class A12 : public A11,A10,A9,A8,A7,A6,A5,A4,A3,A2,A1,A0{};
class A13 : public A12,A11,A10,A9,A8,A7,A6,A5,A4,A3,A2,A1,A0{};
class A14 : public A13,A12,A11,A10,A9,A8,A7,A6,A5,A4,A3,A2,A1,A0{};
class A15 : public A14,A13,A12,A11,A10,A9,A8,A7,A6,A5,A4,A3,A2,A1,A0{};
int main(){}

$ g++ foo.cpp 2>&1|grep -v warning
foo.cpp:14: internal compiler error: Segmentation fault
Please submit a full bug report,
with preprocessed source if appropriate.
See <URL:http://gcc.gnu.org/bugs.html> for instructions.

cc1plus.exe consumes over 1 GB before it dies.

I didn't use references on purpose. I wanted to avoid the problem that
arrays require contiguous space, so using an array to measure system
memory capacity would be inaccurate. A hash, on the other hand, is a
pointer-based structure (at least I think so), so it should work even
with fragmented memory.

It's not a fragmentation problem.

It could have been. I wasn't aware that heap_chunk_in_mb exists, or that it limits memory usage for Cygwin programs. Take a look above: ./foo 200 5 fails to allocate 1 GB of memory, but ./foo 2 500 succeeded. That's because of fragmentation, which is exactly what I wanted to avoid.
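To make the fragmentation point concrete, here's a rough sketch (the 800 MB total is arbitrary) that first asks for one contiguous block and then for the same amount in 1 MB pieces:

#include <stdio.h>
#include <stdlib.h>

#define CHUNKS 800              /* total of 800 MB, in 1 MB pieces */
#define MB (1024 * 1024)

int main(void){
        int i;
        char * big;
        char * pieces[CHUNKS];

        /* one contiguous block may fail even though the total would fit... */
        big = malloc((size_t)CHUNKS * MB);
        printf("single %d MB block: %s\n", CHUNKS, big ? "ok" : "failed");
        free(big);

        /* ...while the same total in small pieces can still succeed */
        for(i = 0; i < CHUNKS; ++i){
                pieces[i] = malloc(MB);
                if(pieces[i] == 0){
                        printf("small pieces: failed after %d MB\n", i);
                        return 1;
                }
        }
        printf("small pieces: all %d MB allocated\n", CHUNKS);
        return 0;
}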


Krzysztof Duleba



