
heap_chunk_in_mb default value (Was Re: perl - segfault on "free unused scalar")


On Thu, 28 Jul 2005, Krzysztof Duleba wrote:

> Igor Pechtchanski wrote:
>
> > > > > > $ ./inter.pl
> > > > > > perl> sub foo($){$a=shift;foo($a+1);}
> >
> > You do realize you have infinite recursion here, right?
>
> Sure.
>
> > > > > Segmentation fault (core dumped)
> > And this is Windows saying "I don't think so". :-)
>
> :-)
>
> > > > I don't know.  Maybe it's a Windows feature that applications
> > > > crash when they run out of memory?
> > >
> > > But there's plenty of memory left when perl crashes. I have 1 GB RAM
> > > and 1 GB swap file.
> >
> > IIRC, unless you specifically increase heap_chunk_in_mb, Cygwin will
> > only use 384M of address space (which seems consistent with the sbrk()
> > and the request size above).
>
> I thought of that. However:
>
> $ cat foo.c
> #include <stdlib.h>
> #include <string.h>     /* memset() */
> #include <unistd.h>     /* sleep() */
>
> int main(int argc, char * argv[]){
>         int i;
>         char * ptrs[1024];
>         for(i = 0; i < atoi(argv[2]); ++i){
>                 ptrs[i] = malloc(1024 * 1024 * atoi(argv[1]));
>                 memset(ptrs[i], 'a', 1024 * 1024 * atoi(argv[1]));
>         }
>
>         sleep(10);
> }
>
> $ ./foo 200 5
>
> $ ./foo 800 1
>
> $ ./foo 2 500

Heh.  Are you sure it's working?  You don't check for malloc() returning
NULL above -- it could be failing silently, and you wouldn't know (well,
unless memset() segfaulted on the NULL pointer, which it normally would).
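
For the record, here's roughly what that program would look like with the
check added (a quick sketch, untested; the argc and bounds checks are
mine too):

#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>

int main(int argc, char * argv[]){
        int i, n;
        size_t mb;
        char * ptrs[1024];
        if(argc < 3)
                return 1;
        mb = atoi(argv[1]);             /* chunk size in MB */
        n = atoi(argv[2]);              /* number of chunks */
        if(n > 1024)
                n = 1024;               /* don't overrun ptrs[] */
        for(i = 0; i < n; ++i){
                ptrs[i] = malloc(mb * 1024 * 1024);
                if(ptrs[i] == NULL){    /* the missing check */
                        fprintf(stderr, "malloc failed at chunk %d\n", i);
                        return 1;
                }
                memset(ptrs[i], 'a', mb * 1024 * 1024);
        }
        sleep(10);
        return 0;
}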

> I've been using more than 384 MB in C and C++ in Cygwin for a long time.
> Why would heap_chunk_in_mb affect Perl, but not C?

It affects every Cygwin program.  Do you compile the above with
-mno-cygwin, by any chance?
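
(For reference: heap_chunk_in_mb lives in the registry.  IIRC, something
like the following should bump it to 1GB -- use /HKCU instead of /HKLM
if Cygwin was installed for the current user only:

$ regtool -i set /HKLM/SOFTWARE/Cygnus\ Solutions/Cygwin/heap_chunk_in_mb 1024
$ regtool -v list /HKLM/SOFTWARE/Cygnus\ Solutions/Cygwin

Already-running Cygwin processes won't pick up the change, of course.)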

> > > I've simplified the test case.  It seems that Cygwin perl can't
> > > handle large amounts of memory.  For instance:
> > >
> > > $ perl -e '$a="a"x(200 * 1024 * 1024); sleep 9'
> > >
> > > OK, this could have failed because $a might require 200 MB of
> > > contiguous space.
> >
> > Actually, $a requires *more* than 200MB of contiguous space.  Perl
> > characters are 2 bytes, so you're allocating at least 400MB of space!
>
> Right, UTF. I completely forgot about that.

Unicode, actually.

> > FWIW, the above doesn't fail for me, but then, I have heap_chunk_in_mb
> > set to 1024. :-)
>
> I'll try that in a while.
>
> > > But hashes don't, do they? Then why does the following code fail?
> > >
> > > $ perl -e '$a="a"x(1024 * 1024);my %b; $b{$_}=$a for(1..400);sleep 9'
> >
> > Wow.  You're copying a 2MB string 400 times.  No wonder this fails.  It
> > would fail with larger heap sizes as well. :-)
> >
> > This works with no problems and very little memory usage, FWIW:
> >
> > $ perl -e '$a="a"x(1024 * 1024);my %b; $b{$_}=\$a for(1..400);sleep 9'
>
> I avoided references on purpose.  I wanted to sidestep the fact that
> arrays require contiguous space, which makes an array inaccurate for
> measuring system memory capacity.  A hash, on the other hand, is a
> pointer structure (at least I think so), so it should work even with
> fragmented memory.

It's not a fragmentation problem.

> I don't see why "no wonder it fails", unless it's a reference to the
> aforementioned heap_chunk_in_mb.

It is.  Each copy of that string is ~2MB (1M characters at 2 bytes
each), so 400 copies come to ~800MB -- more than twice the 384MB
default.

> > > Or that one?
> > >
> > > $ perl -e '$a="a"x(50 * 1024 * 1024);$b=$a;$c=$a;$d=$a;$e=$a;sleep 10'
> >
> > Yep, let's see.  100MB * 5 = 500MB.  Since Cygwin perl by default can
> > only use 384MB, the result is pretty predictable.  Perl shouldn't
> > segfault, though -- that's a bug, IMO.
>
> Should I do anything about it?

I'd guess "perlbug", but let's first see what Gerrit has to say.

> > > On linux there's no such problem - perl can use all available memory.
> >
> > Yeah.  Set heap_chunk_in_mb to include all available memory, and I'm
> > sure you'll find that Cygwin perl works the same too.  However, you
> > might also want to read some Perl documentation, to make sure your
> > data structure size calculations are correct and your expectations
> > are reasonable.
>
> Thanks for being so helpful.  That really explains a lot.  Thanks to
> Dave and Gerrit, too.

No problem.

An aside to Cygwin developers (and I apologize if this has been asked
before): is it easy to determine the amount of physical memory and set
heap_chunk_in_mb to that?  Does it even make sense to do this?  Would some
variant of the current heap_chunk_in_mb code be useful for implementing a
proper ulimit?
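
FWIW, on the Win32 side I believe GlobalMemoryStatusEx() is the call
that reports physical RAM.  A quick sketch of how one might query it
(my guess at the plumbing, not actual Cygwin code):

#include <windows.h>
#include <stdio.h>

int main(void)
{
        MEMORYSTATUSEX ms;
        ms.dwLength = sizeof(ms);       /* required before the call */
        if (!GlobalMemoryStatusEx(&ms))
                return 1;
        /* physical RAM in MB -- a candidate value for heap_chunk_in_mb */
        printf("%lu\n", (unsigned long)(ms.ullTotalPhys / (1024 * 1024)));
        return 0;
}
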
	Igor
-- 
				http://cs.nyu.edu/~pechtcha/
      |\      _,,,---,,_		pechtcha@cs.nyu.edu
ZZZzz /,`.-'`'    -.  ;-;;,_		igor@watson.ibm.com
     |,4-  ) )-,_. ,\ (  `'-'		Igor Pechtchanski, Ph.D.
    '---''(_/--'  `-'\_) fL	a.k.a JaguaR-R-R-r-r-r-.-.-.  Meow!

If there's any real truth it's that the entire multidimensional infinity
of the Universe is almost certainly being run by a bunch of maniacs. /DA


