
Re: perl - segfault on "free unused scalar"


Igor Pechtchanski wrote:

$ ./inter.pl
perl> sub foo($){$a=shift;foo($a+1);}

You do realize you have infinite recursion here, right?

Sure.
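
FWIW, a quick way to see how deep the recursion gets before the crash is a sketch like this (the depth counter and print interval are just for illustration, not part of the original script):

$ perl -e 'sub foo { my $d = shift; print "$d\n" unless $d % 100_000; foo($d + 1) }; foo(0)'

The last number printed before the segfault gives a rough idea of how many frames and scalars fit in the available memory.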


Segmentation fault (core dumped)
And this is Windows saying "I don't think so". :-)

:-)


I don't know. Maybe it's a Windows feature that applications crash when they run out of memory?

But there's plenty of memory left when perl crashes. I have 1 GB of RAM and a 1 GB swap file.

IIRC, unless you specifically increase heap_chunk_in_mb, Cygwin will only use 384M of address space (which seems consistent with the sbrk() and the request size above).

I thought of that. However:


$ cat foo.c
#include <stdio.h>
#include <stdlib.h>
#include <string.h>     /* memset */
#include <unistd.h>     /* sleep */

/* Usage: ./foo <chunk size in MB> <number of chunks> */
int main(int argc, char * argv[]){
        int i;
        char * ptrs[1024];
        size_t chunk = 1024 * 1024 * (size_t)atoi(argv[1]);
        for(i = 0; i < atoi(argv[2]) && i < 1024; ++i){
                ptrs[i] = malloc(chunk);
                if(ptrs[i] == NULL){
                        fprintf(stderr, "malloc failed at chunk %d\n", i);
                        return 1;
                }
                /* Touch every page so the memory is actually committed. */
                memset(ptrs[i], 'a', chunk);
        }

        sleep(10);
        return 0;
}
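
(Built with a plain gcc -o foo foo.c, nothing special, and invoked as ./foo <MB per chunk> <number of chunks>, so the three runs below touch about 1 GB, 800 MB, and 1 GB respectively:)

$ gcc -o foo foo.c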

$ ./foo 200 5

$ ./foo 800 1

$ ./foo 2 500

I've been using more than 384 MB in C and C++ under Cygwin for a long time. Why would heap_chunk_in_mb affect Perl, but not C? (Could it be that malloc() serves large requests with mmap(), outside the sbrk()-managed heap that heap_chunk_in_mb limits?)

I've simplified the test case. It seems that Cygwin perl can't handle large amounts of memory. For instance:

$ perl -e '$a="a"x(200 * 1024 * 1024); sleep 9'

OK, this could have failed because $a might require 200 MB of contiguous space.

Actually, $a requires *more* than 200MB of contiguous space. Perl characters are 2 bytes, so you're allocating at least 400MB of space!

Right, UTF. I completely forgot about that.


FWIW, the above doesn't fail for me, but then, I have heap_chunk_in_mb set
to 1024. :-)

I'll try that in a while.


But hashes don't, do they? Then why does the following code fail?

$ perl -e '$a="a"x(1024 * 1024);my %b; $b{$_}=$a for(1..400);sleep 9'

Wow. You're copying a 2MB string 400 times. No wonder this fails. It would fail with larger heap sizes as well. :-)

This works with no problems and very little memory usage, FWIW:

$ perl -e '$a="a"x(1024 * 1024);my %b; $b{$_}=\$a for(1..400);sleep 9'

I didn't use references on purpose. I wanted to avoid the problem that arrays require contiguous space, which makes an array an inaccurate way to measure available memory. A hash, on the other hand, is a pointer-based structure (at least I think so), so it should work even with fragmented memory.
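
One way to check the copy-versus-reference behaviour directly is the CPAN module Devel::Size (assuming it's installed -- it isn't part of the base perl distribution):

$ perl -MDevel::Size=total_size -e '$a = "a" x (1024 * 1024); my %b; $b{$_} = $a for (1 .. 10); print total_size(\%b), "\n"'
$ perl -MDevel::Size=total_size -e '$a = "a" x (1024 * 1024); my %b; $b{$_} = \$a for (1 .. 10); print total_size(\%b), "\n"'

The first should report on the order of ten megabytes, the second only about one, since total_size follows the references but counts the single shared $a just once -- confirming that each hash value is a full copy unless a reference is stored.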


I don't see why it's "no wonder it fails", unless that's a reference to the aforementioned heap_chunk_in_mb.

Or this one?

$ perl -e '$a="a"x(50 * 1024 * 1024);$b=$a;$c=$a;$d=$a;$e=$a;sleep 10'

Yep, let's see: 50MB of 2-byte characters is 100MB, and 100MB * 5 = 500MB. Since Cygwin perl by default can only use 384MB, the result is pretty predictable. Perl shouldn't segfault, though -- that's a bug, IMO.

Should I do anything about it?


On Linux there's no such problem -- perl can use all available memory.

Yeah. Set heap_chunk_in_mb to include all available memory, and I'm sure you'll find that Cygwin perl works the same too. However, you might want to read some Perl documentation too, to make sure your data structure size calculations are correct, and that your expectations are reasonable.
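
FWIW, heap_chunk_in_mb lives in the registry and can be set with regtool -- IIRC something like this (check the Cygwin User's Guide for the exact key; this is from memory):

$ regtool -i set /HKLM/SOFTWARE/Cygnus\ Solutions/Cygwin/heap_chunk_in_mb 1024
$ regtool -v list /HKLM/SOFTWARE/Cygnus\ Solutions/Cygwin

The new value only applies to Cygwin processes started afterwards.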

Thanks for being so helpful. That really explains a lot. Thanks to Dave and Gerrit, too.


Krzysztof Duleba



