shared array eating memory #5802

Closed
p5pRT opened this issue Jul 31, 2002 · 48 comments

p5pRT commented Jul 31, 2002

Migrated from rt.perl.org#15893 (status was 'resolved')

Searchable as RT15893$

p5pRT commented Jul 31, 2002

From @lizmat

The following program grows to more than 250 Mbyte of RAM within
10 seconds without ever die-ing because the array got too long.

Seems to me we have a (huge) leak here. ;-(

==================================================
use threads;
my @queue : shared;

my $thread = threads->new(
    sub {
        while (1) {
            die if @queue > 10000;
            shift( @queue );
        }
    }
);

push( @queue,1 ) while 1;

For simplicity, I did not use any lock()s, as shift() and push()
are confirmed to be self-locking and atomic.

Perl Info


This perlbug was built using Perl 5.00503 - Wed Feb  2 15:34:50 EST 2000
It is being executed now by  Perl 5.008 - Wed Jul 24 14:27:23 CEST 2002.

Site configuration information for perl 5.008:

Configured by liz at Wed Jul 24 14:27:23 CEST 2002.

Summary of my perl5 (revision 5.0 version 8 subversion 0) configuration:
   Platform:
     osname=linux, osvers=2.2.20, archname=i686-linux-thread-multi
     uname='linux dolphin.hsyndicate.com 2.2.20 #1 smp wed jan 2 21:32:07 
cet 2002 i686 unknown '
     config_args='-de -Dusethreads'
     hint=recommended, useposix=true, d_sigaction=define
     usethreads=define use5005threads=undef useithreads=define 
usemultiplicity=define
     useperlio=define d_sfio=undef uselargefiles=define usesocks=undef
     use64bitint=undef use64bitall=undef uselongdouble=undef
     usemymalloc=n, bincompat5005=undef
   Compiler:
     cc='cc', ccflags ='-D_REENTRANT -D_GNU_SOURCE -fno-strict-aliasing 
-I/usr/local/include -D_LARGEFILE_SOURCE -D_FILE_OFFSET_BITS=64 
-I/usr/include/gdbm',
     optimize='-O2',
     cppflags='-D_REENTRANT -D_GNU_SOURCE -fno-strict-aliasing 
-I/usr/local/include -I/usr/include/gdbm'
     ccversion='', gccversion='egcs-2.91.66 19990314/Linux (egcs-1.1.2 
release)', gccosandvers=''
     intsize=4, longsize=4, ptrsize=4, doublesize=8, byteorder=1234
     d_longlong=define, longlongsize=8, d_longdbl=define, longdblsize=12
     ivtype='long', ivsize=4, nvtype='double', nvsize=8, Off_t='off_t', 
lseeksize=8
     alignbytes=4, prototype=define
   Linker and Libraries:
     ld='cc', ldflags =' -L/usr/local/lib'
     libpth=/usr/local/lib /lib /usr/lib
     libs=-lnsl -lndbm -lgdbm -ldb -ldl -lm -lpthread -lc -lposix -lcrypt 
-lutil
     perllibs=-lnsl -ldl -lm -lpthread -lc -lposix -lcrypt -lutil
     libc=/lib/libc-2.1.3.so, so=so, useshrplib=false, libperl=libperl.a
     gnulibc_version='2.1.3'
   Dynamic Linking:
     dlsrc=dl_dlopen.xs, dlext=so, d_dlsymun=undef, ccdlflags='-rdynamic'
     cccdlflags='-fpic', lddlflags='-shared -L/usr/local/lib'

Locally applied patches:



@INC for perl 5.008:
     /usr/local/lib/perl5/5.8.0/i686-linux-thread-multi
     /usr/local/lib/perl5/5.8.0
     /usr/local/lib/perl5/site_perl/5.8.0/i686-linux-thread-multi
     /usr/local/lib/perl5/site_perl/5.8.0
     /usr/local/lib/perl5/site_perl/5.6.1
     /usr/local/lib/perl5/site_perl
     .


Environment for perl 5.008:
     HOME=/home/liz
     LANG=en_US
     LANGUAGE (unset)
     LD_LIBRARY_PATH (unset)
     LOGDIR (unset)
     PATH=/usr/bin:/bin:/usr/local/bin:/usr/X11R6/bin
     PERL_BADLANG (unset)
     SHELL=/bin/bash



p5pRT commented Jul 31, 2002

From @lizmat

At 12​:00 PM 7/31/02 +0000, via RT wrote​:

The following program grows to more than 250 Mbyte of RAM within
10 seconds without ever die-ing because the array got too long.

use threads;
my @​queue : shared;

my $thread = threads->new(
sub {
while (1) {
die if @​queue > 10000;
shift( @​queue );
}
}
);

push( @​queue,1 ) while 1;

The same happens with​:

use threads;
my %hash : shared;

my $thread = threads->new(
    sub {
        while (1) {
            my @key = keys %hash;
            die if @key > 10000;
            delete( $hash{$_} ) foreach @key;
        }
    }
);

my $i;
$hash{$i} = 1 while ++$i;

so the problem does not seem to be specific to shared arrays; it also
applies to shared hashes...

Liz

p5pRT commented Aug 1, 2002

From nick.ing-simmons@elixent.com

Elizabeth Mattijsen <perl5-porters@​perl.org> writes​:

The following program grows to more than 250 Mbyte of RAM within
10 seconds without ever die-ing because the array got too long.

What says the shift-ing thread has to be given any CPU time?
There is no reason for the main thread to yield.

Also, once the thread does die - remember you won't see the message
till you join - the main thread just gets on with its pushing...

Seems to me we have a (huge) leak here. ;-(

==================================================
use threads;
my @​queue : shared;

my $thread = threads->new(
sub {
while (1) {
die if @​queue > 10000;
shift( @​queue );
}
}
);

push( @​queue,1 ) while 1;

For simplicity, I did not use any lock()s, as shift() and push()
are confirmed to be self-locking and atomic.

--
Nick Ing-Simmons

p5pRT commented Aug 1, 2002

From @lizmat

At 08​:37 AM 8/1/02 +0100, Nick Ing-Simmons wrote​:

The following program grows to more than 250 Mbyte of RAM within
10 seconds without ever die-ing because the array got too long.
use threads;
my @​queue : shared;

my $thread = threads->new(
sub {
while (1) {
die if @​queue > 10000;
shift( @​queue );
}
}
);

push( @​queue,1 ) while 1;

What says the shift-ing thread has to be given any CPU time,
there is no reason to yield?

So you're saying the thread may never see that the array has grown too
large? Well, I just added a little warn before the C<die> which shows me
it _does_ get there quite often. So the values _are_ shifted from the array.

Also once the thread does die - remember you won't see the message
till you join then main thread just gets on with its pushing...

Well, that's just the point. It _never_ dies (at least not before taking
up _all_ available memory on the machine), even though the size of the
array remains below 10000. The only (other) way out of it is Control-C.

So something is not being freed or re-used or whatever. Maybe technically
there is no leak and it is just a stupid memory allocation thing, but it
sure enough stops you from using shared arrays in a production environment.

Liz

p5pRT commented Aug 1, 2002

From goldbb2@earthlink.net

Nick Ing-Simmons wrote​:

Elizabeth Mattijsen <perl5-porters@​perl.org> writes​:

The following program grows to more than 250 Mbyte of RAM within
10 seconds without ever die-ing because the array got too long.

What says the shift-ing thread has to be given any CPU time,
there is no reason to yield?

Also once the thread does die - remember you won't see the message
till you join then main thread just gets on with its pushing...

Seems to me we have a (huge) leak here. ;-(

==================================================
use threads;
my @​queue : shared;

my $thread = threads->new(
sub {
while (1) {
die if @​queue > 10000;
shift( @​queue );
}
}
);

push( @​queue,1 ) while 1;

For simplicity, I did not use any lock()s, as shift() and push()
are confirmed to be self-locking and atomic.

If you're worried that the thread is dying and not ever being given the
CPU, then consider writing this as:

use threads;
my @queue : shared;
my $thread = threads->new( sub {
    shift @queue while @queue < 10000;
    # splice @queue, 0 while @queue < 10000;
    warn "In subthread, \@queue reached 10000 elements\n";
} );
threads->yield, push @queue, 1 while @queue < 10000;
warn "In main thread, \@queue reached 10000 elements.\n";
__END__
[untested]

--
tr/`4/ /d, print "@​{[map --$| ? ucfirst lc : lc, split]},\n" for
pack 'u', pack 'H*', 'ab5cf4021bafd28972030972b00a218eb9720000';

p5pRT commented Aug 1, 2002

From @lizmat

At 05​:04 AM 8/1/02 -0400, Benjamin Goldberg wrote​:

Nick Ing-Simmons wrote​:

Elizabeth Mattijsen <perl5-porters@​perl.org> writes​:

The following program grows to more than 250 Mbyte of RAM within
10 seconds without ever die-ing because the array got too long.
What says the shift-ing thread has to be given any CPU time,
there is no reason to yield?
Also once the thread does die - remember you won't see the message
till you join then main thread just gets on with its pushing...
If you're worried that the thread is dieing and not ever being given the
cpu, then consider writing this as​:

Eh... I'm not interested in whether this works or not. The point is that
the code should not be eating memory until the end of time (or when the
machine crashes, whichever comes earlier ;-).

The point is that I'm trying to show that there is a _HUGE_ leak with
shared arrays and hashes that basically make them useless for production
code! And if you can't use shared arrays and shared hashes, you basically
can't use threads at all... ;-(((

Liz

p5pRT commented Aug 1, 2002

From @iabyn

On Thu, Aug 01, 2002 at 11​:07​:39AM +0200, Elizabeth Mattijsen wrote​:

Eh... I'm not interested in whether this works or not. The point is that
the code should not be eating memory until the end of time (or when the
machine crashes, whichever comes earlier ;-).

The point is that I'm trying to show that there is a _HUGE_ leak with
shared arrays and hashes that basically make them useless for production
code! And if you can't use shared arrays and shared hashes, you basically
can't use threads at all... ;-(((

But as Nick has pointed out, your code doesn't demonstrate this.

For one possibility of how the threads get scheduled, your program is
functionally equivalent to

  push @​array, 1 while 1;

Which is not to say there may not be leaks, but your particular example
isn't evidence.

--
"Strange women lying in ponds distributing swords is no basis for a system
of government. Supreme executive power derives from a mandate from the
masses, not from some farcical aquatic ceremony."
Dennis - Monty Python and the Holy Grail.

p5pRT commented Aug 1, 2002

From @lizmat

At 11​:19 AM 8/1/02 +0100, Dave Mitchell wrote​:

On Thu, Aug 01, 2002 at 11​:07​:39AM +0200, Elizabeth Mattijsen wrote​:

The point is that I'm trying to show that there is a _HUGE_ leak with
shared arrays and hashes that basically make them useless for production
code! And if you can't use shared arrays and shared hashes, you basically
can't use threads at all... ;-(((
But as Nick has pointed out, your code doesn't demonstrate this.
For one possibility of how the threads get scheduled, your program is
functionally equivalent to
push @​array, 1 while 1;

Except that the elements are removed again within the thread as fast as the
main thread is putting them on. And roughly at the same speed, because the
program never dies (at least not on my "slow" machine, maybe the number
needs to be higher than 10000 on your machine).

There was some concern that the thread that is removing elements from the
shared array wasn't getting scheduled. By adding a C<warn> you will be
able to see that the thread _is_ getting scheduled and that elements are
therefore being removed.

use threads;
my @queue : shared;
my $thread = threads->new(
    sub {
        while (1) {
            die if @queue > 10000;
            warn "Removing from \@queue\n";
            shift( @queue );
        }
    }
);
push( @queue,1 ) while 1;

Which is not to say there may not be leaks, but your particular example
isn't evidence.

Have you looked at "top" when this program runs? I'd say that is enough
evidence. If anyone could suggest better ways of proving this problem, I'd
be obliged to hear of them.

Anyway, I was pointed to this particular problem by a user on the
perl-ithreads list. I had seen something before, but attributed it to my
own poor thread programming skills. His question prompted me to really
test this...

Needless to say this user is not using threads anymore, although the memory
usage was only one of the problems. The other was speed (i.e. not being
able to handle 50 "requests" per second). I wouldn't be surprised if these
two problems had a common cause.

Liz

p5pRT commented Aug 1, 2002

From @iabyn

On Thu, Aug 01, 2002 at 12​:39​:01PM +0200, Elizabeth Mattijsen wrote​:

At 11​:19 AM 8/1/02 +0100, Dave Mitchell wrote​:

On Thu, Aug 01, 2002 at 11​:07​:39AM +0200, Elizabeth Mattijsen wrote​:

The point is that I'm trying to show that there is a _HUGE_ leak with
shared arrays and hashes that basically make them useless for production
code! And if you can't use shared arrays and shared hashes, you basically
can't use threads at all... ;-(((
But as Nick has pointed out, your code doesn't demonstrate this.
For one possibility of how the threads get scheduled, your program is
functionally equivalent to
push @​array, 1 while 1;

Except that the elements are removed again within the thread as fast as the
main thread is putting them on. And roughly at the same speed, because the
program never dies (at least not on my "slow" machine, maybe the number
needs to be higher than 10000 on your machine).

Okay, I've tried it with some modified code that I'm satisfied works, and
yes, there is a leak of about 200 bytes per pushed scalar. Sorry I doubted
you :-)

Sounds like a job for Arthur...

Dave.

  use threads;
  use threads::shared;

  my @queue : shared;
  my $thread = threads->new(
      sub {
          while (1) {
              while (@queue < 10) {
                  # warn "consumer: yielding\n";
                  threads->yield;
              }
              shift @queue;
              #warn "consumer: length = ", scalar(@queue), "\n";
          }
      }
  );
  my $x = 1;
  while (1) {
      system("ps -lfyp $$") if $x % 1000 == 0; # for Solaris. YMMV.
      while (@queue > 100) {
          # warn "producer: yielding\n";
          threads->yield;
      }
      push @queue, 1;
      #warn "producer: length = ", scalar(@queue), "\n";
      exit 0 if $x++ > 10000;
  }
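
(Editorial note, not part of Dave's mail: the ps invocation above is Solaris-specific. On Linux and most other POSIX systems something like the following sketch - my suggestion, untested against his exact setup - samples the resident set size instead.)

    use strict;
    use warnings;

    # Return the resident set size of the current process in kB.
    # "ps -o rss= -p PID" is POSIX; the "=" suppresses the header line.
    sub current_rss_kb {
        my ($rss) = split ' ', `ps -o rss= -p $$`;
        return $rss;
    }

    warn "RSS: ", current_rss_kb(), " kB\n";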

--
You never really learn to swear until you learn to drive.

p5pRT commented Aug 1, 2002

From @lizmat

At 12​:22 PM 8/1/02 +0100, Dave Mitchell wrote​:

Okay, I've tried it with some modified code that I'm satified works, and
yes, there is a leak of about 200 bytes per pushed scalar. Sorry I doubted
you :-)

No problem... Thanks for checking it out and confirming what I saw...

I hope I wasn't too pushy... I was getting "parrot sketch"
feelings... the "Norwegian blue, beautiful plumage" type... ;-)

Sounds like a job for Arthur...

Indeed... ;-) I hope it is going to be a simple patch that can be applied
to 5.8.0...

Liz

p5pRT commented Aug 1, 2002

From arthur@contiller.se

On torsdag, augusti 1, 2002, at 01​:49 , Elizabeth Mattijsen wrote​:

Sounds like a job for Arthur...

Indeed... ;-) I hope it is going to be a simple patch that can be
applied to 5.8.0...

Grr, I will take a look at it this weekend; hopefully it is not a
simple patch to perl but rather a simple patch to threads::shared that
can be released onto CPAN.

Arthur

p5pRT commented Aug 1, 2002

From @lizmat

At 02​:48 PM 8/1/02 +0200, Arthur Bergman wrote​:

Sounds like a job for Arthur...
Indeed... ;-) I hope it is going to be a simple patch that can be
applied to 5.8.0...
Grr, I will take a look at it this weekend, hopefully it it is not a
simple patch to perl but rather a simple patch to threads​::shared that can
be released onto CPAN.

That would even be better!

Liz

p5pRT commented Aug 1, 2002

From nick.ing-simmons@elixent.com

Elizabeth Mattijsen <liz@​dijkmat.nl> writes​:

At 08​:37 AM 8/1/02 +0100, Nick Ing-Simmons wrote​:

The following program grows to more than 250 Mbyte of RAM within
10 seconds without ever die-ing because the array got too long.
use threads;
my @​queue : shared;

my $thread = threads->new(
sub {
while (1) {
die if @​queue > 10000;
shift( @​queue );
}
}
);

push( @​queue,1 ) while 1;

What says the shift-ing thread has to be given any CPU time,
there is no reason to yield?

So you're saying the thread may never see that the array has grown too
large? Well, I just added a little warn before the C<die> which shows me
it _does_ get there quite often.

But there is nothing in the code that forces that, it is just an artifact
of how your system time-slices threads.

So the values _are_ shifted from the array.

Also once the thread does die - remember you won't see the message
till you join then main thread just gets on with its pushing...

Well, that's just the point. It _never_ dies (at least not before taking
up _all_ available memory on the machine).

The thread dies, the process doesn't.

Even though the size of the
array remains below 10000. The only (other) way out of it is Control-C.

So something is not being freed or re-used or whatever.

There is no gate in the main thread doing the pushing, so it just keeps
going. Once the sub-thread has seen it go over 10000 and died,
you have the same effect as if you had written:

#!perl
my @queue;
push( @queue,1 ) while 1;
__END__

Maybe technically
there is no leak and is it just a stupid memory allocation thing, but it
sure enough stops you from using shared arrays in a production environment.

In a production environment the "producer" should be the one to test the
bounds.

e.g.

while (1) {
    if (@queue < 9000) {
        push(@queue,1);
    }
    else {
        yield;
    }
}
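
As an editorial aside (my sketch, not part of Nick's mail): the same idea can be taken further with threads::shared condition variables, so that neither side busy-waits while the queue stays bounded. The limit of 10000 is arbitrary.

    use strict;
    use warnings;
    use threads;
    use threads::shared;

    my @queue : shared;
    my $LIMIT = 10_000;

    my $consumer = threads->create(sub {
        while (1) {
            my $item;
            {
                lock(@queue);
                cond_wait(@queue) until @queue;   # sleep until the producer signals
                $item = shift @queue;
                cond_signal(@queue);              # wake the producer if it is waiting
            }
            # ... process $item outside the lock ...
        }
    });

    # producer: block instead of letting @queue grow without bound
    while (1) {
        lock(@queue);
        cond_wait(@queue) until @queue < $LIMIT;
        push @queue, 1;
        cond_signal(@queue);
    }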

Liz
--
Nick Ing-Simmons
http​://www.ni-s.u-net.com/

p5pRT commented Aug 1, 2002

From nick.ing-simmons@elixent.com

Elizabeth Mattijsen <liz@​dijkmat.nl> writes​:

At 05​:04 AM 8/1/02 -0400, Benjamin Goldberg wrote​:

Nick Ing-Simmons wrote​:

Elizabeth Mattijsen <perl5-porters@​perl.org> writes​:

The following program grows to more than 250 Mbyte of RAM within
10 seconds without ever die-ing because the array got too long.
What says the shift-ing thread has to be given any CPU time,
there is no reason to yield?
Also once the thread does die - remember you won't see the message
till you join then main thread just gets on with its pushing...
If you're worried that the thread is dieing and not ever being given the
cpu, then consider writing this as​:

Eh... I'm not interested in whether this works or not. The point is that
the code should not be eating memory until the end of time (or when the
machine crashes, whichever comes earlier ;-).

You have written the equivalent of​:

perl -e 'push(@​queue,1) while 1'

and it behaves the same​:

nick@​bactrian 1030$ time perl -e 'push(@​queue,1) while 1'
Out of memory!

real 0m27.027s
user 0m14.840s
sys 0m1.390s
nick@​bactrian 1031$

The point is that I'm trying to show that there is a _HUGE_ leak with
shared arrays and hashes that basically make them useless for production
code!

No you haven't - you have just re-discovered that infinite loops with
no error checking are a bad idea.

And if you can't use shared arrays and shared hashes, you basically
can't use threads at all... ;-(((

Liz
--
Nick Ing-Simmons
http​://www.ni-s.u-net.com/

p5pRT commented Aug 1, 2002

From nick.ing-simmons@elixent.com

Elizabeth Mattijsen <liz@​dijkmat.nl> writes​:

At 11​:19 AM 8/1/02 +0100, Dave Mitchell wrote​:

On Thu, Aug 01, 2002 at 11​:07​:39AM +0200, Elizabeth Mattijsen wrote​:

The point is that I'm trying to show that there is a _HUGE_ leak with
shared arrays and hashes that basically make them useless for production
code! And if you can't use shared arrays and shared hashes, you basically
can't use threads at all... ;-(((
But as Nick has pointed out, your code doesn't demonstrate this.
For one possibility of how the threads get scheduled, your program is
functionally equivalent to
push @​array, 1 while 1;

Except that the elements are removed again within the thread as fast as the
main thread is putting them on.

No. The pulling off thread is going to be slower 'cos it does the
checks - and eventually that thread will "die".

And roughly at the same speed, because the
program never dies

There is no "die" in the main thread, nor a join to propagate the die
of the sub-thread.

Anyway, I was pointed to this particular problem by a user on the
perl-ithreads list. I had seen something before, but attributed it to my
own poor thread programming skills. His question prompted me to really
test this...

And I am afraid to say you have only exposed your threads programming skills.
Compare what happens if you swap the two tasks round -

use threads;
my @queue : shared;
my $thread = threads->new(
    sub {
        push( @queue,1 ) while 1;
    }
);

while (1) {
    die if @queue > 10000;
    warn "Removing from \@queue\n";
    shift( @queue );
}

--
Nick Ing-Simmons
http​://www.ni-s.u-net.com/

p5pRT commented Aug 1, 2002

From @lizmat

At 03​:15 PM 8/1/02 +0100, Nick Ing-Simmons wrote​:

Except that the elements are removed again within the thread as fast as the
main thread is putting them on.
No. The pulling off thread is going to be slower 'cos it does the
checks - and eventually that thread will "die".

And take any other thread that's running with it. Because that's the way
threads work in Perl. Once one thread dies, they _all_ die. Which can be
a problem, because there is no nice way for a thread to shut itself down
other than returning from the initial subroutine...

And roughly at the same speed, because the
program never dies
There is no "die" in the main thread, nor a join to propagate the die
of the sub-thread.

Eh... in Perl, a C<die> in a thread is propagated to all other threads
more or less instantaneously. No need to join() or whatever. Which can be
a problem, but that's just the way it is currently.

Anyway, I was pointed to this particular problem by a user on the
perl-ithreads list. I had seen something before, but attributed it to my
own poor thread programming skills. His question prompted me to really
test this...
And I am afraid to say you have only exposed your threads programming skills.

Well, I'm eager to learn and not afraid to get the lid on my nose... How
else is anybody going to learn... ;-)

Compare what happens if you swap the two tasks round -
use threads;
my @​queue : shared;
my $thread = threads->new(
sub {
push( @​queue,1 ) while 1;
}
);

while (1) {
die if @​queue > 10000;
warn "Removing from \@​queue\n";
shift( @​queue );
}

Well, on my development box, this eats memory just as fast as my original
example and never dies either.

A die in one thread is the same as a die in any other thread. So in Perl
threads programming, it doesn't matter which of the two threads does the check.

I guess Perl threads programming in that respect is more like real
life. It doesn't matter which side pushes the big red button... ;-)

Liz

p5pRT commented Aug 2, 2002

From cmeyer@helvella.org

On Wed, Jul 31, 2002 at 12​:00​:23PM -0000, Elizabeth Mattijsen wrote​:

For simplicity, I did not use any lock()s, as shift() and push()
are confirmed to be self-locking and atomic.

perldoc perlthrtut says​:

  Even "$a += 5" or "$a++" are not guaranteed to be atomic.

shift() and push() are a bit more complex than increment, so how am I to
know that they are atomic?

perlfunc, threads​::shared, perlguts, perlapi and the like don't seem to
have any information on what's atomic and what's not.

Oh, now I see that threads​::shared uses a tied interface to do its stuff
(for arrays and hashes, but not scalars?). Hmm seems that scalars are
shared with some magic, and the tied arrays or hashes share the
underlying scalars?

Perhaps the threads​::shared pod should talk about what's atomic (and
maybe it should mention that it ties arrays and hashes)?

internals newbie, thinking aloud
-Colin.

p5pRT commented Aug 2, 2002

From @lizmat

At 08​:45 PM 8/1/02 -0700, Colin Meyer wrote​:

On Wed, Jul 31, 2002 at 12​:00​:23PM -0000, Elizabeth Mattijsen wrote​:

For simplicity, I did not use any lock()s, as shift() and push()
are confirmed to be self-locking and atomic.
perldoc perlthrtut says​:
Even "$a += 5" or "$a++" are not guaranteed to be atomic.
shift() and push() are a bit more complex than increment, so how am I to
know that they are atomic?
perlfunc, threads​::shared, perlguts, perlapi and the like don't seem to
have any information on what's atomic and what's not.

Just before the release of 5.8.0 there was a discussion as to whether the
self-lockingness of certain operations should be documented. Since there
was no clear consensus that it should have been, it was not documented.

However, since then, Arthur and Benjamin have confirmed that​:

  - shift(), unshift(), push() and pop() are self-locking,
  - adding keys to and removing keys from a hash are self-locking

I would rather not call these atomic (they are, but that term doesn't
describe what is actually happening). It's just that these operations are
surrounded by locks that shouldn't interfere with user locks. But since
there hasn't been a lot of real-world usage out there, they may prove to
interfere with user locks after all.
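
(Editorial sketch, not from the thread: code that cannot afford to rely on the implicit locking can always take an explicit user lock() around a compound operation, e.g. a bounded push that tests and pushes under one lock.)

    use strict;
    use warnings;
    use threads;
    use threads::shared;

    my @queue : shared;

    sub push_bounded {
        my ($item, $max) = @_;
        lock(@queue);                # held until this sub returns
        return 0 if @queue >= $max;  # the test and the push happen under one lock,
        push @queue, $item;          # so another thread cannot slip in between them
        return 1;
    }

    # e.g.:  push_bounded($n, 10_000) or threads->yield();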

Oh, now I see that threads​::shared uses a tied interface to do its stuff
(for arrays and hashes, but not scalars?). Hmm seems that scalars are
shared with some magic, and the tied arrays or hashes share the
underlying scalars?

Perhaps the threads​::shared pod should talk about what's atomic (and
maybe it should mention that it ties arrays and hashes)?

Does that matter? Maybe in a pod describing internals?

Liz

p5pRT commented Aug 2, 2002

From cmeyer@helvella.org

On Fri, Aug 02, 2002 at 08​:30​:25AM +0200, Elizabeth Mattijsen wrote​:

However, since then, Arthur and Benjamin have confirmed that​:

- shift(), unshift(), push() and pop() are self-locking,
- adding keys to and removing keys from a hash are self-locking

I would rather not call these atomic (they are), but they don't describe
what is actually happening. It's just that these operations are surrounded
by locks that shouldn't interfere with user locks. But since there hasn't
been a lot of real world usage out there, may prove to interfere with user
locks after all.

Fair enough​: the programmer using shared variables should not assume
that push, etc, are atomic.

Oh, now I see that threads​::shared uses a tied interface to do its stuff
(for arrays and hashes, but not scalars?). Hmm seems that scalars are
shared with some magic, and the tied arrays or hashes share the
underlying scalars?

Perhaps the threads​::shared pod should talk about what's atomic (and
maybe it should mention that it ties arrays and hashes)?

Does that matter? Maybe in a pod describing internals?

Yes, it does matter that sharing a hash or an array ties it. Try tying
an array, then sharing it. You lose the original tied functionality.
This means, for example, that you can't share a hash that is tied to a
berkeley database.

-Colin.

p5pRT commented Aug 2, 2002

From @lizmat

At 11​:55 PM 8/1/02 -0700, Colin Meyer wrote​:

Perhaps the threads​::shared pod should talk about what's atomic (and
maybe it should mention that it ties arrays and hashes)?
Does that matter? Maybe in a pod describing internals?
Yes, it does matter that sharing a hash or an array ties it. Try tying
an array, then sharing it. You lose the original tied functionality.
This means, for example, that you can't share a hash that is tied to a
berkeley database.

Ah, I think that is actually classified as a bug, to be
fixed in future versions... So you _should_ be able to have shared tied
thingies in the future... just not now...

Liz

p5pRT commented Aug 2, 2002

From nick.ing-simmons@elixent.com

Elizabeth Mattijsen <liz@​dijkmat.nl> writes​:

At 03​:15 PM 8/1/02 +0100, Nick Ing-Simmons wrote​:

Except that the elements are removed again within the thread as fast as the
main thread is putting them on.
No. The pulling off thread is going to be slower 'cos it does the
checks - and eventually that thread will "die".

And take any other thread that's running with it. Because that's the way
threads work in Perl. Once one thread dies, they _all_ die.

That isn't the way I coded it, and as far as I can
tell by re-reading the code Arthur has not changed fundamentals.

"die" only kills the thread that calls it.

This is because
threads->new( )

has an implied XS level eval {} round the call. which catches the die.
It more less has to - die un-winds the stack so we have to have something
at top of the thread's stack to unwind to.

Once upon a time the $@​ value was propagated to whoever called ->join
but that seems to have gone - now you should just get a warning
"thread failed to start" (which is misleading).

Which can be
a problem, because there is no nice way for a thread to shut itself down
other than returning from the initial subroutine...

Which is why I was not really worried that the "obvious" way I coded
it did not do that.

And roughly at the same speed, because the
program never dies
There is no "die" in the main thread, nor a join to propagate the die
of the sub-thread.

Eh... in Perl, a C<die> in a thread is propagated to all other threads
more or less instantaneusly.

Er, I wrote the code and I am reasonably sure it doesn't work like that.
For a start we have no mechanism to propagate anything to all other threads.

No need to join() or whatever. Which can be
a problem, but that's just the way it is currently.

A die in one thread is the same as a die in any other thread. So in Perl
threads programming, it doesn't matter which of the two threads does the check.

I think this is our fundamental difference of opinion. Perhaps Arthur
can explain how it really works.

Meanwhile I will build a fresh threaded perl and experiment some.

--
Nick Ing-Simmons
http​://www.ni-s.u-net.com/

p5pRT commented Aug 2, 2002

From nick.ing-simmons@elixent.com

Colin Meyer <cmeyer@​helvella.org> writes​:

Oh, now I see that threads​::shared uses a tied interface to do its stuff
(for arrays and hashes, but not scalars?). Hmm seems that scalars are
shared with some magic, and the tied arrays or hashes share the
underlying scalars?

Scalars are shared with some 'magic' as well - but not tie magic,
something slightly cheaper.

--
Nick Ing-Simmons
http​://www.ni-s.u-net.com/

p5pRT commented Aug 2, 2002

From @lizmat

At 10​:01 AM 8/2/02 +0100, Nick Ing-Simmons wrote​:

And take any other thread that's running with it. Because that's the way
threads work in Perl. Once one thread dies, they _all_ die.
That isn't the way I coded it, and as far as I can
tell by re-reading the code Arthur has not changed fundamentals.
"die" only kills the thread that calls it.

This is because
threads->new( )

has an implied XS level eval {} round the call. which catches the die.
It more less has to - die un-winds the stack so we have to have something
at top of the thread's stack to unwind to.

Hmmm... a simple example proves that you're right. And that I'm
wrong. And that I was confusing exit() with die(). So yes, my poor thread
programming skills were exposed... ;-)

Once upon a time the $@​ value was propagated to whoever called ->join
but that seems to have gone - now you should just get a warning
"thread failed to start" (which is misleading).
Which is why I was not really worried that the "obvious" way I coded
it did not do that.

Yes, you're right. Still, the die() would have put a message on the
screen, which it doesn't do in my example, so the die() was indeed never fired.

And roughly at the same speed, because the
program never dies
There is no "die" in the main thread, nor a join to propagate the die
of the sub-thread.
Eh... in Perl, a C<die> in a thread is propagated to all other threads
more or less instantaneusly.
Er, I wrote the code and I am reasonably sure it doesn't work like that.
For a start we have no mechanism to propagate anything to all other threads.

Yes, indeed you're completely right and I'm completely wrong. Again, the
confusion in my mind was between exit() and die(). And probably segfaults
that I used to see frequently from RC1 onwards, but which almost all have
disappeared now.

No need to join() or whatever. Which can be
a problem, but that's just the way it is currently.
A die in one thread is the same as a die in any other thread. So in Perl
threads programming, it doesn't matter which of the two threads does the
check.
I think this is our fundamental difference of opinion. Perhaps Arthur
can explain how it really works.

Not anymore...

Meanwhile I will build a fresh threaded perl and experiment some.

Now, that is a good idea... ;-) And thanks for putting up with me in
this respect...

Liz

p5pRT commented Aug 2, 2002

From arthur@contiller.se

On fredag, augusti 2, 2002, at 05​:45 , Colin Meyer wrote​:

From​: Colin Meyer <cmeyer@​helvella.org>
Date​: fre aug 02, 2002 05​:45​:43 Europe/Stockholm
To​: perl5-porters@​perl.org
Subject​: atomic operations [was​: Re​: [perl #15893] shared array eating
memory]

On Wed, Jul 31, 2002 at 12​:00​:23PM -0000, Elizabeth Mattijsen wrote​:

For simplicity, I did not use any lock()s, as shift() and push()
are confirmed to be self-locking and atomic.

perldoc perlthrtut says​:

   Even "$a \+= 5" or "$a\+\+" are not guaranteed to be atomic\.

Each statement is atomic, $a++ is not atomic since it is the same as
$a = $a + 1

shift() and push() are a bit more complex than increment, so how am I to
know that they are atomic?

Common sense, they only access the variable once.

Arthur

p5pRT commented Aug 2, 2002

From arthur@contiller.se

On fredag, augusti 2, 2002, at 08​:55 , Colin Meyer wrote​:

matter that sharing a hash or an array ties it. Try tying
an array, then sharing it. You lose the original tied functionality.
This means, for example, that you can't share a hash that is tied to a
berkeley database.

That would seem to be a very suicidal thing to do anyway :-)

Arthur

p5pRT commented Aug 2, 2002

From @rgarcia

Arthur Bergman wrote​:

Each statement is atomic, $a++ is not atomic since it is the same as $a
= $a + 1

There's a separate opcode for $a++ (op_preinc), so it's not the same
at the perl level. So I would think that it's atomic (currently).

shift() and push() are a bit more complex than increment, so how am I to
know that they are atomic?

Common sense, they only access the variable once.

p5pRT commented Aug 2, 2002

From arthur@contiller.se

On fredag, augusti 2, 2002, at 01​:06 , Rafael Garcia-Suarez wrote​:

There's a separate opcode for $a++ (op_preinc), so it's not the same
at the perl level. So I would think that it's atomic (currently).

shhhh ;), it still accesses the variable twice

Arthur

p5pRT commented Aug 2, 2002

From @lizmat

At 01​:06 PM 8/2/02 +0200, Rafael Garcia-Suarez wrote​:

Arthur Bergman wrote​:

Each statement is atomic, $a++ is not atomic since it is the same as $a =
$a + 1
There's a separate opcode for $a++ (op_preinc), so it's not the same
at the perl level. So I would think that it's atomic (currently).

There was a thread about this about 6 weeks ago. Basically try having 10
threads increment a shared scalar 10000 times. The final result in the
shared scalar is on most machines _not_ 10 * 10000.

So, ++ is not atomic...
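
(A minimal reconstruction of that experiment - my sketch, not code that was posted in the thread.)

    use strict;
    use warnings;
    use threads;
    use threads::shared;

    my $counter : shared;
    $counter = 0;

    my @workers = map {
        threads->create(sub { $counter++ for 1 .. 10_000 })
    } 1 .. 10;
    $_->join for @workers;

    # On most machines this prints less than 100000: each ++ is a
    # read-modify-write on the shared value, and updates from different
    # threads can overwrite each other.  Wrapping the increment in
    # lock($counter) makes the count come out exactly.
    print "counter = $counter (expected 100000)\n";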

Liz

p5pRT commented Aug 2, 2002

From @Tux

On Fri 02 Aug 2002 13​:06, Rafael Garcia-Suarez <raphel.garcia-suarez@​hexaflux.com> wrote​:

Arthur Bergman wrote​:

Each statement is atomic, $a++ is not atomic since it is the same as $a
= $a + 1

There's a separate opcode for $a++ (op_preinc), so it's not the same

Hmm, I hope I'm not wrong, but IIRC that is for ++$a, and not for $a++

++$a is atomic, $a++ is not

at the perl level. So I would think that it's atomic (currently).

shift() and push() are a bit more complex than increment, so how am I to
know that they are atomic?

Common sense, they only access the variable once.

--
H.Merijn Brand Amsterdam Perl Mongers (http​://amsterdam.pm.org/)
using perl-5.6.1, 5.8.0 & 633 on HP-UX 10.20 & 11.00, AIX 4.2, AIX 4.3,
  WinNT 4, Win2K pro & WinCE 2.11. Smoking perl CORE​: smokers@​perl.org
http​://archives.develooper.com/daily-build@​perl.org/ perl-qa@​perl.org
send smoke reports to​: smokers-reports@​perl.org, QA​: http​://qa.perl.org

p5pRT commented Aug 2, 2002

From nick.ing-simmons@elixent.com

Arthur Bergman <arthur@​contiller.se> writes​:

On fredag, augusti 2, 2002, at 05​:45 , Colin Meyer wrote​:
Each statement is atomic, $a++ is not atomic since it is the same as
$a = $a + 1

shift() and push() are a bit more complex than increment, so how am I to
know that they are atomic?

Common sense, they only access the variable once.

I find that even after 20 years "common sense" and "concurrent programming"
don't mix.

Arthur
--
Nick Ing-Simmons
http​://www.ni-s.u-net.com/

p5pRT commented Aug 2, 2002

From arthur@contiller.se

On fredag, augusti 2, 2002, at 02​:11 , H.Merijn Brand wrote​:

Hmm, I hope I'm not wrong, but IIRC that is for ++$a, and not for $a++

++$a is atomic, $a++ is not

Nope, none of them are atomic, they could be made atomic by locking the
op.

Patches welcome if they don't slow down non-shared variables!

Arthur

p5pRT commented Aug 3, 2002

From cmeyer@helvella.org

On Fri, Aug 02, 2002 at 01​:05​:07PM +0200, Arthur Bergman wrote​:

On fredag, augusti 2, 2002, at 08​:55 , Colin Meyer wrote​:

matter that sharing a hash or an array ties it. Try tying
an array, then sharing it. You lose the original tied functionality.
This means, for example, that you can't share a hash that is tied to a
berkeley database.

That would seem to be a very suicidal thing to do anyway :-)

Hrm. I had hoped that with BerkeleyDB.pm and proper locking
it might be feasible.

However, what really concerns me is that one cannot share any tied hash
or tied array, even something as simple as a logging hash. It should at
least be documented that sharing ties the variable, no?

-Colin.

p5pRT commented Sep 18, 2002

From jim.cistaro@av.com

I am a relative novice at XS, so please pardon my ignorance. I
experience the same memory leak issues stated in bug 15893 while trying
to use Thread::Queue (hence threads::shared). After looking at the
code generated in shared.c and reading up on the documentation, I had a
few questions.

Is there a reason that POP and unshift use
ST(0) = Nullsv;
instead of
ST(0) = sv_newmortal();

Also should they do a
SvREFCNT_dec(sv);
This would appear to reduce the refcount to 1 if it is a normal scalar.
I am guessing that 1 is associated with the mortal ST(0). Then freeing
occurs when ST(0) dies.

I have only done minimal testing of a simple queue, but this seems to
stop the leak. I believe the only growth I see is due to the queue
backing up.

As I said, I am somewhat of a novice at XS and just want to be able to
use threading with queues. Hope this helps. If not, sorry for taking
up anyone's time.

Jim

p5pRT commented Jan 4, 2003

arthur@contiller.se - Status changed from 'new' to 'open'

p5pRT commented Apr 13, 2003

From arthur@contiller.se

Bug fixed.

All shared variables were created with a reference count of 2.

The patch fixes this; it is in the repository as #19200.

Arthur

p5pRT commented Apr 13, 2003

From arthur@contiller.se

Inline Patch
--- shared.xs-19199	Sun Apr 13 21:54:30 2003
+++ shared.xs	Sun Apr 13 21:54:32 2003
@@ -275,8 +275,10 @@
     /* If neither of those then create a new one */
     if (!data) {
 	    SHARED_CONTEXT;
-	    if (!ssv)
+	    if (!ssv) {
 		ssv = newSV(0);
+		SvREFCNT(ssv) = 0;
+	    }
 	    data = PerlMemShared_malloc(sizeof(shared_sv));
 	    Zero(data,1,shared_sv);
 	    SHAREDSvPTR(data) = ssv;
@@ -503,7 +505,6 @@
     assert ( SHAREDSvPTR(shared) );
 
     ENTER_LOCK;
-
     if (SvTYPE(SHAREDSvPTR(shared)) == SVt_PVAV) {
 	assert ( mg->mg_ptr == 0 );
 	SHARED_CONTEXT;
@@ -782,6 +783,7 @@
 	    sharedsv_scalar_store(aTHX_ tmp, target);
 	    SHARED_CONTEXT;
 	    av_push((AV*) SHAREDSvPTR(shared), SHAREDSvPTR(target));
+	    SvREFCNT_inc(SHAREDSvPTR(target));
 	    SHARED_RELEASE;
 	    SvREFCNT_dec(tmp);
 	}
@@ -801,6 +803,7 @@
 	    sharedsv_scalar_store(aTHX_ tmp, target);
 	    SHARED_CONTEXT;
 	    av_store((AV*) SHAREDSvPTR(shared), i - 1, SHAREDSvPTR(target));
+	    SvREFCNT_inc(SHAREDSvPTR(target));
 	    CALLER_CONTEXT;
 	    SvREFCNT_dec(tmp);
 	}

p5pRT commented Apr 13, 2003

arthur@contiller.se - Status changed from 'open' to 'resolved'

p5pRT commented May 22, 2003

From guest@guest.guest.xxxxxxxx

[sky - Sun Apr 13 13​:03​:29 2003]​:

Bug fixed,

All shared variables were created with a reference number of 2.

Patch fixes this, in the repository as #19200.

Arthur

Is there someplace where I can get the compiled binaries for this fix
on Win2K?
-sureshr

p5pRT commented May 22, 2003

From @lizmat

At 05​:34 +0000 5/22/03, Guest (via RT) wrote​:

[sky - Sun Apr 13 13​:03​:29 2003]​:

Bug fixed,
Is there someplace where can I get the compiled binaries for this fix
on Win2K?
-sureshr

I'm not aware of any such place.

Liz

p5pRT commented May 27, 2003

From guest@guest.guest.xxxxxxxx

I did a local build using the patch in this thread for shared.xs.
It wasn't of much help. Perl died in a panic after a huge memory growth
(close to 190-200 MB), with the following message.

"panic: COND_INIT (1816)."

You can use the following test programs to reproduce the problem. I
have a 512 MB RAM, P4 machine & perl dies after about 10 minutes.

Details of my perl (part of perl -V o/p)​:
Summary of my perl5 (revision 5 version 8 subversion 0) configuration​:
  Platform​:
  osname=MSWin32, osvers=4.0, archname=MSWin32-x86-multi-thread
Characteristics of this binary (from libperl)​:
  Compile-time options​: MULTIPLICITY USE_ITHREADS USE_LARGE_FILES
PERL_IMPLICIT_CONTEXT PERL_IMPLICIT_SYS
  Locally applied patches​:
  ActivePerl Build 806
  Built under MSWin32
  Compiled at May 23 2003 13​:51​:54

#-------test program 1----------#
#! /usr/local/bin/perl -w

use strict;

use threads;
use threads::shared;

my @a:shared = (1,2,3);

sub f1 {
    my $i;
    $i=0;
    while (1) {
        lock (@a);
        push (@a, $i, $i+1, $i+2);
        $i+=3;
        $i %= 100000;
        threads->yield();
    }
}

sub f2 {
    my $aref = shift;
    my @wq = ();
    while (1) {
        lock(@a);
        if ($#a >= 0) {
            push (@wq, @a);

            undef (@a);
            @a = ();
        }

        if ($#wq >= 0) {
            print "wkr: ";
            while (<@wq>) {
                print "$_, ";
            }
            undef(@wq);
            @wq = ();
            print "\n";
        }
        threads->yield();
    }
}

my $thr = threads->new (\&f2);
f1();
$thr->join();

#-------test program 2----------#

#! /usr/local/bin/perl -w

use strict;

use threads;
use threads::shared;

my @a:shared = (1,2,3);

sub f1 {
    my ($i, $j) = (0, 0);
    while (1) {
        lock (@a);
        for ($j=0; $j<10; $j++) {
            unshift (@a, ($i+$j));
        }
        $i+=10;
        $i %= 100000;
        threads->yield();
    }
}

sub f2 {
    my $aref = shift;
    my @wq = ();
    while (1) {
        lock(@a);
        if ($#a >= 0) {
            while (@a) {
                my $data = delete $a[-1];
                push (@wq, $data);
            }

            undef (@a);
            @a = ();
        }

        if ($#wq >= 0) {
            print "wkr: ";
            while (<@wq>) {
                print "$_, ";
            }
            undef(@wq);
            @wq = ();
            print "\n";
        }
        threads->yield();
    }
}

my $thr = threads->new (\&f2);
f1();
$thr->join();

Any insight is really appreciated!

Thanks,
Suresh R

[elizabeth - Thu May 22 09​:52​:07 2003]​:

At 05​:34 +0000 5/22/03, Guest (via RT) wrote​:

[sky - Sun Apr 13 13​:03​:29 2003]​:

Bug fixed,
Is there someplace where can I get the compiled binaries for this fix
on Win2K?
-sureshr

I'm not aware of any such place.

Liz

p5pRT commented Jun 9, 2003

From arthur@contiller.se

I think that the test cases are flawed: for example, the yield is pointless
since you still keep a lock on the array, and for me the printout routine
only runs once.

However, I did run them and saw no memory leak, though as far as I can tell
the array just keeps on growing.

Please try with a snapshot of maintperl or bleadperl and see if you can
reproduce the error; this specific patch only fixes the pop/shift problems.
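
(To make the first point concrete - an editorial sketch, not code from this mail: lock() is only released when the enclosing block is left, so yielding inside that block still holds the lock. Scoping the lock to an inner block releases it before the yield.)

    use strict;
    use warnings;
    use threads;
    use threads::shared;

    my @a : shared;

    my $consumer = threads->create(sub {
        while (1) {
            lock(@a);
            shift @a if @a;
        }
    });

    my $i = 0;
    while (1) {
        {
            lock(@a);        # lock scoped to this bare block...
            push @a, $i++;
        }                    # ...released here, before the yield
        threads->yield();    # so the consumer can actually acquire lock(@a)
    }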

[guest - Mon May 26 22​:51​:53 2003]​:

I did a local build using the patch in this thread for the shared.xs.
It was'nt of much help. Perl died in panic after a huge memory growth
(close to 190/200mb), with the following message.

"panic​: COND_INIT (1816)."

You can use the following test porgrams to reproduce the problem. I
have a 512mb ram, p4 m/c & perl dies after about 10min.

Details of my perl (part of perl -V o/p)​:
Summary of my perl5 (revision 5 version 8 subversion 0) configuration​:
Platform​:
osname=MSWin32, osvers=4.0, archname=MSWin32-x86-multi-thread
Characteristics of this binary (from libperl)​:
Compile-time options​: MULTIPLICITY USE_ITHREADS USE_LARGE_FILES
PERL_IMPLICIT_CONTEXT PERL_IMPLICIT_SYS
Locally applied patches​:
ActivePerl Build 806
Built under MSWin32
Compiled at May 23 2003 13​:51​:54

#-------test program 1----------#
#! /usr/local/bin/perl -w

use strict;

use threads;
use threads::shared;

my @a:shared = (1,2,3);

sub f1 {
    my $i;
    $i=0;
    while (1) {
        lock (@a);
        push (@a, $i, $i+1, $i+2);
        $i+=3;
        $i %= 100000;
        threads->yield();
    }
}

sub f2 {
    my $aref = shift;
    my @wq = ();
    while (1) {
        lock(@a);
        if ($#a >= 0) {
            push (@wq, @a);

            undef (@a);
            @a = ();
        }

        if ($#wq >= 0) {
            print "wkr: ";
            while (<@wq>) {
                print "$_, ";
            }
            undef(@wq);
            @wq = ();
            print "\n";
        }
        threads->yield();
    }
}

my $thr = threads->new (\&f2);
f1();
$thr->join();

#-------test program 2----------#

#! /usr/local/bin/perl -w

use strict;

use threads;
use threads::shared;

my @a:shared = (1,2,3);

sub f1 {
    my ($i, $j) = (0, 0);
    while (1) {
        lock (@a);
        for ($j=0; $j<10; $j++) {
            unshift (@a, ($i+$j));
        }
        $i+=10;
        $i %= 100000;
        threads->yield();
    }
}

sub f2 {
    my $aref = shift;
    my @wq = ();
    while (1) {
        lock(@a);
        if ($#a >= 0) {
            while (@a) {
                my $data = delete $a[-1];
                push (@wq, $data);
            }

            undef (@a);
            @a = ();
        }

        if ($#wq >= 0) {
            print "wkr: ";
            while (<@wq>) {
                print "$_, ";
            }
            undef(@wq);
            @wq = ();
            print "\n";
        }
        threads->yield();
    }
}

my $thr = threads->new (\&f2);
f1();
$thr->join();

Any insight is really appreciated!

Thanks,
Suresh R

[elizabeth - Thu May 22 09​:52​:07 2003]​:

At 05​:34 +0000 5/22/03, Guest (via RT) wrote​:

[sky - Sun Apr 13 13​:03​:29 2003]​:

Bug fixed,
Is there someplace where can I get the compiled binaries for this fix
on Win2K?
-sureshr

I'm not aware of any such place.

Liz

p5pRT commented Jun 9, 2003

arthur@contiller.se - Status changed from 'open' to 'resolved'

p5pRT commented Jun 10, 2003

From guest@guest.guest.xxxxxxxx

[sky - Mon Jun 9 06​:57​:37 2003]​:

I think that the test cases are flawed, for example, the yield is
pointless since you still
keep a lock on the array, and for me the printout routine only runs
once.

You said it. There is a case where the worker thread will
never get a time slot to run. Consider the following sequence if there
were no yield stmt in my test program...
1) thread f1​: gets scheduled
2) thread f1​: locks the variable @​a & does processing
3) thread f2​: gets scheduled
4) thread f2: waits for lock & times out
5) thread f1​: gets scheduled
6) thread f1​: reaches the point just before the loop ends or
  just before un-locking @​a & gets pre-empted
7) thread f2​: gets scheduled & still not acquired lock. so, again
  loses time-slot to thread-f1
8) thread f1​: get scheduled & goes on...

Now, if the above sequence repeats, it can happen that you are never
able to do the print (i.e. the worker thread never gets scheduled).
Thus, 'yield' is a MUST & you should test with the same code.

However, I did run them and saw no memory leak, however as far as I
can tell the array
just keeps on growing.

The explanation for your observed growth could be that you used the code
with no yield. But if you had tried the code version having the 'yield'
stmt, even in that case you should have seen the memory growth, as that
is the bug which I am trying to prove.

If you look at the code, the shared array is being emptied by the
worker thread (f2) and I would expect the memory to be freed for the
same. I would expect the same behaviour even if you try a 'shift' over
the array in f2, instead of array copy & doing undef of the array
contents.

please try with a snapshot of maintperl or bleadperl and see if you
can reproduce the
error, this specific patch only fixes the pop/shift problems.

Where can I get these tools? I don't find them in the default Perl
installation.

Thanks,
Suresh R

p5pRT commented Jun 10, 2003

From g@netcraft.com.au

On Mon, Jun 09, 2003 at 01​:57​:38PM -0000, Arthur Bergman wrote​:

I think that the test cases are flawed, for example, the yield is pointless since you still
keep a lock on the array, and for me the printout routine only runs once.

However, I did run them and saw no memory leak, however as far as I can tell the array
just keeps on growing.

please try with a snapshot of maintperl or bleadperl and see if you can reproduce the
error, this specific patch only fixes the pop/shift problems.

Here is a much simpler test case​:


#!/usr/bin/perl -w

use strict;

use threads;
use Thread::Queue;

my $q = Thread::Queue->new(1);

$q->enqueue(1) while $q->dequeue();


The patch didn't seem to fix the problem for me, but maybe I did
something stupid and ended up running my old Perl without the patch.

--
Geoffrey D. Bennett, RHCE, RHCX geoffrey@​netcraft.com.au
Senior Systems Engineer http​://www.netcraft.com.au/geoffrey/
NetCraft Australia Pty Ltd http​://www.netcraft.com.au/linux/

p5pRT commented Jun 12, 2003

From arthur@contiller.se

[gdb - Tue Jun 10 04​:29​:18 2003]​:

On Mon, Jun 09, 2003 at 01​:57​:38PM -0000, Arthur Bergman wrote​:

I think that the test cases are flawed, for example, the yield is
pointless since you still
keep a lock on the array, and for me the printout routine only runs
once.

However, I did run them and saw no memory leak, however as far as I
can tell the array
just keeps on growing.

please try with a snapshot of maintperl or bleadperl and see if you
can reproduce the
error, this specific patch only fixes the pop/shift problems.

Here is a much simpler test case​:

-----
#!/usr/bin/perl -w

use strict;

use threads;
use Thread​::Queue;

my $q = Thread​::Queue->new(1);

$q->enqueue(1) while $q->dequeue();
-----

The patch didn't seem to fix the problem for me, but maybe I did
something stupid and ended up running my old Perl without the patch.

What platform are you running on? We can reproduce this on Tru64 but not on any other.

Arthur

p5pRT commented Jun 12, 2003

From g@netcraft.com.au

On Thu, Jun 12, 2003 at 11​:00​:59AM -0000, Arthur Bergman wrote​:

[gdb - Tue Jun 10 04​:29​:18 2003]​:
[...]

Here is a much simpler test case​:

-----
#!/usr/bin/perl -w

use strict;

use threads;
use Thread​::Queue;

my $q = Thread​::Queue->new(1);

$q->enqueue(1) while $q->dequeue();
-----

The patch didn't seem to fix the problem for me, but maybe I did
something stupid and ended up running my old Perl without the patch.

What platform are you running on, we can reproduce this on tru64 but
not any other.

Red Hat Linux 9 Intel​:

$ uname -a
Linux brooke.netcraft.com.au 2.4.20-8 #1 Thu Mar 13 17​:54​:28 EST 2003 i686 i686 i386 GNU/Linux

--
Geoffrey D. Bennett, RHCE, RHCX geoffrey@​netcraft.com.au
Senior Systems Engineer http​://www.netcraft.com.au/geoffrey/
NetCraft Australia Pty Ltd http​://www.netcraft.com.au/linux/

p5pRT commented Jul 7, 2003

From mterretta@advection.net

Have a 2,000-line program on ActivePerl Build 805 for Win32 that processes
millions of lines of logs (or would if it wouldn't die). Consumes massive
amounts of memory until it finally dies with a "panic: COND_INIT (1816)."

Started whittling away the code until I had only an enqueue and a dequeue
left, without even a variable assignment on the dequeue. If I commented out
either one, no leak, for millions of executions. With both, I hit 100 megs in
just a few hundred thousand loops. Started Googling.

Found the following even simpler program, given on Google to illustrate the
leak, which consumes RAM wildly, about 10 MB per second on my system.

  use strict;
  use threads;
  use Thread::Queue;

  my $q = Thread::Queue->new(1);

  $q->enqueue(1) while $q->dequeue();

It was noted this only reproduces on Tru64. Well, it reproduces on Win32 as
well, at least on a Windows 2000 box with all service packs.

My perl -V here​:

Summary of my perl5 (revision 5 version 8 subversion 0) configuration​:
  Platform​:
  osname=MSWin32, osvers=4.0, archname=MSWin32-x86-multi-thread
  uname=''
  config_args='undef'
  hint=recommended, useposix=true, d_sigaction=undef
  usethreads=undef use5005threads=undef useithreads=define
usemultiplicity=define
  useperlio=define d_sfio=undef uselargefiles=define usesocks=undef
  use64bitint=undef use64bitall=undef uselongdouble=undef
  usemymalloc=n, bincompat5005=undef
  Compiler​:
  cc='cl', ccflags ='-nologo -Gf -W3 -MD -Zi -DNDEBUG -O1 -DWIN32
-D_CONSOLE -DNO_STRICT -DHAVE_DES_FCRYPT -DPERL_IMPLICIT_CONTEX
T -DPERL_IMPLICIT_SYS -DUSE_PERLIO -DPERL_MSVCRT_READFIX',
  optimize='-MD -Zi -DNDEBUG -O1',
  cppflags='-DWIN32'
  ccversion='', gccversion='', gccosandvers=''
  intsize=4, longsize=4, ptrsize=4, doublesize=8, byteorder=1234
  d_longlong=undef, longlongsize=8, d_longdbl=define, longdblsize=10
  ivtype='long', ivsize=4, nvtype='double', nvsize=8, Off_t='__int64',
lseeksize=8
  alignbytes=8, prototype=define
  Linker and Libraries​:
  ld='link', ldflags ='-nologo -nodefaultlib -debug -opt​:ref,icf
-libpath​:"C​:\Perl\lib\CORE" -machine​:x86'
  libpth="C​:\Perl\lib\CORE"
  libs= oldnames.lib kernel32.lib user32.lib gdi32.lib winspool.lib
comdlg32.lib advapi32.lib shell32.lib ole32.lib oleaut32.lib
  netapi32.lib uuid.lib wsock32.lib mpr.lib winmm.lib version.lib
odbc32.lib odbccp32.lib msvcrt.lib
  perllibs= oldnames.lib kernel32.lib user32.lib gdi32.lib winspool.lib
comdlg32.lib advapi32.lib shell32.lib ole32.lib oleaut32
.lib netapi32.lib uuid.lib wsock32.lib mpr.lib winmm.lib version.lib
odbc32.lib odbccp32.lib msvcrt.lib
  libc=msvcrt.lib, so=dll, useshrplib=yes, libperl=perl58.lib
  gnulibc_version='undef'
  Dynamic Linking​:
  dlsrc=dl_win32.xs, dlext=dll, d_dlsymun=undef, ccdlflags=' '
  cccdlflags=' ', lddlflags='-dll -nologo -nodefaultlib -debug
-opt​:ref,icf -libpath​:"C​:\Perl\lib\CORE" -machine​:x86'

Characteristics of this binary (from libperl)​:
  Compile-time options​: MULTIPLICITY USE_ITHREADS USE_LARGE_FILES
PERL_IMPLICIT_CONTEXT PERL_IMPLICIT_SYS
  Locally applied patches​:
  ActivePerl Build 805
  Built under MSWin32
  Compiled at Feb 4 2003 18​:08​:02
  @​INC​:
  C​:/Perl/lib
  C​:/Perl/site/lib
  .

p5pRT closed this as completed Sep 29, 2010
p5pRT commented Sep 29, 2010

@cpansprout - Status changed from 'open' to 'resolved'
