Prefer line-based I/O to slurping.
Reading an entire file in a single <> operation is colloquially known as "slurping". But the memory-allocation considerations discussed in the previous section mean that slurping the contents of a file and then manipulating them monolithically, like so:
# Slurp the entire file (see the next guideline)...
my $text = do { local $/; <> };

# Wash its mouth out...
$text =~ s/$EXPLETIVE/[DELETED]/gxms;

# Print it all back out...
print $text;
is generally slower, less robust, and less scalable than processing the contents a line at a time:
while (my $line = <>) {
    $line =~ s/$EXPLETIVE/[DELETED]/gxms;
    print $line;
}
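If you want to check that trade-off on your own data, a benchmark along the following lines will do (a minimal sketch: the file name, the pattern, and the five-second timing budget are all assumptions, not part of the original examples):

use Benchmark qw( cmpthese );
use Carp;
use English qw( -no_match_vars );

my $file      = 'sample_input.txt';      # Hypothetical test file
my $EXPLETIVE = qr{ \b darn \b }xms;     # Hypothetical pattern

cmpthese( -5, {
    # Read the whole file, then make one pass over the single string...
    slurp => sub {
        open my $in, '<', $file
            or croak "Can't open '$file': $OS_ERROR";
        my $text = do { local $/; <$in> };
        $text =~ s{$EXPLETIVE}{[DELETED]}gxms;
    },

    # Read and transform one line at a time...
    by_line => sub {
        open my $in, '<', $file
            or croak "Can't open '$file': $OS_ERROR";
        while (my $line = <$in>) {
            $line =~ s{$EXPLETIVE}{[DELETED]}gxms;
        }
    },
} );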
Reading an entire file into memory makes sense only when the file is unstable in some way, or is being updated asynchronously and you need a "snapshot" of it, or when your planned text processing is likely to cross line boundaries:
sub get_C_code {
    my ($filename) = @_;

    # Get a handle on the code...
    open my $in, '<', $filename
        or croak "Can't open C file '$filename': $OS_ERROR";

    # Read it all in...
    my $code = do { local $/; <$in> };

    # Convert any C-style comment to a single space...
    use Regexp::Common;                  # See Chapter 12
    $code =~ s{ $RE{comment}{C} }{$SPACE}gxms;

    return $code;
}
Because C comments can span multiple lines, the entire file has to be loaded into memory at once so that the pattern can match comments that cross line boundaries.
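For instance, given a source file containing a comment that spans several lines, the whole-file approach handles it where a line-by-line substitution could not (a minimal usage sketch; the file name 'legacy.c' and its contents are hypothetical):

# Suppose 'legacy.c' contains:
#
#     /* This banner comment
#        continues onto a second line */
#     int main(void) { return 0; }
#
# Processed a line at a time, neither line matches $RE{comment}{C} on
# its own; slurped into a single string, the whole comment matches and
# is replaced by one space...
my $cleaned = get_C_code('legacy.c');
print $cleaned;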