
20.11. Creating a Robot


20.11.1. Problem





You want to create a script that
navigates the Web on its own (i.e., a robot), and you'd like to
respect the remote sites' wishes.

20.11.2. Solution


Instead of writing your robot with LWP::UserAgent, use LWP::RobotUA:

use LWP::RobotUA;
$ua = LWP::RobotUA->new('websnuffler/0.1', 'me@wherever.com');

20.11.3. Discussion


To avoid marauding robots and web crawlers hammering their servers,
sites are encouraged to create a file with access rules called
robots.txt. If you're fetching only one
document, this is no big deal, but if your script fetches many
documents from the same server, you could easily exhaust that site's
bandwidth.

When writing scripts to run around the Web, it's important to be a
good net citizen: don't request documents from the same server too
often, and heed the advisory access rules in their
robots.txt file.

The easiest way to handle this is to use the LWP::RobotUA module
instead of LWP::UserAgent to create agents. This agent automatically
knows to fetch data slowly when calling the same server repeatedly.
It also checks each site's robots.txt file to
see whether you're trying to grab a file that is off-limits. If you
are, you'll get a response like this:

403 (Forbidden) Forbidden by robots.txt
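
As a rough sketch of how the pieces fit together (the delay value, URL list, and loop here are illustrative assumptions, not part of the recipe), a polite robot might look like this:

use strict;
use LWP::RobotUA;
use HTTP::Request;

# identify the robot and give a contact address, as in the Solution
my $ua = LWP::RobotUA->new('websnuffler/0.1', 'me@wherever.com');
$ua->delay(1);      # wait at least one minute between requests to the same host

# hypothetical list of pages to visit
my @urls = (
    'http://www.webtechniques.com/',
    'http://www.webtechniques.com/stats/hits.html',
);

for my $url (@urls) {
    # request( ) pauses as needed and consults the site's robots.txt first
    my $response = $ua->request(HTTP::Request->new(GET => $url));
    if ($response->is_success) {
        printf "%s: fetched %d bytes\n", $url, length($response->content);
    } else {
        print "$url: ", $response->status_line, "\n";   # e.g. the 403 above
    }
}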

Here's an example robots.txt file, fetched using
the GET program that comes with the LWP module suite:

% GET http://www.webtechniques.com/robots.txt 
User-agent: *
Disallow: /stats
Disallow: /db
Disallow: /logs
Disallow: /store
Disallow: /forms
Disallow: /gifs
Disallow: /wais-src
Disallow: /scripts
Disallow: /config
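
LWP::RobotUA interprets these rules with the WWW::RobotRules module, which comes with the LWP module suite. If you just want to test URLs against a site's robots.txt yourself, a minimal sketch (reusing the example rules above) might be:

use strict;
use LWP::Simple qw(get);
use WWW::RobotRules;

# the rules object is keyed to your robot's User-Agent name
my $rules = WWW::RobotRules->new('websnuffler/0.1');

# fetch the file shown above and feed it to the parser
my $robots_url = 'http://www.webtechniques.com/robots.txt';
$rules->parse($robots_url, get($robots_url) || '');

# /stats is disallowed, so the first URL should be rejected
for my $page ('http://www.webtechniques.com/stats/hits.html',
              'http://www.webtechniques.com/index.html')
{
    print $rules->allowed($page) ? "allowed:    " : "disallowed: ", $page, "\n";
}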

A more interesting and extensive example is at http://www.cnn.com/robots.txt. This file is
so big, they even keep it under RCS control!

% GET http://www.cnn.com/robots.txt | head
# robots, scram
# $Id: robots.txt,v 1.2 1998/03/10 18:27:01 mreed Exp $
User-agent: *
Disallow: /
User-agent: Mozilla/3.01 (hotwired-test/0.1)
Disallow: /cgi-bin
Disallow: /TRANSCRIPTS
Disallow: /development

20.11.4. See Also


The documentation for the CPAN module LWP::RobotUA(3); http://info.webcrawler.com/mak/projects/robots/robots.html
for a description of how well-behaved robots act
