PHPCrawl is a framework for crawling/spidering websites, written in the programming language PHP, so in short it is a webcrawler library or crawler engine for PHP.
PHPCrawl “spiders” websites and passes information about all found documents (pages, links, files and so on) to users of the library for further processing.
It is highly configurable and provides several options to specify the behaviour of the crawler, such as URL- and content-type filters, cookie handling, robots.txt handling, limiting options, multiprocessing and much more.
PHPCrawl is completely free open-source software and is licensed under the GNU GENERAL PUBLIC LICENSE v2.
The following steps show how to use phpcrawl:
- Unpack the phpcrawl-package somewhere. That’s all you have to do for installation.
- Include the phpcrawl main class in your script or project. It is located in the “libs” directory of the package.
There are no other includes needed.
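A minimal include might look like the following line; the exact filename assumes the default file layout of the phpcrawl 0.8 package, so adjust the path to wherever you unpacked it:

include("libs/PHPCrawler.class.php"); // path relative to your script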
- Extend the PHPCrawler class and override the handleDocumentInfo() method with your own code to process the information about every document the crawler finds on its way.
class MyCrawler extends PHPCrawler
{
  function handleDocumentInfo(PHPCrawlerDocumentInfo $PageInfo)
  {
    // Your code comes here!
    // Do something with the $PageInfo-object that contains all
    // information about the currently received document.
    // As an example we just print out the URL of the document:
    echo $PageInfo->url . "\n";
  }
}
Note to users of phpcrawl 0.7x or before: The old, overridable method “handlePageData()”, which receives the document information as an array, is still present and gets called. PHPCrawl 0.8 is fully compatible with scripts written for earlier versions.
- Create an instance of that class in your script or project, define the behaviour of the crawler and start the crawling process.
$crawler = new MyCrawler();
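Defining the behaviour and starting the process could then look like the following sketch. It uses the setURL(), addContentTypeReceiveRule(), addURLFilterRule() and go() setup methods of the crawler (see the class reference below); the start URL and the filter patterns here are placeholders chosen for illustration, not part of the original text.

// Set the URL the crawler should start from (placeholder URL).
$crawler->setURL("www.example.com");

// Only receive the content of documents with content-type "text/html".
$crawler->addContentTypeReceiveRule("#text/html#");

// Ignore links to common image files (example filter pattern).
$crawler->addURLFilterRule("#\.(jpg|jpeg|gif|png)$# i");

// Start the crawling process.
$crawler->go();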
For a list of all available setup options/methods of the crawler, take a look at the PHPCrawler class reference.