
com.digitalpebble.storm.crawler.protocol.RobotRules Maven / Gradle / Ivy
A collection of resources for building low-latency, scalable web crawlers on Apache Storm.
/**
* Licensed to DigitalPebble Ltd under one or more
* contributor license agreements. See the NOTICE file distributed with
* this work for additional information regarding copyright ownership.
* DigitalPebble licenses this file to You under the Apache License, Version 2.0
* (the "License"); you may not use this file except in compliance with
* the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package com.digitalpebble.storm.crawler.protocol;
import java.net.URL;
/**
* This class holds the rules which were parsed from a robots.txt file, and can
* test paths against those rules.
*/
public interface RobotRules {
/**
 * Get the expiry time of these rules
 */
public long getExpireTime();
/**
* Get Crawl-Delay, in milliseconds. This returns -1 if not set.
*/
public long getCrawlDelay();
/**
 * Returns false if the robots.txt file prohibits us from
 * accessing the given URL, or true otherwise.
 */
public boolean isAllowed(URL url);
}
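
For illustration, the sketch below is a minimal, hypothetical implementation of this interface backed by a plain list of disallowed path prefixes. The class name PrefixRobotRules and its fields are assumptions made for this example; they are not part of storm-crawler.
package com.digitalpebble.storm.crawler.protocol;
import java.net.URL;
import java.util.List;
/**
 * Hypothetical RobotRules implementation that disallows any URL whose
 * path starts with one of a fixed set of prefixes. Illustrative only.
 */
public class PrefixRobotRules implements RobotRules {
    private final List disallowedPrefixes;
    private final long crawlDelay; // milliseconds, -1 if not set
    private final long expireTime;
    public PrefixRobotRules(List disallowedPrefixes,
            long crawlDelay, long expireTime) {
        this.disallowedPrefixes = disallowedPrefixes;
        this.crawlDelay = crawlDelay;
        this.expireTime = expireTime;
    }
    @Override
    public long getExpireTime() {
        return expireTime;
    }
    @Override
    public long getCrawlDelay() {
        return crawlDelay;
    }
    @Override
    public boolean isAllowed(URL url) {
        // Disallow the URL if its path starts with any disallowed prefix.
        String path = url.getPath();
        for (String prefix : disallowedPrefixes) {
            if (path.startsWith(prefix)) {
                return false;
            }
        }
        return true;
    }
}
A fetcher would typically call isAllowed(url) before requesting a page, honour getCrawlDelay() between successive requests to the same host, and refresh the rules once getExpireTime() has passed.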