SEO Crawler – User Agent Information
The SEO Crawler is a tool developed by Lil Robots LLC to help users analyze websites for SEO purposes. If you see our crawler in your server logs, here's what you need to know:
User Agent String
Lil-Robots-Crawler/1.0 (+https://lilrobots.com/crawler-info)
Purpose
The crawler is designed to:
- Collect publicly available SEO data such as titles, meta tags, links, headers, and site structure.
- Help users identify technical and on-page SEO issues.
The crawler does not collect personal data, attempt to bypass security, or scrape content beyond standard SEO-related elements.
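To illustrate the kind of data described above, here is a minimal sketch of how standard SEO elements (title, meta tags, links, headers) can be pulled from a page using only Python's standard library. This is an illustration of the data categories listed, not the crawler's actual implementation:

```python
from html.parser import HTMLParser

class SEOExtractor(HTMLParser):
    """Collects page title, meta tags, link targets, and headers --
    the publicly available SEO elements described above."""

    def __init__(self):
        super().__init__()
        self.title = ""
        self.meta = {}        # meta name -> content
        self.links = []       # href values from <a> tags
        self.headers = []     # (tag, text) pairs for h1-h6
        self._in_title = False
        self._header_tag = None

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "title":
            self._in_title = True
        elif tag == "meta" and "name" in attrs:
            self.meta[attrs["name"]] = attrs.get("content", "")
        elif tag == "a" and "href" in attrs:
            self.links.append(attrs["href"])
        elif tag in ("h1", "h2", "h3", "h4", "h5", "h6"):
            self._header_tag = tag

    def handle_endtag(self, tag):
        if tag == "title":
            self._in_title = False
        if tag == self._header_tag:
            self._header_tag = None

    def handle_data(self, data):
        if self._in_title:
            self.title += data
        elif self._header_tag:
            self.headers.append((self._header_tag, data.strip()))

page = ('<html><head><title>Home</title>'
        '<meta name="description" content="demo site"></head>'
        '<body><h1>Welcome</h1><a href="/about">About</a></body></html>')
parser = SEOExtractor()
parser.feed(page)
```

Note that nothing here touches form data, cookies, or any other personal information; only markup that every browser already receives is inspected.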
Crawling Behavior
- Respects robots.txt rules by default.
- Uses polite crawl delays to avoid overwhelming servers.
- Identifies itself clearly with the user agent above.
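Because the crawler identifies itself with the user agent above, it is easy to spot in access logs. A minimal sketch (the log line below is a hypothetical example in the common combined log format; only the user agent string itself comes from this page):

```python
# The crawler's user agent token, from the string documented above.
CRAWLER_UA = "Lil-Robots-Crawler"

def is_seo_crawler(log_line: str) -> bool:
    """Return True if an access-log line was produced by this crawler."""
    return CRAWLER_UA in log_line

# Hypothetical combined-format log entry for illustration:
line = ('203.0.113.7 - - [01/Jan/2025:12:00:00 +0000] '
        '"GET / HTTP/1.1" 200 512 "-" '
        '"Lil-Robots-Crawler/1.0 (+https://lilrobots.com/crawler-info)"')
print(is_seo_crawler(line))
```

The same substring check works in a shell pipeline (e.g. grepping for `Lil-Robots-Crawler` in your access log) if you want a quick count of crawler visits.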
How to Block or Control Access
If you prefer not to allow the SEO Crawler to access your site, you can block it via your robots.txt file:
User-agent: Lil-Robots-Crawler
Disallow: /
This will prevent the crawler from visiting your site.
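If you would rather limit the crawler than block it entirely, robots.txt also supports finer-grained rules. A sketch (the `/private/` path is a placeholder for whatever section you want excluded, and honoring the non-standard Crawl-delay directive is an assumption this page does not explicitly confirm):

```text
User-agent: Lil-Robots-Crawler
Disallow: /private/
Crawl-delay: 10
```

This allows crawling of the rest of the site while keeping one section off-limits and requesting a slower visit rate.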
Contact
If you have questions or concerns about our crawler, please contact us:
Company: Lil Robots LLC
Email: support@lilrobots.com
Address: 4022 East Greenway Road, Suite 11 #432, Phoenix, AZ 85032