We've launched a free online demo of LibreCrawl at crawl.librecrawl.com. You can test all features with three complimentary crawls before deciding whether to self-host. No credit card required, no account creation, no tracking cookies—just a straightforward way to experience the tool before committing to installation.
Why Build a Demo for an Open Source Tool?
LibreCrawl is fundamentally an open-source project designed for self-hosting. The source code is freely available on GitHub under the MIT license, and we encourage everyone to download it, run it on their own infrastructure, and modify it to suit their needs. The entire philosophy behind LibreCrawl centers on giving SEO professionals full control over their tools and data.
But there's a practical problem with the "download and self-host" model: evaluation friction. Setting up a Python environment, installing dependencies like Playwright and ChromeDriver, configuring database connections, and ensuring all the pieces work together takes time. For someone simply trying to decide whether LibreCrawl meets their needs, that's a significant barrier. They might spend an hour getting the tool running only to discover it doesn't match their workflow or crawling requirements.
The live demo solves this evaluation problem. You can visit the URL, paste in a website, and see LibreCrawl crawl it within minutes. You get to experience the interface, test the JavaScript rendering capabilities, review the issue detection logic, and export data in your preferred format—all without installing anything. After those three test crawls, you'll know whether LibreCrawl is worth self-hosting for your use case.
The Guest Experience: Three Crawls, No Strings Attached
When you visit the demo, you're automatically logged in as a guest user. There's no registration form, no email verification step, no password to remember. You simply click "Continue as Guest" and start crawling immediately. This guest tier gives you three crawls within a 24-hour period, tracked by IP address to prevent abuse while maintaining privacy.
Those three crawls come with full feature access. You're not getting a stripped-down version of LibreCrawl—you're using the complete tool exactly as it would run on your own server. The JavaScript rendering works, the issue detection runs, the data exports function properly. The only limitation is the crawl count, which exists purely to prevent someone from spinning up thousands of crawls and overwhelming the demo server.
This approach differs significantly from most SaaS demos. Traditional SaaS companies often show you a carefully curated sandbox environment or a pre-recorded walkthrough video. You're not interacting with the real product; you're seeing a sanitized version designed to look impressive in demos. With LibreCrawl's demo, you're using the actual production code, running real crawls against real websites, and getting authentic results. If you encounter a bug or limitation in the demo, that's valuable information about what you'd experience when self-hosting.
Why IP-Based Tracking Instead of Sessions?
The demo tracks your three crawls by IP address rather than browser cookies or session tokens. This design decision reflects our privacy-first philosophy. Session-based tracking would require storing identifiers in your browser, creating a persistent connection between your activity and your device. IP-based tracking is ephemeral—once 24 hours pass, your crawl history expires from our database, and there's no long-term record of your usage.
IP tracking also prevents the most obvious abuse vector: repeatedly clearing cookies to bypass the three-crawl limit. Without IP tracking, someone could launch hundreds of crawls by opening incognito windows or clearing their browser data. By associating crawls with IP addresses (using Cloudflare's forwarded origin-IP header, so traffic that reaches us through the Cloudflare proxy is attributed to the real visitor rather than an edge node), we maintain a reasonable rate limit without invading privacy through account systems or tracking cookies.
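A minimal sketch of that kind of origin-IP resolution, assuming a Cloudflare-proxied deployment; the function name and signature are illustrative rather than taken from LibreCrawl's codebase:

```python
def resolve_client_ip(headers: dict, remote_addr: str) -> str:
    """Resolve the visitor's real IP when requests arrive through Cloudflare.

    Cloudflare forwards the original client address in the CF-Connecting-IP
    header; without checking it, every visitor would appear to come from a
    Cloudflare edge node and would share a single crawl quota.
    """
    # Prefer Cloudflare's header, fall back to the direct connection address.
    return headers.get("CF-Connecting-IP", remote_addr)
```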
The system doesn't log which websites you crawled, what issues were detected, or any details about the data you exported. We store only the IP address and timestamp, kept for exactly 24 hours before automatic deletion. This minimalist approach to anti-abuse tracking aligns with the broader LibreCrawl philosophy: collect the minimum data necessary, store it for the minimum time required, and give users control over their information.
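The whole anti-abuse mechanism can be pictured in a few lines. The sketch below is a simplified illustration of the approach described above, not the demo server's actual code; the in-memory store and names are assumptions:

```python
import time

CRAWL_WINDOW_SECONDS = 24 * 60 * 60  # guest crawls expire after 24 hours
GUEST_CRAWL_LIMIT = 3                # three crawls per IP per window

# In-memory store mapping an IP address to its recent crawl timestamps.
# Nothing else about the visit is recorded.
_crawl_log: dict[str, list[float]] = {}

def allow_guest_crawl(ip: str) -> bool:
    """Return True if this IP still has guest crawls left in the current window."""
    now = time.time()
    # Drop timestamps older than 24 hours so no long-term record accumulates.
    recent = [t for t in _crawl_log.get(ip, []) if now - t < CRAWL_WINDOW_SECONDS]
    if len(recent) >= GUEST_CRAWL_LIMIT:
        _crawl_log[ip] = recent
        return False
    recent.append(now)
    _crawl_log[ip] = recent
    return True
```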
From Guest to Self-Hosted: The Natural Progression
After exhausting your three guest crawls, the natural next step is registering for a free account. Registration unlocks unlimited crawls on the demo server and gives you access to persistent settings, project history, and saved configurations. But here's the important part: we're not trying to convert you into a paying customer, because LibreCrawl isn't a paid product. We're encouraging you to either register for continued demo access or, ideally, download the source code and self-host.
The registration tier exists primarily for users who want to keep using the demo server long-term, perhaps because they're running a longer evaluation or occasionally need a quick crawl without spinning up their own server. It's a convenience option, not a monetization strategy. The actual goal is getting you to self-host, where you'll have complete control, zero rate limits, and the ability to customize the tool to your exact requirements.
Self-hosting gives you benefits the demo can never provide. You can crawl massive websites with millions of URLs without worrying about shared server resources. You can customize the crawler's behavior, modify the issue detection rules, integrate it with your existing SEO workflow, and extend it with custom features. You own your data completely—it never leaves your infrastructure, it's never subject to someone else's retention policies, and you never have to worry about the service shutting down.
The demo server will always exist as a way to evaluate LibreCrawl and run occasional quick crawls, but it's intentionally not designed to be your primary crawling platform. If you're running hundreds of crawls per month, analyzing enterprise-scale websites, or integrating crawl data into automated reporting systems, self-hosting is the answer. The demo is your on-ramp to self-hosting, not a replacement for it.
The Tier System Explained
LibreCrawl's demo server uses a simple tier system to manage access and prevent abuse: Guest, User, Extra, and Admin. Understanding these tiers helps clarify what you get at each level and why they exist.
Guest is the starting point: no account required, three crawls per 24 hours, and access to the core crawling functionality but no ability to modify settings. Settings are locked for guests because unrestricted configuration changes could enable abuse (like setting concurrency to 50 and overwhelming the server). Guests get the default configuration, which is already quite capable: a crawl depth of three levels, redirect following, sitemap discovery, and robots.txt compliance.
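For concreteness, the locked guest defaults could look something like the following; the key names are hypothetical and not LibreCrawl's actual settings schema:

```python
# Illustrative defaults for the locked guest configuration; the key names
# are hypothetical, not LibreCrawl's actual settings schema.
GUEST_DEFAULT_CONFIG = {
    "max_depth": 3,              # crawl three levels deep from the start URL
    "follow_redirects": True,    # follow 3xx responses to their targets
    "discover_sitemaps": True,   # pull additional URLs from sitemap.xml
    "respect_robots_txt": True,  # skip paths disallowed by robots.txt
}
```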
User unlocks after registration and gives you unlimited crawls plus control over basic settings. You can configure crawl depth, URL limits, delay between requests, export formats, and issue exclusion patterns. These are the settings most people need for regular SEO audits, and they don't create server load concerns because they're primarily about how you want your data organized and presented.
Extra adds advanced features like custom filters, request header modification, JavaScript rendering controls, and custom CSS injection. These features require more server resources and are typically needed by power users running complex crawls of JavaScript-heavy sites or implementing sophisticated filtering logic. Extra tier is manually assigned after review, ensuring these powerful features go to users who understand their impact.
Admin is reserved for internal use and gives complete control over all settings, including server-level configurations like concurrency limits, memory allocation, and proxy settings. This tier exists for managing the demo server itself, not for regular users.
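One way to picture the hierarchy is as a cumulative permission map, where each tier inherits everything the tiers below it can change. The sketch below is purely illustrative; the setting names mirror the descriptions above but are assumptions, not LibreCrawl's actual configuration keys:

```python
# Hypothetical sketch of a cumulative tier-to-settings map; the setting
# names mirror the prose above but are illustrative, not LibreCrawl's keys.
TIER_SETTINGS = {
    "guest": set(),  # settings locked; default configuration only
    "user": {"max_depth", "url_limit", "request_delay",
             "export_format", "issue_exclusions"},
    "extra": {"custom_filters", "request_headers",
              "javascript_rendering", "custom_css_injection"},
    "admin": {"concurrency", "memory_allocation", "proxy_settings"},
}

def allowed_settings(tier: str) -> set[str]:
    """Each tier can change everything its own level and all lower tiers allow."""
    order = ["guest", "user", "extra", "admin"]
    allowed: set[str] = set()
    for level in order[: order.index(tier) + 1]:
        allowed |= TIER_SETTINGS[level]
    return allowed
```

Under this model, allowed_settings("user") returns only the basic crawl options, while allowed_settings("admin") returns every setting in the map.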
The tier system isn't about monetization—all tiers are free. It's about resource management and preventing abuse on a shared demo server. When you self-host LibreCrawl, there are no tiers. You have admin access to everything because it's your server, your resources, and your rules.
Open Source First, Demo Second
The existence of the demo doesn't change LibreCrawl's fundamental nature as an open-source, self-hosted tool. The MIT license remains in place. The source code stays freely available. The documentation still encourages self-hosting as the primary deployment method. The demo is supplementary infrastructure designed to lower the barrier to evaluation, not a pivot toward SaaS.
This approach follows in the footsteps of successful open-source projects like Ghost, Plausible Analytics, and Matomo. These are fundamentally self-hostable solutions that also offer managed hosting for users who prefer convenience over control; the hosted option funds their development without taking anything away from the self-hosted version. LibreCrawl takes this philosophy even further by making the demo tier completely free: there's no paid hosting option, just the demo for evaluation and self-hosting for production use.
If LibreCrawl becomes popular enough that hosting costs for the demo server become unsustainable, we might explore optional paid hosting tiers for users who want the convenience of managed infrastructure. But that would be an optional convenience service, not a requirement. The self-hosted version would remain the primary way to use LibreCrawl, fully featured and completely free forever.
Getting Started with the Demo
Ready to try LibreCrawl? Visit crawl.librecrawl.com and click "Continue as Guest" to start your first crawl immediately. You'll see the full interface, test the JavaScript rendering on a modern SPA, and export your results in CSV, JSON, or XML format. After three crawls, you can either register for unlimited demo access or download the source code from GitHub and self-host it on your own infrastructure.
The demo represents our commitment to making LibreCrawl accessible while maintaining our open-source, privacy-first principles. It's the easiest way to evaluate whether LibreCrawl meets your needs before investing time in self-hosting. And if you do decide to self-host, you'll know exactly what you're getting because you've already used the real tool.
LibreCrawl remains free, open source, and designed for self-hosting. The demo is just a convenient way to try it first.