If you’ve noticed that the sitemap generated during a scan sometimes shows more or fewer endpoints than previous scans, there are a few logical reasons this can happen.

This is a common and valid concern, and it’s rooted in a combination of how dynamic applications work, how scans are configured, and how data and access affect crawling.

Below, we outline the typical causes and what can be done to reduce variability.

Looking for more real-time endpoint discovery?

Our API Security Platform leverages OpenTelemetry-based tracing to map your application's surface area by analyzing real-time traffic, not just what’s discoverable through crawling. This approach gives deeper visibility, especially for APIs and dynamically generated endpoints that may not be reachable through traditional scans.

Learn more about our API Security Platform →
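
To make the idea concrete, here is a minimal sketch of how traffic-based endpoint discovery can work, using OpenTelemetry's standard Flask instrumentation. The app and route are hypothetical, and this illustrates the general technique rather than our platform's internals:

```python
# A minimal sketch of traffic-based endpoint discovery, assuming the
# opentelemetry-sdk, opentelemetry-instrumentation-flask, and flask packages.
# The app and route below are hypothetical.
from flask import Flask
from opentelemetry import trace
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import ConsoleSpanExporter, SimpleSpanProcessor
from opentelemetry.instrumentation.flask import FlaskInstrumentor

provider = TracerProvider()
provider.add_span_processor(SimpleSpanProcessor(ConsoleSpanExporter()))
trace.set_tracer_provider(provider)

app = Flask(__name__)
FlaskInstrumentor().instrument_app(app)  # every handled request becomes a span

@app.route("/api/orders/<int:order_id>")
def get_order(order_id):
    return {"id": order_id}

# Each live request to /api/orders/42 emits a span carrying the route
# template, so the endpoint is observed even if no crawlable link points
# to it.
```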

Common Reasons for Sitemap Differences

1. Changes in the Application

If the underlying application has changed between scans, the sitemap is expected to reflect that. This includes:

- New pages, features, or routes added since the last scan
- Pages or endpoints that were removed or renamed
- Navigation or link changes that alter what the crawler can reach

2. Differences in Application Data

Some parts of the application are data-dependent and only become visible under specific conditions. For example:

- A product detail page only appears if at least one product exists in the environment at scan time
- Search results, listings, and reports may expose different links depending on the data present

3. Missing or Incorrect Login Configuration

In dashboard-style apps, most of the application is available only after logging in. If login is not properly configured:

- The crawler only sees the public, pre-login parts of the application
- Authenticated pages and endpoints are missing from the sitemap entirely

Additionally, if the credentials used have restricted permissions (e.g., limited user roles), the sitemap may not reflect admin-only or restricted features.
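
If you want to sanity-check credentials before a scan, a small script along these lines can help. The login URL, form fields, and "Sign in" marker are hypothetical; substitute your application's own:

```python
# A minimal sketch for verifying scan credentials before a scan, using the
# requests library. The login URL, form fields, and "Sign in" marker are
# hypothetical; substitute your application's own.
import requests

session = requests.Session()
login = session.post(
    "https://app.example.com/login",
    data={"username": "scan-user", "password": "scan-password"},
)
login.raise_for_status()

# If the post-login page still shows the login form, the crawler would be
# limited to public pages.
dashboard = session.get("https://app.example.com/dashboard")
if dashboard.status_code != 200 or "Sign in" in dashboard.text:
    raise RuntimeError("Credentials did not produce an authenticated session")
print("Login OK: authenticated pages are reachable")
```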

4. Scan Configuration Changes

Differences in scanner settings across scans can lead to sitemap variations. Common configuration-related causes include:

- A different scan scope or starting URL
- Changed crawl depth or page limits
- Added or removed exclusions (e.g., excluded paths or parameters)

For accurate comparisons, it’s best to compare scans with the same configuration.
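
One quick way to catch configuration drift between two scans is to diff their settings. The configuration keys below are illustrative, not an exact scanner schema:

```python
# A minimal sketch for spotting configuration drift between two scans.
# The configuration keys are illustrative, not an exact scanner schema.
def config_drift(old: dict, new: dict) -> dict:
    """Return every key whose value differs between two scan configs."""
    keys = old.keys() | new.keys()
    return {k: (old.get(k), new.get(k)) for k in keys if old.get(k) != new.get(k)}

march = {"scope": "app.example.com", "max_depth": 5, "excluded_paths": ["/logout"]}
april = {"scope": "app.example.com", "max_depth": 3, "excluded_paths": ["/logout", "/admin"]}

print(config_drift(march, april))
# {'max_depth': (5, 3), 'excluded_paths': (['/logout'], ['/logout', '/admin'])}
```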

5. Environmental Factors During the Scan

Factors outside the application logic can also impact crawling behavior:

- Rate limiting or a web application firewall throttling or blocking the crawler
- Temporary network issues or timeouts during the scan
- Intermittent server errors that hide pages in one scan but not another

These can prevent the crawler from reaching certain pages or lead it to pick up unexpected routes.
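
A small pre-scan probe can reveal whether the target rate-limits or errors under load. The target URL and burst size below are illustrative:

```python
# A minimal sketch of a pre-scan stability probe using the requests library.
# The target URL and burst size are illustrative.
import requests

def probe_stability(url: str, burst: int = 20) -> None:
    statuses = []
    for _ in range(burst):
        try:
            statuses.append(requests.get(url, timeout=5).status_code)
        except requests.RequestException:
            statuses.append(None)  # timeout or connection failure

    flaky = [s for s in statuses if s is None or s == 429 or s >= 500]
    if flaky:
        print(f"{len(flaky)}/{burst} requests hit errors, rate limits, or timeouts;")
        print("expect crawl coverage to vary between scans of this target.")
    else:
        print("Target looks stable under a small burst.")

probe_stability("https://app.example.com/")
```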

6. Scan Type Differences

If you're comparing sitemaps from different types of scans, differences are expected:

- A lightweight crawl-only scan typically discovers fewer endpoints than a full scan
- Scans enriched with observability data may include endpoints that crawling alone cannot reach

To get consistent comparisons, compare sitemaps from scans of the same type.

What We Do to Ensure Consistency

While some variation is natural, we take several steps to reduce it and maximize coverage:

  1. Central Endpoint Inventory
    We maintain a centralized inventory of all endpoints seen across web crawls, full scans, and observability. This inventory is cumulative and is used to enrich subsequent scans (see the first sketch after this list).

  2. Authentication Health Checks
    If login fails during a scan due to misconfiguration or invalid credentials, we flag it in your dashboard for quick resolution.

  3. Connectivity Resilience
    We automatically retry crawl attempts during temporary network issues, using exponential backoff to improve reliability (see the second sketch after this list).

  4. Sitemap Transparency
    For every scan, the sitemap is available in your dashboard, along with the full central inventory. This gives you visibility into what was detected per scan and what’s known overall.
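
As a rough illustration of the cumulative inventory idea in point 1, here is a minimal sketch. The names and the (method, path) endpoint key are assumptions for illustration, not our actual implementation:

```python
# A minimal sketch of a cumulative endpoint inventory keyed by
# (HTTP method, path template). Names and structure are illustrative,
# not the actual implementation.
from dataclasses import dataclass, field

@dataclass
class EndpointInventory:
    # endpoint -> set of discovery sources that have reported it
    seen: dict = field(default_factory=dict)

    def record(self, method: str, path: str, source: str) -> None:
        """Remember an endpoint and which source (crawl, scan, tracing) saw it."""
        self.seen.setdefault((method.upper(), path), set()).add(source)

    def missed_by(self, scan_endpoints: set) -> set:
        """Endpoints known from earlier runs that this scan did not find."""
        return set(self.seen) - scan_endpoints

inventory = EndpointInventory()
inventory.record("GET", "/api/orders", "crawler")
inventory.record("POST", "/api/orders", "observability")

latest_scan = {("GET", "/api/orders")}
print(inventory.missed_by(latest_scan))  # {('POST', '/api/orders')}
```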
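
And for point 3, a minimal sketch of retrying a fetch with exponential backoff and jitter. The retry limits, retryable status codes, and use of the requests library are illustrative assumptions:

```python
# A minimal sketch of fetch retries with exponential backoff and jitter,
# using the requests library. Retry limits and retryable status codes are
# illustrative assumptions.
import random
import time
import requests

RETRYABLE = {429, 500, 502, 503, 504}

def fetch_with_backoff(url: str, max_attempts: int = 5) -> requests.Response:
    delay = 1.0
    for attempt in range(1, max_attempts + 1):
        try:
            response = requests.get(url, timeout=10)
            if response.status_code not in RETRYABLE:
                return response
        except requests.RequestException:
            pass  # transient network failure: fall through to retry
        if attempt < max_attempts:
            time.sleep(delay + random.uniform(0, delay / 2))  # jittered wait
            delay *= 2  # exponential growth: 1s, 2s, 4s, ...
    raise RuntimeError(f"{url} still failing after {max_attempts} attempts")
```

The jitter spreads retries out so a briefly overloaded target is not hit with synchronized bursts.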

Tips to Improve Consistency