📑 Accessibility Scanner v4.0.2: The Complete Troubleshooting Guide

This guide provides a comprehensive breakdown of common issues, error messages, and their solutions. By understanding the cause of a problem, you can use the scanner's powerful configuration options to resolve it.

First Step for Any Issue: Always check both the UI Activity Log and the backend Node.js console (the terminal where you started the scanner) for the most detailed error messages.

Installation & Caveats

  • ⚠️ Installer Blocked? Depending on your security settings, Windows may flag the installer as blocked. If this happens, right-click the "Scanner Installer", select Properties, check Unblock at the bottom of the window, and then click OK.
  • 🪪 Unsigned Executable. Code signing is not currently a priority: while it can improve trust, it's costly and doesn't guarantee that security warnings will be avoided. The application's reputation, built up as more users download and run it over time, is the most effective way to reduce those warnings.
  • 💾 System Performance. The scanner can only perform as well as your computer allows. A poor internet connection or a slow system may lead to longer scan times, timeouts, or other issues. It's a focused tool: give it the right environment, and it'll get the job done.

1. Navigation, Connection & Page Load Errors

These are the most common errors, occurring when the scanner's browser fails to load a URL.

Symptoms:

  • The scan fails immediately or on specific pages.
  • UI Activity Log Messages:
    • Navigation timeout of [X] ms exceeded
    • net::ERR_CONNECTION_REFUSED, net::ERR_NAME_NOT_RESOLVED
    • net::ERR_PROXY_CONNECTION_FAILED
    • HTTP Status Errors (e.g., status code 403, 404, 429, 503)
    • Navigation failed because browser has disconnected!
    • Messages indicating a page is not HTML (if "Skip pre-scan content check" is off).

Cause 1.A: Simple Configuration or Network Issue

  • Possible Reasons: A typo in the URL; a VPN or corporate firewall is blocking access; the proxy server is incorrect or down; the target website is offline.
  • Solutions:
    • Verify URLs: Manually copy and paste every URL from your configuration into a regular browser to confirm each one loads.
    • Check Your Network: Temporarily disable your VPN. Check local firewall and antivirus settings.
    • Isolate Proxy: If using a proxy, try the scan without it and verify the proxy URL is correct.
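
If you have many pages to verify, the copy-and-paste check can also be done in bulk from the command line. The sketch below is an optional, standalone helper, assuming Node 18+ with its built-in fetch; the URLs are placeholders.

```ts
// Optional, standalone helper for bulk-checking URLs outside the scanner.
// Assumes Node 18+ (built-in fetch); it only confirms reachability, exactly
// like pasting each URL into a regular browser.
const urlsToCheck = [
  "https://example.com/",        // placeholder URLs
  "https://example.com/contact",
];

async function checkUrls(urls: string[]): Promise<void> {
  for (const url of urls) {
    try {
      const response = await fetch(url, { redirect: "follow" });
      console.log(`${url} -> HTTP ${response.status}`);
    } catch (err) {
      // DNS failures, refused connections, and proxy errors all surface here.
      console.log(`${url} -> ${(err as Error).message}`);
    }
  }
}

checkUrls(urlsToCheck);
```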

Cause 1.B: Target Website is Slow (Timeouts)

  • Possible Reason: The page has many large resources or a slow backend, causing it to take longer to load than the scanner's configured timeout.
  • Solutions:
    • Increase Navigation Timeout:
      • Location: Advanced Scan Options -> Scan Behaviour -> Navigation Timeout (ms).
      • Action: Increase this value significantly (e.g., from 90000 to 120000).
    • Increase Other Timeouts (if needed):
      • Location: Backend Fine-Tuning (Advanced)
      • Action: For crawl-specific timeouts, increase Crawl Page Navigation Timeout (ms).
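
For context, the "Navigation timeout of [X] ms exceeded" wording matches the per-navigation limit that a Puppeteer-style browser backend applies, so raising the UI value effectively raises the limit shown in the sketch below. This is a minimal, illustrative sketch assuming such a backend; the URL and numbers are examples, not the scanner's internals.

```ts
// Minimal sketch of what a navigation timeout maps to in a Puppeteer-style
// backend (an assumption based on the error wording above).
import puppeteer from "puppeteer";

async function loadPage(url: string, navigationTimeoutMs: number): Promise<void> {
  const browser = await puppeteer.launch();
  const page = await browser.newPage();
  try {
    // "Navigation timeout of [X] ms exceeded" is thrown when this limit is hit.
    await page.goto(url, { timeout: navigationTimeoutMs, waitUntil: "networkidle2" });
  } finally {
    await browser.close();
  }
}

// Raising the value from 90000 to 120000 gives slow pages 30 extra seconds.
loadPage("https://example.com", 120000).catch(console.error);
```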

Cause 1.C: Target Website is Blocking the Scanner (403, 429, CAPTCHAs)

  • Possible Reason: The website's Web Application Firewall (WAF) or anti-bot system has detected the scan and is actively blocking requests. This is very common.
  • Solutions:
    • Use the "Analyse Site Settings" Button: This is your most important first step. It runs a quick check and will often detect protection mechanisms, suggesting the best settings.
    • Ensure Adaptive Remediation is Enabled:
      • Location: Advanced Scan Options -> Adaptive Remediation & Stealth
      • Action: This feature is on by default and is the primary defense. It will automatically increase delays and enable stealth modes when it detects blocking. You should see messages like [ADAPTIVE ACTION] in the UI log.
    • Manually Escalate Stealth Modes (if adaptive isn't enough):
      • Location: Advanced Scan Options -> Adaptive Remediation & Stealth
      • Action: Start by enabling Basic Stealth Mode. If issues persist, enable Enhanced Stealth & Fingerprint Spoofing instead.
    • Drastically Reduce Speed and Increase Delays:
      • Location: Crawl Options
      • Action: Set the base Crawl Delay (ms) to a high value (e.g., 2000 to 5000). The adaptive engine will add to this.
    • Disable All Parallelism:
      • Action: Uncheck Enable Parallel Crawling? and Enable Parallel Axe Scanning?. This makes your scan appear like a single, slow user.
    • Use a Proxy:
      • Location: Advanced Scan Options -> Other Options -> Proxy Server URL
      • Action: If your direct IP is blocked, routing traffic through a proxy is essential.
    • Contact Site Administrators: For legitimate audits, the most reliable solution is to ask for your IP address to be whitelisted.
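
To illustrate why these settings help, here is a rough, hypothetical sketch of what a proxy, a high crawl delay, and no parallelism amount to in a Puppeteer-style crawler. It is not the scanner's actual code; the proxy URL and delay value are placeholders.

```ts
// Hypothetical sketch: proxy plus a high crawl delay, one page at a time.
import puppeteer from "puppeteer";

const PROXY_URL = "http://proxy.example.com:8080"; // placeholder proxy
const CRAWL_DELAY_MS = 3000;                       // a high base delay (2000-5000)

async function politeCrawl(urls: string[]): Promise<void> {
  const browser = await puppeteer.launch({
    args: [`--proxy-server=${PROXY_URL}`], // route all traffic through the proxy
  });
  const page = await browser.newPage();
  for (const url of urls) {
    await page.goto(url, { waitUntil: "domcontentloaded" });
    // Sequential visits with a pause look like a single, slow user.
    await new Promise((resolve) => setTimeout(resolve, CRAWL_DELAY_MS));
  }
  await browser.close();
}
```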

2. Axe Scan Errors & Page-Specific Issues

These errors occur during the accessibility analysis phase on a specific page.

Symptoms:

  • A scan completes, but the report shows "Page Scan Errors" for certain URLs.
  • UI Activity Log Messages:
    • Axe execution timed out after [X]ms.
    • Axe script did not load successfully...
    • Scan Error: [Specific JavaScript error]
    • JS error on scan task page [URL]... (often an error in the target page's code).

Cause 2.A: Issues with the Target Page Itself

  • Possible Reasons:
    • Page JavaScript Errors: Errors in the scanned page's own code are interfering with Axe's execution. This is more common when using Interaction Scanning.
    • Extremely Complex/Large DOM: The page is so large that Axe cannot finish its analysis within the configured time.
    • Content Security Policy (CSP): A strict security policy is blocking the injection of the Axe-core script.
  • Solutions:
    • Increase Axe Timeout:
      • Location: Backend Fine-Tuning (Advanced) -> Timeout Settings -> Axe Execution Timeout (ms)
      • Action: Increase this value (e.g., from 45000 to 60000).
    • Handle Content Security Policies:
      • Location: Advanced Scan Options -> Other Options
      • Action: Check the box for Attempt to Handle Trusted Types?.
    • Exclude Problematic Sections:
      • Location: Advanced Scan Options -> Axe Configuration -> Exclude CSS Selectors
      • Action: If a specific part of the page (like a third-party widget) is causing errors, add its CSS selector here to tell Axe to ignore it.
    • Report Website Bugs: If the UI log shows JavaScript errors originating from the page, this is a bug in the website itself that should be reported to its developers.
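
As a point of reference, the sketch below shows what an excluded selector and an execution timeout amount to in a plain axe-core run inside the page. It is an assumption about the general mechanism rather than the scanner's implementation; the selector and timeout values are examples.

```ts
// Illustrative axe-core run with an excluded selector and a timeout guard.
// Assumes axe-core has already been injected into the page as the global `axe`.
declare const axe: typeof import("axe-core");

async function runAxeWithGuards() {
  const AXE_TIMEOUT_MS = 60000; // raised from 45000, as suggested above

  const run = axe.run(
    { exclude: [["#third-party-widget"]] },       // hypothetical problem widget
    { resultTypes: ["violations", "incomplete"] }
  );
  const timeout = new Promise<never>((_, reject) =>
    setTimeout(
      () => reject(new Error(`Axe execution timed out after ${AXE_TIMEOUT_MS}ms`)),
      AXE_TIMEOUT_MS
    )
  );
  // Whichever settles first wins: the results, or the timeout error seen in the log.
  return Promise.race([run, timeout]);
}
```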

Cause 2.B: Errors Originating from the Target Website (JavaScript Errors)

It is critical to understand that the scanner executes the target website's code. If the website has its own bugs, the scanner will report them. These are not errors in the scanner itself.

  • Symptoms: You will see messages in the UI Activity Log like:
    • InitialPage JS ERROR: Uncaught TypeError: Assignment to constant variable.
    • Scan task page JS ERROR: Uncaught TypeError: Cannot read properties of null (reading 'classList')
  • Cause: A bug in the JavaScript code of the website being scanned.
  • Impact on the Scan: Often harmless, but a severe error can make the page unstable, leading to scanner timeouts.
  • Solutions / Recommended Actions:
    • Report the Bug: This is the most important action. These are real bugs affecting real users. Copy the error and URL and report it to the website's developers.
    • For Scan Instability: If pages with these errors are causing your scan to fail:
      • Primary Solution: Navigate to Advanced Scan Options -> Scan Behaviour and ensure Enable Periodic Browser Restarts is checked. This gives the scanner a fresh start regularly.
      • Secondary Solution: Exclude the problematic page using the Exclude URL Patterns option in the Crawl settings.
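
For the curious, the sketch below shows how such errors typically surface in a Puppeteer-style backend (an assumption): the browser emits a pageerror event for the website's own uncaught exceptions, and the scanner only relays the message to the Activity Log.

```ts
// Sketch: relaying the target page's own JavaScript errors, not generating them.
import puppeteer from "puppeteer";

async function logPageErrors(url: string): Promise<void> {
  const browser = await puppeteer.launch();
  const page = await browser.newPage();

  // Fired for uncaught exceptions thrown by the website's own scripts.
  page.on("pageerror", (err) => {
    console.log(`Scan task page JS ERROR on ${url}: ${err.message}`);
  });

  await page.goto(url, { waitUntil: "networkidle2" });
  await browser.close();
}
```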

3. Parallel Processing & Cluster Errors

These errors are specific to using Enable Parallel Crawling? or Enable Parallel Axe Scanning?.

Symptoms:

  • The scan seems to hang or stall.
  • UI Activity Log Messages: Cluster task error crawling [URL]..., Attempted to use detached Frame...

Cause & Solutions:

  • Possible Reason: The selected Concurrency value is too high for your machine's RAM or CPU, causing browser workers to crash.
  • Solutions:
    • Reduce Concurrency:
      • Location: Crawl Options and Axe Scan Rules sections.
      • Action: Lower the Crawl Concurrency and/or Scan Concurrency values. Start with 2 and see if the scan is stable.
    • Disable Parallelism: For maximum stability, uncheck both parallel options and run the scan sequentially.
    • Monitor System Resources: Use your computer's Task Manager (Windows) or Activity Monitor (macOS) to watch CPU and RAM usage. If they are pegged at 100%, your concurrency is too high.
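
To see why the concurrency value matters, here is a hypothetical sketch in the style of puppeteer-cluster (which matches the "Cluster task error" wording above, though the scanner's internals are an assumption): every unit of concurrency is another browser worker consuming RAM and CPU.

```ts
// Hypothetical puppeteer-cluster style setup: maxConcurrency = parallel workers.
import { Cluster } from "puppeteer-cluster";

async function crawlInParallel(urls: string[], concurrency: number): Promise<void> {
  const cluster = await Cluster.launch({
    concurrency: Cluster.CONCURRENCY_CONTEXT, // one browser context per worker
    maxConcurrency: concurrency,              // start with 2 and raise cautiously
  });

  await cluster.task(async ({ page, data: url }) => {
    await page.goto(url, { waitUntil: "domcontentloaded" });
    // ...accessibility checks would run here...
  });

  urls.forEach((url) => void cluster.queue(url));
  await cluster.idle();  // wait for all queued pages to finish
  await cluster.close();
}
```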

4. System & Resource Exhaustion Issues

These issues manifest as general scanner slowness, crashes, or the application becoming unresponsive.

Symptoms:

  • The "Backend Memory Usage" in the UI climbs continuously to very high numbers (> 1-2 GB).
  • Your computer's fans spin up, and the whole system becomes sluggish.
  • The backend console shows errors like Network.[Command] timed out. Increase the 'protocolTimeout' setting...

Cause & Solutions:

  • Possible Reason: The scan configuration is too demanding for your hardware. The most common cause is using "Per Instance" screenshots on a very large crawl.
  • Solutions:
    • Adjust Screenshot Settings:
      • Location: General Behaviour & Output
      • Action: Switch Screenshot Mode to Per Issue Type (Lower Memory) or disable them entirely.
    • Reduce Parallelism: As described in Section 3, disable parallel modes or lower concurrency.
    • Limit the Scan Scope:
      • Location: Crawl Options
      • Action: Set a reasonable Max Crawl Size (e.g., 500) and/or use Max Crawl Depth.
    • Ensure Browser Restarts are Enabled:
      • Location: Advanced Scan Options -> Scan Behaviour
      • Action: For sequential scans, ensure Enable Periodic Browser Restarts is checked. This is a critical stability feature.
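
For background, protocolTimeout is a Puppeteer launch option (the error text quoted above is Puppeteer's), and a periodic restart simply closes and relaunches the browser to release accumulated memory. The sketch below is illustrative only; the values and restart interval are assumptions, not the scanner's defaults.

```ts
// Illustrative sketch: a raised protocolTimeout plus periodic browser restarts.
import puppeteer, { Browser } from "puppeteer";

async function launchScanner(): Promise<Browser> {
  return puppeteer.launch({
    protocolTimeout: 180000, // allow slow DevTools protocol calls on heavy pages
  });
}

async function scanWithRestarts(urls: string[], restartEvery = 50): Promise<void> {
  let browser = await launchScanner();
  for (let i = 0; i < urls.length; i++) {
    if (i > 0 && i % restartEvery === 0) {
      await browser.close();           // free accumulated memory
      browser = await launchScanner(); // fresh start, as periodic restarts do
    }
    const page = await browser.newPage();
    await page.goto(urls[i], { waitUntil: "domcontentloaded" });
    await page.close();
  }
  await browser.close();
}
```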

5. Understanding Scan Behavior (Not Errors)

Sometimes, the scanner's behavior is intentional and not an error.

  • Smart Crawl is Skipping Pages:
    • Symptom: The UI log shows messages like Skipping potential crawler trap... or Skipped by Smart Crawl.
    • Explanation: This is the Smart Crawl feature working as intended. It has identified a URL pattern that looks like a "trap" (e.g., a calendar or filter system) and is skipping it to prevent the scan from getting stuck and wasting time. This is a feature, not a bug. If you need to scan those pages, you can add them to the "Scan Specific URLs" list.
  • Adaptive Remediation is Activating:
    • Symptom: The UI log shows messages like [ADAPTIVE ACTION] Initial resistance detected. Increasing Crawl Delay... or Enabling Basic Stealth....
    • Explanation: This is the Adaptive Remediation Engine working correctly. It has detected that the target website is resisting the scan and is automatically taking steps to ensure the scan can continue. This is normal behavior when scanning protected sites.
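
If you are curious what a crawler-trap check can look like, the sketch below is a purely illustrative URL-pattern heuristic; the actual Smart Crawl rules are internal to the scanner and may differ.

```ts
// Purely illustrative crawler-trap heuristic; not the scanner's real rules.
const TRAP_PATTERNS: RegExp[] = [
  /[?&](date|month|year)=\d+/i,  // calendar pages that never end
  /([?&][^=&]+=[^&]*){6,}/,      // long chains of filter parameters
  /\/page\/\d{3,}/,              // implausibly deep pagination
];

function looksLikeCrawlerTrap(url: string): boolean {
  return TRAP_PATTERNS.some((pattern) => pattern.test(url));
}

// A matching URL would be logged as "Skipping potential crawler trap..."
console.log(looksLikeCrawlerTrap("https://example.com/events?year=2031&month=7")); // true
```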

Get 📑 Accessibility-Scanner