
Agentic Readiness Index (ARI) — Methodology

Measuring tourism operators' preparedness for the AI agent world

Why the ARI exists

AI agents are changing how travelers discover and book. ChatGPT, Perplexity, Google AIO — these systems do not "search" like a human. They read, interpret and recommend based on precise technical signals.

A tourism operator can be 100% independent from OTAs and remain structurally invisible to these new algorithmic intermediaries.

That is the double threat that TPDI and ARI capture together:

  • TPDI: dependence on human intermediaries (OTAs, platforms)
  • ARI: preparedness for algorithmic intermediaries (AI agents)

Calculation scope

The ARI is calculated on direct operators identified in each market analysis — i.e. actors classified as local_strict or local_extended with an accessible public URL.

Only direct operators are evaluated. Platforms (Viator, Booking.com, etc.) are excluded: they have their own technical infrastructure. The ARI measures the preparedness of operators who choose — or seek — autonomy.

Criteria and weighting

Criterion       | Points | What is measured
SSL / HTTPS     | 10     | Basic security — prerequisite for any modern indexing
Mobile-friendly | 20     | Mobile rendering — majority of tourism searches
PageSpeed > 80  | 25     | Loading speed (Google Lighthouse score)
Schema.org      | 30     | Structured data readable by AI agents
Online booking  | 15     | Presence of a detectable booking engine
Total           | 100    |
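The weighting above can be sketched as a simple additive score. The criterion names and the `AriChecks` structure are illustrative assumptions; only the weights and the PageSpeed threshold come from the table.

```python
# Illustrative sketch of the ARI weighting. The field names and the
# all-or-nothing treatment of each criterion are assumptions; the
# weights (10/20/25/30/15) are taken from the methodology table.
from dataclasses import dataclass

@dataclass
class AriChecks:
    has_ssl: bool          # site served over HTTPS
    mobile_friendly: bool  # passes a mobile-rendering check
    pagespeed: int         # Google Lighthouse performance score, 0-100
    has_schema_org: bool   # Schema.org structured data present
    has_booking: bool      # detectable online booking engine

def ari_score(c: AriChecks) -> int:
    """Sum the weighted criteria into a 0-100 ARI score."""
    score = 0
    score += 10 if c.has_ssl else 0
    score += 20 if c.mobile_friendly else 0
    score += 25 if c.pagespeed > 80 else 0
    score += 30 if c.has_schema_org else 0
    score += 15 if c.has_booking else 0
    return score
```

For example, an operator with HTTPS, a mobile-friendly site, a Lighthouse score of 85 and a booking engine, but no Schema.org markup, would score 10 + 20 + 25 + 15 = 70.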

Why these weightings

Schema.org (30 pts)

Schema.org receives the highest weight because it is the most discriminating signal. An AI agent that cannot read structured data on a page cannot accurately recommend what it contains. The majority of direct operators do not implement Schema.org — it is the most frequent and impactful gap.
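To make the criterion concrete, here is the kind of JSON-LD block an AI agent can read. The operator, activity, price and text are invented for illustration; only the Schema.org vocabulary is real.

```python
import json

# A minimal Schema.org JSON-LD block for a hypothetical tour product.
# In a real page this object would be embedded inside a
# <script type="application/ld+json"> tag in the HTML head or body.
jsonld = {
    "@context": "https://schema.org",
    "@type": "Product",
    "name": "Sunset Kayak Tour",
    "description": "2-hour guided kayak tour at sunset, all levels welcome.",
    "offers": {
        "@type": "Offer",
        "price": "45.00",
        "priceCurrency": "EUR",
        "availability": "https://schema.org/InStock",
    },
}

print(json.dumps(jsonld, indent=2))
```

An agent parsing this block can answer "what is it, how much, is it available" without guessing from prose, which is exactly the signal the 30-point weight rewards.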

PageSpeed (25 pts)

Because AI agents must fetch and read pages. A slow page is read partially, or ignored entirely.

Mobile (20 pts)

Because the majority of tourism searches are performed on mobile. A non-mobile-friendly site sends a strong negative signal to all modern indexing systems.

Online booking (15 pts)

Because an AI agent that does not detect a booking system cannot direct a traveler to a concrete action. The presence of a booking engine is a signal of direct operability.

SSL (10 pts)

Because it is a basic prerequisite. Its absence is disqualifying in practice — but its presence alone is not enough.

Score interpretation

A high ARI score does not guarantee being recommended by an AI agent. It guarantees not being excluded for technical reasons.

Score  | Level     | Meaning
0–39   | Low       | Operator barely or not detectable by AI agents
40–69  | Medium    | Detectable but incomplete — critical signals are missing
70–89  | Good      | Well prepared for current agentic indexing
90–100 | Excellent | Optimal infrastructure for the AI agent world
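The banding above is a straightforward threshold mapping; a minimal sketch (the function name is hypothetical, the cut-offs are from the table):

```python
def ari_level(score: int) -> str:
    """Map an ARI score (0-100) to its interpretation band."""
    if score < 40:
        return "Low"
    if score < 70:
        return "Medium"
    if score < 90:
        return "Good"
    return "Excellent"
```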

What ARI v1 does not measure

ARI v1 is a technical preparedness proxy. It does not yet measure the dimensions below; they will be integrated in ARI v2. The roadmap is public by choice: transparency on limitations is what guarantees the credibility of what is measured.

Content quality

An AI agent needs clear descriptions in natural language: activity, price, duration, season. A technically perfect page without exploitable content remains invisible to recommendation.

External authority

AI agents train on the entire web. An operator cited in articles, reviews, travel blogs has a stronger agentic presence than an operator present only on their own site.

Data freshness

Up-to-date prices, availability, seasons. An agent that cannot answer "how much does it cost in March" will move to the next operator.

Technical infrastructure

ARI scores are calculated automatically and cached for 45 days per URL. Enrichment runs every 4 hours. Scores are displayed on public pages of each analyzed market and in the Pro Dashboard.
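The 45-day cache policy amounts to a simple TTL check per URL. A minimal sketch, assuming a stored timestamp per scored URL (the function and field names are invented):

```python
from datetime import datetime, timedelta

# Per the methodology: ARI scores are cached for 45 days per URL.
CACHE_TTL = timedelta(days=45)

def needs_rescore(last_scored_at: datetime, now: datetime) -> bool:
    """True when a URL's cached ARI score has expired and the next
    enrichment cycle should recalculate it."""
    return now - last_scored_at >= CACHE_TTL
```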

The list of detected booking engines covers 60+ systems: FareHarbor, Bokun, Rezdy, Checkfront, Regiondo, and their regional equivalents.

Acknowledged limitations

Like the TPDI, the ARI is a snapshot at a specific date. An operator can improve their score between two analyses; scores are refreshed as cached results expire.

Automatic detection has an estimated margin of error of 2–5% on complex cases: multi-language sites, unreferenced booking engines, and non-standard JavaScript architectures.

ARI v1 — publication context

ARI v1 was published simultaneously with the TPDI. This is deliberate: measuring platform dependence without measuring alternative preparedness would have been an incomplete analysis.

Current data indicates that a large majority of analyzed direct operators have an ARI score below 40. In other words: operators who chose autonomy from OTAs are not structurally prepared for the next wave of algorithmic intermediation.

That is precisely what the report "The Double Threat — State of Platform Dependence 2026" will document across 1,200 analyzed cities.

Reporting and improvement

To report a score anomaly, an undetected booking engine, or a misrated site: contact@tpdi.io

Any correction is documented and applied in the next enrichment cycle.