Locata
Methodology

How Locata gets from thousands of candidates to a defensible shortlist.

Same four steps for every vertical. Different data sources, different scoring prompts, same auditable trail from raw input to ranked decision.

  1. Define your set

     Upload candidates or generate them from criteria.

  2. Enrich automatically

     Public European data layered onto every location.

  3. Score with three models

     Claude, GPT, and Gemini, in parallel.

  4. Output with reasoning

     Ranked shortlist, evidence per location.

Step 1 of 4

Define your set.

The first thirty minutes of any project decide whether the rest goes well. Locata supports three ways to scope candidate sets — pick the one that matches how your data is organised today.

    Upload a candidate set

    CSV, GeoJSON, Excel, or KML. Bring whatever your team already maintains. Locata standardises identifiers and de-duplicates against your master data.

    Generate by criteria

    “All Shell gas stations in NL with over 500 m² parking surface” or “every parcel zoned bedrijventerrein in the gemeenten on this list”. Plain-language input; Locata resolves it against public registries.

    Hybrid — your list plus discovery

    Provide a seed list and let Locata expand it with adjacent candidates: same operator, same parcel category, same catchment area.

Set sizes: from 50 candidates (one-off market entry) to 50,000+ (national-scale scouting). Above 10k we batch the processing across regions; the scoring methodology doesn’t change.
Step 2 of 4

Enrich every location, automatically.

Every candidate is enriched with the same set of public European data layers before it ever sees a scoring prompt. No manual lookups, no inconsistent depth between locations, no missing context that surfaces three weeks into review.

| Source | Category | What it provides | Update cadence | Coverage |
| --- | --- | --- | --- | --- |
| Kadaster | Foundation | Cadastral records — parcel ownership, boundaries, transactions. | Daily | NL |
| BAG | Foundation | Buildings & addresses — every registered structure with attributes. | Daily | NL |
| BGT | Foundation | Large-scale topography — pavement, vegetation, surface usage. | Daily | NL |
| BRO | Foundation | Subsurface registry — soil composition, groundwater, contamination. | Weekly | NL |
| PDOK | Foundation | National data platform — flood maps, environmental contours, zoning. | Varies | NL |
| CBS | Foundation | Demographics, household composition, mobility patterns, income bands. | Annual + thematic | NL |
| NDW | Sector-specific | Traffic intensity and flow data on the national road network. | Live (1-min) | NL |
| RDW | Sector-specific | Vehicle registrations, including EV density by postcode. | Weekly | NL |
| Grid operator data | Sector-specific | Liander, Stedin, Enexis public capacity indicators — congestion and lead times. | Quarterly | NL by region |
| Municipal open data | Sector-specific | Plankaarten, parking pressure, public-space inventory per gemeente. | Varies | NL · selected EU |
| Google Street View | Imagery & visual | Visual context — frontage, signage, accessibility, surrounding land use. | On demand | Global |
| Aerial / orthophotos | Imagery & visual | PDOK luchtfoto — roof surface, parking layout, parcel boundaries. | Annual | NL |
| OpenStreetMap | Foundation | POI density, road network classification, EU coverage where national feeds are thin. | Live | Global |

We add sources per project when a vertical demands them — historical permit decisions, retail chain locations, telecoms coverage, environmental impact maps. If it’s public and structured, we probably already touch it; if it’s not, we’ll tell you straight.
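Conceptually, the enrichment step is a keyed merge: every source contributes one record per location identifier, and gaps are recorded rather than silently dropped. A minimal Python sketch, with layer names and fields that are purely illustrative (not Locata's actual schema):

```python
# Minimal sketch of layering public data sources onto candidates.
# Layer names and fields below are illustrative, not Locata's schema.

def enrich(candidates, layers):
    """Merge each layer's record into every candidate, keyed by id.

    Missing records are marked explicitly so gaps surface before
    scoring, not three weeks into review.
    """
    enriched = []
    for cand in candidates:
        record = dict(cand)
        for name, layer in layers.items():
            record[name] = layer.get(cand["id"], {"_missing": True})
        enriched.append(record)
    return enriched

candidates = [{"id": "NL-001", "address": "Stationsweg 1"}]
layers = {
    "bag": {"NL-001": {"year_built": 1998, "use": "retail"}},
    "ndw": {},  # no traffic record for this candidate
}
rows = enrich(candidates, layers)
```

The explicit `_missing` marker is the point of the sketch: a scoring prompt can then cite absent data as a risk instead of treating it as a zero.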

Step 3 of 4

Three models score in parallel. Disagreement becomes the signal.

Each enriched location is scored independently by Claude, GPT, and Gemini against the same prompt. Where they agree, the rank is confident. Where they disagree, the location is flagged for human review — and we tell you why.

Why three, not one.

  • No single black box. Models are trained on different data with different objectives. Agreement across three independent systems is meaningful; one model’s confidence is just that model’s opinion.
  • Disagreement is the alert. Locations where scores diverge widely are exactly the ones that need a human read — usually because the data is ambiguous or the prompt missed a constraint.
  • Reasoning, not just rank. Every score comes with the three reasons that earned it and the two risks the model flagged — with citations to the enrichment data it used.

Disagreement, illustrated

Site A · clear agreement · score 91 · 3/3 models agree
Site B · split — review · score 68 · 2/3 models agree
Site C · wide spread — discard or refine prompt · score 42 · 1/3 models agree
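One simple way to operationalise this triage is a spread check across the three model scores. A Python sketch with hypothetical thresholds (real projects tune the cutoffs per prompt):

```python
# Flag candidates by cross-model score spread.
# The two thresholds are illustrative assumptions, tuned per project.

AGREE_SPREAD = 8     # max minus min within this band counts as agreement
REVIEW_SPREAD = 25   # wider than this goes to discard / prompt refinement

def triage(scores):
    """Return (median score, verdict) for one location's three model scores."""
    spread = max(scores) - min(scores)
    median = sorted(scores)[1]
    if spread <= AGREE_SPREAD:
        return median, "agree"
    if spread <= REVIEW_SPREAD:
        return median, "review"
    return median, "discard-or-refine"

# Three models score the same site:
tight = triage([90, 91, 93])    # tight spread, confident rank
split = triage([55, 68, 74])    # split, route to human review
wide = triage([20, 42, 80])     # wide spread, data or prompt problem
```

Using the median rather than the mean keeps one outlier model from dragging the headline score, which matters once the verdict is "review".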

Prompt structure, in plain view.

You see and approve the scoring prompt before any model runs. The same prompt feeds all three. Below: a real shape used on a deposit-return scouting run.

scoring-prompt.md (illustrative)
# Scoring prompt — bulk return machine candidates

## Hard constraints (auto-fail)
- Parking surface ≥ 400 m²
- 24/7 commercial accessibility
- Three-phase power within 50 m

## Score 0-100, weighted:
- 35  Visitor volume (NDW, CBS mobility)
- 25  Catchment density within 5 km
- 20  Host commercial alignment (operator, chain)
- 15  Site accessibility (truck access, manoeuvring)
-  5  Public-space sensitivity (BAG, BGT, opposition history)

## Required output per location:
- Score (integer 0-100)
- Top 3 reasons (with data citation)
- Top 2 risks (with data citation)
- Confidence (low | medium | high)
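The prompt's constraint-then-weight structure maps onto a small scoring routine. A sketch under stated assumptions, not Locata's implementation: field names are hypothetical and each criterion is assumed to yield a 0-100 sub-score.

```python
# Sketch of the hard-constraint / weighted-score split from the prompt.
# Field names and the 0-100 sub-score convention are assumptions.

WEIGHTS = {
    "visitor_volume": 35,
    "catchment_density": 25,
    "commercial_alignment": 20,
    "accessibility": 15,
    "public_space_sensitivity": 5,
}

def hard_constraint_failures(site):
    """Return auto-fail reasons; an empty list means the site passes."""
    failures = []
    if site["parking_m2"] < 400:
        failures.append("parking surface < 400 m²")
    if not site["open_24_7"]:
        failures.append("no 24/7 commercial accessibility")
    if site["three_phase_distance_m"] > 50:
        failures.append("three-phase power beyond 50 m")
    return failures

def weighted_score(sub_scores):
    """Combine 0-100 sub-scores into one 0-100 total using the weights."""
    total = sum(WEIGHTS[k] * sub_scores[k] for k in WEIGHTS)
    return round(total / 100)

site = {"parking_m2": 620, "open_24_7": True, "three_phase_distance_m": 12}
failures = hard_constraint_failures(site)
score = weighted_score({
    "visitor_volume": 80,
    "catchment_density": 70,
    "commercial_alignment": 90,
    "accessibility": 60,
    "public_space_sensitivity": 95,
})
```

Checking hard constraints before any model runs mirrors the pipeline described here: auto-fails never spend inference budget, and each exclusion carries its cited reason.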
Step 4 of 4

Output that survives the next meeting.

A Locata report is built to be defensible — internally to your investment committee, externally in public consultation. Every score is traceable to its inputs.

    PDF report

    Per-location pages with score, reasoning, citations, and visual evidence. The format that survives a council presentation.

    CSV / Excel

    Flat ranked table for filtering, sorting, and importing into your existing pipeline.

    GeoJSON

    Geospatial export with all enrichment fields, ready for QGIS, ArcGIS, or your internal GIS.

    REST API

    Pull scores and reasoning straight into your asset-management, CRM, or planning system. Audit-trailed per call.
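For the GeoJSON export above, the general shape is a standard FeatureCollection with one Feature per candidate and the score, confidence, and reasoning carried as properties. A sketch with hypothetical property names (not Locata's actual export schema):

```python
import json

# Sketch of a GeoJSON export: one Feature per scored candidate,
# score and reasoning in properties, ready for QGIS or ArcGIS.
# Property names here are illustrative assumptions.

def to_geojson(candidates):
    features = [
        {
            "type": "Feature",
            "geometry": {"type": "Point",
                         "coordinates": [c["lon"], c["lat"]]},
            "properties": {
                "id": c["id"],
                "score": c["score"],
                "confidence": c["confidence"],
                "top_reasons": c["reasons"],
            },
        }
        for c in candidates
    ]
    return {"type": "FeatureCollection", "features": features}

doc = to_geojson([{
    "id": "NL-001", "lon": 4.89, "lat": 52.37,
    "score": 91, "confidence": "high",
    "reasons": ["high NDW traffic", "dense 5 km catchment", "operator fit"],
}])
geojson_text = json.dumps(doc)  # serialised for file export or an API body
```

Note the GeoJSON convention of longitude before latitude in `coordinates`; most GIS import errors with point exports trace back to swapping the two.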

Beyond the file

For larger programs we run scoring on a cadence and push deltas as new candidates emerge. The integration with your GIS or asset system stays one connection; the analysis behind it keeps refreshing.

Boundaries

What Locata doesn’t do.

These constraints matter most to our utility, government, and municipal customers. They’re not features we plan to add later — they’re design decisions.

    No automated decisions on public space

    Locata supports human decision-making. It does not replace permitting, council approval, or operator judgement. Every score is a recommendation with reasoning attached.

    No black-box scoring

    If a model can’t say why a location scored what it scored, we don’t show the score. Every output traces back to the enrichment data the model cited.

    No training on customer data

    Your candidate sets, your scoring prompts, your outputs — none of it trains the public foundation models we use. Inference only, audit-logged.

Security & governance

Built for buyers with procurement and compliance teams.

ISO 27001-aligned

Information-security management processes follow the ISO 27001 control framework. Certification roadmap on request.

GDPR-compliant

EU-resident processing, data-processing agreements available, sub-processor list maintained per project.

EU data residency

Customer data, enrichment artefacts, and model inferences are processed and stored within the EU. No transfers outside the bloc without explicit consent.

Audit-trailed scoring

Every model call is logged with inputs, prompt version, model identifier, and output. You can reconstruct any score retroactively.

For deeper security questions, your procurement team can request our security pack via security@locata.io.

FAQ

The methodology, in our own words.

  • Why three AI models instead of one?

    Three independent reasoners scoring the same candidate against the same prompt produce a signal no single model can. Where Claude, GPT, and Gemini agree, the rank is robust. Where they disagree, the location is flagged for human review — that disagreement is exactly the value, not noise to average away.
  • How is the scoring prompt defined?

    Together with your team in the first week of the engagement. The prompt declares hard constraints (e.g. surface ≥ 400 m², three-phase power within 50 m), weighted scoring criteria, and the required output schema per location. You see and approve the prompt before any model runs — it's how the methodology stays defensible.
  • What happens to candidates that fail hard constraints?

    They're filtered out before the AI scoring run, with the failure reason cited per candidate. This keeps inference cost focused on candidates that could actually be selected and gives you a transparent paper trail for the ones that were excluded.
  • Does Locata make automated decisions on public space?

    No. Locata supports human decision-making — it does not replace permitting, council approval, or operator judgement. Every score is a recommendation with reasoning, and we treat that boundary as a design decision, not a roadmap gap.
  • Are AI models trained on our data?

    No. Your candidate sets, scoring prompts, and outputs are never used to train the public foundation models we run inference on. Inference only, audit-logged per call. Customer data stays in our EU-resident environment.
  • What size set can Locata handle?

    From 50 candidates (one-off market entry) to 50,000+ (national-scale scouting). Above 10k we batch the processing across regions; the scoring methodology and per-location reasoning don't change.

Try it on your data

The methodology is the demo.

30 minutes, your candidate set, live scoring with the three models. No staged numbers, no slideware.