Overview

Understand the two-phase systematic workflow and how Permutable's historical data and live API fit together.

The systematic workflow has two distinct phases: building and testing your model offline using historical data, then feeding it live data in production via the API.

Historical CSV (research) → Model development → Live API (production)

These two phases use different delivery mechanisms by design. Bulk historical data comes as a structured CSV download — not through the API — so you can work with it freely in your analysis environment. The API is the production feed once your model is ready.

The data

There are two levels of data for systematic use; both are delivered live and historically.

Headline-level data

The most granular option. Each record is a single article matched to a ticker or macro topic, carrying a raw sentiment_score between -1 and 1, along with bullish/bearish/neutral probabilities, topic classification, and a timestamp.

This is the raw material. You aggregate it yourself — choosing your own time windows, weighting schemes, and normalisation — to construct exactly the features your model needs.

Endpoints:

  • GET /v1/headlines/feed/live/ticker/{ticker} — live headline-level records for a ticker
  • GET /v1/headlines/feed/historical/ticker/{ticker} — historical headline-level records for a ticker
  • GET /v1/headlines/feed/live/macro — live macro headline-level records
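As a minimal sketch of the aggregation step described above, the snippet below bins raw headline records into hourly mean sentiment. The records are illustrative, shaped after the fields named in this section (`timestamp`, `sentiment_score`, `topic`); actual API payloads may carry additional fields, and the choice of window and weighting is yours.

```python
from collections import defaultdict
from datetime import datetime

# Illustrative headline-level records; real payloads come from the feed endpoints.
records = [
    {"timestamp": "2024-01-15T09:12:00Z", "sentiment_score": 0.6, "topic": "earnings"},
    {"timestamp": "2024-01-15T09:47:00Z", "sentiment_score": -0.2, "topic": "earnings"},
    {"timestamp": "2024-01-15T10:05:00Z", "sentiment_score": 0.1, "topic": "guidance"},
]

def hourly_mean_sentiment(records):
    """One possible aggregation: mean sentiment_score per hour.

    Equally valid choices: different window lengths, recency weighting,
    per-topic splits — the feed leaves this entirely to you.
    """
    bins = defaultdict(list)
    for r in records:
        ts = datetime.fromisoformat(r["timestamp"].replace("Z", "+00:00"))
        bins[ts.replace(minute=0, second=0, microsecond=0)].append(r["sentiment_score"])
    return {hour: sum(scores) / len(scores) for hour, scores in bins.items()}
```

The same function applies unchanged to a historical CSV loaded row by row, which is what makes the research-to-production transition clean.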

Pre-aggregated index data

If you want a ready-to-use starting point, the index endpoints provide the same headline data pre-aggregated into hourly bins per topic. Each bin contains headline_count, sentiment_sum, sentiment_abs_sum, and sentiment_std for that hour and topic combination.

This is not normalised — the raw sums and counts are provided so you can construct your preferred index value (e.g. sentiment_sum / headline_count, or a z-score over a rolling window) without losing flexibility.
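For instance, constructing a per-bin mean and a z-score from the bin fields might look like this. The bin values below are made up for illustration; only the field names (`headline_count`, `sentiment_sum`, `sentiment_abs_sum`, `sentiment_std`) come from this section.

```python
import statistics

# Illustrative hourly bins for one topic; real bins come from the index endpoints.
bins = [
    {"headline_count": 12, "sentiment_sum": 3.6,  "sentiment_abs_sum": 7.2, "sentiment_std": 0.41},
    {"headline_count": 8,  "sentiment_sum": -1.2, "sentiment_abs_sum": 4.0, "sentiment_std": 0.55},
    {"headline_count": 20, "sentiment_sum": 5.0,  "sentiment_abs_sum": 9.5, "sentiment_std": 0.38},
]

# Simple per-bin index value: mean sentiment for the hour.
means = [b["sentiment_sum"] / b["headline_count"] for b in bins]

# Z-score of the latest bin against the window of per-bin means.
mu, sigma = statistics.mean(means), statistics.stdev(means)
z_latest = (means[-1] - mu) / sigma
```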

Endpoints:

  • GET /v1/headlines/index/ticker/live/{ticker} — live hourly bins for a ticker
  • GET /v1/headlines/index/ticker/historical/{ticker} — historical hourly bins for a ticker
  • GET /v1/macro/live/sentiment/{model_id} — live macro hourly bins
  • GET /v1/macro/historical/sentiment/{model_id} — historical macro hourly bins

Both levels are available as bulk CSV downloads via the Permutable Platform for offline model development.

Phase 1 — Research and development

When you are onboarded, you receive access to historical index data via the Permutable Platform as a CSV download. This covers the full history of the indices in your subscription.

With this data you can:

  • Explore how indices move relative to price — lead/lag relationships, signal strength, regime behaviour
  • Build and train models using index values as features
  • Backtest strategies and simulate historical performance
  • Calibrate thresholds, lookback windows, and position sizing rules

This work is done entirely offline. The API is not involved at this stage.
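A toy sketch of the calibration step: sweeping a sentiment threshold against historical index values and next-period returns. All values and the long-only rule are illustrative — your CSV schema comes from the Permutable Platform export, and a real backtest would account for costs, overlap, and out-of-sample validation.

```python
# Illustrative hourly index values and the return over the following period.
index_values = [0.30, -0.15, 0.25, 0.40, -0.05, 0.10]
next_returns = [0.002, -0.001, 0.001, 0.003, -0.002, 0.000]

def strategy_pnl(threshold):
    """Toy rule: hold long for the next period whenever the index exceeds the threshold."""
    return sum(r for v, r in zip(index_values, next_returns) if v > threshold)

# Pick the best-performing threshold from a small candidate grid.
best_pnl, best_threshold = max((strategy_pnl(t), t) for t in (0.0, 0.1, 0.2, 0.3))
```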

Phase 2 — Production

Once your model is built and tested, you connect it to the live API to receive the same data in real time. Poll the live endpoint on a schedule that matches your strategy's rebalancing frequency and feed the results into your model.

The live API response carries the same fields as the historical CSV, so the features you engineered in research apply unchanged and the transition to production is straightforward.
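A minimal polling-loop sketch, with the HTTP call injected as a callable so the loop itself is testable offline. In production, `fetch` would wrap an authenticated GET to the relevant live endpoint; the names here are hypothetical.

```python
import time

def poll_live(fetch, interval_seconds, handle, max_iterations=None):
    """Poll a live feed on a fixed schedule and pass each batch to the model.

    fetch  -- callable returning the latest records (in production, an
              authenticated HTTP GET against the live endpoint)
    handle -- callable that feeds the records into your model
    """
    i = 0
    while max_iterations is None or i < max_iterations:
        handle(fetch())
        i += 1
        if max_iterations is None or i < max_iterations:
            time.sleep(interval_seconds)
```

Choosing `interval_seconds` to match your rebalancing frequency avoids polling faster than your strategy can act on the data.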

Next steps

  • Going Live — connect your model to the live API feed
  • API Reference — full parameter details for index and macro endpoints
  • Recipes — end-to-end systematic workflow examples