Going Live
Connect your trained model to the live API — fetch live index data, trigger alerts from model output, and store data into a pipeline for downstream processes.
Once your model is built and backtested on historical data, connecting it to the live API is straightforward. The most common production patterns are: polling on a schedule and feeding results into your model, triggering alerts when your model produces a signal, and writing data into a data lake or database for downstream processes. Below are the key integration patterns. Full end-to-end examples are in the Recipes section.
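Stripped of endpoint details, the polling pattern is a fetch-process loop that survives transient failures. A minimal sketch (the `poll` helper and its parameters are ours, not part of the API; plug in a fetch function built from the snippets below):

```python
import time

def poll(fetch, process, interval_s=3600, max_iters=None):
    """Run fetch -> process on a fixed schedule.

    fetch: callable returning the latest data, e.g. a requests.get wrapper
           around one of the live endpoints.
    process: your model step; receives whatever fetch returns.
    max_iters: stop after this many polls (None = run until interrupted).
    """
    done = 0
    while max_iters is None or done < max_iters:
        try:
            process(fetch())
        except Exception as exc:
            # a transient network error should not kill the loop
            print(f"poll failed, will retry: {exc}")
        done += 1
        if max_iters is None or done < max_iters:
            time.sleep(interval_s)
```

Hourly bins make an hourly interval the natural default, but match the interval to your strategy's cadence.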
Fetch live pre-aggregated index data
The index endpoints return hourly bins per topic, each containing headline_count, sentiment_sum, sentiment_abs_sum, and sentiment_std. The raw components are provided so you can construct your index value however your model expects — no normalisation is applied on our side.
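For instance, the four components support several simple per-bin signals; a sketch using the field names listed above (the particular formulas are illustrative, not anything the API prescribes):

```python
def bin_average(b: dict) -> float:
    """Mean sentiment for one hourly bin; 0.0 when the bin is empty."""
    n = b["headline_count"]
    return b["sentiment_sum"] / n if n else 0.0

def bin_conviction(b: dict) -> float:
    """Net-to-gross ratio: +1 uniformly bullish, -1 uniformly bearish,
    near 0 when bullish and bearish headlines cancel out."""
    gross = b["sentiment_abs_sum"]
    return b["sentiment_sum"] / gross if gross else 0.0

def dispersion_weighted(b: dict) -> float:
    """Shrink the average toward 0 when within-bin disagreement is high."""
    return bin_average(b) / (1.0 + b["sentiment_std"])
```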
cURL:

```shell
curl "https://copilot-api.permutable.ai/v1/headlines/index/ticker/live/BZ_COM" \
  -H "x-api-key: $PERMUTABLE_API_KEY"
```

PowerShell:

```powershell
Invoke-RestMethod `
  -Uri "https://copilot-api.permutable.ai/v1/headlines/index/ticker/live/BZ_COM" `
  -Headers @{ "x-api-key" = $env:PERMUTABLE_API_KEY }
```

Python:

```python
import os, requests

resp = requests.get(
    "https://copilot-api.permutable.ai/v1/headlines/index/ticker/live/BZ_COM",
    headers={"x-api-key": os.environ["PERMUTABLE_API_KEY"]},
)
for bin in resp.json():
    # construct your own index value from the raw components
    avg = bin["sentiment_sum"] / bin["headline_count"] if bin["headline_count"] else 0
    print(bin["publication_time"], bin["topic_name"], avg)
```

Fetch live headline-level data
If you want full control over aggregation, use the headline feed to get individual article-level records. Each carries a raw sentiment_score between -1 and 1, topic classification, and bullish/bearish/neutral probabilities.
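With record-level data you can aggregate however you like; as one example, a per-topic tally of bullish-minus-bearish probability (a sketch: the `bullish`/`bearish` field names are our assumption for the probability fields, and the sample records are illustrative):

```python
from collections import defaultdict

def bullish_share(headlines):
    """Mean bullish-minus-bearish probability per topic.

    Assumes each record carries `topic_name` plus probability fields
    named `bullish` and `bearish` (check the live response for the
    exact field names in your account).
    """
    sums = defaultdict(float)
    counts = defaultdict(int)
    for h in headlines:
        sums[h["topic_name"]] += h["bullish"] - h["bearish"]
        counts[h["topic_name"]] += 1
    return {t: sums[t] / counts[t] for t in sums}
```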
cURL:

```shell
curl "https://copilot-api.permutable.ai/v1/headlines/feed/live/ticker/BZ_COM" \
  -H "x-api-key: $PERMUTABLE_API_KEY"
```

PowerShell:

```powershell
Invoke-RestMethod `
  -Uri "https://copilot-api.permutable.ai/v1/headlines/feed/live/ticker/BZ_COM" `
  -Headers @{ "x-api-key" = $env:PERMUTABLE_API_KEY }
```

Python:

```python
import os, requests

resp = requests.get(
    "https://copilot-api.permutable.ai/v1/headlines/feed/live/ticker/BZ_COM",
    headers={"x-api-key": os.environ["PERMUTABLE_API_KEY"]},
)
for headline in resp.json():
    print(headline["publication_time"], headline["topic_name"], headline["sentiment_score"])
```

Backfill a rolling window
Use the historical endpoints to fill any gaps — for example, on startup or after a missed poll.
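Knowing what to backfill means comparing the hourly timestamps you already hold against the full hourly grid; a small stdlib sketch (the helper is ours, not part of the API):

```python
from datetime import datetime, timedelta

def missing_hours(have, start, end):
    """Hourly timestamps in [start, end) absent from the set `have`;
    each gap can then be filled with a historical-endpoint call."""
    gaps = []
    t = start
    while t < end:
        if t not in have:
            gaps.append(t)
        t += timedelta(hours=1)
    return gaps
```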
cURL:

```shell
curl "https://copilot-api.permutable.ai/v1/headlines/index/ticker/historical/BZ_COM?start_date=2024-01-01" \
  -H "x-api-key: $PERMUTABLE_API_KEY"
```

PowerShell:

```powershell
Invoke-RestMethod `
  -Uri "https://copilot-api.permutable.ai/v1/headlines/index/ticker/historical/BZ_COM?start_date=2024-01-01" `
  -Headers @{ "x-api-key" = $env:PERMUTABLE_API_KEY }
```

Python:

```python
import os, requests
from datetime import date, timedelta

# backfill the last seven days
start_date = (date.today() - timedelta(days=7)).isoformat()
resp = requests.get(
    "https://copilot-api.permutable.ai/v1/headlines/index/ticker/historical/BZ_COM",
    headers={"x-api-key": os.environ["PERMUTABLE_API_KEY"]},
    params={"start_date": start_date},
)
print(resp.json())
```

Trigger an alert from model output
A common production pattern is to run your model after each poll and fire an alert when the output crosses a threshold. The snippet below fetches the latest index data, computes a simple average sentiment, and sends a Slack notification if the signal is strong. Run this on a cron schedule that matches your strategy's rebalancing frequency.
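One wrinkle with cron-driven alerts: if the signal sits above the threshold for hours, every run re-fires the same alert. Persisting the last alerted direction between runs and only firing on a change avoids that; a sketch (the state-handling convention is ours, not part of the API):

```python
def should_alert(avg_sentiment, threshold, last_direction):
    """Decide whether to notify, given the direction alerted last time.

    last_direction: "bullish", "bearish", or None; persist it between
    cron runs (a small state file, a Redis key, etc.).
    Returns (fire, direction_to_persist).
    """
    if abs(avg_sentiment) < threshold:
        return False, None  # signal faded; re-arm for the next crossing
    direction = "bullish" if avg_sentiment > 0 else "bearish"
    return direction != last_direction, direction
```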
```python
import os, requests

HEADERS = {"x-api-key": os.environ["PERMUTABLE_API_KEY"]}
SLACK_WEBHOOK = os.environ["SLACK_WEBHOOK_URL"]
TICKER = "BZ_COM"
ALERT_THRESHOLD = 0.3

resp = requests.get(
    f"https://copilot-api.permutable.ai/v1/headlines/index/ticker/live/{TICKER}",
    headers=HEADERS,
)
bins = resp.json()

# compute average sentiment across all bins
total_sum = sum(b["sentiment_sum"] for b in bins if b["headline_count"])
total_count = sum(b["headline_count"] for b in bins)
avg_sentiment = total_sum / total_count if total_count else 0

if abs(avg_sentiment) >= ALERT_THRESHOLD:
    direction = "bullish" if avg_sentiment > 0 else "bearish"
    requests.post(SLACK_WEBHOOK, json={
        "text": f"Sentiment alert: {TICKER} avg sentiment {avg_sentiment:.3f} ({direction})"
    })
```

Store data into a data lake
For downstream processes — model retraining, backtesting, or reporting — write each poll's results into your data store. The snippet below fetches the live index and appends it to a local CSV file, which can be replaced with any database write. The Data Lake Pipeline recipe shows a complete scheduled pipeline with deduplication and multi-ticker support.
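The dedup step generalises beyond CSV: keep the first record seen per (publication_time, topic_name) key. If your store has no drop_duplicates equivalent, the same logic is easy in plain Python; a storage-agnostic sketch (the helper is ours):

```python
def merge_dedup(existing, new_rows):
    """Append new rows to existing ones, keeping the first record seen
    per (publication_time, topic_name) key."""
    seen = set()
    merged = []
    for row in existing + new_rows:
        key = (row["publication_time"], row["topic_name"])
        if key not in seen:
            seen.add(key)
            merged.append(row)
    return merged
```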
```python
import os, requests, pandas as pd
from pathlib import Path

HEADERS = {"x-api-key": os.environ["PERMUTABLE_API_KEY"]}
STORE = Path("sentiment_index.csv")

resp = requests.get(
    "https://copilot-api.permutable.ai/v1/headlines/index/ticker/live/BZ_COM",
    headers=HEADERS,
)
new_rows = pd.DataFrame(resp.json())

if STORE.exists():
    existing = pd.read_csv(STORE, parse_dates=["publication_time"])
    combined = pd.concat([existing, new_rows]).drop_duplicates(
        subset=["publication_time", "topic_name"]
    )
else:
    combined = new_rows

combined.to_csv(STORE, index=False)
print(f"Stored {len(new_rows)} rows — {len(combined)} total")
```

Next steps
- API Reference — full parameter details for index and macro endpoints
- Data Lake Pipeline — end-to-end recipe: scheduled ingest into a data lake
- Recipes — all end-to-end systematic workflow examples
