Public API Reference
Access your LogLens analytics data programmatically. Build custom dashboards, integrate with your tools, or automate reporting.
Base URL
https://api.loglens.ai
1 Authentication
All API requests require authentication using an API key. API keys can be created in your Organization Settings.
Pass your API key in the Authorization header:
Authorization: Bearer llapi_your_api_key_here
Note: API keys start with llapi_ to distinguish them from ingest keys.
2 Rate Limits
Rate limits are applied per organization, per hour. The limit depends on your plan:
| Plan | Requests/Hour |
|---|---|
| Free | No API access |
| Starter | 100 |
| Professional | 500 |
| Enterprise | 2,000 |
Rate limit headers are included in every response:
X-RateLimit-Limit: 500
X-RateLimit-Remaining: 499
X-RateLimit-Reset: 2024-01-21T18:00:00Z
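Clients can use these headers to throttle themselves before hitting the limit. A minimal Python sketch (the helper name is illustrative, not part of the API):

```python
from datetime import datetime, timezone

def seconds_until_reset(headers):
    """How long to pause once X-RateLimit-Remaining reaches zero.

    Takes a dict of response headers as documented above; returns 0.0
    while requests remain in the current window.
    """
    remaining = int(headers.get("X-RateLimit-Remaining", "1"))
    if remaining > 0:
        return 0.0
    # X-RateLimit-Reset is an ISO 8601 timestamp in UTC
    reset = datetime.fromisoformat(
        headers["X-RateLimit-Reset"].replace("Z", "+00:00")
    )
    return max(0.0, (reset - datetime.now(timezone.utc)).total_seconds())
```

Call this after each response and sleep for the returned number of seconds before the next request.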
3 Error Handling
The API uses standard HTTP status codes and returns JSON error responses:
| Status | Description |
|---|---|
| 200 | Success |
| 401 | Invalid or missing API key |
| 403 | API access not available on your plan |
| 404 | Website not found |
| 429 | Rate limit exceeded |
| 500 | Internal server error |
{
"detail": "Rate limit exceeded",
"limit": 100,
"reset_at": "2024-01-21T18:00:00Z"
}
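A small Python sketch of how a client might map these statuses to actions (the function name and return strings are illustrative, not part of the API):

```python
def classify_response(status, body):
    """Map a documented status code to a suggested client action.

    `body` is the parsed JSON error response; on 429 it carries
    "detail", "limit", and "reset_at" as shown above.
    """
    if status == 200:
        return "ok"
    if status == 429:
        return f"retry after {body.get('reset_at', 'unknown')}"
    if status in (401, 403):
        return "check API key / plan"
    if status == 404:
        return "check website id"
    return "server error, retry with backoff"
```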
4 Date Ranges
All analytics endpoints accept a time range. Use hours for a rolling window, or start and end for a specific date range (ISO 8601 format).
GET /websites/{id}/bots?hours=168
GET /websites/{id}/bots?start=2026-03-05&end=2026-03-27
If neither is provided, the default is hours=24. When start/end are present, hours is ignored.
Googlebot Split Variants
The bots endpoint accepts split_variants=true to show Googlebot Desktop and Googlebot Smartphone as separate entries instead of merged:
GET /websites/{id}/bots?hours=168&split_variants=true
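The precedence rules above can be captured in a small params builder (a sketch; the helper name is illustrative):

```python
def range_params(hours=None, start=None, end=None, split_variants=False):
    """Build the query params for an analytics endpoint.

    Per the docs: start/end take precedence and hours is then ignored;
    with neither provided, the default is hours=24.
    """
    params = {}
    if start and end:
        params["start"], params["end"] = start, end
    else:
        params["hours"] = hours if hours is not None else 24
    if split_variants:
        params["split_variants"] = "true"
    return params
```

Pass the result as the `params` argument to your HTTP client.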
Command-Line Interface (CLI)
Access your LogLens analytics from the terminal. Query traffic, bots, SEO data, and more — with table, JSON, and CSV output.
Installation
npm install -g @loglens/cli
Setup
# Save your API key
loglens config set-key YOUR_API_KEY
# Set a default website (optional)
loglens config set-website YOUR_WEBSITE_ID
Examples
# List websites
loglens websites
# Bot breakdown (last 7 days, JSON output)
loglens bots -w <id> -h 168 --json
# SEO crawl budget by directory
loglens seo budget-urls -w <id> --dir /blog/
# Export paths as CSV
loglens paths -w <id> --csv > paths.csv
# Pipe to jq
loglens bots -w <id> --json | jq '.[].name'
# Query a specific date range
loglens bots -w <id> --start 2026-03-05 --end 2026-03-27
# Show Googlebot Desktop and Smartphone separately
loglens bots -w <id> --split-variants
Run loglens --help for the full list of commands. See the help docs for detailed usage.
MCP Server Integration
Connect LogLens to AI assistants like Claude, Cursor, and other tools that support the Model Context Protocol (MCP).
The MCP server exposes all API endpoints as tools that AI assistants can call automatically. All tools support start, end, and hours date parameters.
MCP Endpoint
https://mcp-loglens.com/mcp?apiKey=YOUR_API_KEY
Setup
Install the mcp-remote bridge (requires Node.js 18+):
npm install -g mcp-remote
Claude Desktop
Add to your claude_desktop_config.json:
{
"mcpServers": {
"loglens": {
"command": "npx",
"args": [
"mcp-remote",
"https://mcp-loglens.com/mcp?apiKey=YOUR_API_KEY"
]
}
}
}
If npx isn't found, use the full path to node and mcp-remote. Run which node and which mcp-remote to find them.
Cursor
Go to Settings → MCP Servers and add the endpoint URL:
https://mcp-loglens.com/mcp?apiKey=YOUR_API_KEY
Other MCP Clients
Any MCP-compatible client can connect using the endpoint URL. Clients that don't support remote servers directly can use the mcp-remote bridge as shown above.
Available Tools
23 tools covering all LogLens analytics — traffic, bots, SEO, crawl budget, index coverage, and more. AI assistants will discover these automatically when connected.
| Tool | Description |
|---|---|
| list_websites | List all websites in your account |
| get_summary | Traffic summary for a website |
| get_traffic | Traffic time-series data |
| get_bots | Bot and crawler breakdown (supports split_variants for Googlebot Desktop/Smartphone) |
| get_seo | SEO crawler analytics |
| get_budget_urls | Per-URL crawl budget breakdown |
| get_url_patterns | Auto-detected URL patterns |
| get_index_coverage | Google index coverage summary |
Plus 15 more tools for paths, geography, status codes, IPs, referrers, devices, robots.txt, sitemap history, and site events.
List Websites
Returns all websites accessible to your API key.
GET /public/v1/websites
Response
{
"websites": [
{
"id": "ws_abc123",
"domain": "example.com",
"name": "Main Website",
"created_at": "2024-01-01T00:00:00Z"
}
],
"count": 1
}
Get Summary
Returns summary statistics for a website.
GET /public/v1/websites/{website_id}/summary
Parameters
| Name | Type | Description |
|---|---|---|
| hours | integer | Time period in hours (default: 24, max: 8760) |
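A Python sketch of building this request; `summary_request` is an illustrative helper, and the hours cap comes from the documented maximum:

```python
def summary_request(website_id, hours=24):
    """Return (url, params) for the summary endpoint.

    Clamps hours to the documented maximum of 8760 (one year).
    Pass the result to your HTTP client along with the
    Authorization: Bearer header.
    """
    return (
        f"https://api.loglens.ai/public/v1/websites/{website_id}/summary",
        {"hours": min(hours, 8760)},
    )
```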
Get Traffic
Returns traffic data with hourly breakdown.
GET /public/v1/websites/{website_id}/traffic?hours=24
Get Bots
Returns bot analytics including identification and verification status.
GET /public/v1/websites/{website_id}/bots?hours=24
Get Paths
Returns path/URL analytics sorted by request count.
GET /public/v1/websites/{website_id}/paths?hours=24&limit=100
Parameters
| Name | Type | Description |
|---|---|---|
| hours | integer | Time period (default: 24) |
| limit | integer | Max results (default: 100, max: 1000) |
Get Geography
Returns geographic distribution of traffic by country and city.
GET /public/v1/websites/{website_id}/geography?hours=24
Get Status Codes
Returns HTTP status code distribution with hourly breakdown.
GET /public/v1/websites/{website_id}/status-codes?hours=24
Get IPs
Returns IP address and IP range analytics.
GET /public/v1/websites/{website_id}/ips?hours=24
Get Referrers
Returns referrer domain analytics: see which sites are sending traffic to you.
GET /public/v1/websites/{website_id}/referrers?hours=24&limit=100
Query Parameters
- hours: Time period (1-8760, default: 24)
- limit: Maximum referrers to return (1-500, default: 100)
Get Devices
Returns device, browser, and operating system analytics.
GET /public/v1/websites/{website_id}/devices?hours=24
Response includes
- browsers: Browser breakdown (Chrome, Safari, Firefox, etc.)
- operating_systems: OS breakdown (Windows, macOS, iOS, Android, etc.)
- device_types: Device type breakdown (Desktop, Mobile, Tablet)
Get SEO Stats
Returns search engine crawler statistics (Googlebot, Bingbot, etc.).
GET /public/v1/websites/{website_id}/seo?hours=24&bot=googlebot
Query Parameters
- hours: Time period (1-8760, default: 24)
- bot: Filter by specific bot (optional, e.g. "googlebot", "bingbot")
Response includes
- crawler_requests: Total crawler requests
- verified_requests: Verified (legitimate) crawler requests
- unverified_suspicious: Potentially spoofed crawler requests
- avg_response_time_ms: Average response time to crawlers
- top_crawlers: Breakdown by crawler
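Using the documented response fields, a client can derive the share of crawler traffic that passed verification (a sketch; the helper name and sample numbers are illustrative):

```python
def verified_share(seo):
    """Fraction of crawler requests that passed verification.

    Reads the documented crawler_requests / verified_requests fields;
    returns 0.0 when there was no crawler traffic.
    """
    total = seo.get("crawler_requests", 0)
    return seo.get("verified_requests", 0) / total if total else 0.0
```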
LLM Crawler Analytics
Get LLM and AI crawler activity for your website.
GET /public/v1/websites/{website_id}/llms?start_date=2024-01-01&end_date=2024-01-31
Parameters
| Name | Type | Description |
|---|---|---|
| start_date | string | Start date in YYYY-MM-DD format |
| end_date | string | End date in YYYY-MM-DD format |
Response
{
"llm_crawlers": [
{
"bot_name": "GPTBot",
"requests": 12847,
"pages_accessed": 3421,
"avg_response_time": 245.3,
"verified": true,
"first_seen": "2024-01-02T08:14:22Z",
"last_seen": "2024-01-31T22:41:09Z"
},
{
"bot_name": "ClaudeBot",
"requests": 8234,
"pages_accessed": 2105,
"avg_response_time": 189.7,
"verified": true,
"first_seen": "2024-01-05T11:32:48Z",
"last_seen": "2024-01-31T19:58:33Z"
}
]
}
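A small Python sketch of aggregating this response, for example to total AI-crawler volume across bots (the helper name is illustrative):

```python
def total_llm_requests(payload):
    """Sum the requests field across all llm_crawlers entries
    in the documented response shape."""
    return sum(c["requests"] for c in payload.get("llm_crawlers", []))
```

With the sample response above, this yields 12847 + 8234 = 21081 total requests.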
Get Sitemap Coverage
Returns sitemap coverage data: which URLs from your sitemap have been crawled by search engines, how often, and their current status.
GET /public/v1/websites/{website_id}/seo/sitemap?page=1&page_size=50&status=never_crawled
Query Parameters
- page: Page number (default: 1)
- page_size: Results per page (1-500, default: 50)
- status: Filter: all, never_crawled, recently_crawled, stale, not_in_sitemap
- sort: Sort by: path, times_crawled, last_crawled, first_seen
- sort_dir: Sort direction: asc, desc
Response fields (per URL)
- url: Full URL from sitemap
- status: Crawl status: crawled or not_crawled
- last_crawl_date: Timestamp of most recent crawl
- crawl_count: Total number of times this URL has been crawled
- content_type: Content type of the URL (e.g. text/html)
- response_code: HTTP response code from last crawl
Index Coverage
Get Google index coverage summary for your website.
GET /public/v1/websites/{website_id}/index-coverage
Response
{
"buckets": {
"crawled_indexed": 1842,
"crawled_not_indexed": 356,
"not_crawled_indexed": 23,
"not_crawled_not_indexed": 491
}
}
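From the four documented buckets a client can compute an indexed ratio (a sketch; the helper name is illustrative):

```python
def indexed_ratio(buckets):
    """Share of known URLs that Google has indexed.

    Indexed = crawled_indexed + not_crawled_indexed, divided by the
    total across all four documented buckets.
    """
    indexed = buckets["crawled_indexed"] + buckets["not_crawled_indexed"]
    total = sum(buckets.values())
    return indexed / total if total else 0.0
```

With the sample response above, (1842 + 23) / 2712 is roughly 0.688, i.e. about 69% of known URLs are indexed.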
Index Coverage URLs
Get paginated list of URLs in an index coverage bucket.
GET /public/v1/websites/{website_id}/index-coverage/urls?bucket=crawled_indexed&page=1&page_size=50
Parameters
| Name | Type | Description |
|---|---|---|
| bucket | string | Required. One of: crawled_indexed, crawled_not_indexed, not_crawled_indexed, not_crawled_not_indexed |
| page | integer | Page number (default: 1) |
| page_size | integer | Results per page (default: 50) |
Response
{
"urls": [
{
"url": "https://example.com/blog/post-1",
"last_crawl_date": "2024-01-28T14:22:00Z",
"last_status_code": 200
}
],
"page": 1,
"page_size": 50,
"total": 1842
}
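Walking every page of a bucket can be sketched as follows. The fetcher is injected so the loop works with any HTTP client; `iter_bucket_urls` is an illustrative helper, not part of the API:

```python
def iter_bucket_urls(fetch_page):
    """Yield every URL entry across all pages of one bucket.

    `fetch_page(page)` must return the documented response shape:
    {"urls": [...], "page": n, "page_size": n, "total": n}.
    """
    page = 1
    while True:
        data = fetch_page(page)
        yield from data["urls"]
        # stop once this page reaches or passes the documented total
        if page * data["page_size"] >= data["total"]:
            break
        page += 1
```

In practice `fetch_page` would issue the GET above with `bucket`, `page`, and `page_size` query parameters.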
List Exports
List data export jobs for your website.
GET /public/v1/websites/{website_id}/exports
Response
{
"exports": [
{
"id": "exp_abc123",
"status": "completed",
"created_at": "2024-01-20T10:30:00Z",
"download_url": "https://api.loglens.ai/public/v1/exports/exp_abc123/download"
}
]
}
Create Export
Create a new data export job.
POST /public/v1/websites/{website_id}/exports
Request Body
| Name | Type | Description |
|---|---|---|
| type | string | Export type: traffic, bots, paths, ips, status_codes, referrers, devices |
| start_date | string | Start date in YYYY-MM-DD format |
| end_date | string | End date in YYYY-MM-DD format |
| format | string | Export format: csv (default: csv) |
Response
{
"id": "exp_def456",
"status": "pending"
}
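Assembling the request body from the documented fields can be sketched as (the helper name and validation are illustrative; the server performs its own validation):

```python
def export_request_body(export_type, start_date, end_date, fmt="csv"):
    """Build the documented export request body.

    Rejects types not listed in the reference; dates are expected
    in YYYY-MM-DD format.
    """
    allowed = {"traffic", "bots", "paths", "ips",
               "status_codes", "referrers", "devices"}
    if export_type not in allowed:
        raise ValueError(f"unknown export type: {export_type}")
    return {"type": export_type, "start_date": start_date,
            "end_date": end_date, "format": fmt}
```

POST the result as JSON, then poll List Exports until the job's status is "completed" and a download_url appears.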
List Alerts
Get alert history for your website.
GET /public/v1/websites/{website_id}/alerts
Response
{
"alerts": [
{
"id": "alt_abc123",
"type": "traffic_spike",
"severity": "warning",
"message": "Traffic increased 340% compared to baseline",
"created_at": "2024-01-20T14:22:00Z",
"acknowledged": false
}
]
}
Get Alert Configuration
Get current alert configuration.
GET /public/v1/websites/{website_id}/alerts/config
Response
{
"traffic": {
"enabled": true,
"spike_threshold_percent": 200,
"drop_threshold_percent": 50
},
"error": {
"enabled": true,
"error_rate_threshold_percent": 5,
"status_codes": [500, 502, 503, 504]
},
"bot": {
"enabled": true,
"unverified_bot_threshold_percent": 30,
"new_bot_detection": true
}
}
Search IP Requests
Returns all requests made by a specific IP address. Useful for investigating suspicious activity.
GET /public/v1/websites/{website_id}/ips/{ip_address}/requests?hours=24&page=1
Query Parameters
- hours: Time period (1-168, default: 24, max 7 days)
- page: Page number (default: 1)
- page_size: Results per page (1-500, default: 50)
Response includes (for each request)
- timestamp: Request timestamp
- path: URL path requested
- method: HTTP method
- status: HTTP status code
- user_agent: User agent string
- is_bot: Whether request was from a bot
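When investigating an IP, a common first step is isolating its bot-flagged requests. A sketch using the documented per-request fields (the helper name is illustrative):

```python
def bot_requests(request_list):
    """Return only the requests flagged is_bot from a page of
    results in the documented per-request shape."""
    return [r for r in request_list if r.get("is_bot")]
```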
Example Usage
cURL
curl -X GET \
"https://api.loglens.ai/public/v1/websites" \
-H "Authorization: Bearer llapi_your_key_here"
JavaScript / Node.js
const response = await fetch(
'https://api.loglens.ai/public/v1/websites',
{
headers: {
'Authorization': `Bearer ${apiKey}`
}
}
);
const data = await response.json();
console.log(data.websites);
Python
import requests
response = requests.get(
"https://api.loglens.ai/public/v1/websites",
headers={"Authorization": f"Bearer {api_key}"}
)
data = response.json()
print(data["websites"])