Introduction

Getting Started

Nerovai is a lightweight SDK that sits between your application and any AI provider - tracking tokens, costs, and latency in real time. Setup takes two minutes.

🔒
Privacy first. Nerovai only tracks metadata - model name, token counts, cost, latency, and timestamps. Your prompts, completions, and user data are never sent to our servers.

What Nerovai tracks

For every AI API call wrapped with track(), Nerovai captures:

Field Type Description
provider string Auto-detected from the API base URL (e.g. "openai", "anthropic")
model string The model identifier from the response (e.g. "gpt-4o")
tokens_in number Prompt / input tokens consumed
tokens_out number Completion / output tokens generated
cost_usd number Calculated cost in USD based on current published pricing
latency_ms number End-to-end wall-clock time from call start to response received

2-minute quickstart

Install the package, set your API key, and wrap your first call. That's it - your dashboard starts populating immediately.

bash
npm install nerovai
.env
NEROVAI_API_KEY=nrv_your_api_key_here
javascript
// quickstart.js
const { track } = require('nerovai')
const OpenAI = require('openai')

const openai = new OpenAI()

async function main() {
  // Just wrap your existing call with track() - nothing else changes
  const response = await track(
    openai.chat.completions.create({
      model: 'gpt-4o',
      messages: [{ role: 'user', content: 'Hello' }]
    }),
    { project_id: 'your-project-id' }
  )

  // response is untouched - use it exactly as before
  console.log(response.choices[0].message.content)
}

main()
✓
That's it. Open your dashboard at /app and you'll see your first tracked call within seconds.
Setup

Installation

Nerovai works in any Node.js environment - Express, Next.js, serverless functions, CLI scripts, anything.

Install the package

Install from npm. We recommend pinning to a minor version.

bash
# npm
npm install nerovai

# yarn
yarn add nerovai

# pnpm
pnpm add nerovai

Configure your API key

Get your API key from the dashboard at /app → Settings → API Keys. Then set it as an environment variable:

.env
NEROVAI_API_KEY=nrv_your_api_key_here

# Optional: override the default endpoint
NEROVAI_ENDPOINT=https://api.nerovai.com
ℹ️
The SDK reads NEROVAI_API_KEY automatically on startup. You can also pass the key explicitly via require('nerovai').init({ apiKey: '...' }) if you prefer programmatic config.

Complete working example

Below is a full example with OpenAI. The pattern is identical for every other provider - just swap the client.

javascript
// example.js
require('dotenv/config')
const { track, calculateCost } = require('nerovai')
const OpenAI = require('openai')

const openai = new OpenAI()

async function main() {
  const response = await track(
    openai.chat.completions.create({
      model: 'gpt-4o',
      messages: [
        { role: 'system', content: 'You are a helpful assistant.' },
        { role: 'user', content: 'Summarise the Gettysburg Address in 3 bullets.' }
      ]
    }),
    {
      project_id: 'my-summarizer',
      endpoint_name: 'summarize',
      metadata: { user_id: 'u_123', env: 'production' }
    }
  )

  console.log(response.choices[0].message.content)
}

main()

TypeScript

Nerovai ships with full TypeScript definitions. No @types package needed.

typescript
import { track } from 'nerovai'
import OpenAI from 'openai'

const openai = new OpenAI()

const response = await track<OpenAI.Chat.ChatCompletion>(
  openai.chat.completions.create({
    model: 'gpt-4o',
    messages: [{ role: 'user', content: 'Hello' }]
  }),
  { project_id: 'my-project' }
)

// response is typed as OpenAI.Chat.ChatCompletion
SDK

SDK Reference

The Nerovai SDK exports two primary functions and an optional initialiser.

track(operation, options?)

Wraps an AI API call, captures its metadata, and streams the data to your Nerovai dashboard. Returns the original API response completely untouched - your existing code needs zero changes beyond adding the wrapper.

typescript
function track<T>(
  operation: Promise<T>,
  options?: TrackOptions
): Promise<T>

Parameters

Parameter Type Required Description
operation Promise<T> required The AI API call to track. Pass the Promise directly - do not await it before passing.
options.project_id string optional Associate this call with a project in your dashboard. Create projects at /app → Projects.
options.endpoint_name string optional Label for the endpoint or feature making the call (e.g. "summarize", "chat", "classify"). Helps group costs by feature.
options.metadata object optional Arbitrary key-value pairs attached to the log entry. Useful for user IDs, request IDs, environment flags. Values must be strings, numbers, or booleans.
options.cost number optional Override the automatically calculated cost (in USD). Use this if you have a negotiated pricing agreement with a provider.
options.provider string optional Override the auto-detected provider name. Useful when using proxy clients or custom OpenAI-compatible endpoints.

Returns

A Promise<T> that resolves to exactly the same value as the original API call. The tracking happens as a non-blocking side-effect and does not alter the response in any way.
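The pass-through contract can be pictured with a conceptual sketch. This is not Nerovai's actual implementation - `sendLog` is a hypothetical stand-in for the SDK's internal log shipper, and the real SDK captures more fields:

```javascript
// Conceptual sketch of track()'s pass-through behaviour - illustration only.
// sendLog is a hypothetical placeholder for the SDK's internal log shipper.
async function trackSketch(operation, options = {}) {
  const started = Date.now()
  const response = await operation              // errors propagate to the caller
  const latency_ms = Date.now() - started

  // Fire-and-forget side effect: a logging failure can never break the caller
  Promise.resolve()
    .then(() => sendLog({ ...options, latency_ms }))
    .catch(() => {})

  return response                               // the original value, untouched
}
```

Note the wrapped promise's resolution (or rejection) is returned as-is; only the timing around it is observed.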

calculateCost(model, promptTokens, completionTokens)

Utility function to compute the USD cost for a given number of tokens without making an API call. Useful for cost estimation before making a request.

typescript
function calculateCost(
  model: string,
  promptTokens: number,
  completionTokens: number
): number | null

// Example
const cost = calculateCost('gpt-4o', 500, 200)
// → 0.004 (USD)

// Returns null if the model is not in Nerovai's pricing table
const unknown = calculateCost('my-custom-model', 100, 50)
// → null

Environment Variables

The SDK is configured entirely via environment variables. No config file needed.

Variable Required Default Description
NEROVAI_API_KEY required - Your Nerovai API key. Get one from /app → Settings → API Keys.
NEROVAI_ENDPOINT optional https://api.nerovai.com Override the API endpoint. Used for proxies, testing, or future self-hosted deployments.
NEROVAI_SILENT optional false Set to true to suppress all SDK console output including warnings and errors.
NEROVAI_DEBUG optional false Set to true to enable verbose debug logging. Useful when troubleshooting missing data.
REST API

API Reference

The Nerovai REST API is available at https://api.nerovai.com. All endpoints require your API key in the x-api-key header unless noted otherwise.

ℹ️
The SDK handles all API calls for you. You only need to use the REST API directly if you're building a custom integration, calling from a language without an SDK, or building automation around your usage data.

Endpoints

Method Path Auth Description
POST /api/logs ✓ required Record a single AI API call. Used by the SDK internally.
GET /api/logs ✓ required List log entries. Supports filtering by project_id, provider, model, date range, and pagination.
GET /api/logs/:id ✓ required Retrieve a single log entry by ID.
GET /api/projects ✓ required List all projects in your account.
POST /api/projects ✓ required Create a new project.
GET /api/stats/summary ✓ required Get aggregated spend, call count, and token totals for a date range.
GET /api/stats/daily ✓ required Get per-day spend and call count - powers the dashboard chart.
POST /api/billing/create-checkout ✓ required Create a Stripe checkout session. Returns a { url } to redirect the user.
GET /api/billing/portal ✓ required Create a Stripe billing portal session for managing subscriptions and invoices.
GET /api/health open Returns {"status":"ok"}. Use for uptime monitoring.

POST /api/logs - example

Record an AI call manually. All fields except provider and model are optional.

bash
curl -X POST https://api.nerovai.com/api/logs \
  -H "x-api-key: YOUR_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "provider": "openai",
    "model": "gpt-4o",
    "tokens_in": 500,
    "tokens_out": 200,
    "cost_usd": 0.005,
    "latency_ms": 340,
    "project_id": "my-project",
    "endpoint_name": "summarize",
    "metadata": { "user_id": "u_123" }
  }'

Response

json
{
  "id": "log_01hx...",
  "provider": "openai",
  "model": "gpt-4o",
  "tokens_in": 500,
  "tokens_out": 200,
  "cost_usd": 0.005,
  "latency_ms": 340,
  "project_id": "my-project",
  "created_at": "2026-03-16T14:22:01.000Z"
}

GET /api/logs - query parameters

Param Type Description
project_id string Filter by project.
provider string Filter by provider (e.g. "openai", "anthropic").
model string Filter by model name.
from ISO 8601 Start of date range (inclusive).
to ISO 8601 End of date range (inclusive).
limit number Number of results per page (default 50, max 500).
cursor string Pagination cursor from previous response.
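Cursor pagination can be driven in a simple loop. The sketch below assumes a response shape of `{ data: [...], next_cursor: string }` - that shape is an assumption, so verify it against your actual responses. `fetchPage` is injected so the loop can be shown without a live API key:

```javascript
// Sketch of draining GET /api/logs via cursor pagination.
// Assumed page shape: { data: [...], next_cursor?: string } - verify against
// real responses. fetchPage(cursor) performs the actual HTTP call.
async function listAllLogs(fetchPage) {
  const all = []
  let cursor = undefined
  do {
    const page = await fetchPage(cursor)
    all.push(...page.data)
    cursor = page.next_cursor   // undefined on the last page ends the loop
  } while (cursor)
  return all
}
```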
Integrations

Providers

Nerovai supports 9 major AI providers out of the box. Provider detection is automatic - we parse the API base URL and response structure to identify who made the call.

Provider Models Auto cost calc
OpenAI gpt-4o, gpt-4o-mini, gpt-4-turbo, gpt-3.5-turbo, o1, o1-mini, text-embedding-3-small, text-embedding-3-large ✓
Anthropic claude-3-5-sonnet, claude-3-5-haiku, claude-3-opus, claude-3-sonnet, claude-3-haiku ✓
Google gemini-1.5-pro, gemini-1.5-flash, gemini-2.0-flash, gemini-2.5-pro, gemini-pro, text-embedding-004 ✓
Mistral mistral-large, mistral-small, mistral-nemo, codestral, mixtral-8x22b, pixtral-large ✓
DeepSeek deepseek-chat, deepseek-reasoner, deepseek-coder ✓
Groq llama-3.3-70b-versatile, llama-3.1-8b-instant, gemma2-9b-it, mixtral-8x7b-32768, whisper-large-v3 ✓
Kimi (Moonshot) moonshot-v1-8k, moonshot-v1-32k, moonshot-v1-128k, kimi-latest ✓
Cohere command-r-plus, command-r, command-light, embed-multilingual-v3.0, rerank-english-v3.0 ✓
Perplexity llama-3.1-sonar-large-128k-online, llama-3.1-sonar-small-128k-online, llama-3.1-sonar-large-128k-chat ✓
ℹ️
New provider? If you're using a provider or model not listed here, you can still log calls manually via the REST API or use options.provider and options.cost to supply values explicitly. Request new providers on our GitHub issues.
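The base-URL side of auto-detection can be pictured roughly like this. The host list and fallback below are illustrative guesses, not Nerovai's real detection logic (which also inspects the response structure):

```javascript
// Rough illustration of base-URL provider detection - hypothetical mapping,
// not the SDK's actual table or logic.
const HOST_HINTS = [
  ['api.openai.com', 'openai'],
  ['api.anthropic.com', 'anthropic'],
  ['generativelanguage.googleapis.com', 'google']
]

function detectProvider(baseURL) {
  const match = HOST_HINTS.find(([host]) => baseURL.includes(host))
  return match ? match[1] : null   // null -> fall back to options.provider
}
```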

OpenAI-compatible providers

Many providers expose an OpenAI-compatible API (Together, Fireworks, Anyscale, LM Studio, Ollama, etc.). These work automatically with Nerovai when you pass options.provider to identify the correct pricing:

javascript
const together = new OpenAI({
  apiKey: process.env.TOGETHER_API_KEY,
  baseURL: 'https://api.together.xyz/v1'
})

const res = await track(
  together.chat.completions.create({
    model: 'meta-llama/Llama-3-70b-chat-hf',
    ...
  }),
  {
    provider: 'together',
    cost: 0.0009 // manually specify cost if not in our table
  }
)
Help

FAQ

Answers to the most common questions about Nerovai.

Does Nerovai see my prompts or completions?

No - never. Nerovai only intercepts the metadata from your API calls: the model name, token counts, cost, latency, project ID, endpoint name, and any metadata you choose to pass. The actual content of your prompts, system messages, and completions is never extracted, transmitted, or stored. We architected it this way deliberately - we believe you should never have to trust a third party with your users' data. Our SDK's source is open for inspection on GitHub.
Which providers are supported?

We currently support OpenAI, Anthropic, Google (Gemini), Mistral, DeepSeek, Groq, Kimi (Moonshot), Cohere, and Perplexity. Provider detection is fully automatic - we identify the provider from the API endpoint and response structure, so you don't need to configure anything. For OpenAI-compatible providers (Together, Fireworks, Anyscale, Ollama, etc.), pass options.provider and optionally options.cost to set the values manually. We add new providers within days of significant launches - request one on our GitHub issues page.
How is cost calculated?

We maintain an internal pricing table that maps model IDs to their per-token input and output costs, sourced from each provider's official pricing page. When a tracked call completes, we read the token counts from the usage object in the response, multiply them by the current rates, and record the result. Our pricing table is updated within hours of any provider announcement. If a model isn't in our table yet, calculateCost() returns null and you can provide a manual cost override in track() options. We also expose cost overrides at the project level for teams with negotiated rates.
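The per-token arithmetic looks roughly like this - the rates below are placeholder figures for a made-up model, not values from Nerovai's real pricing table:

```javascript
// Toy pricing lookup illustrating the cost arithmetic described above.
// Rates are placeholder values in USD per 1M tokens - not real pricing.
const PRICING = {
  'example-model': { input: 2.5, output: 10.0 }
}

function estimateCost(model, tokensIn, tokensOut) {
  const rates = PRICING[model]
  if (!rates) return null   // unknown model, mirroring calculateCost()
  return (tokensIn * rates.input + tokensOut * rates.output) / 1e6
}
```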
Can I self-host Nerovai?

Self-hosting is on our public roadmap, planned for the Team+ tier in Q3 2026. The backend is built on Node.js, PostgreSQL, and Redis, which should make self-hosting straightforward for teams with existing infra. Today, all data is transmitted to our managed cloud over TLS 1.3. If self-hosting is a hard compliance requirement, reach out to us at team@nerovai.com - we're happy to discuss timelines, NDAs, and early-access options for qualifying teams.
How do I get started?

After signing up at /app, you'll land on your dashboard and see your API key immediately. Copy it into your environment as NEROVAI_API_KEY, run npm install nerovai, and wrap your first AI call with track(). Your dashboard starts populating within 5-10 seconds of the first tracked request. No onboarding call is required. If you run into any issues, we're on Discord (link in the dashboard) and respond to every message, usually within the hour during business days.