Overview

This tutorial shows how to set up Keywords AI tracing with Next.js and the Vercel AI SDK to monitor and trace your AI-powered applications.

Start with the pre-built example

We provide a pre-built example so you can get started quickly. You can find it in the Keywords-AI/keywordsai-example-projects repository on GitHub. Bootstrap it with npm, Yarn, or pnpm:

npx create-next-app --example https://github.com/Keywords-AI/keywordsai-example-projects/tree/main/vercel_ai_next_openai my-keywordsai-app
yarn create next-app --example https://github.com/Keywords-AI/keywordsai-example-projects/tree/main/vercel_ai_next_openai my-keywordsai-app
pnpm create next-app --example https://github.com/Keywords-AI/keywordsai-example-projects/tree/main/vercel_ai_next_openai my-keywordsai-app

Then:

  1. Add your API keys to .env.local (see Step 3 below)
  2. Run yarn dev to start the development server
  3. Start chatting and check your KeywordsAI dashboard

Start from scratch

If you want to understand the setup process, or add KeywordsAI tracing to an existing project, follow the step-by-step tutorial below to build your own application.

Execute create-next-app with npm, Yarn, or pnpm to bootstrap the base example:

npx create-next-app --example https://github.com/vercel/ai/tree/main/examples/next-openai next-openai-app
yarn create next-app --example https://github.com/vercel/ai/tree/main/examples/next-openai next-openai-app
pnpm create next-app --example https://github.com/vercel/ai/tree/main/examples/next-openai next-openai-app

To run the base example locally you need to:

  1. Sign up at OpenAI’s Developer Platform.
  2. Create an API key in OpenAI’s dashboard.
  3. If you choose to use external files for attachments, create a Vercel Blob store.
  4. Copy the environment variables from the example env file into a new file called .env.local and set each one to your own token values.
  5. Run pnpm install to install the required dependencies.
  6. Run pnpm dev to launch the development server.
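As a sketch, the resulting .env.local for the base example might look like the following. The values are placeholders, and BLOB_READ_WRITE_TOKEN is only needed if you created a Vercel Blob store; check the example’s env file for the exact variable names.

```
# .env.local — placeholder values, replace with your own
OPENAI_API_KEY=sk-...
# Only needed if you use Vercel Blob for attachments
BLOB_READ_WRITE_TOKEN=vercel_blob_rw_...
```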

KeywordsAI Telemetry Setup

Now let’s add KeywordsAI tracing to monitor your AI application’s performance and usage.

Step 1: Install KeywordsAI Exporter

Install the KeywordsAI exporter package:

npm install @keywordsai/exporter-vercel
yarn add @keywordsai/exporter-vercel
bun add @keywordsai/exporter-vercel
pnpm add @keywordsai/exporter-vercel

Step 2: Set up OpenTelemetry Instrumentation

Next.js supports OpenTelemetry instrumentation out of the box. Following the Next.js OpenTelemetry guide, create an instrumentation.ts file in your project root:

Install Vercel’s OpenTelemetry instrumentation package:

yarn add @vercel/otel

Create instrumentation.ts in the root of your project (the directory containing package.json):

instrumentation.ts
import { registerOTel } from "@vercel/otel";
import { KeywordsAIExporter } from "@keywordsai/exporter-vercel";

export function register() {
  registerOTel({
    serviceName: "next-app",
    traceExporter: new KeywordsAIExporter({ // <---- Use Keywords AI exporter as the custom exporter
      apiKey: process.env.KEYWORDSAI_API_KEY,
      baseUrl: process.env.KEYWORDSAI_BASE_URL,
      debug: true
    }),
  });
}
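Note: on Next.js 13.4 through 14, instrumentation.ts sits behind a feature flag, so you may also need to enable it in your Next.js config (Next.js 15 enables instrumentation by default). This fragment assumes the CommonJS config format:

```javascript
/** next.config.js — only needed on Next.js versions before 15 */
module.exports = {
  experimental: {
    // Lets Next.js load instrumentation.ts at startup
    instrumentationHook: true,
  },
};
```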

Step 3: Configure Environment Variables

Add your KeywordsAI credentials to your .env.local file:

.env.local
# OpenAI API Key (existing)
OPENAI_API_KEY=your_openai_api_key_here

# KeywordsAI Configuration
KEYWORDSAI_API_KEY=your_keywordsai_api_key_here
KEYWORDSAI_BASE_URL=https://api.keywordsai.co  # Optional: defaults to KeywordsAI API
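To illustrate how these variables are consumed, here is a hypothetical helper (not part of the exporter’s actual API) that resolves the configuration the same way the instrumentation snippet does, falling back to the default endpoint when KEYWORDSAI_BASE_URL is unset:

```typescript
// Hypothetical config resolver; mirrors the env vars above.
interface KeywordsAIConfig {
  apiKey: string;
  baseUrl: string;
}

function resolveKeywordsAIConfig(
  env: Record<string, string | undefined>
): KeywordsAIConfig {
  const apiKey = env.KEYWORDSAI_API_KEY;
  if (!apiKey) {
    // Fail fast so a missing key is caught at startup, not at trace time.
    throw new Error("KEYWORDSAI_API_KEY is not set");
  }
  return {
    apiKey,
    // Optional override; defaults to the KeywordsAI API endpoint.
    baseUrl: env.KEYWORDSAI_BASE_URL ?? "https://api.keywordsai.co",
  };
}

const config = resolveKeywordsAIConfig({ KEYWORDSAI_API_KEY: "sk-test" });
console.log(config.baseUrl); // → https://api.keywordsai.co
```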

Step 4: Enable Telemetry in API Routes

In your API route files (e.g., app/api/chat/route.ts), enable telemetry by adding the experimental_telemetry option to your AI SDK functions:

app/api/chat/route.ts
// app/api/chat/route.ts
import { openai } from '@ai-sdk/openai';
import { streamText } from 'ai';

export const maxDuration = 30;

export async function POST(req: Request) {
  const { messages, id } = await req.json();

  console.log('chat id', id);

  const result = streamText({
    model: openai('gpt-4o'),
    messages,
    async onFinish({ text, toolCalls, toolResults, usage, finishReason }) {
      // implement your own logic here, e.g. for storing messages
      // or recording token usage
    },
    experimental_telemetry: {
      isEnabled: true,  // <---- Enable telemetry tracking
    },
  });

  return result.toDataStreamResponse();
}
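Beyond isEnabled, the AI SDK’s experimental_telemetry option also accepts a functionId and arbitrary metadata, which show up on the recorded spans and can help you filter traces in the dashboard. A sketch of such an options object (the values here are illustrative, not required names):

```typescript
// Options you could pass as experimental_telemetry in streamText().
// functionId and metadata are AI SDK telemetry settings; the values are examples.
const telemetryOptions = {
  isEnabled: true,
  functionId: "chat-route", // logical name grouping these spans
  metadata: {
    userId: "user_123", // example custom attribute
    environment: "development",
  },
};

console.log(telemetryOptions.functionId); // → chat-route
```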

Step 5: Test Your Setup

  1. Start your development server:
pnpm dev
yarn dev

The @vercel/otel package may have missing peer dependencies; install them as they are reported, for example:

yarn add @opentelemetry/api-logs

If the install fails, make sure you are using the right package manager version, set via the packageManager field in package.json:

package.json
"packageManager": "yarn@4.9.2"

Then try again.

  2. Make some chat requests through your application

  3. Check your KeywordsAI dashboard:

  • Go to Signals -> Traces
  • Inspect the trace that was logged

What Gets Traced

With this setup, KeywordsAI will automatically capture:

  • AI Model Calls: All calls to OpenAI models through the AI SDK
  • Request/Response Data: Input messages and generated responses
  • Token Usage: Input and output token counts for cost tracking
  • Performance Metrics: Latency and throughput data
  • Error Tracking: Failed requests and error details
  • Custom Metadata: Any additional context you want to track

Learn More

To learn more about the technologies used in this tutorial: