Perplexity (Metric)

Category: Technical Terms

Definition

Perplexity measures how well a language model predicts text: the lower the perplexity, the higher the probability the model assigned to the text it actually saw.

How It Works

Perplexity calculates how "surprised" a model is by text. If the model assigns high probability to the actual next words, perplexity is low. If it's constantly wrong about what comes next, perplexity is high.

Think of it as a test score where lower is better: a perplexity of 10 is much better than 100. Roughly, a perplexity of k means the model is, on average, as uncertain as if it were choosing among k equally likely next tokens.
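The intuition above can be sketched in a few lines. This is a minimal illustration, not a production evaluation script: the `perplexity` helper and its sample probabilities are made up for the example, and it assumes you already have the probability the model assigned to each actual next token.

```python
import math

def perplexity(token_probs):
    """Perplexity = exp of the average negative log probability.

    token_probs: the probability the model assigned to each token
    that actually occurred in the text.
    """
    avg_neg_log = -sum(math.log(p) for p in token_probs) / len(token_probs)
    return math.exp(avg_neg_log)

# A model that is fairly confident about each next token scores low:
print(perplexity([0.5, 0.4, 0.6]))     # roughly 2

# A model that is constantly surprised scores high:
print(perplexity([0.01, 0.02, 0.05]))  # roughly 46
```

Note the uniform-choice intuition falls out directly: if the model assigns probability 1/k to every token, the perplexity is exactly k.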

Why It Matters

Perplexity provides an objective way to compare language models. Researchers use it to track progress and identify which training techniques work best.

It's one of the key metrics for evaluating whether a new model actually improves on existing ones.

