Practical AI for Work

How LLMs Actually Work


You've talked to Claude. Now let's open the hood — without any math.

Step 1: Text Becomes Numbers

Computers can't read words — they can only do math. So before anything happens, your message is split into pieces called tokens, and each token is turned into a number.

A token is roughly 4 characters of English text. It can be a full word, part of a word, or punctuation. Spaces and capitalization matter — they change the token.

What Claude Actually Sees

When you write:

Hello, marketing manager!

Claude receives:

["Hello", ",", " marketing", " manager", "!"]

Which becomes:

[9906, 11, 8661, 6783, 0]

Five tokens. Five numbers. That's the input.

Rule of thumb: 750 words is roughly 1,000 tokens. More tokens means higher cost and slower response.
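
If you want to see this for yourself, the numbers above happen to match the cl100k_base encoding from OpenAI's open-source tiktoken library (different models use different tokenizers, and Claude's is not public, but the idea is identical). A minimal sketch:

# pip install tiktoken
import tiktoken

# cl100k_base is one widely used encoding; other models tokenize differently.
enc = tiktoken.get_encoding("cl100k_base")

text = "Hello, marketing manager!"
ids = enc.encode(text)

print(ids)                             # [9906, 11, 8661, 6783, 0]
print([enc.decode([i]) for i in ids])  # ['Hello', ',', ' marketing', ' manager', '!']
print(f"{len(ids)} tokens")            # 5 tokens

The same trick lets you estimate the token count, and therefore the cost, of a long prompt before you send it.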

Step 2: Predict the Next Token

Once everything is numbers, the model does one thing: predict what comes next. Over and over again.

You write:

The capital of France is

The model considers what token is most likely to follow:

  • "Paris" — 92%;
  • " Paris" — 5%;
  • "Lyon" — 1%;
  • "the" — 0.5%.

It picks one of the likely options (usually, but not always, the most probable), adds it to the text, and repeats until it decides to stop.
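
To make the loop concrete, here's a toy sketch. The probability table is invented for illustration; a real model computes a fresh distribution over its entire vocabulary at every step using a neural network, not a lookup table.

import random

# Toy next-token table, invented for illustration only.
NEXT = {
    "The capital of France is": [
        (" Paris", 0.92), ("Paris", 0.05), (" Lyon", 0.01), (" the", 0.005),
    ],
}

def generate(text):
    while text in NEXT:
        tokens, weights = zip(*NEXT[text])
        # Sample in proportion to probability; "greedy" decoding would
        # always take the single most probable token instead.
        text += random.choices(tokens, weights=weights, k=1)[0]
    return text

print(generate("The capital of France is"))  # usually "... is Paris"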

Important: the model didn't pick Paris because it knows the answer. It picked it because in training data, this phrase was almost always followed by "Paris."

AI = pattern prediction, not knowledge.

Step 3: Where Patterns Come From

The model was trained on public internet text, books, and code — up to a certain cutoff date.

Training means adjusting probabilities: which tokens follow which, and in what context.
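
As a toy sketch of that idea, here's "training" reduced to counting: learn which word follows which in a tiny made-up corpus, then turn the counts into probabilities. Real training tunes billions of neural-network weights, but the principle of absorbing patterns from data is the same.

from collections import Counter, defaultdict

# A tiny made-up "training corpus".
corpus = "the capital of france is paris . the capital of italy is rome ."

# Count which word follows which.
counts = defaultdict(Counter)
words = corpus.split()
for prev, nxt in zip(words, words[1:]):
    counts[prev][nxt] += 1

# Turn counts into probabilities.
def next_probs(word):
    c = counts[word]
    total = sum(c.values())
    return {w: n / total for w, n in c.items()}

print(next_probs("is"))  # {'paris': 0.5, 'rome': 0.5}

Notice the toy model "knows" only what the corpus contained. If the corpus were wrong, the probabilities would be confidently wrong too.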

Two consequences:

  • The model has no knowledge of recent events, your company data, or anything after the cutoff.
  • It learned the internet, including its mistakes. If part of the training data was wrong, the model can repeat those mistakes confidently.

Why AI Hallucinates

A hallucination is when the model generates something that sounds confident and looks realistic — but is actually wrong.

Common examples: fake citations, made-up URLs, incorrect statistics.

This happens because the model's goal is to produce the most probable sequence of tokens, not the most accurate one. If something sounds right, it may generate it even if it's false.

The fix

The solution isn't just making the model smarter. It's giving it tools — web search, real data, documents — so it can rely on facts rather than patterns alone.

That's exactly what you saw in the previous chapter, and what you'll keep using going forward.
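
As a rough sketch of that pattern (every name here is a hypothetical placeholder, not a real API): fetch real sources first, then instruct the model to answer only from them.

# Hypothetical sketch of grounding an answer in real data.
# search_web and ask_model are placeholder stubs, not a real API.

def search_web(query: str) -> list[str]:
    # A real version would call a search or database API.
    return ["(stub) Source text relevant to: " + query]

def ask_model(prompt: str) -> str:
    # A real version would call an LLM API.
    return "(stub) Answer grounded in the sources above."

def answer_with_tools(question: str) -> str:
    sources = search_web(question)
    prompt = (
        "Answer using ONLY the sources below. "
        "If they don't contain the answer, say you don't know.\n\n"
        "Sources:\n" + "\n".join(sources) + "\n\n"
        "Question: " + question
    )
    return ask_model(prompt)

print(answer_with_tools("What changed in the regulation last month?"))

The key design choice: the model is asked to summarize facts it was handed, not to recall facts from its training data.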

Quick Check

A colleague says: "I asked Claude about a regulation that changed last month and it gave me the wrong answer — this tool is unreliable." What is the most accurate explanation for what happened?

Answer: the model's training data ends at a cutoff date, so it has no way to know about a change from last month unless it's given tools like web search. The tool isn't unreliable; it was asked about information outside its training data without access to current sources.
