Introduction to ChatGPT

ChatGPT Work Principles


ChatGPT, like other models in the GPT family, operates in a series of steps to generate text-based responses. Here's a simplified breakdown of how ChatGPT works:

Note

In the context of ChatGPT, an output token refers to a unit of text that the model generates as part of its response.
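To make the idea of tokens concrete, here is a toy illustration. The token boundaries below are made up for the example; ChatGPT's real tokenizer splits text into sub-word pieces according to its own learned vocabulary.

```python
# Hypothetical illustration only: output tokens are often sub-word pieces,
# not whole words. These splits are invented for the example -- a real
# tokenizer would produce its own boundaries.
response = "ChatGPT is helpful"
tokens = ["Chat", "GPT", " is", " help", "ful"]

# Concatenating the tokens in order reconstructs the full response.
assert "".join(tokens) == response
```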

How does ChatGPT determine probability distribution to generate the next output token?

ChatGPT determines the probability distribution for generating the next output token using its neural network architecture and pre-trained parameters.
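The core mechanism can be sketched in a few lines. This is a simplified stand-in, not the real model: the network produces one raw score ("logit") per token in its vocabulary, a softmax turns the scores into a probability distribution, and the next output token is sampled from that distribution. The vocabulary and logit values below are invented for illustration.

```python
import math
import random

def softmax(logits):
    """Convert raw scores into probabilities that sum to 1."""
    m = max(logits)  # subtract the max for numerical stability
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical 4-token vocabulary and logits (a real model has tens of
# thousands of tokens and computes logits with its neural network).
vocab = ["the", "cat", "sat", "mat"]
logits = [2.0, 1.0, 0.5, 0.1]

probs = softmax(logits)
next_token = random.choices(vocab, weights=probs, k=1)[0]
```

Note that the next token is sampled, not always the single most likely one, which is why the model can give different answers to the same prompt.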
Like any neural network, it was trained on data so that it could produce meaningful responses. Training ChatGPT consisted of two main steps:

  1. Firstly, it underwent a pre-training phase where it was exposed to a massive corpus of text from the internet. The model learned language patterns, grammar, and general knowledge during this phase. This pre-training process equipped ChatGPT with a broad understanding of language.
  2. Secondly, there was a fine-tuning phase. In this phase, the model was refined on specific datasets created by OpenAI. These datasets included demonstrations of correct behavior and comparisons to rank different responses. Fine-tuning helped customize the model's behavior, making it more suitable for generating safe and coherent responses in a conversational context. The combination of pre-training and fine-tuning contributed to ChatGPT's capabilities and behavior.
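The generation process itself is a loop: predict a distribution, pick a token, append it, and repeat. The sketch below illustrates that cycle with a hand-written bigram table standing in for ChatGPT's neural network; the tokens and probabilities are invented for the example.

```python
import random

# Toy stand-in for the model: for each current token, a probability
# distribution over possible next tokens.
BIGRAMS = {
    "<start>": {"the": 0.7, "a": 0.3},
    "the":     {"cat": 0.6, "mat": 0.4},
    "a":       {"cat": 0.5, "mat": 0.5},
    "cat":     {"sat": 0.9, "<end>": 0.1},
    "mat":     {"<end>": 1.0},
    "sat":     {"<end>": 1.0},
}

def generate(max_tokens=10):
    """Autoregressive loop: sample a token, append it, repeat."""
    tokens = ["<start>"]
    for _ in range(max_tokens):
        dist = BIGRAMS[tokens[-1]]
        choices, weights = zip(*dist.items())
        nxt = random.choices(choices, weights=weights, k=1)[0]
        if nxt == "<end>":  # stop when the model emits an end-of-text token
            break
        tokens.append(nxt)
    return tokens[1:]

print(" ".join(generate()))
```

ChatGPT works the same way at a much larger scale: each output token is chosen from a distribution conditioned on everything generated so far, until an end-of-text token or a length limit stops the loop.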