"# Embedding texts that are larger than the model's context length\n",
"# Embedding texts that are longer than the model's context length\n",
"\n",
"All models have a maximum context length for the input text they take in. However, this maximum length is defined in terms of _tokens_ instead of string length. If you are unfamiliar with tokenization, you can check out the [\"How to count tokens with tiktoken\"](How_to_count_tokens_with_tiktoken.ipynb) notebook in this same cookbook.\n",