![248433934-7886223b-c1d1-4260-82aa-da5741f303bb](https://github.com/xtekky/gpt4free/assets/98614666/ea012c87-76e0-496a-8ac4-e2de090cc6c9)

Written by [@xtekky](https://github.com/xtekky) & maintained by [@hlohaus](https://github.com/hlohaus)

> By using this repository or any code related to it, you agree to the [legal notice](LEGAL_NOTICE.md). The author is not responsible for any copies, forks, re-uploads made by other users, or anything else related to GPT4Free. This is the author's only account and repository. To prevent impersonation or irresponsible actions, please comply with the GNU GPL license this repository uses.

> [!NOTE]
> Latest version: [![PyPI version](https://img.shields.io/pypi/v/g4f?color=blue)](https://pypi.org/project/g4f) [![Docker version](https://img.shields.io/docker/v/hlohaus789/g4f?label=docker&color=blue)](https://hub.docker.com/r/hlohaus789/g4f)

```sh
pip install -U g4f
```

```sh
docker pull hlohaus789/g4f
```

## 🆕 What's New

- Join our Telegram Channel: [t.me/g4f_channel](https://telegram.me/g4f_channel)
- Join our Discord Group: [discord.gg/XfybzPXPH5](https://discord.gg/XfybzPXPH5)
- Explore the g4f Documentation (unfinished): [g4f.mintlify.app](https://g4f.mintlify.app) | Contribute to the docs via: [github.com/xtekky/gpt4free-docs](https://github.com/xtekky/gpt4free-docs)

## 📚 Table of Contents

- [🆕 What's New](#-whats-new)
- [📚 Table of Contents](#-table-of-contents)
- [🛠️ Getting Started](#-getting-started)
  + [Docker container](#docker-container)
    - [Quick start](#quick-start)
  + [Use python package](#use-python-package)
    - [Prerequisites](#prerequisites)
    - [Install using pypi](#install-using-pypi)
  + [Docker for Developers](#docker-for-developers)
- [💡 Usage](#-usage)
  * [The `g4f` Package](#the-g4f-package)
    + [ChatCompletion](#chatcompletion)
      - [Completion](#completion)
      - [Providers](#providers)
      - [Using Browser](#using-browser)
      - [Async Support](#async-support)
      - [Proxy and Timeout Support](#proxy-and-timeout-support)
  * [Interference openai-proxy API](#interference-openai-proxy-api-use-with-openai-python-package-)
    + [Run interference API from PyPi package](#run-interference-api-from-pypi-package)
    + [Run interference API from repo](#run-interference-api-from-repo)
- [🚀 Providers and Models](#-providers-and-models)
  * [GPT-4](#gpt-4)
  * [GPT-3.5](#gpt-35)
  * [Other](#other)
  * [Models](#models)
- [🔗 Related GPT4Free Projects](#-related-gpt4free-projects)
- [🤝 Contribute](#-contribute)
  + [Create Provider with AI Tool](#create-provider-with-ai-tool)
  + [Create Provider](#create-provider)
- [🙌 Contributors](#-contributors)
- [©️ Copyright](#-copyright)
- [⭐ Star History](#-star-history)
- [📄 License](#-license)

## 🛠️ Getting Started

#### Docker container

##### Quick start:

1. [Download and install Docker](https://docs.docker.com/get-docker/)
2. Pull the latest image and run the container:

```sh
docker pull hlohaus789/g4f
docker run -p 8080:8080 -p 1337:1337 -p 7900:7900 --shm-size="2g" hlohaus789/g4f:latest
```

3. Open the included client at [http://localhost:8080/chat/](http://localhost:8080/chat/) or set the API base in your client to [http://localhost:1337/v1](http://localhost:1337/v1).
4. (Optional) If you need to log in to a provider, you can view the desktop from the container here: http://localhost:7900/?autoconnect=1&resize=scale&password=secret.

#### Use python package

##### Prerequisites:

1. [Download and install Python](https://www.python.org/downloads/) (version 3.10+ is recommended).
2. [Install Google Chrome](https://www.google.com/chrome/) for providers with a webdriver.

##### Install using pypi:

```
pip install -U g4f
```

##### or:

1. Clone the GitHub repository:

```
git clone https://github.com/xtekky/gpt4free.git
```

2. Navigate to the project directory:

```
cd gpt4free
```

3. (Recommended) Create a Python virtual environment: You can follow the [Python official documentation](https://docs.python.org/3/tutorial/venv.html) for virtual environments.
```
python3 -m venv venv
```

4. Activate the virtual environment:
   - On Windows:
   ```
   .\venv\Scripts\activate
   ```
   - On macOS and Linux:
   ```
   source venv/bin/activate
   ```
5. Install the required Python packages from `requirements.txt`:

```
pip install -r requirements.txt
```

6. Create a `test.py` file in the root folder and start using the repo; further instructions are below:

```py
import g4f
...
```

#### Docker for Developers

If you have Docker installed, you can easily set up and run the project without manually installing dependencies.

1. First, ensure you have both Docker and Docker Compose installed.
   - [Install Docker](https://docs.docker.com/get-docker/)
   - [Install Docker Compose](https://docs.docker.com/compose/install/)
2. Clone the GitHub repo:

```bash
git clone https://github.com/xtekky/gpt4free.git
```

3. Navigate to the project directory:

```bash
cd gpt4free
```

4. Build the Docker image:

```bash
docker pull selenium/node-chrome
docker-compose build
```

5. Start the service using Docker Compose:

```bash
docker-compose up
```

Your server will now be running at `http://localhost:1337`. You can interact with the API or run your tests as you would normally.

To stop the Docker containers, simply run:

```bash
docker-compose down
```

> [!NOTE]
> When using Docker, any changes you make to your local files will be reflected in the Docker container thanks to the volume mapping in the `docker-compose.yml` file. If you add or remove dependencies, however, you'll need to rebuild the Docker image using `docker-compose build`.
## 💡 Usage

### The `g4f` Package

#### ChatCompletion

```python
import g4f

g4f.debug.logging = True  # Enable debug logging
g4f.debug.check_version = False  # Disable automatic version checking
print(g4f.Provider.Bing.params)  # Print supported args for Bing

# Using an automatic provider for the given model

## Streamed completion
response = g4f.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "Hello"}],
    stream=True,
)
for message in response:
    print(message, flush=True, end='')

## Normal response
response = g4f.ChatCompletion.create(
    model=g4f.models.gpt_4,  # Alternative model setting
    messages=[{"role": "user", "content": "Hello"}],
)
print(response)
```

##### Completion

```python
import g4f

allowed_models = [
    'code-davinci-002',
    'text-ada-001',
    'text-babbage-001',
    'text-curie-001',
    'text-davinci-002',
    'text-davinci-003',
]

response = g4f.Completion.create(
    model='text-davinci-003',
    prompt='say this is a test',
)
print(response)
```

##### Providers

```python
import g4f

# Print all available providers
print([
    provider.__name__
    for provider in g4f.Provider.__providers__
    if provider.working
])

# Execute with a specific provider
response = g4f.ChatCompletion.create(
    model="gpt-3.5-turbo",
    provider=g4f.Provider.Aichat,
    messages=[{"role": "user", "content": "Hello"}],
    stream=True,
)
for message in response:
    print(message)
```

##### Using Browser

Some providers use a browser to bypass bot protection. They use the Selenium WebDriver to control the browser. The browser settings and the login data are saved in a custom directory. If headless mode is enabled, the browser windows load invisibly.
For performance reasons, it is recommended to reuse browser instances and close them yourself at the end:

```python
import g4f
from undetected_chromedriver import Chrome, ChromeOptions
from g4f.Provider import (
    Bard,
    Poe,
    AItianhuSpace,
    MyShell,
    PerplexityAi,
)

options = ChromeOptions()
options.add_argument("--incognito")
webdriver = Chrome(options=options, headless=True)
for idx in range(10):
    response = g4f.ChatCompletion.create(
        model=g4f.models.default,
        provider=g4f.Provider.MyShell,
        messages=[{"role": "user", "content": "Suggest me a name."}],
        webdriver=webdriver
    )
    print(f"{idx}:", response)
webdriver.quit()
```

##### Async Support

To enhance speed and overall performance, execute providers asynchronously. The total execution time will be determined by the duration of the slowest provider's execution.

```python
import g4f
import asyncio

_providers = [
    g4f.Provider.Aichat,
    g4f.Provider.ChatBase,
    g4f.Provider.Bing,
    g4f.Provider.GptGo,
    g4f.Provider.You,
    g4f.Provider.Yqcloud,
]

async def run_provider(provider: g4f.Provider.BaseProvider):
    try:
        response = await g4f.ChatCompletion.create_async(
            model=g4f.models.default,
            messages=[{"role": "user", "content": "Hello"}],
            provider=provider,
        )
        print(f"{provider.__name__}:", response)
    except Exception as e:
        print(f"{provider.__name__}:", e)

async def run_all():
    calls = [
        run_provider(provider) for provider in _providers
    ]
    await asyncio.gather(*calls)

asyncio.run(run_all())
```

##### Proxy and Timeout Support

All providers support specifying a proxy and increasing the timeout in the create functions.
```python
import g4f

response = g4f.ChatCompletion.create(
    model=g4f.models.default,
    messages=[{"role": "user", "content": "Hello"}],
    proxy="http://host:port",
    # or socks5://user:pass@host:port
    timeout=120,  # in secs
)
print("Result:", response)
```

You can also set a proxy globally via an environment variable:

```sh
export G4F_PROXY="http://host:port"
```

### Interference openai-proxy API (Use with openai python package)

#### Run interference API from PyPi package

```python
from g4f.api import run_api

run_api()
```

#### Run interference API from repo

If you want to use the embedding function, you need to get a Hugging Face token. You can get one at [Hugging Face Tokens](https://huggingface.co/settings/tokens). Make sure your role is set to write. Once you have your token, just use it instead of the OpenAI api-key.

Run the server:

```sh
g4f api
```

or

```sh
python -m g4f.api.run
```

```python
import openai

# Set your Hugging Face token as the API key if you use embeddings
# If you don't use embeddings, leave it empty
openai.api_key = "YOUR_HUGGING_FACE_TOKEN"  # Replace with your actual token

# Set the API base URL if needed, e.g., for a local development environment
openai.api_base = "http://localhost:1337/v1"

def main():
    chat_completion = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": "write a poem about a tree"}],
        stream=True,
    )
    if isinstance(chat_completion, dict):
        # Not streaming
        print(chat_completion.choices[0].message.content)
    else:
        # Streaming
        for token in chat_completion:
            content = token["choices"][0]["delta"].get("content")
            if content is not None:
                print(content, end="", flush=True)

if __name__ == "__main__":
    main()
```

## 🚀 Providers and Models

### GPT-4

| Website | Provider | GPT-3.5 | GPT-4 | Stream | Status | Auth |
| ------ | ------- | ------- | ----- | ------ | ------ | ---- |
| [bing.com](https://bing.com/chat) | `g4f.Provider.Bing` | ❌ | ✔️ | ✔️ | ![Active](https://img.shields.io/badge/Active-brightgreen) | ❌ |
| [chat.geekgpt.org](https://chat.geekgpt.org) | `g4f.Provider.GeekGpt` | ✔️ | ✔️ | ✔️ | ![Unknown](https://img.shields.io/badge/Unknown-grey) | ❌ |
| [gptchatly.com](https://gptchatly.com) | `g4f.Provider.GptChatly` | ✔️ | ✔️ | ❌ | ![Unknown](https://img.shields.io/badge/Unknown-grey) | ❌ |
| [liaobots.site](https://liaobots.site) | `g4f.Provider.Liaobots` | ✔️ | ✔️ | ✔️ | ![Unknown](https://img.shields.io/badge/Unknown-grey) | ❌ |
| [www.phind.com](https://www.phind.com) | `g4f.Provider.Phind` | ❌ | ✔️ | ✔️ | ![Unknown](https://img.shields.io/badge/Unknown-grey) | ❌ |
| [raycast.com](https://raycast.com) | `g4f.Provider.Raycast` | ✔️ | ✔️ | ✔️ | ![Unknown](https://img.shields.io/badge/Unknown-grey) | ✔️ |

### GPT-3.5

| Website | Provider | GPT-3.5 | GPT-4 | Stream | Status | Auth |
| ------ | ------- | ------- | ----- | ------ | ------ | ---- |
| [www.aitianhu.com](https://www.aitianhu.com) | `g4f.Provider.AItianhu` | ✔️ | ❌ | ✔️ | ![Unknown](https://img.shields.io/badge/Unknown-grey) | ❌ |
| [chat3.aiyunos.top](https://chat3.aiyunos.top/) | `g4f.Provider.AItianhuSpace` | ✔️ | ❌ | ✔️ | ![Unknown](https://img.shields.io/badge/Unknown-grey) | ❌ |
| [e.aiask.me](https://e.aiask.me) | `g4f.Provider.AiAsk` | ✔️ | ❌ | ✔️ | ![Unknown](https://img.shields.io/badge/Unknown-grey) | ❌ |
| [chat-gpt.org](https://chat-gpt.org/chat) | `g4f.Provider.Aichat` | ✔️ | ❌ | ❌ | ![Unknown](https://img.shields.io/badge/Unknown-grey) | ❌ |
| [www.chatbase.co](https://www.chatbase.co) | `g4f.Provider.ChatBase` | ✔️ | ❌ | ✔️ | ![Active](https://img.shields.io/badge/Active-brightgreen) | ❌ |
| [chatforai.store](https://chatforai.store) | `g4f.Provider.ChatForAi` | ✔️ | ❌ | ✔️ | ![Unknown](https://img.shields.io/badge/Unknown-grey) | ❌ |
| [chatgpt.ai](https://chatgpt.ai) | `g4f.Provider.ChatgptAi` | ✔️ | ❌ | ✔️ | ![Active](https://img.shields.io/badge/Active-brightgreen) | ❌ |
| [chatgptx.de](https://chatgptx.de) | `g4f.Provider.ChatgptX` | ✔️ | ❌ | ✔️ | ![Unknown](https://img.shields.io/badge/Unknown-grey) | ❌ |
| [chat-shared2.zhile.io](https://chat-shared2.zhile.io) | `g4f.Provider.FakeGpt` | ✔️ | ❌ | ✔️ | ![Active](https://img.shields.io/badge/Active-brightgreen) | ❌ |
| [freegpts1.aifree.site](https://freegpts1.aifree.site/) | `g4f.Provider.FreeGpt` | ✔️ | ❌ | ✔️ | ![Active](https://img.shields.io/badge/Active-brightgreen) | ❌ |
| [gptalk.net](https://gptalk.net) | `g4f.Provider.GPTalk` | ✔️ | ❌ | ✔️ | ![Active](https://img.shields.io/badge/Active-brightgreen) | ❌ |
| [ai18.gptforlove.com](https://ai18.gptforlove.com) | `g4f.Provider.GptForLove` | ✔️ | ❌ | ✔️ | ![Active](https://img.shields.io/badge/Active-brightgreen) | ❌ |
| [gptgo.ai](https://gptgo.ai) | `g4f.Provider.GptGo` | ✔️ | ❌ | ✔️ | ![Active](https://img.shields.io/badge/Active-brightgreen) | ❌ |
| [hashnode.com](https://hashnode.com) | `g4f.Provider.Hashnode` | ✔️ | ❌ | ✔️ | ![Active](https://img.shields.io/badge/Active-brightgreen) | ❌ |
| [app.myshell.ai](https://app.myshell.ai/chat) | `g4f.Provider.MyShell` | ✔️ | ❌ | ✔️ | ![Unknown](https://img.shields.io/badge/Unknown-grey) | ❌ |
| [noowai.com](https://noowai.com) | `g4f.Provider.NoowAi` | ✔️ | ❌ | ✔️ | ![Unknown](https://img.shields.io/badge/Unknown-grey) | ❌ |
| [chat.openai.com](https://chat.openai.com) | `g4f.Provider.OpenaiChat` | ✔️ | ❌ | ✔️ | ![Unknown](https://img.shields.io/badge/Unknown-grey) | ✔️ |
| [theb.ai](https://theb.ai) | `g4f.Provider.Theb` | ✔️ | ❌ | ✔️ | ![Unknown](https://img.shields.io/badge/Unknown-grey) | ✔️ |
| [sdk.vercel.ai](https://sdk.vercel.ai) | `g4f.Provider.Vercel` | ✔️ | ❌ | ✔️ | ![Unknown](https://img.shields.io/badge/Unknown-grey) | ❌ |
| [you.com](https://you.com) | `g4f.Provider.You` | ✔️ | ❌ | ✔️ | ![Active](https://img.shields.io/badge/Active-brightgreen) | ❌ |
| [chat9.yqcloud.top](https://chat9.yqcloud.top/) | `g4f.Provider.Yqcloud` | ✔️ | ❌ | ✔️ | ![Unknown](https://img.shields.io/badge/Unknown-grey) | ❌ |
| [chat.acytoo.com](https://chat.acytoo.com) | `g4f.Provider.Acytoo` | ✔️ | ❌ | ✔️ | ![Inactive](https://img.shields.io/badge/Inactive-red) | ❌ |
| [aibn.cc](https://aibn.cc) | `g4f.Provider.Aibn` | ✔️ | ❌ | ✔️ | ![Inactive](https://img.shields.io/badge/Inactive-red) | ❌ |
| [ai.ls](https://ai.ls) | `g4f.Provider.Ails` | ✔️ | ❌ | ✔️ | ![Inactive](https://img.shields.io/badge/Inactive-red) | ❌ |
| [chatgpt4online.org](https://chatgpt4online.org) | `g4f.Provider.Chatgpt4Online` | ✔️ | ❌ | ✔️ | ![Inactive](https://img.shields.io/badge/Inactive-red) | ❌ |
| [chat.chatgptdemo.net](https://chat.chatgptdemo.net) | `g4f.Provider.ChatgptDemo` | ✔️ | ❌ | ✔️ | ![Inactive](https://img.shields.io/badge/Inactive-red) | ❌ |
| [chatgptduo.com](https://chatgptduo.com) | `g4f.Provider.ChatgptDuo` | ✔️ | ❌ | ❌ | ![Inactive](https://img.shields.io/badge/Inactive-red) | ❌ |
| [chatgptfree.ai](https://chatgptfree.ai) | `g4f.Provider.ChatgptFree` | ✔️ | ❌ | ❌ | ![Inactive](https://img.shields.io/badge/Inactive-red) | ❌ |
| [chatgptlogin.ai](https://chatgptlogin.ai) | `g4f.Provider.ChatgptLogin` | ✔️ | ❌ | ✔️ | ![Inactive](https://img.shields.io/badge/Inactive-red) | ❌ |
| [cromicle.top](https://cromicle.top) | `g4f.Provider.Cromicle` | ✔️ | ❌ | ✔️ | ![Inactive](https://img.shields.io/badge/Inactive-red) | ❌ |
| [gptgod.site](https://gptgod.site) | `g4f.Provider.GptGod` | ✔️ | ❌ | ✔️ | ![Inactive](https://img.shields.io/badge/Inactive-red) | ❌ |
| [opchatgpts.net](https://opchatgpts.net) | `g4f.Provider.Opchatgpts` | ✔️ | ❌ | ✔️ | ![Inactive](https://img.shields.io/badge/Inactive-red) | ❌ |
| [chat.ylokh.xyz](https://chat.ylokh.xyz) | `g4f.Provider.Ylokh` | ✔️ | ❌ | ✔️ | ![Inactive](https://img.shields.io/badge/Inactive-red) | ❌ |

### Other

| Website | Provider | GPT-3.5 | GPT-4 | Stream | Status | Auth |
| ------ | ------- | ------- | ----- | ------ | ------ | ---- |
| [bard.google.com](https://bard.google.com) | `g4f.Provider.Bard` | ❌ | ❌ | ❌ | ![Unknown](https://img.shields.io/badge/Unknown-grey) | ✔️ |
| [deepinfra.com](https://deepinfra.com) | `g4f.Provider.DeepInfra` | ❌ | ❌ | ✔️ | ![Active](https://img.shields.io/badge/Active-brightgreen) | ❌ |
| [huggingface.co](https://huggingface.co/chat) | `g4f.Provider.HuggingChat` | ❌ | ❌ | ✔️ | ![Active](https://img.shields.io/badge/Active-brightgreen) | ✔️ |
| [www.llama2.ai](https://www.llama2.ai) | `g4f.Provider.Llama2` | ❌ | ❌ | ✔️ | ![Unknown](https://img.shields.io/badge/Unknown-grey) | ❌ |
| [open-assistant.io](https://open-assistant.io/chat) | `g4f.Provider.OpenAssistant` | ❌ | ❌ | ✔️ | ![Inactive](https://img.shields.io/badge/Inactive-red) | ✔️ |

### Models

| Model | Base Provider | Provider | Website |
| --------------------------------------- | ------------- | ------------------- | ------------------------------------------- |
| palm | Google | g4f.Provider.Bard | [bard.google.com](https://bard.google.com/) |
| h2ogpt-gm-oasst1-en-2048-falcon-7b-v3 | Hugging Face | g4f.Provider.H2o | [www.h2o.ai](https://www.h2o.ai/) |
| h2ogpt-gm-oasst1-en-2048-falcon-40b-v1 | Hugging Face | g4f.Provider.H2o | [www.h2o.ai](https://www.h2o.ai/) |
| h2ogpt-gm-oasst1-en-2048-open-llama-13b | Hugging Face | g4f.Provider.H2o | [www.h2o.ai](https://www.h2o.ai/) |
| claude-instant-v1 | Anthropic | g4f.Provider.Vercel | [sdk.vercel.ai](https://sdk.vercel.ai/) |
| claude-v1 | Anthropic | g4f.Provider.Vercel | [sdk.vercel.ai](https://sdk.vercel.ai/) |
| claude-v2 | Anthropic | g4f.Provider.Vercel | [sdk.vercel.ai](https://sdk.vercel.ai/) |
| command-light-nightly | Cohere | g4f.Provider.Vercel | [sdk.vercel.ai](https://sdk.vercel.ai/) |
| command-nightly | Cohere | g4f.Provider.Vercel | [sdk.vercel.ai](https://sdk.vercel.ai/) |
| gpt-neox-20b | Hugging Face | g4f.Provider.Vercel | [sdk.vercel.ai](https://sdk.vercel.ai/) |
| oasst-sft-1-pythia-12b | Hugging Face | g4f.Provider.Vercel | [sdk.vercel.ai](https://sdk.vercel.ai/) |
| oasst-sft-4-pythia-12b-epoch-3.5 | Hugging Face | g4f.Provider.Vercel | [sdk.vercel.ai](https://sdk.vercel.ai/) |
| santacoder | Hugging Face | g4f.Provider.Vercel | [sdk.vercel.ai](https://sdk.vercel.ai/) |
| bloom | Hugging Face | g4f.Provider.Vercel | [sdk.vercel.ai](https://sdk.vercel.ai/) |
| flan-t5-xxl | Hugging Face | g4f.Provider.Vercel | [sdk.vercel.ai](https://sdk.vercel.ai/) |
| code-davinci-002 | OpenAI | g4f.Provider.Vercel | [sdk.vercel.ai](https://sdk.vercel.ai/) |
| gpt-3.5-turbo-16k | OpenAI | g4f.Provider.Vercel | [sdk.vercel.ai](https://sdk.vercel.ai/) |
| gpt-3.5-turbo-16k-0613 | OpenAI | g4f.Provider.Vercel | [sdk.vercel.ai](https://sdk.vercel.ai/) |
| gpt-4-0613 | OpenAI | g4f.Provider.Vercel | [sdk.vercel.ai](https://sdk.vercel.ai/) |
| text-ada-001 | OpenAI | g4f.Provider.Vercel | [sdk.vercel.ai](https://sdk.vercel.ai/) |
| text-babbage-001 | OpenAI | g4f.Provider.Vercel | [sdk.vercel.ai](https://sdk.vercel.ai/) |
| text-curie-001 | OpenAI | g4f.Provider.Vercel | [sdk.vercel.ai](https://sdk.vercel.ai/) |
| text-davinci-002 | OpenAI | g4f.Provider.Vercel | [sdk.vercel.ai](https://sdk.vercel.ai/) |
| text-davinci-003 | OpenAI | g4f.Provider.Vercel | [sdk.vercel.ai](https://sdk.vercel.ai/) |
| llama13b-v2-chat | Replicate | g4f.Provider.Vercel | [sdk.vercel.ai](https://sdk.vercel.ai/) |
| llama7b-v2-chat | Replicate | g4f.Provider.Vercel | [sdk.vercel.ai](https://sdk.vercel.ai/) |

## 🔗 Related GPT4Free Projects
| 🎁 Projects | ⭐ Stars | 📚 Forks | 🛎 Issues | 📬 Pull requests |
| ----------- | ------- | -------- | --------- | ---------------- |
| gpt4free | | | | |
| gpt4free-ts | | | | |
| Free AI API's & Potential Providers List | | | | |
| ChatGPT-Clone | | | | |
| ChatGpt Discord Bot | | | | |
| Nyx-Bot (Discord) | | | | |
| LangChain gpt4free | | | | |
| ChatGpt Telegram Bot | | | | |
| ChatGpt Line Bot | | | | |
| Action Translate Readme | | | | |
| Langchain Document GPT | | | | |
This project is licensed under GNU GPL v3.0.