Compare commits


No commits in common. 'main' and '0.3.0.9' have entirely different histories.

.gitignore vendored

@ -64,5 +64,4 @@ dist.py
x.txt
bench.py
to-reverse.txt
g4f/Provider/OpenaiChat2.py
generated_images/
g4f/Provider/OpenaiChat2.py

@ -2,13 +2,7 @@
<a href="https://trendshift.io/repositories/1692" target="_blank"><img src="https://trendshift.io/api/badge/repositories/1692" alt="xtekky%2Fgpt4free | Trendshift" style="width: 250px; height: 55px;" width="250" height="55"/></a>
The **ETA** for **v3 of g4f**, when I, [@xtekky](https://github.com/xtekky), will pick this project back up and improve it, is **`29` days** (written Tue 28 May). Join [t.me/g4f_channel](https://t.me/g4f_channel) in the meantime to stay updated.
_____
Written by [@xtekky](https://github.com/xtekky) & maintained by [@hlohaus](https://github.com/hlohaus)
Written by [@xtekky](https://github.com/xtekky) & maintained by [@hlohaus](https://github.com/hlohaus)
<div id="top"></div>
@ -30,7 +24,6 @@ docker pull hlohaus789/g4f
## 🆕 What's New
- Added `gpt-4o`, simply use `gpt-4o` in `chat.completion.create`.
- Installation Guide for Windows (.exe): 💻 [#installation-guide-for-windows](#installation-guide-for-windows-exe)
- Join our Telegram Channel: 📨 [telegram.me/g4f_channel](https://telegram.me/g4f_channel)
- Join our Discord Group: 💬 [discord.gg/XfybzPXPH5](https://discord.gg/XfybzPXPH5)
@ -98,12 +91,7 @@ As per the survey, here is a list of improvements to come
```sh
docker pull hlohaus789/g4f
docker run \
-p 8080:8080 -p 1337:1337 -p 7900:7900 \
--shm-size="2g" \
-v ${PWD}/har_and_cookies:/app/har_and_cookies \
-v ${PWD}/generated_images:/app/generated_images \
hlohaus789/g4f:latest
docker run -p 8080:8080 -p 1337:1337 -p 7900:7900 --shm-size="2g" -v ${PWD}/har_and_cookies:/app/har_and_cookies hlohaus789/g4f:latest
```
3. **Access the Client:**
@ -115,10 +103,12 @@ docker run \
#### Installation Guide for Windows (.exe)
To ensure the seamless operation of our application, please follow the instructions below. These steps are designed to guide you through the installation process on Windows operating systems.
### Installation Steps
1. **Download the Application**: Visit our [releases page](https://github.com/xtekky/gpt4free/releases/tag/0.3.1.7) and download the most recent version of the application, named `g4f.exe.zip`.
2. **File Placement**: After downloading, locate the `.zip` file in your Downloads folder. Unpack it to a directory of your choice on your system, then execute the `g4f.exe` file to run the app.
3. **Open GUI**: The app starts a web server with the GUI. Open your favorite browser and navigate to `http://localhost:8080/chat/` to access the application interface.
##### Prerequisites
1. **WebView2 Runtime**: Our application requires the *WebView2 Runtime* to be installed on your system. If you do not have it installed, please download and install it from the [Microsoft Developer Website](https://developer.microsoft.com/en-us/microsoft-edge/webview2/). If you already have *WebView2 Runtime* installed but are encountering issues, navigate to *Installed Windows Apps*, select *WebView2*, and opt for the repair option.
##### Installation Steps
2. **Download the Application**: Visit our [latest releases page](https://github.com/xtekky/gpt4free/releases/latest) and download the most recent version of the application, named `g4f.webview.*.exe`.
3. **File Placement**: Once downloaded, transfer the `.exe` file from your downloads folder to a directory of your choice on your system, and then execute it to run the app.
##### Post-Installation Adjustment
4. **Firewall Configuration (Hotfix)**: Upon installation, it may be necessary to adjust your Windows Firewall settings to allow the application to operate correctly. To do this, access your Windows Firewall settings and allow the application.
By following these steps, you should be able to successfully install and run the application on your Windows system. If you encounter any issues during the installation process, please refer to our Issue Tracker or reach out on Discord for assistance.
@ -245,48 +235,28 @@ set_cookies(".google.com", {
})
```
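The hunk above truncates the `set_cookies` call; based on the README's cookie section, a complete call looks roughly like the following sketch (the cookie value is a placeholder):

```python
from g4f.cookies import set_cookies

# Register auth cookies for a domain. "__Secure-1PSID" is the Google
# session cookie used by the Gemini provider; the value is a placeholder.
set_cookies(".google.com", {
    "__Secure-1PSID": "cookie value"
})
```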
#### Using .har and Cookie Files
You can place `.har` and cookie files in the default `./har_and_cookies` directory. To export a cookie file, use the [EditThisCookie Extension](https://chromewebstore.google.com/detail/editthiscookie/fngmhnnpilhplaeedifhccceomclgfbg) available on the Chrome Web Store.
#### Creating .har Files to Capture Cookies
Alternatively, you can place your .har and cookie files in the `/har_and_cookies` directory. To export a cookie file, use the EditThisCookie extension available on the Chrome Web Store: [EditThisCookie Extension](https://chromewebstore.google.com/detail/editthiscookie/fngmhnnpilhplaeedifhccceomclgfbg).
To capture cookies, you can also create `.har` files. For more details, refer to the next section.
You can also create .har files to capture cookies. If you need further assistance, refer to the next section.
#### Changing the Cookies Directory and Loading Cookie Files in Python
You can change the cookies directory and load cookie files in your Python environment. To set the cookies directory relative to your Python file, use the following code:
```python
import os.path
from g4f.cookies import set_cookies_dir, read_cookie_files
import g4f.debug
g4f.debug.logging = True
cookies_dir = os.path.join(os.path.dirname(__file__), "har_and_cookies")
set_cookies_dir(cookies_dir)
read_cookie_files(cookies_dir)
```

To run the API in debug mode from the command line:

```bash
python -m g4f.cli api --debug
```
### Debug Mode
If you enable debug mode, you will see logs similar to the following:
```
Read .har file: ./har_and_cookies/you.com.har
Cookies added: 10 from .you.com
Read cookie file: ./har_and_cookies/google.json
Cookies added: 16 from .google.com
Starting server... [g4f v-0.0.0] (debug)
```
#### .HAR File for OpenaiChat Provider
##### Generating a .HAR File
To utilize the OpenaiChat provider, a .har file is required from https://chatgpt.com/. Follow the steps below to create a valid .har file:
To utilize the OpenaiChat provider, a .har file is required from https://chat.openai.com/. Follow the steps below to create a valid .har file:
1. Navigate to https://chatgpt.com/ using your preferred web browser and log in with your credentials.
1. Navigate to https://chat.openai.com/ using your preferred web browser and log in with your credentials.
2. Access the Developer Tools in your browser. This can typically be done by right-clicking the page and selecting "Inspect," or by pressing F12 or Ctrl+Shift+I (Cmd+Option+I on a Mac).
3. With the Developer Tools open, switch to the "Network" tab.
4. Reload the website to capture the loading process within the Network tab.
@ -322,7 +292,7 @@ set G4F_PROXY=http://host:port
| [bing.com](https://bing.com/chat) | `g4f.Provider.Bing` | ❌ | ✔️ | ✔️ | ![Active](https://img.shields.io/badge/Active-brightgreen) | ❌ |
| [chatgpt.ai](https://chatgpt.ai) | `g4f.Provider.ChatgptAi` | ❌ | ✔️ | ✔️ | ![Unknown](https://img.shields.io/badge/Unknown-grey) | ❌ |
| [liaobots.site](https://liaobots.site) | `g4f.Provider.Liaobots` | ✔️ | ✔️ | ✔️ | ![Unknown](https://img.shields.io/badge/Unknown-grey) | ❌ |
| [chatgpt.com](https://chatgpt.com) | `g4f.Provider.OpenaiChat` | ✔️ | ✔️ | ✔️ | ![Active](https://img.shields.io/badge/Active-brightgreen) | ❌+✔️ |
| [chat.openai.com](https://chat.openai.com) | `g4f.Provider.OpenaiChat` | ✔️ | ✔️ | ✔️ | ![Active](https://img.shields.io/badge/Active-brightgreen) | ❌+✔️ |
| [raycast.com](https://raycast.com) | `g4f.Provider.Raycast` | ✔️ | ✔️ | ✔️ | ![Unknown](https://img.shields.io/badge/Unknown-grey) | ✔️ |
| [beta.theb.ai](https://beta.theb.ai) | `g4f.Provider.Theb` | ✔️ | ✔️ | ✔️ | ![Unknown](https://img.shields.io/badge/Unknown-grey) | ❌ |
| [you.com](https://you.com) | `g4f.Provider.You` | ✔️ | ✔️ | ✔️ | ![Active](https://img.shields.io/badge/Active-brightgreen) | ❌ |
@ -331,7 +301,7 @@ set G4F_PROXY=http://host:port
While we wait for gpt-5, here is a list of new models that are at least better than gpt-3.5-turbo. **Some are better than gpt-4**. Expect this list to grow.
| Website | Provider | parameters | better than |
| ------ | ------- | ------ | ------ |
| ------ | ------- | ------ | ------ |
| [claude-3-opus](https://anthropic.com/) | `g4f.Provider.You` | ?B | gpt-4-0125-preview |
| [command-r+](https://txt.cohere.com/command-r-plus-microsoft-azure/) | `g4f.Provider.HuggingChat` | 104B | gpt-4-0314 |
| [llama-3-70b](https://meta.ai/) | `g4f.Provider.Llama` or `DeepInfra` | 70B | gpt-4-0314 |
@ -352,6 +322,7 @@ While we wait for gpt-5, here is a list of new models that are at least better t
| [chatgptx.de](https://chatgptx.de) | `g4f.Provider.ChatgptX` | ✔️ | ❌ | ✔️ | ![Unknown](https://img.shields.io/badge/Unknown-grey) | ❌ |
| [f1.cnote.top](https://f1.cnote.top) | `g4f.Provider.Cnote` | ✔️ | ❌ | ✔️ | ![Active](https://img.shields.io/badge/Active-brightgreen) | ❌ |
| [duckduckgo.com](https://duckduckgo.com/duckchat) | `g4f.Provider.DuckDuckGo` | ✔️ | ❌ | ✔️ | ![Active](https://img.shields.io/badge/Active-brightgreen) | ❌ |
| [ecosia.org](https://www.ecosia.org) | `g4f.Provider.Ecosia` | ✔️ | ❌ | ✔️ | ![Active](https://img.shields.io/badge/Active-brightgreen) | ❌ |
| [feedough.com](https://www.feedough.com) | `g4f.Provider.Feedough` | ✔️ | ❌ | ✔️ | ![Active](https://img.shields.io/badge/Active-brightgreen) | ❌ |
| [flowgpt.com](https://flowgpt.com/chat) | `g4f.Provider.FlowGpt` | ✔️ | ❌ | ✔️ | ![Unknown](https://img.shields.io/badge/Unknown-grey) | ❌ |
| [freegptsnav.aifree.site](https://freegptsnav.aifree.site) | `g4f.Provider.FreeGpt` | ✔️ | ❌ | ✔️ | ![Active](https://img.shields.io/badge/Active-brightgreen) | ❌ |
@ -441,11 +412,24 @@ While we wait for gpt-5, here is a list of new models that are at least better t
| Gemini | `g4f.Provider.Gemini` | ✔️ | ✔️ | [gemini.google.com](https://gemini.google.com) |
| Gemini API | `g4f.Provider.GeminiPro` | ❌ | gemini-1.5-pro | [ai.google.dev](https://ai.google.dev) |
| Meta AI | `g4f.Provider.MetaAI` | ✔️ | ❌ | [meta.ai](https://www.meta.ai) |
| OpenAI ChatGPT | `g4f.Provider.OpenaiChat` | dall-e-3 | gpt-4-vision | [chatgpt.com](https://chatgpt.com) |
| OpenAI ChatGPT | `g4f.Provider.OpenaiChat` | dall-e-3 | gpt-4-vision | [chat.openai.com](https://chat.openai.com) |
| Reka | `g4f.Provider.Reka` | ❌ | ✔️ | [chat.reka.ai](https://chat.reka.ai/) |
| Replicate | `g4f.Provider.Replicate` | stability-ai/sdxl| llava-v1.6-34b | [replicate.com](https://replicate.com) |
| You.com | `g4f.Provider.You` | dall-e-3| ✔️ | [you.com](https://you.com) |
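The README diff also adds the following vision example; note that the empty string passed as the first argument appears to fall back to the default model of the selected provider: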
```python
import requests
from g4f.client import Client
client = Client()
image = requests.get("https://change_me.jpg", stream=True).raw
response = client.chat.completions.create(
"",
messages=[{"role": "user", "content": "what is in this picture?"}],
image=image
)
print(response.choices[0].message.content)
```
## 🔗 Powered by gpt4free

@ -9,52 +9,21 @@ Designed to maintain compatibility with the existing OpenAI API, the G4F AsyncCl
The G4F AsyncClient API offers several key features:
- **Custom Providers:** The G4F Client API allows you to use custom providers. This feature enhances the flexibility of the API, enabling it to cater to a wide range of use cases.
- **ChatCompletion Interface:** The G4F package provides an interface for interacting with chat models through the ChatCompletion class. This class provides methods for creating both streaming and non-streaming responses.
- **Streaming Responses:** The ChatCompletion.create method can return a response iteratively, chunk by chunk, as it is received, if the stream parameter is set to True.
- **Non-Streaming Responses:** The ChatCompletion.create method can also generate non-streaming responses.
- **Image Generation and Vision Models:** The G4F Client API also supports image generation and vision models, expanding its utility beyond text-based interactions.
## Initializing the Client
To utilize the G4F `AsyncClient`, you need to create a new instance. Below is an example showcasing how to initialize the client with custom providers:
```python
from g4f.client import AsyncClient
from g4f.Provider import BingCreateImages, OpenaiChat, Gemini
client = AsyncClient(
provider=OpenaiChat,
image_provider=Gemini,
...
)
```
In this example:
- `provider` specifies the primary provider for generating text completions.
- `image_provider` specifies the provider for image-related functionalities.
## Configuration
- **ChatCompletion Interface:** The G4F package provides an interface for interacting with chat models through the ChatCompletion class. This class provides methods for creating both streaming and non-streaming responses.
You can configure the `AsyncClient` with additional settings, such as an API key for your provider and a proxy for all outgoing requests:
- **Streaming Responses:** The ChatCompletion.create method can return a response iteratively, chunk by chunk, as it is received, if the stream parameter is set to True.
```python
from g4f.client import AsyncClient
- **Non-Streaming Responses:** The ChatCompletion.create method can also generate non-streaming responses.
client = AsyncClient(
api_key="your_api_key_here",
proxies="http://user:pass@host",
...
)
```
- **Image Generation and Vision Models:** The G4F Client API also supports image generation and vision models, expanding its utility beyond text-based interactions.
- `api_key`: Your API key for the provider.
- `proxies`: The proxy configuration for routing requests.
## Using AsyncClient
### Text Completions
### Text Completions:
You can use the `ChatCompletions` endpoint to generate text completions. Here's how you can do it:
You can use the ChatCompletions endpoint to generate text completions as follows:
```python
response = await client.chat.completions.create(
@ -65,9 +34,7 @@ response = await client.chat.completions.create(
print(response.choices[0].message.content)
```
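The hunk boundary swallows the call's arguments here; reassembled with placeholder values, the full example would look something like:

```python
response = await client.chat.completions.create(
    model="gpt-3.5-turbo",  # placeholder; any model the provider supports
    messages=[{"role": "user", "content": "Say this is a test"}],
)
print(response.choices[0].message.content)
```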
### Streaming Completions
The `AsyncClient` also supports streaming completions. This allows you to process the response incrementally as it is generated:
Streaming completions are also supported:
```python
stream = client.chat.completions.create(
@ -81,33 +48,6 @@ async for chunk in stream:
print(chunk.choices[0].delta.content or "", end="")
```
In this example:
- `stream=True` enables streaming of the response.
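Pieced together, the full streaming example reads roughly as follows (the model id is a placeholder):

```python
stream = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "Write a haiku about rivers"}],
    stream=True,  # yield chunks as they arrive instead of one final response
)
async for chunk in stream:
    print(chunk.choices[0].delta.content or "", end="")
```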
### Example: Using a Vision Model
The following code snippet demonstrates how to use a vision model to analyze an image and generate a description based on the content of the image. This example shows how to fetch an image, send it to the model, and then process the response.
```python
import requests
from g4f.client import AsyncClient
from g4f.Provider import Bing
client = AsyncClient(
provider=Bing
)
image = requests.get("https://my_website/image.jpg", stream=True).raw
# Or: image = open("local_path/image.jpg", "rb")
response = await client.chat.completions.create(
"",
messages=[{"role": "user", "content": "what is in this picture?"}],
image=image
)
print(response.choices[0].message.content)
```
### Image Generation:
You can generate images using a specified prompt:
@ -122,17 +62,6 @@ response = await client.images.generate(
image_url = response.data[0].url
```
#### Base64 as the response format
```python
response = await client.images.generate(
prompt="a cool cat",
response_format="b64_json"
)
base64_text = response.data[0].b64_json
```
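To materialize the base64 payload as a file, standard-library decoding is enough (the file name is arbitrary):

```python
import base64

# Decode the base64 string returned above and write it out as an image.
with open("cool_cat.png", "wb") as f:
    f.write(base64.b64decode(base64_text))
```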
### Example usage with asyncio.gather
Start two tasks at the same time:
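The body of this example is cut off in the diff; a minimal sketch of starting two tasks concurrently with the client API shown above might look like:

```python
import asyncio

from g4f.client import AsyncClient

async def main():
    client = AsyncClient()
    # Create both coroutines first, then await them together.
    chat_task = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": "Say hello"}],
    )
    image_task = client.images.generate(prompt="a white siamese cat")
    chat_response, image_response = await asyncio.gather(chat_task, image_task)
    print(chat_response.choices[0].message.content)
    print(image_response.data[0].url)

asyncio.run(main())
```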

@ -3,7 +3,7 @@ import json
url = "http://localhost:1337/v1/chat/completions"
body = {
"model": "",
"provider": "",
"provider": "MetaAI",
"stream": True,
"messages": [
{"role": "assistant", "content": "What can you do? Who are you?"}

@ -0,0 +1,18 @@
import asyncio
import g4f
from g4f.client import AsyncClient
async def main():
client = AsyncClient(
provider=g4f.Provider.Ecosia,
)
async for chunk in client.chat.completions.create(
[{"role": "user", "content": "happy dogs on work. write some lines"}],
g4f.models.default,
stream=True,
green=True,
):
print(chunk.choices[0].delta.content or "", end="")
print(f"\nwith {chunk.model}")
asyncio.run(main())

@ -1,9 +0,0 @@
import requests
url = "http://localhost:1337/v1/images/generations"
body = {
"prompt": "heaven for dogs",
"provider": "OpenaiAccount",
"response_format": "b64_json",
}
data = requests.post(url, json=body, stream=True).json()
print(data)

@ -8,7 +8,7 @@ except ImportError:
has_nest_asyncio = False
from g4f.client import Client, ChatCompletion
from g4f.Provider import Bing, OpenaiChat
from g4f.Provider import Bing, OpenaiChat, DuckDuckGo
DEFAULT_MESSAGES = [{"role": "system", "content": 'Response in json, Example: {"success": false}'},
{"role": "user", "content": "Say success true in json"}]
@ -25,6 +25,13 @@ class TestProviderIntegration(unittest.TestCase):
self.assertIsInstance(response, ChatCompletion)
self.assertIn("success", json.loads(response.choices[0].message.content))
def test_duckduckgo(self):
self.skipTest("Not working")
client = Client(provider=DuckDuckGo)
response = client.chat.completions.create(DEFAULT_MESSAGES, "", response_format={"type": "json_object"})
self.assertIsInstance(response, ChatCompletion)
self.assertIn("success", json.loads(response.choices[0].message.content))
def test_openai(self):
self.skipTest("not working in this network")
client = Client(provider=OpenaiChat)

@ -11,8 +11,7 @@ from datetime import datetime, date
from ..typing import AsyncResult, Messages, ImageType, Cookies
from ..image import ImageRequest
from ..errors import ResponseError, ResponseStatusError, RateLimitError
from ..requests import DEFAULT_HEADERS
from ..requests.aiohttp import StreamSession
from ..requests import StreamSession, DEFAULT_HEADERS
from .base_provider import AsyncGeneratorProvider, ProviderModelMixin
from .helper import get_random_hex
from .bing.upload_image import upload_image
@ -468,7 +467,7 @@ async def stream_generate(
continue
try:
response = json.loads(obj)
except ValueError:
except json.JSONDecodeError:
continue
if response and response.get('type') == 1 and response['arguments'][0].get('messages'):
message = response['arguments'][0]['messages'][0]

@ -1,14 +1,18 @@
from __future__ import annotations
import asyncio
import os
from typing import Iterator, Union
from ..cookies import get_cookies
from ..image import ImageResponse
from ..errors import MissingAuthError
from ..errors import MissingRequirementsError, MissingAuthError
from ..typing import AsyncResult, Messages, Cookies
from .base_provider import AsyncGeneratorProvider, ProviderModelMixin
from .bing.create_images import create_images, create_session
from .bing.create_images import create_images, create_session, get_cookies_from_browser
class BingCreateImages(AsyncGeneratorProvider, ProviderModelMixin):
label = "Microsoft Designer in Bing"
label = "Microsoft Designer"
parent = "Bing"
url = "https://www.bing.com/images/create"
working = True
@ -34,9 +38,30 @@ class BingCreateImages(AsyncGeneratorProvider, ProviderModelMixin):
**kwargs
) -> AsyncResult:
session = BingCreateImages(cookies, proxy, api_key)
yield await session.generate(messages[-1]["content"])
yield await session.create_async(messages[-1]["content"])
def create(self, prompt: str) -> Iterator[Union[ImageResponse, str]]:
"""
Generator for creating image completions based on a prompt.
Args:
prompt (str): Prompt to generate images.
Yields:
Generator[str, None, None]: The final output as a markdown-formatted string with images.
"""
cookies = self.cookies or get_cookies(".bing.com", False)
if "_U" not in cookies:
login_url = os.environ.get("G4F_LOGIN_URL")
if login_url:
yield f"Please login: [Bing]({login_url})\n\n"
try:
self.cookies = get_cookies_from_browser(self.proxy)
except MissingRequirementsError as e:
raise MissingAuthError(f'Missing "_U" cookie. {e}')
yield asyncio.run(self.create_async(prompt))
async def generate(self, prompt: str) -> ImageResponse:
async def create_async(self, prompt: str) -> ImageResponse:
"""
Asynchronously creates a markdown formatted string with images based on the prompt.
@ -47,8 +72,9 @@ class BingCreateImages(AsyncGeneratorProvider, ProviderModelMixin):
str: Markdown formatted string with images.
"""
cookies = self.cookies or get_cookies(".bing.com", False)
if cookies is None or "_U" not in cookies:
if "_U" not in cookies:
raise MissingAuthError('Missing "_U" cookie')
async with create_session(cookies, self.proxy) as session:
images = await create_images(session, prompt)
proxy = self.proxy or os.environ.get("G4F_PROXY")
async with create_session(cookies, proxy) as session:
images = await create_images(session, prompt, proxy)
return ImageResponse(images, prompt, {"preview": "{image}?w=200&h=200"} if len(images) > 1 else {})

@ -4,8 +4,7 @@ import uuid
import secrets
from aiohttp import ClientSession
from ..typing import AsyncResult, Messages, ImageType
from ..image import to_data_uri
from ..typing import AsyncResult, Messages
from .base_provider import AsyncGeneratorProvider
class Blackbox(AsyncGeneratorProvider):
@ -18,15 +17,8 @@ class Blackbox(AsyncGeneratorProvider):
model: str,
messages: Messages,
proxy: str = None,
image: ImageType = None,
image_name: str = None,
**kwargs
) -> AsyncResult:
if image is not None:
messages[-1]["data"] = {
"fileText": image_name,
"imageBase64": to_data_uri(image)
}
headers = {
"User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/122.0.0.0 Safari/537.36",
"Accept": "*/*",

@ -9,7 +9,7 @@ from .helper import format_prompt
class Cohere(AbstractProvider):
url = "https://cohereforai-c4ai-command-r-plus.hf.space"
working = False
working = True
supports_gpt_35_turbo = False
supports_gpt_4 = False
supports_stream = True

@ -8,10 +8,11 @@ class DeepInfra(Openai):
label = "DeepInfra"
url = "https://deepinfra.com"
working = True
needs_auth = True
needs_auth = False
has_auth = True
supports_stream = True
supports_message_history = True
default_model = "meta-llama/Meta-Llama-3-70B-Instruct"
default_model = "meta-llama/Meta-Llama-3-70b-instruct"
default_vision_model = "llava-hf/llava-1.5-7b-hf"
model_aliases = {
'dbrx-instruct': 'databricks/dbrx-instruct',

@ -0,0 +1,82 @@
from __future__ import annotations
import json
import aiohttp
from .base_provider import AsyncGeneratorProvider, ProviderModelMixin
from .helper import get_connector
from ..typing import AsyncResult, Messages
from ..requests.raise_for_status import raise_for_status
from ..providers.conversation import BaseConversation
class DuckDuckGo(AsyncGeneratorProvider, ProviderModelMixin):
url = "https://duckduckgo.com/duckchat"
working = True
supports_gpt_35_turbo = True
supports_message_history = True
default_model = "gpt-3.5-turbo-0125"
models = ["gpt-3.5-turbo-0125", "claude-instant-1.2"]
model_aliases = {"gpt-3.5-turbo": "gpt-3.5-turbo-0125"}
status_url = "https://duckduckgo.com/duckchat/v1/status"
chat_url = "https://duckduckgo.com/duckchat/v1/chat"
user_agent = 'Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:123.0) Gecko/20100101 Firefox/123.0'
headers = {
'User-Agent': user_agent,
'Accept': 'text/event-stream',
'Accept-Language': 'de,en-US;q=0.7,en;q=0.3',
'Accept-Encoding': 'gzip, deflate, br',
'Referer': 'https://duckduckgo.com/',
'Content-Type': 'application/json',
'Origin': 'https://duckduckgo.com',
'Connection': 'keep-alive',
'Cookie': 'dcm=1',
'Sec-Fetch-Dest': 'empty',
'Sec-Fetch-Mode': 'cors',
'Sec-Fetch-Site': 'same-origin',
'Pragma': 'no-cache',
'TE': 'trailers'
}
@classmethod
async def create_async_generator(
cls,
model: str,
messages: Messages,
proxy: str = None,
connector: aiohttp.BaseConnector = None,
conversation: Conversation = None,
return_conversation: bool = False,
**kwargs
) -> AsyncResult:
async with aiohttp.ClientSession(headers=cls.headers, connector=get_connector(connector, proxy)) as session:
if conversation is not None and len(messages) > 1:
vqd_4 = conversation.vqd_4
messages = [*conversation.messages, messages[-2], messages[-1]]
else:
async with session.get(cls.status_url, headers={"x-vqd-accept": "1"}) as response:
await raise_for_status(response)
vqd_4 = response.headers.get("x-vqd-4")
messages = [messages[-1]]
payload = {
'model': cls.get_model(model),
'messages': messages
}
async with session.post(cls.chat_url, json=payload, headers={"x-vqd-4": vqd_4}) as response:
await raise_for_status(response)
if return_conversation:
yield Conversation(response.headers.get("x-vqd-4"), messages)
async for line in response.content:
if line.startswith(b"data: "):
chunk = line[6:]
if chunk.startswith(b"[DONE]"):
break
data = json.loads(chunk)
if "message" in data and data["message"]:
yield data["message"]
class Conversation(BaseConversation):
def __init__(self, vqd_4: str, messages: Messages) -> None:
self.vqd_4 = vqd_4
self.messages = messages
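A hedged usage sketch for the new provider: with `return_conversation=True`, the generator yields a `Conversation` (carrying the `x-vqd-4` token and message history) before the text chunks, and that object can be passed back as `conversation=` to continue the chat:

```python
import asyncio

from g4f.Provider import DuckDuckGo
from g4f.providers.conversation import BaseConversation

async def main():
    conversation = None
    async for item in DuckDuckGo.create_async_generator(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": "Hello!"}],
        return_conversation=True,  # first yielded item is the Conversation
    ):
        if isinstance(item, BaseConversation):
            conversation = item  # holds the vqd token and history
        else:
            print(item, end="")

asyncio.run(main())
```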

@ -0,0 +1,47 @@
from __future__ import annotations
import base64
import json
from aiohttp import ClientSession, BaseConnector
from ..typing import AsyncResult, Messages
from ..requests.raise_for_status import raise_for_status
from .base_provider import AsyncGeneratorProvider, ProviderModelMixin
from .helper import get_connector
class Ecosia(AsyncGeneratorProvider, ProviderModelMixin):
url = "https://www.ecosia.org"
working = True
supports_gpt_35_turbo = True
default_model = "gpt-3.5-turbo-0125"
models = [default_model, "green"]
model_aliases = {"gpt-3.5-turbo": default_model}
@classmethod
async def create_async_generator(
cls,
model: str,
messages: Messages,
connector: BaseConnector = None,
proxy: str = None,
**kwargs
) -> AsyncResult:
model = cls.get_model(model)
headers = {
"authority": "api.ecosia.org",
"accept": "*/*",
"origin": cls.url,
"referer": f"{cls.url}/",
"user-agent": "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/89.0.4389.114 Safari/537.36",
}
async with ClientSession(headers=headers, connector=get_connector(connector, proxy)) as session:
data = {
"messages": base64.b64encode(json.dumps(messages).encode()).decode()
}
api_url = f"https://api.ecosia.org/v2/chat/?sp={'eco' if model == 'green' else 'productivity'}"
async with session.post(api_url, json=data) as response:
await raise_for_status(response)
async for chunk in response.content.iter_any():
if chunk:
yield chunk.decode(errors="ignore")

@ -18,7 +18,7 @@ class GeminiPro(AsyncGeneratorProvider, ProviderModelMixin):
needs_auth = True
default_model = "gemini-1.5-pro-latest"
default_vision_model = default_model
models = [default_model, "gemini-pro", "gemini-pro-vision", "gemini-1.5-flash"]
models = [default_model, "gemini-pro", "gemini-pro-vision"]
@classmethod
async def create_async_generator(

@ -2,18 +2,16 @@ from __future__ import annotations
import time
from hashlib import sha256
from aiohttp import ClientSession, BaseConnector
from aiohttp import BaseConnector, ClientSession
from ..typing import AsyncResult, Messages
from .base_provider import AsyncGeneratorProvider
from ..errors import RateLimitError
from ..requests import raise_for_status
from ..requests.aiohttp import get_connector
from ..typing import AsyncResult, Messages
from .base_provider import AsyncGeneratorProvider
class GeminiProChat(AsyncGeneratorProvider):
url = "https://www.chatgemini.net/"
url = "https://gemini-chatbot-sigma.vercel.app"
working = True
supports_message_history = True
@ -24,7 +22,7 @@ class GeminiProChat(AsyncGeneratorProvider):
messages: Messages,
proxy: str = None,
connector: BaseConnector = None,
**kwargs,
**kwargs
) -> AsyncResult:
headers = {
"User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:122.0) Gecko/20100101 Firefox/122.0",
@ -40,35 +38,26 @@ class GeminiProChat(AsyncGeneratorProvider):
"Connection": "keep-alive",
"TE": "trailers",
}
async with ClientSession(
connector=get_connector(connector, proxy), headers=headers
) as session:
async with ClientSession(connector=get_connector(connector, proxy), headers=headers) as session:
timestamp = int(time.time() * 1e3)
data = {
"messages": [
{
"role": "model" if message["role"] == "assistant" else "user",
"parts": [{"text": message["content"]}],
}
for message in messages
],
"messages":[{
"role": "model" if message["role"] == "assistant" else "user",
"parts": [{"text": message["content"]}]
} for message in messages],
"time": timestamp,
"pass": None,
"sign": generate_signature(timestamp, messages[-1]["content"]),
}
async with session.post(
f"{cls.url}/api/generate", json=data, proxy=proxy
) as response:
async with session.post(f"{cls.url}/api/generate", json=data, proxy=proxy) as response:
if response.status == 500:
if "Quota exceeded" in await response.text():
raise RateLimitError(
f"Response {response.status}: Rate limit reached"
)
raise RateLimitError(f"Response {response.status}: Rate limit reached")
await raise_for_status(response)
async for chunk in response.content.iter_any():
yield chunk.decode(errors="ignore")
def generate_signature(time: int, text: str, secret: str = ""):
message = f"{time}:{text}:{secret}"
message = f'{time}:{text}:{secret}';
return sha256(message.encode()).hexdigest()

@ -23,8 +23,7 @@ class HuggingChat(AsyncGeneratorProvider, ProviderModelMixin):
'NousResearch/Nous-Hermes-2-Mixtral-8x7B-DPO',
'mistralai/Mistral-7B-Instruct-v0.2',
'meta-llama/Meta-Llama-3-70B-Instruct',
'microsoft/Phi-3-mini-4k-instruct',
'01-ai/Yi-1.5-34B-Chat'
'microsoft/Phi-3-mini-4k-instruct'
]
model_aliases = {
"mistralai/Mistral-7B-Instruct-v0.1": "mistralai/Mistral-7B-Instruct-v0.2"
@ -103,7 +102,7 @@ class HuggingChat(AsyncGeneratorProvider, ProviderModelMixin):
elif line["type"] == "stream":
token = line["token"]
if first_token:
token = token.lstrip().replace('\u0000', '')
token = token.lstrip()
first_token = False
yield token
elif line["type"] == "finalAnswer":

@ -10,15 +10,6 @@ from .helper import get_connector
from ..requests import raise_for_status
models = {
"gpt-4o": {
"context": "8K",
"id": "gpt-4o-free",
"maxLength": 31200,
"model": "ChatGPT",
"name": "GPT-4o-free",
"provider": "OpenAI",
"tokenLimit": 7800,
},
"gpt-3.5-turbo": {
"id": "gpt-3.5-turbo",
"name": "GPT-3.5-Turbo",
@ -104,7 +95,7 @@ class Liaobots(AsyncGeneratorProvider, ProviderModelMixin):
model_aliases = {
"claude-v2": "claude-2"
}
_auth_code = ""
_auth_code = None
_cookie_jar = None
@classmethod
@ -129,13 +120,7 @@ class Liaobots(AsyncGeneratorProvider, ProviderModelMixin):
cookie_jar=cls._cookie_jar,
connector=get_connector(connector, proxy, True)
) as session:
data = {
"conversationId": str(uuid.uuid4()),
"model": models[cls.get_model(model)],
"messages": messages,
"key": "",
"prompt": kwargs.get("system_message", "You are a helpful assistant."),
}
cls._auth_code = auth if isinstance(auth, str) else cls._auth_code
if not cls._auth_code:
async with session.post(
"https://liaobots.work/recaptcha/api/login",
@ -143,49 +128,31 @@ class Liaobots(AsyncGeneratorProvider, ProviderModelMixin):
verify_ssl=False
) as response:
await raise_for_status(response)
try:
async with session.post(
"https://liaobots.work/api/user",
json={"authcode": cls._auth_code},
json={"authcode": ""},
verify_ssl=False
) as response:
await raise_for_status(response)
cls._auth_code = (await response.json(content_type=None))["authCode"]
if not cls._auth_code:
raise RuntimeError("Empty auth code")
cls._cookie_jar = session.cookie_jar
async with session.post(
"https://liaobots.work/api/chat",
json=data,
headers={"x-auth-code": cls._auth_code},
verify_ssl=False
) as response:
await raise_for_status(response)
async for chunk in response.content.iter_any():
if b"<html coupert-item=" in chunk:
raise RuntimeError("Invalid session")
if chunk:
yield chunk.decode(errors="ignore")
except:
async with session.post(
"https://liaobots.work/api/user",
json={"authcode": "pTIQr4FTnVRfr"},
verify_ssl=False
) as response:
await raise_for_status(response)
cls._auth_code = (await response.json(content_type=None))["authCode"]
if not cls._auth_code:
raise RuntimeError("Empty auth code")
cls._cookie_jar = session.cookie_jar
async with session.post(
"https://liaobots.work/api/chat",
json=data,
headers={"x-auth-code": cls._auth_code},
verify_ssl=False
) as response:
await raise_for_status(response)
async for chunk in response.content.iter_any():
if b"<html coupert-item=" in chunk:
raise RuntimeError("Invalid session")
if chunk:
yield chunk.decode(errors="ignore")
data = {
"conversationId": str(uuid.uuid4()),
"model": models[cls.get_model(model)],
"messages": messages,
"key": "",
"prompt": kwargs.get("system_message", "You are a helpful assistant."),
}
async with session.post(
"https://liaobots.work/api/chat",
json=data,
headers={"x-auth-code": cls._auth_code},
verify_ssl=False
) as response:
await raise_for_status(response)
async for chunk in response.content.iter_any():
if b"<html coupert-item=" in chunk:
raise RuntimeError("Invalid session")
if chunk:
yield chunk.decode(errors="ignore")

@ -9,7 +9,7 @@ from .base_provider import AsyncGeneratorProvider, ProviderModelMixin
class Llama(AsyncGeneratorProvider, ProviderModelMixin):
url = "https://www.llama2.ai"
working = False
working = True
supports_message_history = True
default_model = "meta/meta-llama-3-70b-instruct"
models = [

@ -15,20 +15,14 @@ class PerplexityLabs(AsyncGeneratorProvider, ProviderModelMixin):
working = True
default_model = "mixtral-8x7b-instruct"
models = [
"llama-3-sonar-large-32k-online", "llama-3-sonar-small-32k-online", "llama-3-sonar-large-32k-chat", "llama-3-sonar-small-32k-chat",
"dbrx-instruct", "claude-3-haiku-20240307", "llama-3-8b-instruct", "llama-3-70b-instruct", "codellama-70b-instruct", "mistral-7b-instruct",
"llava-v1.5-7b-wrapper", "llava-v1.6-34b", "mixtral-8x7b-instruct", "mixtral-8x22b-instruct", "mistral-medium", "gemma-2b-it", "gemma-7b-it",
"related"
"sonar-small-online", "sonar-medium-online", "sonar-small-chat", "sonar-medium-chat", "dbrx-instruct", "claude-3-haiku-20240307", "llama-3-8b-instruct", "llama-3-70b-instruct", "codellama-70b-instruct", "mistral-7b-instruct", "llava-v1.5-7b-wrapper", "llava-v1.6-34b", "mixtral-8x7b-instruct", "mixtral-8x22b-instruct", "mistral-medium", "gemma-2b-it", "gemma-7b-it", "related"
]
model_aliases = {
"mistralai/Mistral-7B-Instruct-v0.1": "mistral-7b-instruct",
"mistralai/Mistral-7B-Instruct-v0.2": "mistral-7b-instruct",
"mistralai/Mistral-7B-Instruct-v0.1": "mistral-7b-instruct",
"mistralai/Mixtral-8x7B-Instruct-v0.1": "mixtral-8x7b-instruct",
"codellama/CodeLlama-70b-Instruct-hf": "codellama-70b-instruct",
"llava-v1.5-7b": "llava-v1.5-7b-wrapper",
"databricks/dbrx-instruct": "dbrx-instruct",
"meta-llama/Meta-Llama-3-70B-Instruct": "llama-3-70b-instruct",
"meta-llama/Meta-Llama-3-8B-Instruct": "llama-3-8b-instruct"
'databricks/dbrx-instruct': "dbrx-instruct"
}
@classmethod

@ -1,48 +0,0 @@
import json
from aiohttp import ClientSession
from ..typing import Messages, AsyncResult
from .base_provider import AsyncGeneratorProvider
class Pizzagpt(AsyncGeneratorProvider):
url = "https://www.pizzagpt.it"
api_endpoint = "/api/chatx-completion"
supports_message_history = False
supports_gpt_35_turbo = True
working = True
@classmethod
async def create_async_generator(
cls,
model: str,
messages: Messages,
proxy: str = None,
**kwargs
) -> AsyncResult:
payload = {
"question": messages[-1]["content"]
}
headers = {
"Accept": "application/json",
"Accept-Encoding": "gzip, deflate, br, zstd",
"Accept-Language": "en-US,en;q=0.9",
"Content-Type": "application/json",
"Origin": cls.url,
"Referer": f"{cls.url}/en",
"Sec-Fetch-Dest": "empty",
"Sec-Fetch-Mode": "cors",
"Sec-Fetch-Site": "same-origin",
"User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/126.0.0.0 Safari/537.36",
"X-Secret": "Marinara"
}
async with ClientSession() as session:
async with session.post(
f"{cls.url}{cls.api_endpoint}",
json=payload,
proxy=proxy,
headers=headers
) as response:
response.raise_for_status()
response_json = await response.json()
yield response_json["answer"]["content"]

@ -9,7 +9,6 @@ from ..image import to_bytes
class Reka(AbstractProvider):
url = "https://chat.reka.ai/"
working = True
needs_auth = True
supports_stream = True
default_vision_model = "reka"
cookies = {}
@ -21,12 +20,13 @@ class Reka(AbstractProvider):
messages: Messages,
stream: bool,
proxy: str = None,
timeout: int = 180,
api_key: str = None,
image: ImageType = None,
**kwargs
) -> CreateResult:
cls.proxy = proxy
if not api_key:
cls.cookies = get_cookies("chat.reka.ai")
if not cls.cookies:
@ -34,19 +34,19 @@ class Reka(AbstractProvider):
elif "appSession" not in cls.cookies:
raise ValueError("No appSession found in cookies for chat.reka.ai, log in or provide bearer_auth")
api_key = cls.get_access_token(cls)
conversation = []
for message in messages:
conversation.append({
"type": "human",
"text": message["content"],
})
if image:
image_url = cls.upload_image(cls, api_key, image)
conversation[-1]["image_url"] = image_url
conversation[-1]["media_type"] = "image"
headers = {
'accept': '*/*',
'accept-language': 'en,fr-FR;q=0.9,fr;q=0.8,es-ES;q=0.7,es;q=0.6,en-US;q=0.5,am;q=0.4,de;q=0.3',
@ -64,7 +64,7 @@ class Reka(AbstractProvider):
'sec-fetch-site': 'same-origin',
'user-agent': 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/124.0.0.0 Safari/537.36',
}
json_data = {
'conversation_history': conversation,
'stream': True,
@ -73,7 +73,7 @@ class Reka(AbstractProvider):
'model_name': 'reka-core',
'random_seed': int(time.time() * 1000),
}
tokens = ''
response = requests.post('https://chat.reka.ai/api/chat',
@ -82,11 +82,11 @@ class Reka(AbstractProvider):
for completion in response.iter_lines():
if b'data' in completion:
token_data = json.loads(completion.decode('utf-8')[5:])['text']
yield (token_data.replace(tokens, ''))
tokens = token_data
def upload_image(cls, access_token, image: ImageType) -> str:
boundary_token = os.urandom(8).hex()
@ -120,7 +120,7 @@ class Reka(AbstractProvider):
cookies=cls.cookies, headers=headers, proxies=cls.proxy, data=data.encode('latin-1'))
return response.json()['media_url']
def get_access_token(cls):
headers = {
'accept': '*/*',
@ -141,8 +141,8 @@ class Reka(AbstractProvider):
try:
response = requests.get('https://chat.reka.ai/bff/auth/access_token',
cookies=cls.cookies, headers=headers, proxies=cls.proxy)
return response.json()['accessToken']
except Exception as e:
raise ValueError(f"Failed to get access token: {e}, refresh your cookies / log in into chat.reka.ai")

@ -10,7 +10,6 @@ from ..errors import ResponseError, MissingAuthError
class Replicate(AsyncGeneratorProvider, ProviderModelMixin):
url = "https://replicate.com"
working = True
needs_auth = True
default_model = "meta/meta-llama-3-70b-instruct"
model_aliases = {
"meta-llama/Meta-Llama-3-70B-Instruct": default_model

@ -8,7 +8,7 @@ import uuid
from ..typing import AsyncResult, Messages, ImageType, Cookies
from .base_provider import AsyncGeneratorProvider, ProviderModelMixin
from .helper import format_prompt
from ..image import ImageResponse, ImagePreview, EXTENSIONS_MAP, to_bytes, is_accepted_format
from ..image import ImageResponse, ImagePreview, to_bytes, is_accepted_format
from ..requests import StreamSession, FormData, raise_for_status
from .you.har_file import get_telemetry_ids
from .. import debug
@ -24,7 +24,6 @@ class You(AsyncGeneratorProvider, ProviderModelMixin):
image_models = ["dall-e"]
models = [
default_model,
"gpt-4o",
"gpt-4",
"gpt-4-turbo",
"claude-instant",
@ -43,7 +42,7 @@ class You(AsyncGeneratorProvider, ProviderModelMixin):
*image_models
]
model_aliases = {
"claude-v2": "claude-2",
"claude-v2": "claude-2"
}
_cookies = None
_cookies_used = 0
@ -94,22 +93,17 @@ class You(AsyncGeneratorProvider, ProviderModelMixin):
"q": format_prompt(messages),
"domain": "youchat",
"selectedChatMode": chat_mode,
"conversationTurnId": str(uuid.uuid4()),
"chatId": str(uuid.uuid4()),
}
params = {
"userFiles": upload,
"selectedChatMode": chat_mode,
}
if chat_mode == "custom":
if debug.logging:
print(f"You model: {model}")
params["selectedAiModel"] = model.replace("-", "_")
params["selectedAIModel"] = model.replace("-", "_")
async with (session.post if chat_mode == "default" else session.get)(
f"{cls.url}/api/streamingSearch",
data=data if chat_mode == "default" else None,
params=params if chat_mode == "default" else data,
data=data,
params=params,
headers=headers,
cookies=cookies
) as response:
@ -120,9 +114,9 @@ class You(AsyncGeneratorProvider, ProviderModelMixin):
elif line.startswith(b'data: '):
if event in ["youChatUpdate", "youChatToken"]:
data = json.loads(line[6:])
if event == "youChatToken" and event in data and data[event]:
if event == "youChatToken" and event in data:
yield data[event]
elif event == "youChatUpdate" and "t" in data and data["t"]:
elif event == "youChatUpdate" and "t" in data and data["t"] is not None:
if chat_mode == "create":
match = re.search(r"!\[(.+?)\]\((.+?)\)", data["t"])
if match:
@ -144,9 +138,7 @@ class You(AsyncGeneratorProvider, ProviderModelMixin):
await raise_for_status(response)
upload_nonce = await response.text()
data = FormData()
content_type = is_accepted_format(file)
filename = f"image.{EXTENSIONS_MAP[content_type]}" if filename is None else filename
data.add_field('file', file, content_type=content_type, filename=filename)
data.add_field('file', file, content_type=is_accepted_format(file), filename=filename)
async with client.post(
f"{cls.url}/api/upload",
data=data,
@ -220,4 +212,4 @@ class You(AsyncGeneratorProvider, ProviderModelMixin):
'stytch_session_jwt': session["session_jwt"],
'ydc_stytch_session': session["session_token"],
'ydc_stytch_session_jwt': session["session_jwt"],
}
}

@ -1,7 +1,7 @@
from __future__ import annotations
from ..providers.types import BaseProvider, ProviderType
from ..providers.retry_provider import RetryProvider, IterListProvider
from ..providers.retry_provider import RetryProvider, IterProvider
from ..providers.base_provider import AsyncProvider, AsyncGeneratorProvider
from ..providers.create_images import CreateImagesProvider
@ -25,6 +25,8 @@ from .Cnote import Cnote
from .Cohere import Cohere
from .DeepInfra import DeepInfra
from .DeepInfraImage import DeepInfraImage
from .DuckDuckGo import DuckDuckGo
from .Ecosia import Ecosia
from .Feedough import Feedough
from .FlowGpt import FlowGpt
from .FreeChatgpt import FreeChatgpt
@ -44,7 +46,6 @@ from .MetaAIAccount import MetaAIAccount
from .Ollama import Ollama
from .PerplexityLabs import PerplexityLabs
from .Pi import Pi
from .Pizzagpt import Pizzagpt
from .Replicate import Replicate
from .ReplicateImage import ReplicateImage
from .Vercel import Vercel

@ -1,4 +1,3 @@
from ..providers.base_provider import *
from ..providers.types import FinishReason, Streaming
from ..providers.conversation import BaseConversation
from .helper import get_cookies, format_prompt

@ -1,3 +1,7 @@
"""
This module provides functionalities for creating and managing images using Bing's service.
It includes functions for user login, session creation, image creation, and processing.
"""
from __future__ import annotations
import asyncio
@ -13,7 +17,9 @@ try:
except ImportError:
has_requirements = False
from ...providers.create_images import CreateImagesProvider
from ..helper import get_connector
from ...providers.types import ProviderType
from ...errors import MissingRequirementsError, RateLimitError
from ...webdriver import WebDriver, get_driver_cookies, get_browser
@ -95,7 +101,7 @@ def create_session(cookies: Dict[str, str], proxy: str = None, connector: BaseCo
headers["Cookie"] = "; ".join(f"{k}={v}" for k, v in cookies.items())
return ClientSession(headers=headers, connector=get_connector(connector, proxy))
async def create_images(session: ClientSession, prompt: str, timeout: int = TIMEOUT_IMAGE_CREATION) -> List[str]:
async def create_images(session: ClientSession, prompt: str, proxy: str = None, timeout: int = TIMEOUT_IMAGE_CREATION) -> List[str]:
"""
Creates images based on a given prompt using Bing's service.
@ -126,7 +132,7 @@ async def create_images(session: ClientSession, prompt: str, timeout: int = TIME
raise RuntimeError(f"Create images failed: {error}")
if response.status != 302:
url = f"{BING_URL}/images/create?q={url_encoded_prompt}&rt=3&FORM=GENCRE"
async with session.post(url, allow_redirects=False, timeout=timeout) as response:
async with session.post(url, allow_redirects=False, proxy=proxy, timeout=timeout) as response:
if response.status != 302:
raise RuntimeError(f"Create images failed. Code: {response.status}")
@ -179,4 +185,22 @@ def read_images(html_content: str) -> List[str]:
raise RuntimeError("Bad images found")
if not images:
raise RuntimeError("No images found")
return images
return images
def patch_provider(provider: ProviderType) -> CreateImagesProvider:
"""
Patches a provider to include image creation capabilities.
Args:
provider (ProviderType): The provider to be patched.
Returns:
CreateImagesProvider: The patched provider with image creation capabilities.
"""
from ..BingCreateImages import BingCreateImages
service = BingCreateImages()
return CreateImagesProvider(
provider,
service.create,
service.create_async
)
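A sketch of how the new `patch_provider` helper might be wired up; the import path is an assumption based on the module references above (`g4f/Provider/bing/create_images.py`):

```python
from g4f.Provider import OpenaiChat
from g4f.Provider.bing.create_images import patch_provider

# Wrap a text provider so that image-creation requests can be served by
# Bing's image service while normal completions go to the original provider.
image_capable = patch_provider(OpenaiChat)
```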

@ -17,12 +17,12 @@ except ImportError:
pass
from ... import debug
from ...typing import Messages, Cookies, ImageType, AsyncResult, AsyncIterator
from ..base_provider import AsyncGeneratorProvider, BaseConversation
from ...typing import Messages, Cookies, ImageType, AsyncResult
from ..base_provider import AsyncGeneratorProvider
from ..helper import format_prompt, get_cookies
from ...requests.raise_for_status import raise_for_status
from ...errors import MissingAuthError, MissingRequirementsError
from ...image import ImageResponse, to_bytes
from ...image import to_bytes, ImageResponse
from ...webdriver import get_browser, get_driver_cookies
REQUEST_HEADERS = {
@ -32,7 +32,7 @@ REQUEST_HEADERS = {
'user-agent': 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/111.0.0.0 Safari/537.36',
'x-same-domain': '1',
}
REQUEST_BL_PARAM = "boq_assistant-bard-web-server_20240519.16_p0"
REQUEST_BL_PARAM = "boq_assistant-bard-web-server_20240421.18_p0"
REQUEST_URL = "https://gemini.google.com/_/BardChatUi/data/assistant.lamda.BardFrontendService/StreamGenerate"
UPLOAD_IMAGE_URL = "https://content-push.googleapis.com/upload/"
UPLOAD_IMAGE_HEADERS = {
@ -57,11 +57,9 @@ class Gemini(AsyncGeneratorProvider):
image_models = ["gemini"]
default_vision_model = "gemini"
_cookies: Cookies = None
_snlm0e: str = None
_sid: str = None
@classmethod
async def nodriver_login(cls, proxy: str = None) -> AsyncIterator[str]:
async def nodriver_login(cls) -> Cookies:
try:
import nodriver as uc
except ImportError:
@ -73,13 +71,7 @@ class Gemini(AsyncGeneratorProvider):
user_data_dir = None
if debug.logging:
print(f"Open nodriver with user_dir: {user_data_dir}")
browser = await uc.start(
user_data_dir=user_data_dir,
browser_args=None if proxy is None else [f"--proxy-server={proxy}"],
)
login_url = os.environ.get("G4F_LOGIN_URL")
if login_url:
yield f"Please login: [Google Gemini]({login_url})\n\n"
browser = await uc.start(user_data_dir=user_data_dir)
page = await browser.get(f"{cls.url}/app")
await page.select("div.ql-editor.textarea", 240)
cookies = {}
@ -87,10 +79,10 @@ class Gemini(AsyncGeneratorProvider):
if c.domain.endswith(".google.com"):
cookies[c.name] = c.value
await page.close()
cls._cookies = cookies
return cookies
@classmethod
async def webdriver_login(cls, proxy: str) -> AsyncIterator[str]:
async def webdriver_login(cls, proxy: str):
driver = None
try:
driver = get_browser(proxy=proxy)
@ -119,40 +111,40 @@ class Gemini(AsyncGeneratorProvider):
model: str,
messages: Messages,
proxy: str = None,
api_key: str = None,
cookies: Cookies = None,
connector: BaseConnector = None,
image: ImageType = None,
image_name: str = None,
response_format: str = None,
return_conversation: bool = False,
conversation: Conversation = None,
language: str = "en",
**kwargs
) -> AsyncResult:
prompt = format_prompt(messages) if conversation is None else messages[-1]["content"]
prompt = format_prompt(messages)
if api_key is not None:
if cookies is None:
cookies = {}
cookies["__Secure-1PSID"] = api_key
cls._cookies = cookies or cls._cookies or get_cookies(".google.com", False, True)
base_connector = get_connector(connector, proxy)
async with ClientSession(
headers=REQUEST_HEADERS,
connector=base_connector
) as session:
if not cls._snlm0e:
await cls.fetch_snlm0e(session, cls._cookies) if cls._cookies else None
if not cls._snlm0e:
async for chunk in cls.nodriver_login(proxy):
yield chunk
snlm0e = await cls.fetch_snlm0e(session, cls._cookies) if cls._cookies else None
if not snlm0e:
cls._cookies = await cls.nodriver_login();
if cls._cookies is None:
async for chunk in cls.webdriver_login(proxy):
yield chunk
if not cls._snlm0e:
if cls._cookies is None or "__Secure-1PSID" not in cls._cookies:
if not snlm0e:
if "__Secure-1PSID" not in cls._cookies:
raise MissingAuthError('Missing "__Secure-1PSID" cookie')
await cls.fetch_snlm0e(session, cls._cookies)
if not cls._snlm0e:
snlm0e = await cls.fetch_snlm0e(session, cls._cookies)
if not snlm0e:
raise RuntimeError("Invalid cookies. SNlM0e not found")
image_url = await cls.upload_image(base_connector, to_bytes(image), image_name) if image else None
async with ClientSession(
cookies=cls._cookies,
headers=REQUEST_HEADERS,
@ -160,17 +152,13 @@ class Gemini(AsyncGeneratorProvider):
) as client:
params = {
'bl': REQUEST_BL_PARAM,
'hl': language,
'_reqid': random.randint(1111, 9999),
'rt': 'c',
"f.sid": cls._sid,
'rt': 'c'
}
data = {
'at': cls._snlm0e,
'at': snlm0e,
'f.req': json.dumps([None, json.dumps(cls.build_request(
prompt,
language=language,
conversation=conversation,
image_url=image_url,
image_name=image_name
))])
@ -181,53 +169,37 @@ class Gemini(AsyncGeneratorProvider):
params=params,
) as response:
await raise_for_status(response)
image_prompt = response_part = None
last_content_len = 0
async for line in response.content:
try:
try:
line = json.loads(line)
except ValueError:
continue
if not isinstance(line, list):
continue
if len(line[0]) < 3 or not line[0][2]:
continue
response_part = json.loads(line[0][2])
if not response_part[4]:
continue
if return_conversation:
yield Conversation(response_part[1][0], response_part[1][1], response_part[4][0][0])
content = response_part[4][0][1][0]
except (ValueError, KeyError, TypeError, IndexError) as e:
print(f"{cls.__name__}:{e.__class__.__name__}:{e}")
continue
match = re.search(r'\[Imagen of (.*?)\]', content)
if match:
image_prompt = match.group(1)
content = content.replace(match.group(0), '')
yield content[last_content_len:]
last_content_len = len(content)
response = await response.text()
response_part = json.loads(json.loads(response.splitlines()[-5])[0][2])
if response_part[4] is None:
response_part = json.loads(json.loads(response.splitlines()[-7])[0][2])
content = response_part[4][0][1][0]
image_prompt = None
match = re.search(r'\[Imagen of (.*?)\]', content)
if match:
image_prompt = match.group(1)
content = content.replace(match.group(0), '')
yield content
if image_prompt:
images = [image[0][3][3] for image in response_part[4][0][12][7][0]]
if response_format == "b64_json":
yield ImageResponse(images, image_prompt, {"cookies": cls._cookies})
else:
resolved_images = []
preview = []
for image in images:
async with client.get(image, allow_redirects=False) as fetch:
image = fetch.headers["location"]
async with client.get(image, allow_redirects=False) as fetch:
image = fetch.headers["location"]
resolved_images.append(image)
preview.append(image.replace('=s512', '=s200'))
yield ImageResponse(resolved_images, image_prompt, {"orginal_links": images, "preview": preview})
resolved_images = []
preview = []
for image in images:
async with client.get(image, allow_redirects=False) as fetch:
image = fetch.headers["location"]
async with client.get(image, allow_redirects=False) as fetch:
image = fetch.headers["location"]
resolved_images.append(image)
preview.append(image.replace('=s512', '=s200'))
yield ImageResponse(resolved_images, image_prompt, {"orginal_links": images, "preview": preview})
def build_request(
prompt: str,
language: str,
conversation: Conversation = None,
conversation_id: str = "",
response_id: str = "",
choice_id: str = "",
image_url: str = None,
image_name: str = None,
tools: list[list[str]] = []
@ -235,15 +207,8 @@ class Gemini(AsyncGeneratorProvider):
image_list = [[[image_url, 1], image_name]] if image_url else []
return [
[prompt, 0, None, image_list, None, None, 0],
[language],
[
None if conversation is None else conversation.conversation_id,
None if conversation is None else conversation.response_id,
None if conversation is None else conversation.choice_id,
None,
None,
[]
],
["en"],
[conversation_id, response_id, choice_id, None, None, []],
None,
None,
None,
@ -260,7 +225,7 @@ class Gemini(AsyncGeneratorProvider):
headers=UPLOAD_IMAGE_HEADERS,
connector=connector
) as session:
async with session.options(UPLOAD_IMAGE_URL) as response:
async with session.options(UPLOAD_IMAGE_URL) as reponse:
await raise_for_status(response)
headers = {
@ -289,20 +254,7 @@ class Gemini(AsyncGeneratorProvider):
async def fetch_snlm0e(cls, session: ClientSession, cookies: Cookies):
async with session.get(cls.url, cookies=cookies) as response:
await raise_for_status(response)
response_text = await response.text()
match = re.search(r'SNlM0e\":\"(.*?)\"', response_text)
text = await response.text()
match = re.search(r'SNlM0e\":\"(.*?)\"', text)
if match:
cls._snlm0e = match.group(1)
sid_match = re.search(r'"FdrFJe":"([\d-]+)"', response_text)
if sid_match:
cls._sid = sid_match.group(1)
class Conversation(BaseConversation):
def __init__(self,
conversation_id: str = "",
response_id: str = "",
choice_id: str = ""
) -> None:
self.conversation_id = conversation_id
self.response_id = response_id
self.choice_id = choice_id
return match.group(1)

@ -26,7 +26,7 @@ from ...webdriver import get_browser
from ...typing import AsyncResult, Messages, Cookies, ImageType, AsyncIterator
from ...requests import get_args_from_browser, raise_for_status
from ...requests.aiohttp import StreamSession
from ...image import ImageResponse, ImageRequest, to_image, to_bytes, is_accepted_format
from ...image import to_image, to_bytes, ImageResponse, ImageRequest
from ...errors import MissingAuthError, ResponseError
from ...providers.conversation import BaseConversation
from ..helper import format_cookies
@ -38,7 +38,7 @@ DEFAULT_HEADERS = {
"accept": "*/*",
"accept-encoding": "gzip, deflate, br, zstd",
"accept-language": "en-US,en;q=0.5",
"referer": "https://chatgpt.com/",
"referer": "https://chat.openai.com/",
"sec-ch-ua": "\"Brave\";v=\"123\", \"Not:A-Brand\";v=\"8\", \"Chromium\";v=\"123\"",
"sec-ch-ua-mobile": "?0",
"sec-ch-ua-platform": "\"Windows\"",
@ -53,15 +53,15 @@ class OpenaiChat(AsyncGeneratorProvider, ProviderModelMixin):
"""A class for creating and managing conversations with OpenAI chat service"""
label = "OpenAI ChatGPT"
url = "https://chatgpt.com"
url = "https://chat.openai.com"
working = True
supports_gpt_35_turbo = True
supports_gpt_4 = True
supports_message_history = True
supports_system_message = True
default_model = None
default_vision_model = "gpt-4o"
models = ["gpt-3.5-turbo", "gpt-4", "gpt-4-gizmo", "gpt-4o", "auto"]
default_vision_model = "gpt-4-vision"
models = ["gpt-3.5-turbo", "gpt-4", "gpt-4-gizmo"]
model_aliases = {
"text-davinci-002-render-sha": "gpt-3.5-turbo",
"": "gpt-3.5-turbo",
@ -138,22 +138,23 @@ class OpenaiChat(AsyncGeneratorProvider, ProviderModelMixin):
An ImageRequest object that contains the download URL, file name, and other data
"""
# Convert the image to a PIL Image object and get the extension
data_bytes = to_bytes(image)
image = to_image(data_bytes)
image = to_image(image)
extension = image.format.lower()
# Convert the image to a bytes object and get the size
data_bytes = to_bytes(image)
data = {
"file_name": "" if image_name is None else image_name,
"file_name": image_name if image_name else f"{image.width}x{image.height}.{extension}",
"file_size": len(data_bytes),
"use_case": "multimodal"
}
# Post the image data to the service and get the image data
async with session.post(f"{cls.url}/backend-api/files", json=data, headers=headers) as response:
cls._update_request_args(session)
cls._update_request_args()
await raise_for_status(response)
image_data = {
**data,
**await response.json(),
"mime_type": is_accepted_format(data_bytes),
"mime_type": f"image/{extension}",
"extension": extension,
"height": image.height,
"width": image.width
@ -274,7 +275,7 @@ class OpenaiChat(AsyncGeneratorProvider, ProviderModelMixin):
first_part = line["message"]["content"]["parts"][0]
if "asset_pointer" not in first_part or "metadata" not in first_part:
return
if first_part["metadata"] is None or first_part["metadata"]["dalle"] is None:
if first_part["metadata"] is None:
return
prompt = first_part["metadata"]["dalle"]["prompt"]
file_id = first_part["asset_pointer"].split("file-service://", 1)[1]
@ -329,7 +330,6 @@ class OpenaiChat(AsyncGeneratorProvider, ProviderModelMixin):
image: ImageType = None,
image_name: str = None,
return_conversation: bool = False,
max_retries: int = 3,
**kwargs
) -> AsyncResult:
"""
@ -364,24 +364,84 @@ class OpenaiChat(AsyncGeneratorProvider, ProviderModelMixin):
) as session:
if cls._expires is not None and cls._expires < time.time():
cls._headers = cls._api_key = None
arkose_token = None
proofTokens = None
try:
arkose_token, api_key, cookies, headers, proofTokens = await getArkoseAndAccessToken(proxy)
cls._create_request_args(cookies, headers)
if cls._headers is None or cookies is not None:
cls._create_request_args(cookies)
api_key = kwargs["access_token"] if "access_token" in kwargs else api_key
if api_key is not None:
cls._set_api_key(api_key)
except NoValidHarFileError as e:
if cls._api_key is None and cls.needs_auth:
raise e
cls._create_request_args()
if cls.default_model is None and (not cls.needs_auth or cls._api_key is not None):
if cls._api_key is None:
cls._create_request_args(cookies)
async with session.get(
f"{cls.url}/",
headers=DEFAULT_HEADERS
) as response:
cls._update_request_args(session)
await raise_for_status(response)
try:
if not model:
cls.default_model = cls.get_model(await cls.get_default_model(session, cls._headers))
else:
cls.default_model = cls.get_model(model)
except MissingAuthError:
pass
except Exception as e:
api_key = cls._api_key = None
cls._create_request_args()
if debug.logging:
print("OpenaiChat: Load default model failed")
print(f"{e.__class__.__name__}: {e}")
arkose_token = None
if cls.default_model is None:
error = None
try:
arkose_token, api_key, cookies, headers = await getArkoseAndAccessToken(proxy)
cls._create_request_args(cookies, headers)
cls._set_api_key(api_key)
except NoValidHarFileError as e:
error = e
if cls._api_key is None:
await cls.nodriver_access_token()
if cls._api_key is None and cls.needs_auth:
raise error
cls.default_model = cls.get_model(await cls.get_default_model(session, cls._headers))
async with session.post(
f"{cls.url}/backend-anon/sentinel/chat-requirements"
if cls._api_key is None else
f"{cls.url}/backend-api/sentinel/chat-requirements",
json={"conversation_mode_kind": "primary_assistant"},
headers=cls._headers
) as response:
cls._update_request_args(session)
await raise_for_status(response)
data = await response.json()
blob = data["arkose"]["dx"]
need_arkose = data["arkose"]["required"]
chat_token = data["token"]
proofofwork = ""
if "proofofwork" in data:
proofofwork = generate_proof_token(**data["proofofwork"], user_agent=cls._headers["user-agent"])
if need_arkose and arkose_token is None:
arkose_token, api_key, cookies, headers = await getArkoseAndAccessToken(proxy)
cls._create_request_args(cookies, headers)
cls._set_api_key(api_key)
if arkose_token is None:
raise MissingAuthError("No arkose token found in .har file")
if debug.logging:
print(
'Arkose:', False if not need_arkose else arkose_token[:12]+"...",
'Turnstile:', data["turnstile"]["required"],
'Proofofwork:', False if proofofwork is None else proofofwork[:12]+"...",
)
try:
image_request = await cls.upload_image(session, cls._headers, image, image_name) if image else None
except Exception as e:
image_request = None
if debug.logging:
print("OpenaiChat: Upload image failed")
print(f"{e.__class__.__name__}: {e}")
@ -396,43 +456,6 @@ class OpenaiChat(AsyncGeneratorProvider, ProviderModelMixin):
auto_continue = False
conversation.finish_reason = None
while conversation.finish_reason is None:
async with session.post(
f"{cls.url}/backend-anon/sentinel/chat-requirements"
if cls._api_key is None else
f"{cls.url}/backend-api/sentinel/chat-requirements",
json={"p": generate_proof_token(True, user_agent=cls._headers["user-agent"], proofTokens=proofTokens)},
headers=cls._headers
) as response:
cls._update_request_args(session)
await raise_for_status(response)
requirements = await response.json()
need_arkose = requirements.get("arkose", {}).get("required")
chat_token = requirements["token"]
if need_arkose and arkose_token is None:
arkose_token, api_key, cookies, headers, proofTokens = await getArkoseAndAccessToken(proxy)
cls._create_request_args(cookies, headers)
cls._set_api_key(api_key)
if arkose_token is None:
raise MissingAuthError("No arkose token found in .har file")
if "proofofwork" in requirements:
proofofwork = generate_proof_token(
**requirements["proofofwork"],
user_agent=cls._headers["user-agent"],
proofTokens=proofTokens
)
if debug.logging:
print(
'Arkose:', False if not need_arkose else arkose_token[:12]+"...",
'Proofofwork:', False if proofofwork is None else proofofwork[:12]+"...",
)
ws = None
if need_arkose:
async with session.post(f"{cls.url}/backend-api/register-websocket", headers=cls._headers) as response:
wss_url = (await response.json()).get("wss_url")
if wss_url:
ws = await session.ws_connect(wss_url)
websocket_request_id = str(uuid.uuid4())
data = {
"action": action,
@ -458,21 +481,14 @@ class OpenaiChat(AsyncGeneratorProvider, ProviderModelMixin):
if proofofwork is not None:
headers["Openai-Sentinel-Proof-Token"] = proofofwork
async with session.post(
f"{cls.url}/backend-anon/conversation"
if cls._api_key is None else
f"{cls.url}/backend-anon/conversation" if cls._api_key is None else
f"{cls.url}/backend-api/conversation",
json=data,
headers=headers
) as response:
cls._update_request_args(session)
if response.status == 403 and max_retries > 0:
max_retries -= 1
if debug.logging:
print(f"Retry: Error {response.status}: {await response.text()}")
await asyncio.sleep(5)
continue
await raise_for_status(response)
async for chunk in cls.iter_messages_chunk(response.iter_lines(), session, conversation, ws):
async for chunk in cls.iter_messages_chunk(response.iter_lines(), session, conversation):
if return_conversation:
history_disabled = False
return_conversation = False
@ -502,14 +518,13 @@ class OpenaiChat(AsyncGeneratorProvider, ProviderModelMixin):
cls,
messages: AsyncIterator,
session: StreamSession,
fields: Conversation,
ws = None
fields: Conversation
) -> AsyncIterator:
last_message: int = 0
async for message in messages:
if message.startswith(b'{"wss_url":'):
message = json.loads(message)
ws = await session.ws_connect(message["wss_url"]) if ws is None else ws
ws = await session.ws_connect(message["wss_url"])
try:
async for chunk in cls.iter_messages_chunk(
cls.iter_messages_ws(ws, message["conversation_id"], hasattr(ws, "recv")),
@ -549,9 +564,12 @@ class OpenaiChat(AsyncGeneratorProvider, ProviderModelMixin):
raise RuntimeError(line["error"])
if "message_type" not in line["message"]["metadata"]:
return
image_response = await cls.get_generated_image(session, cls._headers, line)
if image_response is not None:
yield image_response
try:
image_response = await cls.get_generated_image(session, cls._headers, line)
if image_response is not None:
yield image_response
except Exception as e:
yield e
if line["message"]["author"]["role"] != "assistant":
return
if line["message"]["content"]["content_type"] != "text":
@ -583,7 +601,7 @@ class OpenaiChat(AsyncGeneratorProvider, ProviderModelMixin):
this._fetch = this.fetch;
this.fetch = async (url, options) => {
const response = await this._fetch(url, options);
if (url == "https://chatgpt.com/backend-api/conversation") {
if (url == "https://chat.openai.com/backend-api/conversation") {
this._headers = options.headers;
return response;
}
@ -606,7 +624,7 @@ this.fetch = async (url, options) => {
cls._update_cookie_header()
@classmethod
async def nodriver_access_token(cls, proxy: str = None):
async def nodriver_access_token(cls):
try:
import nodriver as uc
except ImportError:
@ -618,11 +636,8 @@ this.fetch = async (url, options) => {
user_data_dir = None
if debug.logging:
print(f"Open nodriver with user_dir: {user_data_dir}")
browser = await uc.start(
user_data_dir=user_data_dir,
browser_args=None if proxy is None else [f"--proxy-server={proxy}"],
)
page = await browser.get("https://chatgpt.com/")
browser = await uc.start(user_data_dir=user_data_dir)
page = await browser.get("https://chat.openai.com/")
await page.select("[id^=headlessui-menu-button-]", 240)
api_key = await page.evaluate(
"(async () => {"
@ -637,7 +652,7 @@ this.fetch = async (url, options) => {
)
cookies = {}
for c in await page.browser.cookies.get_all():
if c.domain.endswith("chatgpt.com"):
if c.domain.endswith("chat.openai.com"):
cookies[c.name] = c.value
user_agent = await page.evaluate("window.navigator.userAgent")
await page.close()
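`nodriver_access_token` drives a real browser, evaluates JavaScript to pull the access token, and then copies the site cookies. A minimal sketch of just the cookie-harvesting part, assuming the `nodriver` package and mirroring the calls in the hunk:

```python
import nodriver as uc

async def harvest_cookies(url: str, domain: str) -> dict:
    browser = await uc.start()
    page = await browser.get(url)
    cookies = {}
    for c in await page.browser.cookies.get_all():
        # Keep only cookies scoped to the target domain.
        if c.domain.endswith(domain):
            cookies[c.name] = c.value
    await page.close()
    return cookies
```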

@ -1,31 +0,0 @@
from __future__ import annotations
from .Openai import Openai
from ...typing import AsyncResult, Messages
class PerplexityApi(Openai):
label = "Perplexity API"
url = "https://www.perplexity.ai"
working = True
default_model = "llama-3-sonar-large-32k-online"
models = [
"llama-3-sonar-small-32k-chat",
"llama-3-sonar-small-32k-online",
"llama-3-sonar-large-32k-chat",
"llama-3-sonar-large-32k-online",
"llama-3-8b-instruct",
"llama-3-70b-instruct",
"mixtral-8x7b-instruct"
]
@classmethod
def create_async_generator(
cls,
model: str,
messages: Messages,
api_base: str = "https://api.perplexity.ai",
**kwargs
) -> AsyncResult:
return super().create_async_generator(
model, messages, api_base=api_base, **kwargs
)

@ -7,5 +7,4 @@ from .Poe import Poe
from .Openai import Openai
from .Groq import Groq
from .OpenRouter import OpenRouter
from .OpenaiAccount import OpenaiAccount
from .PerplexityApi import PerplexityApi
from .OpenaiAccount import OpenaiAccount

@ -12,8 +12,6 @@ from copy import deepcopy
from .crypt import decrypt, encrypt
from ...requests import StreamSession
from ...cookies import get_cookies_dir
from ... import debug
class NoValidHarFileError(Exception):
...
@ -28,23 +26,24 @@ class arkReq:
self.userAgent = userAgent
arkPreURL = "https://tcr9i.chat.openai.com/fc/gt2/public_key/35536E1E-65B4-4D96-9D97-6ADB7EFF8147"
sessionUrl = "https://chatgpt.com/"
sessionUrl = "https://chat.openai.com/api/auth/session"
chatArk: arkReq = None
accessToken: str = None
cookies: dict = None
headers: dict = None
proofTokens: list = []
def readHAR():
global proofTokens
dirPath = "./"
harPath = []
chatArks = []
accessToken = None
cookies = {}
for root, dirs, files in os.walk(get_cookies_dir()):
for root, dirs, files in os.walk(dirPath):
for file in files:
if file.endswith(".har"):
harPath.append(os.path.join(root, file))
if harPath:
break
if not harPath:
raise NoValidHarFileError("No .har file found")
for path in harPath:
@ -55,26 +54,15 @@ def readHAR():
# Error: not a HAR file!
continue
for v in harFile['log']['entries']:
v_headers = get_headers(v)
try:
if "openai-sentinel-proof-token" in v_headers:
proofTokens.append(json.loads(base64.b64decode(
v_headers["openai-sentinel-proof-token"].split("gAAAAAB", 1)[-1].encode()
).decode()))
except Exception as e:
if debug.logging:
print(f"Read proof token: {e}")
if arkPreURL in v['request']['url']:
chatArks.append(parseHAREntry(v))
elif v['request']['url'] == sessionUrl:
try:
match = re.search(r'"accessToken":"(.*?)"', v["response"]["content"]["text"])
if match:
accessToken = match.group(1)
accessToken = json.loads(v["response"]["content"]["text"]).get("accessToken")
except KeyError:
continue
cookies = {c['name']: c['value'] for c in v['request']['cookies'] if c['name'] != "oai-did"}
headers = v_headers
cookies = {c['name']: c['value'] for c in v['request']['cookies']}
headers = get_headers(v)
if not accessToken:
raise NoValidHarFileError("No accessToken found in .har files")
if not chatArks:
@ -113,8 +101,7 @@ def genArkReq(chatArk: arkReq) -> arkReq:
async def sendRequest(tmpArk: arkReq, proxy: str = None):
async with StreamSession(headers=tmpArk.arkHeader, cookies=tmpArk.arkCookies, proxies={"https": proxy}) as session:
async with session.post(tmpArk.arkURL, data=tmpArk.arkBody) as response:
data = await response.json()
arkose = data.get("token")
arkose = (await response.json()).get("token")
if "sup=1|rid=" not in arkose:
raise RuntimeError("No valid arkose token generated")
return arkose
@ -144,10 +131,10 @@ def getN() -> str:
return base64.b64encode(timestamp.encode()).decode()
async def getArkoseAndAccessToken(proxy: str) -> tuple[str, str, dict, dict]:
global chatArk, accessToken, cookies, headers, proofTokens
global chatArk, accessToken, cookies, headers
if chatArk is None or accessToken is None:
chatArk, accessToken, cookies, headers = readHAR()
if chatArk is None:
return None, accessToken, cookies, headers, proofTokens
return None, accessToken, cookies, headers
newReq = genArkReq(chatArk)
return await sendRequest(newReq, proxy), accessToken, cookies, headers, proofTokens
return await sendRequest(newReq, proxy), accessToken, cookies, headers
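`readHAR` walks the cookies directory (or `./` in the other revision), collects every `*.har` file, and mines the entries for the session URL's `accessToken`. A compact sketch of that walk-and-parse step, with keys taken from the hunks above and error handling trimmed:

```python
import json
import os

def find_har_files(root_dir: str) -> list[str]:
    har_paths = []
    for root, _dirs, files in os.walk(root_dir):
        for file in files:
            if file.endswith(".har"):
                har_paths.append(os.path.join(root, file))
    return har_paths

def extract_access_token(har_path: str, session_url: str) -> str | None:
    with open(har_path, "rb") as f:
        try:
            har = json.load(f)
        except json.JSONDecodeError:
            return None  # not a HAR file
    for entry in har["log"]["entries"]:
        if entry["request"]["url"] == session_url:
            try:
                body = entry["response"]["content"]["text"]
                return json.loads(body).get("accessToken")
            except (KeyError, json.JSONDecodeError):
                continue
    return None
```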

@ -2,32 +2,29 @@ import random
import hashlib
import json
import base64
from datetime import datetime, timezone
from datetime import datetime, timedelta, timezone
def generate_proof_token(required: bool, seed: str = "", difficulty: str = "", user_agent: str = None, proofTokens: list = None):
def generate_proof_token(required: bool, seed: str, difficulty: str, user_agent: str):
if not required:
return
if proofTokens:
config = proofTokens[-1]
else:
screen = random.choice([3008, 4010, 6000]) * random.choice([1, 2, 4])
# Get current UTC time
now_utc = datetime.now(timezone.utc)
parse_time = now_utc.strftime('%a, %d %b %Y %H:%M:%S GMT')
config = [
screen, parse_time,
None, 0, user_agent,
"https://tcr9i.chat.openai.com/v2/35536E1E-65B4-4D96-9D97-6ADB7EFF8147/api.js",
"dpl=1440a687921de39ff5ee56b92807faaadce73f13","en","en-US",
None,
"plugins[object PluginArray]",
random.choice(["_reactListeningcfilawjnerp", "_reactListening9ne2dfo1i47", "_reactListening410nzwhan2a"]),
random.choice(["alert", "ontransitionend", "onprogress"])
]
diff_len = len(difficulty)
cores = [8, 12, 16, 24]
screens = [3000, 4000, 6000]
core = random.choice(cores)
screen = random.choice(screens)
# Get current UTC time
now_utc = datetime.now(timezone.utc)
# Convert UTC time to Eastern Time
now_et = now_utc.astimezone(timezone(timedelta(hours=-5)))
parse_time = now_et.strftime('%a, %d %b %Y %H:%M:%S GMT')
config = [core + screen, parse_time, 4294705152, 0, user_agent]
diff_len = len(difficulty) // 2
for i in range(100000):
config[3] = i
json_data = json.dumps(config)
@ -35,7 +32,8 @@ def generate_proof_token(required: bool, seed: str = "", difficulty: str = "", u
hash_value = hashlib.sha3_512((seed + base).encode()).digest()
if hash_value.hex()[:diff_len] <= difficulty:
return "gAAAAAB" + base
result = "gAAAAAB" + base
return result
fallback_base = base64.b64encode(f'"{seed}"'.encode()).decode()
return "gAAAAABwQ8Lk5FbGpA2NcR9dShT6gYjU7VxZ4D" + fallback_base

@ -4,15 +4,11 @@ import json
import os
import os.path
import random
import logging
from ...requests import StreamSession, raise_for_status
from ...cookies import get_cookies_dir
from ...errors import MissingRequirementsError
from ... import debug
logging.basicConfig(level=logging.ERROR)
class NoValidHarFileError(Exception):
...
@ -29,9 +25,10 @@ public_token = "public-token-live-507a52ad-7e69-496b-aee0-1c9863c7c819"
chatArks: list = None
def readHAR():
dirPath = "./"
harPath = []
chatArks = []
for root, dirs, files in os.walk(get_cookies_dir()):
for root, dirs, files in os.walk(dirPath):
for file in files:
if file.endswith(".har"):
harPath.append(os.path.join(root, file))
@ -81,35 +78,28 @@ async def get_telemetry_ids(proxy: str = None) -> list:
return [await create_telemetry_id(proxy)]
except NoValidHarFileError as e:
if debug.logging:
logging.error(e)
print(e)
if debug.logging:
print('Getting telemetry_id for you.com with nodriver')
try:
from nodriver import start
except ImportError:
raise MissingRequirementsError('Add .har file from you.com or install "nodriver" package | pip install -U nodriver')
if debug.logging:
logging.error('Getting telemetry_id for you.com with nodriver')
browser = page = None
page = None
try:
browser = await start(
browser_args=None if proxy is None else [f"--proxy-server={proxy}"],
)
browser = await start()
page = await browser.get("https://you.com")
while not await page.evaluate('"GetTelemetryID" in this'):
await page.sleep(1)
async def get_telemetry_id():
return await page.evaluate(
f'this.GetTelemetryID("{public_token}", "{telemetry_url}");',
await_promise=True
)
return [await get_telemetry_id()]
finally:
try:
if page is not None:
await page.close()
if browser is not None:
await browser.stop()
except Exception as e:
if debug.logging:
logging.error(e)
if page is not None:
await page.close()
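The fuller cleanup stops the browser as well as closing the page, and swallows teardown errors so they cannot mask the telemetry result. The same `finally` pattern in isolation (a sketch; `start_browser` and `task` stand in for the nodriver calls above):

```python
async def with_browser(start_browser, task):
    browser = page = None
    try:
        browser = await start_browser()
        page = await browser.get("https://you.com")
        return await task(page)
    finally:
        # Best-effort teardown: never let cleanup errors mask the result.
        try:
            if page is not None:
                await page.close()
            if browser is not None:
                await browser.stop()
        except Exception:
            pass
```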

@ -47,24 +47,22 @@ class ChatCompletionsForm(BaseModel):
web_search: Optional[bool] = None
proxy: Optional[str] = None
class ImagesGenerateForm(BaseModel):
model: Optional[str] = None
provider: Optional[str] = None
prompt: str
response_format: Optional[str] = None
api_key: Optional[str] = None
proxy: Optional[str] = None
class AppConfig():
list_ignored_providers: Optional[list[str]] = None
g4f_api_key: Optional[str] = None
ignore_cookie_files: bool = False
defaults: dict = {}
@classmethod
def set_config(cls, **data):
for key, value in data.items():
setattr(cls, key, value)
def set_list_ignored_providers(cls, ignored: list[str]):
cls.list_ignored_providers = ignored
@classmethod
def set_g4f_api_key(cls, key: str = None):
cls.g4f_api_key = key
@classmethod
def set_ignore_cookie_files(cls, value: bool):
cls.ignore_cookie_files = value
class Api:
def __init__(self, app: FastAPI) -> None:
@ -128,10 +126,7 @@ class Api:
'created': 0,
'owned_by': model.base_provider
} for model_id, model in model_list.items()]
return JSONResponse({
"object": "list",
"data": model_list,
})
return JSONResponse(model_list)
@self.app.get("/v1/models/{model_name}")
async def model_info(model_name: str):
@ -157,53 +152,33 @@ class Api:
if auth_header and auth_header != "Bearer":
config.api_key = auth_header
response = self.client.chat.completions.create(
**{
**AppConfig.defaults,
**config.dict(exclude_none=True),
},
**config.dict(exclude_none=True),
ignored=AppConfig.list_ignored_providers
)
if not config.stream:
return JSONResponse((await response).to_json())
async def streaming():
try:
async for chunk in response:
yield f"data: {json.dumps(chunk.to_json())}\n\n"
except GeneratorExit:
pass
except Exception as e:
logging.exception(e)
yield f'data: {format_exception(e, config)}\n\n'
yield "data: [DONE]\n\n"
return StreamingResponse(streaming(), media_type="text/event-stream")
except Exception as e:
logging.exception(e)
return Response(content=format_exception(e, config), status_code=500, media_type="application/json")
if not config.stream:
return JSONResponse((await response).to_json())
async def streaming():
try:
async for chunk in response:
yield f"data: {json.dumps(chunk.to_json())}\n\n"
except GeneratorExit:
pass
except Exception as e:
logging.exception(e)
yield f'data: {format_exception(e, config)}\n\n'
yield "data: [DONE]\n\n"
return StreamingResponse(streaming(), media_type="text/event-stream")
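The `streaming()` generator above emits one `data: {json}` line per chunk and closes with `data: [DONE]`, i.e. standard server-sent events. A hedged client-side sketch for consuming such a stream with `requests`, assuming a locally running instance on port 1337:

```python
import json

import requests

def stream_chat(base_url: str, payload: dict):
    with requests.post(f"{base_url}/v1/chat/completions", json=payload, stream=True) as resp:
        resp.raise_for_status()
        for line in resp.iter_lines():
            if not line or not line.startswith(b"data: "):
                continue
            data = line[len(b"data: "):]
            if data == b"[DONE]":
                break
            yield json.loads(data)

for chunk in stream_chat("http://localhost:1337", {
    "model": "gpt-3.5-turbo",
    "stream": True,
    "messages": [{"role": "user", "content": "Hi"}],
}):
    print(chunk)
```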
@self.app.post("/v1/completions")
async def completions():
return Response(content=json.dumps({'info': 'Not working yet.'}, indent=4), media_type="application/json")
@self.app.post("/v1/images/generations")
async def images_generate(config: ImagesGenerateForm, request: Request = None, provider: str = None):
try:
config.provider = provider if config.provider is None else config.provider
if config.api_key is None and request is not None:
auth_header = request.headers.get("Authorization")
if auth_header is not None:
auth_header = auth_header.split(None, 1)[-1]
if auth_header and auth_header != "Bearer":
config.api_key = auth_header
response = self.client.images.generate(
**config.dict(exclude_none=True),
)
return JSONResponse((await response).to_json())
except Exception as e:
logging.exception(e)
return Response(content=format_exception(e, config), status_code=500, media_type="application/json")
def format_exception(e: Exception, config: ChatCompletionsForm) -> str:
last_provider = g4f.get_last_provider(True)
return json.dumps({

@ -3,9 +3,6 @@ from __future__ import annotations
import time
import random
import string
import asyncio
import base64
from aiohttp import ClientSession, BaseConnector
from .types import Client as BaseClient
from .types import ProviderType, FinishReason
@ -14,12 +11,10 @@ from .types import AsyncIterResponse, ImageProvider
from .image_models import ImageModels
from .helper import filter_json, find_stop, filter_none, cast_iter_async
from .service import get_last_provider, get_model_and_provider
from ..Provider import ProviderUtils
from ..typing import Union, Messages, AsyncIterator, ImageType
from ..errors import NoImageResponseError, ProviderNotFoundError
from ..requests.aiohttp import get_connector
from ..providers.conversation import BaseConversation
from ..image import ImageResponse as ImageProviderResponse, ImageDataResponse
from ..typing import Union, Iterator, Messages, AsyncIterator, ImageType
from ..errors import NoImageResponseError
from ..image import ImageResponse as ImageProviderResponse
from ..providers.base_provider import AsyncGeneratorProvider
try:
anext
@ -43,9 +38,6 @@ async def iter_response(
if isinstance(chunk, FinishReason):
finish_reason = chunk.reason
break
elif isinstance(chunk, BaseConversation):
yield chunk
continue
content += str(chunk)
count += 1
if max_tokens is not None and count >= max_tokens:
@ -96,7 +88,7 @@ def create_response(
api_key: str = None,
**kwargs
):
has_async = hasattr(provider, "create_async_generator")
has_async = isinstance(provider, type) and issubclass(provider, AsyncGeneratorProvider)
if has_async:
create = provider.create_async_generator
else:
@ -165,37 +157,20 @@ class Chat():
def __init__(self, client: AsyncClient, provider: ProviderType = None):
self.completions = Completions(client, provider)
async def iter_image_response(
response: AsyncIterator,
response_format: str = None,
connector: BaseConnector = None,
proxy: str = None
) -> Union[ImagesResponse, None]:
async def iter_image_response(response: Iterator) -> Union[ImagesResponse, None]:
async for chunk in response:
if isinstance(chunk, ImageProviderResponse):
if response_format == "b64_json":
async with ClientSession(
connector=get_connector(connector, proxy),
cookies=chunk.options.get("cookies")
) as session:
async def fetch_image(image):
async with session.get(image) as response:
return base64.b64encode(await response.content.read()).decode()
images = await asyncio.gather(*[fetch_image(image) for image in chunk.get_list()])
return ImagesResponse([Image(None, image, chunk.alt) for image in images], int(time.time()))
return ImagesResponse([Image(image, None, chunk.alt) for image in chunk.get_list()], int(time.time()))
elif isinstance(chunk, ImageDataResponse):
return ImagesResponse([Image(None, image, chunk.alt) for image in chunk.get_list()], int(time.time()))
return ImagesResponse([Image(image) for image in chunk.get_list()])
def create_image(provider: ProviderType, prompt: str, model: str = "", **kwargs) -> AsyncIterator:
if isinstance(provider, type) and provider.__name__ == "You":
def create_image(client: AsyncClient, provider: ProviderType, prompt: str, model: str = "", **kwargs) -> AsyncIterator:
prompt = f"create a image with: {prompt}"
if provider.__name__ == "You":
kwargs["chat_mode"] = "create"
else:
prompt = f"create a image with: {prompt}"
return provider.create_async_generator(
model,
[{"role": "user", "content": prompt}],
stream=True,
proxy=client.get_proxy(),
**kwargs
)
@ -205,71 +180,31 @@ class Images():
self.provider: ImageProvider = provider
self.models: ImageModels = ImageModels(client)
def get_provider(self, model: str, provider: ProviderType = None):
if isinstance(provider, str):
if provider in ProviderUtils.convert:
provider = ProviderUtils.convert[provider]
else:
raise ProviderNotFoundError(f'Provider not found: {provider}')
else:
provider = self.models.get(model, self.provider)
return provider
async def generate(
self,
prompt,
model: str = "",
provider: ProviderType = None,
response_format: str = None,
connector: BaseConnector = None,
proxy: str = None,
**kwargs
) -> ImagesResponse:
provider = self.get_provider(model, provider)
if hasattr(provider, "create_async_generator"):
response = create_image(
provider,
prompt,
**filter_none(
response_format=response_format,
connector=connector,
proxy=self.client.get_proxy() if proxy is None else proxy,
),
**kwargs
)
async def generate(self, prompt, model: str = "", **kwargs) -> ImagesResponse:
provider = self.models.get(model, self.provider)
if isinstance(provider, type) and issubclass(provider, AsyncGeneratorProvider):
response = create_image(self.client, provider, prompt, **kwargs)
else:
response = await provider.create_async(prompt)
return ImagesResponse([Image(image) for image in response.get_list()])
image = await iter_image_response(response, response_format, connector, proxy)
image = await iter_image_response(response)
if image is None:
raise NoImageResponseError()
return image
async def create_variation(
self,
image: ImageType,
model: str = None,
response_format: str = None,
connector: BaseConnector = None,
proxy: str = None,
**kwargs
):
provider = self.get_provider(model, provider)
async def create_variation(self, image: ImageType, model: str = None, **kwargs):
provider = self.models.get(model, self.provider)
result = None
if hasattr(provider, "create_async_generator"):
if isinstance(provider, type) and issubclass(provider, AsyncGeneratorProvider):
response = provider.create_async_generator(
"",
[{"role": "user", "content": "create a image like this"}],
stream=True,
True,
image=image,
**filter_none(
response_format=response_format,
connector=connector,
proxy=self.client.get_proxy() if proxy is None else proxy,
),
proxy=self.client.get_proxy(),
**kwargs
)
result = iter_image_response(response, response_format, connector, proxy)
result = iter_image_response(response)
if result is None:
raise NoImageResponseError()
return result
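With the older `generate(self, prompt, model = "", **kwargs)` signature, an image call reduces to picking a provider and taking the first image chunk from its response. A usage sketch under those assumptions (the import path is an assumption; adjust it to the installed version):

```python
import asyncio

from g4f.client import AsyncClient  # assumed import path

async def main():
    client = AsyncClient()
    response = await client.images.generate("a lighthouse at dusk")
    for image in response.data:
        print(image.url)

asyncio.run(main())
```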

@ -6,7 +6,6 @@ import string
from ..typing import Union, Iterator, Messages, ImageType
from ..providers.types import BaseProvider, ProviderType, FinishReason
from ..providers.conversation import BaseConversation
from ..image import ImageResponse as ImageProviderResponse
from ..errors import NoImageResponseError
from .stubs import ChatCompletion, ChatCompletionChunk, Image, ImagesResponse
@ -30,9 +29,6 @@ def iter_response(
if isinstance(chunk, FinishReason):
finish_reason = chunk.reason
break
elif isinstance(chunk, BaseConversation):
yield chunk
continue
content += str(chunk)
if max_tokens is not None and idx + 1 >= max_tokens:
finish_reason = "length"
@ -129,12 +125,9 @@ def iter_image_response(response: Iterator) -> Union[ImagesResponse, None]:
return ImagesResponse([Image(image) for image in chunk.get_list()])
def create_image(client: Client, provider: ProviderType, prompt: str, model: str = "", **kwargs) -> Iterator:
if isinstance(provider, type) and provider.__name__ == "You":
prompt = f"create a image with: {prompt}"
if provider.__name__ == "You":
kwargs["chat_mode"] = "create"
else:
prompt = f"create a image with: {prompt}"
return provider.create_completion(
model,
[{"role": "user", "content": prompt}],

@ -4,7 +4,7 @@ from typing import Union
from .. import debug, version
from ..errors import ProviderNotFoundError, ModelNotFoundError, ProviderNotWorkingError, StreamNotSupportedError
from ..models import Model, ModelUtils, default
from ..models import Model, ModelUtils
from ..Provider import ProviderUtils
from ..providers.types import BaseRetryProvider, ProviderType
from ..providers.retry_provider import IterProvider
@ -60,9 +60,7 @@ def get_model_and_provider(model : Union[Model, str],
model = ModelUtils.convert[model]
if not provider:
if not model:
model = default
elif isinstance(model, str):
if isinstance(model, str):
raise ModelNotFoundError(f'Model not found: {model}')
provider = model.best_provider

@ -78,14 +78,12 @@ class ChatCompletionDelta(Model):
def __init__(self, content: Union[str, None]):
if content is not None:
self.content = content
self.role = "assistant"
def to_json(self):
return self.__dict__
class ChatCompletionDeltaChoice(Model):
def __init__(self, delta: ChatCompletionDelta, finish_reason: Union[str, None]):
self.index = 0
self.delta = delta
self.finish_reason = finish_reason
@ -96,24 +94,13 @@ class ChatCompletionDeltaChoice(Model):
}
class Image(Model):
def __init__(self, url: str = None, b64_json: str = None, revised_prompt: str = None) -> None:
if url is not None:
self.url = url
if b64_json is not None:
self.b64_json = b64_json
if revised_prompt is not None:
self.revised_prompt = revised_prompt
url: str
def to_json(self):
return self.__dict__
def __init__(self, url: str) -> None:
self.url = url
class ImagesResponse(Model):
def __init__(self, data: list[Image], created: int = 0) -> None:
self.data = data
self.created = created
data: list[Image]
def to_json(self):
return {
**self.__dict__,
"data": [image.to_json() for image in self.data]
}
def __init__(self, data: list) -> None:
self.data = data

@ -23,9 +23,8 @@ from .typing import Dict, Cookies
from .errors import MissingRequirementsError
from . import debug
class CookiesConfig():
cookies: Dict[str, Cookies] = {}
cookies_dir: str = "./har_and_cookies"
# Global variable to store cookies
_cookies: Dict[str, Cookies] = {}
DOMAINS = [
".bing.com",
@ -49,18 +48,20 @@ def get_cookies(domain_name: str = '', raise_requirements_error: bool = True, si
Returns:
Dict[str, str]: A dictionary of cookie names and values.
"""
if domain_name in CookiesConfig.cookies:
return CookiesConfig.cookies[domain_name]
global _cookies
if domain_name in _cookies:
return _cookies[domain_name]
cookies = load_cookies_from_browsers(domain_name, raise_requirements_error, single_browser)
CookiesConfig.cookies[domain_name] = cookies
_cookies[domain_name] = cookies
return cookies
def set_cookies(domain_name: str, cookies: Cookies = None) -> None:
global _cookies
if cookies:
CookiesConfig.cookies[domain_name] = cookies
elif domain_name in CookiesConfig.cookies:
CookiesConfig.cookies.pop(domain_name)
_cookies[domain_name] = cookies
elif domain_name in _cookies:
_cookies.pop(domain_name)
def load_cookies_from_browsers(domain_name: str, raise_requirements_error: bool = True, single_browser: bool = False) -> Cookies:
"""
@ -95,13 +96,7 @@ def load_cookies_from_browsers(domain_name: str, raise_requirements_error: bool
print(f"Error reading cookies from {cookie_fn.__name__} for {domain_name}: {e}")
return cookies
def set_cookies_dir(dir: str) -> None:
CookiesConfig.cookies_dir = dir
def get_cookies_dir() -> str:
return CookiesConfig.cookies_dir
def read_cookie_files(dirPath: str = None):
def read_cookie_files(dirPath: str = "./har_and_cookies"):
def get_domain(v: dict) -> str:
host = [h["value"] for h in v['request']['headers'] if h["name"].lower() in ("host", ":authority")]
if not host:
@ -111,16 +106,16 @@ def read_cookie_files(dirPath: str = None):
if d in host:
return d
global _cookies
harFiles = []
cookieFiles = []
for root, dirs, files in os.walk(CookiesConfig.cookies_dir if dirPath is None else dirPath):
for root, dirs, files in os.walk(dirPath):
for file in files:
if file.endswith(".har"):
harFiles.append(os.path.join(root, file))
elif file.endswith(".json"):
cookieFiles.append(os.path.join(root, file))
CookiesConfig.cookies = {}
_cookies = {}
for path in harFiles:
with open(path, 'rb') as file:
try:
@ -139,7 +134,7 @@ def read_cookie_files(dirPath: str = None):
for c in v['request']['cookies']:
v_cookies[c['name']] = c['value']
if len(v_cookies) > 0:
CookiesConfig.cookies[domain] = v_cookies
_cookies[domain] = v_cookies
new_cookies[domain] = len(v_cookies)
if debug.logging:
for domain, new_values in new_cookies.items():
@ -164,7 +159,7 @@ def read_cookie_files(dirPath: str = None):
for domain, new_values in new_cookies.items():
if debug.logging:
print(f"Cookies added: {len(new_values)} from {domain}")
CookiesConfig.cookies[domain] = new_values
_cookies[domain] = new_values
def _g4f(domain_name: str) -> list:
"""

@ -19,7 +19,8 @@
<script src="/static/js/highlightjs-copy.min.js"></script>
<script src="/static/js/chat.v1.js" defer></script>
<script src="https://cdn.jsdelivr.net/npm/markdown-it@13.0.1/dist/markdown-it.min.js"></script>
<link rel="stylesheet" href="/static/css/dracula.min.css">
<link rel="stylesheet"
href="//cdn.jsdelivr.net/gh/highlightjs/cdn-release@11.7.0/build/styles/base16/dracula.min.css">
<script>
MathJax = {
chtml: {
@ -32,10 +33,10 @@
<script type="module" src="https://cdn.jsdelivr.net/npm/mistral-tokenizer-js" async>
import mistralTokenizer from "mistral-tokenizer-js"
</script>
<script type="module" src="https://cdn.jsdelivr.net/gh/belladoreai/llama-tokenizer-js@master/llama-tokenizer.js" async>
<script type="module" src="https://belladoreai.github.io/llama-tokenizer-js/llama-tokenizer.js" async>
import llamaTokenizer from "llama-tokenizer-js"
</script>
<script src="https://cdn.jsdelivr.net/npm/gpt-tokenizer/dist/cl100k_base.js" async></script>
<script src="https://unpkg.com/gpt-tokenizer/dist/cl100k_base.js" async></script>
<script src="/static/js/text_to_speech/index.js" async></script>
<!--
<script src="/static/js/whisper-web/index.js" async></script>
@ -93,27 +94,22 @@
<div class="paper">
<h3>Settings</h3>
<div class="field">
<span class="label">Enable Dark Mode</span>
<input type="checkbox" id="darkMode" checked />
<label for="darkMode" class="toogle" title=""></label>
</div>
<div class="field">
<span class="label">Web Access with DuckDuckGo</span>
<span class="label">Web Access</span>
<input type="checkbox" id="switch" />
<label for="switch" class="toogle" title="Add the pages of the first 5 search results to the query."></label>
</div>
<div class="field">
<span class="label">Disable Conversation History</span>
<span class="label">Disable History</span>
<input type="checkbox" id="history" />
<label for="history" class="toogle" title="To improve the reaction time or if you have trouble with large conversations."></label>
</div>
<div class="field">
<span class="label">Hide System-prompt</span>
<span class="label">Hide System prompt</span>
<input type="checkbox" id="hide-systemPrompt" />
<label for="hide-systemPrompt" class="toogle" title="For more space on phones"></label>
</div>
<div class="field">
<span class="label">Auto continue in ChatGPT</span>
<span class="label">Auto continue</span>
<input id="auto_continue" type="checkbox" name="auto_continue" checked/>
<label for="auto_continue" class="toogle" title="Continue large responses in OpenaiChat"></label>
</div>
@ -126,8 +122,8 @@
<input type="text" id="recognition-language" value="" placeholder="navigator.language"/>
</div>
<div class="field box">
<label for="BingCreateImages-api_key" class="label" title="">Microsoft Designer in Bing:</label>
<textarea id="BingCreateImages-api_key" name="BingCreateImages[api_key]" placeholder="&quot;_U&quot; cookie"></textarea>
<label for="Bing-api_key" class="label" title="">Bing:</label>
<textarea id="Bing-api_key" name="Bing[api_key]" class="BingCreateImages-api_key" placeholder="&quot;_U&quot; cookie"></textarea>
</div>
<div class="field box">
<label for="DeepInfra-api_key" class="label" title="">DeepInfra:</label>
@ -150,12 +146,12 @@
<textarea id="Openai-api_key" name="Openai[api_key]" placeholder="api_key"></textarea>
</div>
<div class="field box">
<label for="OpenRouter-api_key" class="label" title="">OpenRouter:</label>
<textarea id="OpenRouter-api_key" name="OpenRouter[api_key]" placeholder="api_key"></textarea>
<label for="OpenaiAccount-api_key" class="label" title="">OpenAI ChatGPT:</label>
<textarea id="OpenaiAccount-api_key" name="OpenaiAccount[api_key]" class="OpenaiChat-api_key" placeholder="access_key"></textarea>
</div>
<div class="field box">
<label for="PerplexityApi-api_key" class="label" title="">Perplexity API:</label>
<textarea id="PerplexityApi-api_key" name="PerplexityApi[api_key]" placeholder="api_key"></textarea>
<label for="OpenRouter-api_key" class="label" title="">OpenRouter:</label>
<textarea id="OpenRouter-api_key" name="OpenRouter[api_key]" placeholder="api_key"></textarea>
</div>
<div class="field box">
<label for="Replicate-api_key" class="label" title="">Replicate:</label>
@ -174,17 +170,11 @@
</div>
</div>
<div class="conversation">
<textarea id="systemPrompt" class="box" placeholder="System prompt"></textarea>
<textarea id="systemPrompt" class="box" placeholder="System prompt"></textarea>
<div id="messages" class="box"></div>
<button class="slide-systemPrompt">
<i class="fa-solid fa-angles-up"></i>
</button>
<div class="toolbar">
<div id="input-count" class="">
<button class="hide-input">
<i class="fa-solid fa-angles-down"></i>
</button>
<span class="text"></span>
&nbsp;
</div>
<div class="stop_generating stop_generating-hidden">
<button id="cancelButton">
@ -254,5 +244,8 @@
<div class="mobile-sidebar">
<i class="fa-solid fa-bars"></i>
</div>
<script>
</script>
</body>
</html>

@ -1,7 +0,0 @@
/*!
Theme: Dracula
Author: Mike Barkmin (http://github.com/mikebarkmin) based on Dracula Theme (http://github.com/dracula)
License: ~ MIT (or more permissive) [via base16-schemes-source]
Maintainer: @highlightjs/core-team
Version: 2021.09.0
*/pre code.hljs{display:block;overflow-x:auto;padding:1em}code.hljs{padding:3px 5px}.hljs{color:#e9e9f4;background:#282936}.hljs ::selection,.hljs::selection{background-color:#4d4f68;color:#e9e9f4}.hljs-comment{color:#626483}.hljs-tag{color:#62d6e8}.hljs-operator,.hljs-punctuation,.hljs-subst{color:#e9e9f4}.hljs-operator{opacity:.7}.hljs-bullet,.hljs-deletion,.hljs-name,.hljs-selector-tag,.hljs-template-variable,.hljs-variable{color:#ea51b2}.hljs-attr,.hljs-link,.hljs-literal,.hljs-number,.hljs-symbol,.hljs-variable.constant_{color:#b45bcf}.hljs-class .hljs-title,.hljs-title,.hljs-title.class_{color:#00f769}.hljs-strong{font-weight:700;color:#00f769}.hljs-addition,.hljs-code,.hljs-string,.hljs-title.class_.inherited__{color:#ebff87}.hljs-built_in,.hljs-doctag,.hljs-keyword.hljs-atrule,.hljs-quote,.hljs-regexp{color:#a1efe4}.hljs-attribute,.hljs-function .hljs-title,.hljs-section,.hljs-title.function_,.ruby .hljs-property{color:#62d6e8}.diff .hljs-meta,.hljs-keyword,.hljs-template-tag,.hljs-type{color:#b45bcf}.hljs-emphasis{color:#b45bcf;font-style:italic}.hljs-meta,.hljs-meta .hljs-keyword,.hljs-meta .hljs-string{color:#00f769}.hljs-meta .hljs-keyword,.hljs-meta-keyword{font-weight:700}

@ -55,14 +55,11 @@
--colour-6: #242424;
--accent: #8b3dff;
--gradient: var(--accent);
--blur-bg: #16101b66;
--blur-border: #84719040;
--user-input: #ac87bb;
--conversations: #c7a2ff;
--conversations-hover: #c7a2ff4d;
--scrollbar: var(--colour-3);
--scrollbar-thumb: var(--blur-bg);
}
:root {
@ -171,7 +168,7 @@ body {
z-index: -1;
border-radius: calc(0.5 * var(--size));
background-color: var(--accent);
background: radial-gradient(circle at center, var(--gradient), var(--gradient));
background: radial-gradient(circle at center, var(--accent), var(--accent));
width: 70vw;
height: 70vw;
top: 50%;
@ -265,14 +262,6 @@ body {
padding-bottom: 0;
}
.message.print {
height: 100%;
position: absolute;
background-color: #fff;
z-index: 100;
top: 0;
}
.message.regenerate {
opacity: 0.75;
}
@ -347,14 +336,14 @@ body {
flex-wrap: wrap;
}
.message .content_inner,
.message .content_inner a:link,
.message .content_inner a:visited{
.message .content,
.message .content a:link,
.message .content a:visited{
font-size: 15px;
line-height: 1.3;
color: var(--colour-3);
}
.message .content_inner pre{
.message .content pre{
white-space: pre-wrap;
}
@ -391,25 +380,24 @@ body {
cursor: pointer;
}
.message .user .fa-xmark {
color: var(--colour-1);
}
.message .count .fa-clipboard,
.message .count .fa-volume-high,
.message .count .fa-rotate,
.message .count .fa-print {
.message .count .fa-volume-high {
z-index: 1000;
cursor: pointer;
}
.message .count .fa-clipboard,
.message .count .fa-whatsapp {
.message .user .fa-xmark {
color: var(--colour-1);
}
.message .count .fa-clipboard {
color: var(--colour-3);
}
.message .count .fa-clipboard.clicked,
.message .count .fa-print.clicked,
.message .count .fa-clipboard.clicked {
color: var(--accent);
}
.message .count .fa-volume-high.active {
color: var(--accent);
}
@ -468,11 +456,7 @@ body {
#input-count {
width: fit-content;
font-size: 12px;
padding: 6px 6px;
}
#input-count .text {
padding: 0 4px;
padding: 6px var(--inner-gap);
}
.stop_generating, .toolbar .regenerate {
@ -506,13 +490,6 @@ body {
animation: show_popup 0.4s;
}
.toolbar .hide-input {
background: transparent;
border: none;
color: var(--colour-3);
cursor: pointer;
}
@keyframes show_popup {
from {
opacity: 0;
@ -698,15 +675,8 @@ select {
resize: vertical;
}
.slide-systemPrompt {
position: absolute;
top: 0;
padding: var(--inner-gap) 10px;
border: none;
background: transparent;
cursor: pointer;
height: 49px;
color: var(--colour-3);
#systemPrompt {
padding-left: 35px;
}
@media only screen and (min-width: 40em) {
@ -799,7 +769,7 @@ select {
}
50% {
background: var(--colour-3);
background: white;
}
100% {
@ -1006,23 +976,120 @@ a:-webkit-any-link {
width: 1px;
}
.white {
--blur-bg: transparent;
--accent: #007bff;
--gradient: #ccc;
--conversations: #0062cc;
.color-picker>fieldset {
border: 0;
display: flex;
width: fit-content;
background: var(--colour-1);
margin-inline: auto;
border-radius: 8px;
-webkit-backdrop-filter: blur(20px);
backdrop-filter: blur(20px);
cursor: pointer;
background-color: var(--blur-bg);
border: 1px solid var(--blur-border);
color: var(--colour-3);
display: block;
position: relative;
overflow: hidden;
outline: none;
padding: 6px 16px;
}
.color-picker input[type="radio"]:checked {
background-color: var(--radio-color);
}
.color-picker input[type="radio"]#light {
--radio-color: gray;
}
.color-picker input[type="radio"]#pink {
--radio-color: white;
}
.color-picker input[type="radio"]#blue {
--radio-color: blue;
}
.color-picker input[type="radio"]#green {
--radio-color: green;
}
.color-picker input[type="radio"]#dark {
--radio-color: #232323;
}
.pink {
--colour-1: #ffffff;
--colour-3: #212529;
--scrollbar: var(--colour-1);
--scrollbar-thumb: var(--gradient);
--colour-2: #000000;
--colour-3: #000000;
--colour-4: #000000;
--colour-5: #000000;
--colour-6: #000000;
--accent: #ffffff;
--blur-bg: #98989866;
--blur-border: #00000040;
--user-input: #000000;
--conversations: #000000;
}
.white .message .assistant .fa-xmark {
color: var(--colour-1);
.blue {
--colour-1: hsl(209 50% 90%);
--clr-card-bg: hsl(209 50% 100%);
--colour-3: hsl(209 50% 15%);
--conversations: hsl(209 50% 25%);
}
.white .message .user .fa-xmark {
color: var(--colour-3);
.green {
--colour-1: hsl(109 50% 90%);
--clr-card-bg: hsl(109 50% 100%);
--colour-3: hsl(109 50% 15%);
--conversations: hsl(109 50% 25%);
}
.dark {
--colour-1: hsl(209 50% 10%);
--clr-card-bg: hsl(209 50% 5%);
--colour-3: hsl(209 50% 90%);
--conversations: hsl(209 50% 80%);
}
:root:has(#pink:checked) {
--colour-1: #ffffff;
--colour-2: #000000;
--colour-3: #000000;
--colour-4: #000000;
--colour-5: #000000;
--colour-6: #000000;
--accent: #ffffff;
--blur-bg: #98989866;
--blur-border: #00000040;
--user-input: #000000;
--conversations: #000000;
}
:root:has(#blue:checked) {
--colour-1: hsl(209 50% 90%);
--clr-card-bg: hsl(209 50% 100%);
--colour-3: hsl(209 50% 15%);
--conversations: hsl(209 50% 25%);
}
:root:has(#green:checked) {
--colour-1: hsl(109 50% 90%);
--clr-card-bg: hsl(109 50% 100%);
--colour-3: hsl(109 50% 15%);
--conversations: hsl(109 50% 25%);
}
:root:has(#dark:checked) {
--colour-1: hsl(209 50% 10%);
--clr-card-bg: hsl(209 50% 5%);
--colour-3: hsl(209 50% 90%);
--conversations: hsl(209 50% 80%);
}
#send-button {
@ -1092,12 +1159,15 @@ a:-webkit-any-link {
white-space:nowrap;
}
::-webkit-scrollbar {
width: 10px;
}
::-webkit-scrollbar-track {
background: var(--scrollbar);
background: var(--colour-3);
}
::-webkit-scrollbar-thumb {
background: var(--scrollbar-thumb);
border-radius: 2px;
background: var(--blur-bg);
border-radius: 5px;
}
::-webkit-scrollbar-thumb:hover {
background: var(--accent);
@ -1117,6 +1187,10 @@ a:-webkit-any-link {
max-height: 200px;
}
#message-input::-webkit-scrollbar {
width: 5px;
}
.hidden {
display: none;
}
@ -1129,21 +1203,4 @@ a:-webkit-any-link {
50% {
opacity: 0;
}
}
@media print {
#systemPrompt:placeholder-shown,
.conversations,
.conversation .user-input,
.conversation .buttons,
.conversation .toolbar,
.conversation .slide-systemPrompt,
.message .count i,
.message .assistant,
.message .user {
display: none;
}
.message.regenerate {
opacity: 1;
}
}

@ -11,7 +11,7 @@ const imageInput = document.getElementById("image");
const cameraInput = document.getElementById("camera");
const fileInput = document.getElementById("file");
const microLabel = document.querySelector(".micro-label");
const inputCount = document.getElementById("input-count").querySelector(".text");
const inputCount = document.getElementById("input-count");
const providerSelect = document.getElementById("provider");
const modelSelect = document.getElementById("model");
const modelProvider = document.getElementById("model2");
@ -41,7 +41,9 @@ appStorage = window.localStorage || {
length: 0
}
const markdown = window.markdownit();
const markdown = window.markdownit({
html: true,
});
const markdown_render = (content) => {
return markdown.render(content
.replaceAll(/<!-- generated images start -->|<!-- generated images end -->/gm, "")
@ -107,9 +109,8 @@ const register_message_buttons = async () => {
let playlist = [];
function play_next() {
const next = playlist.shift();
if (next && el.dataset.do_play) {
if (next)
next.play();
}
}
if (el.dataset.stopped) {
el.classList.remove("blink")
@ -178,40 +179,6 @@ const register_message_buttons = async () => {
});
}
});
document.querySelectorAll(".message .fa-rotate").forEach(async (el) => {
if (!("click" in el.dataset)) {
el.dataset.click = "true";
el.addEventListener("click", async () => {
const message_el = el.parentElement.parentElement.parentElement;
el.classList.add("clicked");
setTimeout(() => el.classList.remove("clicked"), 1000);
prompt_lock = true;
await hide_message(window.conversation_id, message_el.dataset.index);
window.token = message_id();
await ask_gpt(message_el.dataset.index);
})
}
});
document.querySelectorAll(".message .fa-whatsapp").forEach(async (el) => {
if (!el.parentElement.href) {
const text = el.parentElement.parentElement.parentElement.innerText;
el.parentElement.href = `https://wa.me/?text=${encodeURIComponent(text)}`;
}
});
document.querySelectorAll(".message .fa-print").forEach(async (el) => {
if (!("click" in el.dataset)) {
el.dataset.click = "true";
el.addEventListener("click", async () => {
const message_el = el.parentElement.parentElement.parentElement;
el.classList.add("clicked");
message_box.scrollTop = 0;
message_el.classList.add("print");
setTimeout(() => el.classList.remove("clicked"), 1000);
setTimeout(() => message_el.classList.remove("print"), 1000);
window.print()
})
}
});
}
const delete_conversations = async () => {
@ -273,8 +240,6 @@ const handle_ask = async () => {
${count_words_and_tokens(message, get_selected_model())}
<i class="fa-solid fa-volume-high"></i>
<i class="fa-regular fa-clipboard"></i>
<a><i class="fa-brands fa-whatsapp"></i></a>
<i class="fa-solid fa-print"></i>
</div>
</div>
</div>
@ -292,9 +257,9 @@ const remove_cancel_button = async () => {
}, 300);
};
const prepare_messages = (messages, message_index = -1) => {
const prepare_messages = (messages, filter_last_message=true) => {
// Removes none user messages at end
if (message_index == -1) {
if (filter_last_message) {
let last_message;
while (last_message = messages.pop()) {
if (last_message["role"] == "user") {
@ -302,16 +267,14 @@ const prepare_messages = (messages, message_index = -1) => {
break;
}
}
} else if (message_index >= 0) {
messages = messages.filter((_, index) => message_index >= index);
}
// Remove history, if it's selected
if (document.getElementById('history')?.checked) {
if (message_index == null) {
messages = [messages.pop(), messages.pop()];
} else {
if (filter_last_message) {
messages = [messages.pop()];
} else {
messages = [messages.pop(), messages.pop()];
}
}
@ -324,7 +287,7 @@ const prepare_messages = (messages, message_index = -1) => {
}
messages.forEach((new_message) => {
// Include only not regenerated messages
if (new_message && !new_message.regenerate) {
if (!new_message.regenerate) {
// Remove generated images from history
new_message.content = filter_message(new_message.content);
delete new_message.provider;
@ -398,11 +361,11 @@ imageInput?.addEventListener("click", (e) => {
}
});
const ask_gpt = async (message_index = -1) => {
const ask_gpt = async () => {
regenerate.classList.add(`regenerate-hidden`);
messages = await get_messages(window.conversation_id);
total_messages = messages.length;
messages = prepare_messages(messages, message_index);
messages = prepare_messages(messages);
stop_generating.classList.remove(`stop_generating-hidden`);
@ -565,7 +528,6 @@ const hide_option = async (conversation_id) => {
const span_el = document.createElement("span");
span_el.innerText = input_el.value;
span_el.classList.add("convo-title");
span_el.onclick = () => set_conversation(conversation_id);
left_el.removeChild(input_el);
left_el.appendChild(span_el);
}
@ -647,8 +609,6 @@ const load_conversation = async (conversation_id, scroll=true) => {
${count_words_and_tokens(item.content, next_provider?.model)}
<i class="fa-solid fa-volume-high"></i>
<i class="fa-regular fa-clipboard"></i>
<a><i class="fa-brands fa-whatsapp"></i></a>
<i class="fa-solid fa-print"></i>
</div>
</div>
</div>
@ -656,7 +616,7 @@ const load_conversation = async (conversation_id, scroll=true) => {
}
if (window.GPTTokenizer_cl100k_base) {
const filtered = prepare_messages(messages, null);
const filtered = prepare_messages(messages, false);
if (filtered.length > 0) {
last_model = last_model?.startsWith("gpt-4") ? "gpt-4" : "gpt-3.5-turbo"
let count_total = GPTTokenizer_cl100k_base?.encodeChat(filtered, last_model).length
@ -723,15 +683,15 @@ async function save_system_message() {
await save_conversation(window.conversation_id, conversation);
}
}
const hide_message = async (conversation_id, message_index = -1) => {
const hide_last_message = async (conversation_id) => {
const conversation = await get_conversation(conversation_id)
message_index = message_index == -1 ? conversation.items.length - 1 : message_index
const last_message = message_index in conversation.items ? conversation.items[message_index] : null;
const last_message = conversation.items.pop();
if (last_message !== null) {
if (last_message["role"] == "assistant") {
last_message["regenerate"] = true;
}
conversation.items[message_index] = last_message;
conversation.items.push(last_message);
}
await save_conversation(conversation_id, conversation);
};
@ -830,22 +790,11 @@ document.getElementById("cancelButton").addEventListener("click", async () => {
document.getElementById("regenerateButton").addEventListener("click", async () => {
prompt_lock = true;
await hide_message(window.conversation_id);
await hide_last_message(window.conversation_id);
window.token = message_id();
await ask_gpt();
});
const hide_input = document.querySelector(".toolbar .hide-input");
hide_input.addEventListener("click", async (e) => {
const icon = hide_input.querySelector("i");
const func = icon.classList.contains("fa-angles-down") ? "add" : "remove";
const remv = icon.classList.contains("fa-angles-down") ? "remove" : "add";
icon.classList[func]("fa-angles-up");
icon.classList[remv]("fa-angles-down");
document.querySelector(".conversation .user-input").classList[func]("hidden");
document.querySelector(".conversation .buttons").classList[func]("hidden");
});
const uuid = () => {
return `xxxxxxxx-xxxx-4xxx-yxxx-${Date.now().toString(16)}`.replace(
/[xy]/g,
@ -1049,7 +998,7 @@ const count_input = async () => {
if (countFocus.value) {
inputCount.innerText = count_words_and_tokens(countFocus.value, get_selected_model());
} else {
inputCount.innerText = "";
inputCount.innerHTML = "&nbsp;"
}
}, 100);
};
@ -1093,8 +1042,6 @@ async function on_api() {
messageInput.addEventListener("keydown", async (evt) => {
if (prompt_lock) return;
// If not mobile
if (!window.matchMedia("(pointer:coarse)").matches)
if (evt.keyCode === 13 && !evt.shiftKey) {
evt.preventDefault();
console.log("pressed enter");
@ -1132,11 +1079,8 @@ async function on_api() {
await load_settings_storage()
const hide_systemPrompt = document.getElementById("hide-systemPrompt")
const slide_systemPrompt_icon = document.querySelector(".slide-systemPrompt i");
if (hide_systemPrompt.checked) {
systemPrompt.classList.add("hidden");
slide_systemPrompt_icon.classList.remove("fa-angles-up");
slide_systemPrompt_icon.classList.add("fa-angles-down");
}
hide_systemPrompt.addEventListener('change', async (event) => {
if (event.target.checked) {
@ -1145,13 +1089,6 @@ async function on_api() {
systemPrompt.classList.remove("hidden");
}
});
document.querySelector(".slide-systemPrompt")?.addEventListener("click", () => {
hide_systemPrompt.click();
let checked = hide_systemPrompt.checked;
systemPrompt.classList[checked ? "add": "remove"]("hidden");
slide_systemPrompt_icon.classList[checked ? "remove": "add"]("fa-angles-up");
slide_systemPrompt_icon.classList[checked ? "add": "remove"]("fa-angles-down");
});
const messageInputHeight = document.getElementById("message-input-height");
if (messageInputHeight) {
if (messageInputHeight.value) {
@ -1161,19 +1098,6 @@ async function on_api() {
messageInput.style.maxHeight = `${messageInputHeight.value}px`;
});
}
const darkMode = document.getElementById("darkMode");
if (darkMode) {
if (!darkMode.checked) {
document.body.classList.add("white");
}
darkMode.addEventListener('change', async (event) => {
if (event.target.checked) {
document.body.classList.remove("white");
} else {
document.body.classList.add("white");
}
});
}
}
async function load_version() {
@ -1320,7 +1244,6 @@ async function load_provider_models(providerIndex=null) {
if (!providerIndex) {
providerIndex = providerSelect.selectedIndex;
}
modelProvider.innerHTML = '';
const provider = providerSelect.options[providerIndex].value;
if (!provider) {
modelProvider.classList.add("hidden");
@ -1328,6 +1251,7 @@ async function load_provider_models(providerIndex=null) {
return;
}
const models = await api('models', provider);
modelProvider.innerHTML = '';
if (models.length > 0) {
modelSelect.classList.add("hidden");
modelProvider.classList.remove("hidden");

@ -1,27 +1,18 @@
from __future__ import annotations
import logging
import os
import os.path
import uuid
import asyncio
import time
from aiohttp import ClientSession
from typing import Iterator, Optional
from flask import send_from_directory
import json
from typing import Iterator
from g4f import version, models
from g4f import get_last_provider, ChatCompletion
from g4f.errors import VersionNotFoundError
from g4f.typing import Cookies
from g4f.image import ImagePreview, ImageResponse, is_accepted_format
from g4f.requests.aiohttp import get_connector
from g4f.image import ImagePreview
from g4f.Provider import ProviderType, __providers__, __map__
from g4f.providers.base_provider import ProviderModelMixin, FinishReason
from g4f.providers.conversation import BaseConversation
conversations: dict[str, dict[str, BaseConversation]] = {}
images_dir = "./generated_images"
class Api():
@ -119,8 +110,14 @@ class Api():
"latest_version": version.utils.latest_version,
}
def serve_images(self, name):
return send_from_directory(os.path.abspath(images_dir), name)
def generate_title(self):
"""
Generates and returns a title based on the request data.
Returns:
dict: A dictionary with the generated title.
"""
return {'title': ''}
def _prepare_conversation_kwargs(self, json_data: dict, kwargs: dict):
"""
@ -188,27 +185,6 @@ class Api():
yield self._format_json("message", get_error_message(chunk))
elif isinstance(chunk, ImagePreview):
yield self._format_json("preview", chunk.to_string())
elif isinstance(chunk, ImageResponse):
async def copy_images(images: list[str], cookies: Optional[Cookies] = None):
async with ClientSession(
connector=get_connector(None, os.environ.get("G4F_PROXY")),
cookies=cookies
) as session:
async def copy_image(image):
async with session.get(image) as response:
target = os.path.join(images_dir, f"{int(time.time())}_{str(uuid.uuid4())}")
with open(target, "wb") as f:
async for chunk in response.content.iter_any():
f.write(chunk)
with open(target, "rb") as f:
extension = is_accepted_format(f.read(12)).split("/")[-1]
extension = "jpg" if extension == "jpeg" else extension
new_target = f"{target}.{extension}"
os.rename(target, new_target)
return f"/images/{os.path.basename(new_target)}"
return await asyncio.gather(*[copy_image(image) for image in images])
images = asyncio.run(copy_images(chunk.get_list(), chunk.options.get("cookies")))
yield self._format_json("content", str(ImageResponse(images, chunk.alt)))
elif not isinstance(chunk, FinishReason):
yield self._format_json("content", str(chunk))
except Exception as e:

@ -47,13 +47,13 @@ class Backend_Api(Api):
'function': self.handle_conversation,
'methods': ['POST']
},
'/backend-api/v2/gen.set.summarize:title': {
'function': self.generate_title,
'methods': ['POST']
},
'/backend-api/v2/error': {
'function': self.handle_error,
'methods': ['POST']
},
'/images/<path:name>': {
'function': self.serve_images,
'methods': ['GET']
}
}

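Reviewer note: the route table above keeps handlers and HTTP methods together as plain data. A hedged sketch of how such a dict can be registered on a Flask app (the handler here is a stub, not the project's):

```python
# Register a routes table of the shape used above ('function' + 'methods').
from flask import Flask

app = Flask(__name__)

def handle_error():
    return {"ok": True}  # stub handler for illustration

routes = {
    '/backend-api/v2/error': {'function': handle_error, 'methods': ['POST']},
}

for path, spec in routes.items():
    app.add_url_rule(path, view_func=spec['function'], methods=spec['methods'])
```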
@ -16,13 +16,6 @@ from .errors import MissingRequirementsError
ALLOWED_EXTENSIONS = {'png', 'jpg', 'jpeg', 'gif', 'webp', 'svg'}
EXTENSIONS_MAP: dict[str, str] = {
"image/png": "png",
"image/jpeg": "jpg",
"image/gif": "gif",
"image/webp": "webp",
}
def to_image(image: ImageType, is_svg: bool = False) -> Image:
"""
Converts the input image to a PIL Image object.
@ -217,8 +210,8 @@ def format_images_markdown(images: Union[str, list], alt: str, preview: Union[st
if not isinstance(preview, list):
preview = [preview.replace('{image}', image) if preview else image for image in images]
result = "\n".join(
f"[![#{idx+1} {alt}]({preview[idx]})]({image})"
#f'[<img src="{preview[idx]}" width="200" alt="#{idx+1} {alt}">]({image})'
#f"[![#{idx+1} {alt}]({preview[idx]})]({image})"
f'[<img src="{preview[idx]}" width="200" alt="#{idx+1} {alt}">]({image})'
for idx, image in enumerate(images)
)
start_flag = "<!-- generated images start -->\n"
@ -282,18 +275,6 @@ class ImagePreview(ImageResponse):
def to_string(self):
return super().__str__()
class ImageDataResponse():
def __init__(
self,
images: Union[str, list],
alt: str,
):
self.images = images
self.alt = alt
def get_list(self) -> list[str]:
return [self.images] if isinstance(self.images, str) else self.images
class ImageRequest:
def __init__(
self,

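Reviewer note: this hunk swaps which markdown flavor `format_images_markdown` emits: main uses the plain image-link form, 0.3.0.9 the width-limited `<img>` tag. A small sketch reproducing both joins, simplified to use each image as its own preview (URLs illustrative):

```python
# Illustrative reproduction of the two markdown flavors toggled in this hunk.
images = ["https://example.com/a.png", "https://example.com/b.png"]
alt = "generated image"

plain = "\n".join(
    f"[![#{idx + 1} {alt}]({image})]({image})" for idx, image in enumerate(images)
)
sized = "\n".join(
    f'[<img src="{image}" width="200" alt="#{idx + 1} {alt}">]({image})'
    for idx, image in enumerate(images)
)
print(plain)  # full-size preview, markdown image syntax
print(sized)  # 200px-wide preview via raw HTML
```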
@ -37,10 +37,8 @@ def get_model_dir() -> str:
local_dir = os.path.dirname(os.path.abspath(__file__))
project_dir = os.path.dirname(os.path.dirname(local_dir))
model_dir = os.path.join(project_dir, "models")
if not os.path.exists(model_dir):
os.mkdir(model_dir)
return model_dir
if os.path.exists(model_dir):
return model_dir
def get_models() -> dict[str, dict]:
model_dir = get_model_dir()
@ -50,4 +48,4 @@ def get_models() -> dict[str, dict]:
else:
models = load_models()
save_models(file_path, models)
return models
return models

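Reviewer note: `get_models()` follows a simple cache-or-fetch pattern: reuse the JSON file when present, otherwise build the list and persist it. A generic sketch, with `fetch_models` as a placeholder for the module's own loader:

```python
# Cache-or-fetch sketch of the pattern above; fetch_models is a placeholder.
import json
import os

def fetch_models() -> dict:
    return {"example-model": {"path": "models/example-model.bin"}}  # placeholder

def get_models(file_path: str = "models/models.json") -> dict:
    if os.path.exists(file_path):
        with open(file_path) as f:
            return json.load(f)  # reuse the cached list
    models = fetch_models()
    os.makedirs(os.path.dirname(file_path), exist_ok=True)
    with open(file_path, "w") as f:
        json.dump(models, f)  # persist for the next call
    return models
```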
@ -2,25 +2,27 @@ from __future__ import annotations
from dataclasses import dataclass
from .Provider import IterListProvider, ProviderType
from .Provider import RetryProvider, ProviderType
from .Provider import (
Aichatos,
Bing,
Blackbox,
Chatgpt4Online,
ChatgptAi,
ChatgptNext,
Cohere,
Cnote,
DeepInfra,
Feedough,
FreeGpt,
Gemini,
GeminiPro,
GeminiProChat,
GigaChat,
HuggingChat,
HuggingFace,
Koala,
Liaobots,
MetaAI,
Llama,
OpenaiChat,
PerplexityLabs,
Replicate,
@ -30,6 +32,7 @@ from .Provider import (
Reka
)
@dataclass(unsafe_hash=True)
class Model:
"""
@ -52,11 +55,12 @@ class Model:
default = Model(
name = "",
base_provider = "",
best_provider = IterListProvider([
best_provider = RetryProvider([
Bing,
ChatgptAi,
You,
OpenaiChat,
Chatgpt4Online,
OpenaiChat
])
)
@ -64,12 +68,11 @@ default = Model(
gpt_35_long = Model(
name = 'gpt-3.5-turbo',
base_provider = 'openai',
best_provider = IterListProvider([
best_provider = RetryProvider([
FreeGpt,
You,
ChatgptNext,
OpenaiChat,
Koala,
])
)
@ -77,7 +80,7 @@ gpt_35_long = Model(
gpt_35_turbo = Model(
name = 'gpt-3.5-turbo',
base_provider = 'openai',
best_provider = IterListProvider([
best_provider = RetryProvider([
FreeGpt,
You,
ChatgptNext,
@ -92,19 +95,11 @@ gpt_35_turbo = Model(
gpt_4 = Model(
name = 'gpt-4',
base_provider = 'openai',
best_provider = IterListProvider([
best_provider = RetryProvider([
Bing, Liaobots,
])
)
gpt_4o = Model(
name = 'gpt-4o',
base_provider = 'openai',
best_provider = IterListProvider([
You, Liaobots
])
)
gpt_4_turbo = Model(
name = 'gpt-4-turbo',
base_provider = 'openai',
@ -117,22 +112,46 @@ gigachat = Model(
best_provider = GigaChat
)
meta = Model(
name = "meta",
gigachat_plus = Model(
name = 'GigaChat-Plus',
base_provider = 'gigachat',
best_provider = GigaChat
)
gigachat_pro = Model(
name = 'GigaChat-Pro',
base_provider = 'gigachat',
best_provider = GigaChat
)
llama2_7b = Model(
name = "meta-llama/Llama-2-7b-chat-hf",
base_provider = 'meta',
best_provider = RetryProvider([Llama, DeepInfra])
)
llama2_13b = Model(
name = "meta-llama/Llama-2-13b-chat-hf",
base_provider = 'meta',
best_provider = RetryProvider([Llama, DeepInfra])
)
llama2_70b = Model(
name = "meta-llama/Llama-2-70b-chat-hf",
base_provider = "meta",
best_provider = MetaAI
best_provider = RetryProvider([Llama, DeepInfra])
)
llama3_8b_instruct = Model(
name = "meta-llama/Meta-Llama-3-8B-Instruct",
base_provider = "meta",
best_provider = IterListProvider([DeepInfra, PerplexityLabs, Replicate])
best_provider = RetryProvider([Llama, DeepInfra, Replicate])
)
llama3_70b_instruct = Model(
name = "meta-llama/Meta-Llama-3-70B-Instruct",
base_provider = "meta",
best_provider = IterListProvider([DeepInfra, PerplexityLabs, Replicate])
best_provider = RetryProvider([Llama, DeepInfra])
)
codellama_34b_instruct = Model(
@ -144,30 +163,61 @@ codellama_34b_instruct = Model(
codellama_70b_instruct = Model(
name = "codellama/CodeLlama-70b-Instruct-hf",
base_provider = "meta",
best_provider = IterListProvider([DeepInfra, PerplexityLabs])
best_provider = RetryProvider([DeepInfra, PerplexityLabs])
)
# Mistral
mixtral_8x7b = Model(
name = "mistralai/Mixtral-8x7B-Instruct-v0.1",
base_provider = "huggingface",
best_provider = IterListProvider([DeepInfra, HuggingFace, PerplexityLabs])
best_provider = RetryProvider([DeepInfra, HuggingFace, PerplexityLabs])
)
mistral_7b = Model(
name = "mistralai/Mistral-7B-Instruct-v0.1",
base_provider = "huggingface",
best_provider = IterListProvider([HuggingChat, HuggingFace, PerplexityLabs])
best_provider = RetryProvider([HuggingChat, HuggingFace, PerplexityLabs])
)
mistral_7b_v02 = Model(
name = "mistralai/Mistral-7B-Instruct-v0.2",
base_provider = "huggingface",
best_provider = IterListProvider([DeepInfra, HuggingFace, PerplexityLabs])
best_provider = DeepInfra
)
mixtral_8x22b = Model(
name = "HuggingFaceH4/zephyr-orpo-141b-A35b-v0.1",
base_provider = "huggingface",
best_provider = DeepInfra
)
# Misc models
dolphin_mixtral_8x7b = Model(
name = "cognitivecomputations/dolphin-2.6-mixtral-8x7b",
base_provider = "huggingface",
best_provider = DeepInfra
)
lzlv_70b = Model(
name = "lizpreciatior/lzlv_70b_fp16_hf",
base_provider = "huggingface",
best_provider = DeepInfra
)
airoboros_70b = Model(
name = "deepinfra/airoboros-70b",
base_provider = "huggingface",
best_provider = DeepInfra
)
openchat_35 = Model(
name = "openchat/openchat_3.5",
base_provider = "huggingface",
best_provider = DeepInfra
)
# Bard
gemini = Model(
gemini = bard = palm = Model(
name = 'gemini',
base_provider = 'google',
best_provider = Gemini
@ -176,7 +226,7 @@ gemini = Model(
claude_v2 = Model(
name = 'claude-v2',
base_provider = 'anthropic',
best_provider = IterListProvider([Vercel])
best_provider = RetryProvider([Vercel])
)
claude_3_opus = Model(
@ -191,12 +241,6 @@ claude_3_sonnet = Model(
best_provider = You
)
claude_3_haiku = Model(
name = 'claude-3-haiku',
base_provider = 'anthropic',
best_provider = None
)
gpt_35_turbo_16k = Model(
name = 'gpt-3.5-turbo-16k',
base_provider = 'openai',
@ -236,7 +280,7 @@ gpt_4_32k_0613 = Model(
gemini_pro = Model(
name = 'gemini-pro',
base_provider = 'google',
best_provider = IterListProvider([GeminiPro, You])
best_provider = RetryProvider([GeminiProChat, You])
)
pi = Model(
@ -248,13 +292,13 @@ pi = Model(
dbrx_instruct = Model(
name = 'databricks/dbrx-instruct',
base_provider = 'mistral',
best_provider = IterListProvider([DeepInfra, PerplexityLabs])
best_provider = RetryProvider([DeepInfra, PerplexityLabs])
)
command_r_plus = Model(
name = 'CohereForAI/c4ai-command-r-plus',
base_provider = 'mistral',
best_provider = IterListProvider([HuggingChat])
best_provider = RetryProvider([HuggingChat, Cohere])
)
blackbox = Model(
@ -282,48 +326,62 @@ class ModelUtils:
'gpt-3.5-turbo-0613' : gpt_35_turbo_0613,
'gpt-3.5-turbo-16k' : gpt_35_turbo_16k,
'gpt-3.5-turbo-16k-0613' : gpt_35_turbo_16k_0613,
'gpt-3.5-long': gpt_35_long,
# gpt-4
'gpt-4o' : gpt_4o,
'gpt-4' : gpt_4,
'gpt-4-0613' : gpt_4_0613,
'gpt-4-32k' : gpt_4_32k,
'gpt-4-32k-0613' : gpt_4_32k_0613,
'gpt-4-turbo' : gpt_4_turbo,
"meta-ai": meta,
'llama3-8b': llama3_8b_instruct, # alias
# Llama
'llama2-7b' : llama2_7b,
'llama2-13b': llama2_13b,
'llama2-70b': llama2_70b,
'llama3-8b' : llama3_8b_instruct, # alias
'llama3-70b': llama3_70b_instruct, # alias
'llama3-8b-instruct' : llama3_8b_instruct,
'llama3-70b-instruct': llama3_70b_instruct,
'codellama-34b-instruct': codellama_34b_instruct,
'codellama-70b-instruct': codellama_70b_instruct,
# GigaChat
'gigachat' : gigachat,
'gigachat_plus': gigachat_plus,
'gigachat_pro' : gigachat_pro,
# Mistral Opensource
'mixtral-8x7b': mixtral_8x7b,
'mistral-7b': mistral_7b,
'mistral-7b-v02': mistral_7b_v02,
'mixtral-8x22b': mixtral_8x22b,
'dolphin-mixtral-8x7b': dolphin_mixtral_8x7b,
# google gemini
'gemini': gemini,
'gemini-pro': gemini_pro,
# anthropic
'claude-v2': claude_v2,
'claude-3-opus': claude_3_opus,
'claude-3-sonnet': claude_3_sonnet,
'claude-3-haiku': claude_3_haiku,
# reka core
'reka-core': reka_core,
'reka': reka_core,
'Reka Core': reka_core,
# other
'blackbox': blackbox,
'command-r+': command_r_plus,
'dbrx-instruct': dbrx_instruct,
'gigachat': gigachat,
'lzlv-70b': lzlv_70b,
'airoboros-70b': airoboros_70b,
'openchat_3.5': openchat_35,
'pi': pi
}

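Reviewer note: the alias table above is what user-facing model names resolve against. A usage sketch, assuming the mapping attribute is named `convert` as in g4f's models module and using the 0.3.0.9 aliases:

```python
# Resolve an alias from the table above and run a completion with it.
from g4f import ChatCompletion
from g4f.models import ModelUtils

model = ModelUtils.convert["llama3-70b"]  # alias -> Model dataclass
print(model.name)  # e.g. "meta-llama/Meta-Llama-3-70B-Instruct"
response = ChatCompletion.create(
    model=model.name,
    messages=[{"role": "user", "content": "Hello"}],
)
print(response)
```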
@ -30,13 +30,6 @@ if sys.platform == 'win32':
def get_running_loop(check_nested: bool) -> Union[AbstractEventLoop, None]:
try:
loop = asyncio.get_running_loop()
# Do not patch the uvloop loop because it's incompatible.
try:
import uvloop
if isinstance(loop, uvloop.Loop):
return loop
except (ImportError, ModuleNotFoundError):
pass
if check_nested and not hasattr(loop.__class__, "_nest_patched"):
try:
import nest_asyncio
@ -297,4 +290,4 @@ class ProviderModelMixin:
elif model not in cls.get_models() and cls.models:
raise ModelNotSupportedError(f"Model is not supported: {model} in: {cls.__name__}")
debug.last_model = model
return model
return model

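Reviewer note: the hunk above drops the main branch's uvloop guard from `get_running_loop`. A standalone sketch combining both sides of the hunk, assuming `nest_asyncio` (and optionally `uvloop`) is installed:

```python
# Return the running loop, leave uvloop loops unpatched (main branch guard),
# and apply nest_asyncio once per loop class.
import asyncio

def get_running_loop(check_nested: bool):
    try:
        loop = asyncio.get_running_loop()
    except RuntimeError:
        return None  # no loop running
    try:
        import uvloop
        if isinstance(loop, uvloop.Loop):
            return loop  # uvloop loops cannot be re-entered; skip patching
    except ImportError:
        pass
    if check_nested and not hasattr(loop.__class__, "_nest_patched"):
        import nest_asyncio
        nest_asyncio.apply(loop)  # enables nested run_until_complete on this loop
    return loop
```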
@ -3,16 +3,18 @@ from __future__ import annotations
import asyncio
import random
from ..typing import Type, List, CreateResult, Messages, Iterator, AsyncResult
from .types import BaseProvider, BaseRetryProvider, ProviderType
from ..typing import Type, List, CreateResult, Messages, Iterator
from .types import BaseProvider, BaseRetryProvider
from .. import debug
from ..errors import RetryProviderError, RetryNoProviderError
class IterListProvider(BaseRetryProvider):
class RetryProvider(BaseRetryProvider):
def __init__(
self,
providers: List[Type[BaseProvider]],
shuffle: bool = True
shuffle: bool = True,
single_provider_retry: bool = False,
max_retries: int = 3,
) -> None:
"""
Initialize the BaseRetryProvider.
@ -24,6 +26,8 @@ class IterListProvider(BaseRetryProvider):
"""
self.providers = providers
self.shuffle = shuffle
self.single_provider_retry = single_provider_retry
self.max_retries = max_retries
self.working = True
self.last_provider: Type[BaseProvider] = None
@ -45,145 +49,15 @@ class IterListProvider(BaseRetryProvider):
Raises:
Exception: Any exception encountered during the completion process.
"""
exceptions = {}
started: bool = False
for provider in self.get_providers(stream):
self.last_provider = provider
try:
if debug.logging:
print(f"Using {provider.__name__} provider")
for token in provider.create_completion(model, messages, stream, **kwargs):
yield token
started = True
if started:
return
except Exception as e:
exceptions[provider.__name__] = e
if debug.logging:
print(f"{provider.__name__}: {e.__class__.__name__}: {e}")
if started:
raise e
raise_exceptions(exceptions)
async def create_async(
self,
model: str,
messages: Messages,
**kwargs,
) -> str:
"""
Asynchronously create a completion using available providers.
Args:
model (str): The model to be used for completion.
messages (Messages): The messages to be used for generating completion.
Returns:
str: The result of the asynchronous completion.
Raises:
Exception: Any exception encountered during the asynchronous completion process.
"""
exceptions = {}
for provider in self.get_providers(False):
self.last_provider = provider
try:
if debug.logging:
print(f"Using {provider.__name__} provider")
return await asyncio.wait_for(
provider.create_async(model, messages, **kwargs),
timeout=kwargs.get("timeout", 60),
)
except Exception as e:
exceptions[provider.__name__] = e
if debug.logging:
print(f"{provider.__name__}: {e.__class__.__name__}: {e}")
raise_exceptions(exceptions)
def get_providers(self, stream: bool) -> list[ProviderType]:
providers = [p for p in self.providers if p.supports_stream] if stream else self.providers
providers = [p for p in self.providers if stream and p.supports_stream] if stream else self.providers
if self.shuffle:
random.shuffle(providers)
return providers
async def create_async_generator(
self,
model: str,
messages: Messages,
stream: bool = True,
**kwargs
) -> AsyncResult:
exceptions = {}
started: bool = False
for provider in self.get_providers(stream):
self.last_provider = provider
try:
if debug.logging:
print(f"Using {provider.__name__} provider")
if not stream:
yield await provider.create_async(model, messages, **kwargs)
elif hasattr(provider, "create_async_generator"):
async for token in provider.create_async_generator(model, messages, stream=stream, **kwargs):
yield token
else:
for token in provider.create_completion(model, messages, stream, **kwargs):
yield token
started = True
if started:
return
except Exception as e:
exceptions[provider.__name__] = e
if debug.logging:
print(f"{provider.__name__}: {e.__class__.__name__}: {e}")
if started:
raise e
raise_exceptions(exceptions)
class RetryProvider(IterListProvider):
def __init__(
self,
providers: List[Type[BaseProvider]],
shuffle: bool = True,
single_provider_retry: bool = False,
max_retries: int = 3,
) -> None:
"""
Initialize the BaseRetryProvider.
Args:
providers (List[Type[BaseProvider]]): List of providers to use.
shuffle (bool): Whether to shuffle the providers list.
single_provider_retry (bool): Whether to retry a single provider if it fails.
max_retries (int): Maximum number of retries for a single provider.
"""
super().__init__(providers, shuffle)
self.single_provider_retry = single_provider_retry
self.max_retries = max_retries
def create_completion(
self,
model: str,
messages: Messages,
stream: bool = False,
**kwargs,
) -> CreateResult:
"""
Create a completion using available providers, with an option to stream the response.
Args:
model (str): The model to be used for completion.
messages (Messages): The messages to be used for generating completion.
stream (bool, optional): Flag to indicate if the response should be streamed. Defaults to False.
Yields:
CreateResult: Tokens or results from the completion.
Raises:
Exception: Any exception encountered during the completion process.
"""
if self.single_provider_retry:
exceptions = {}
started: bool = False
provider = self.providers[0]
if self.single_provider_retry and len(providers) == 1:
provider = providers[0]
self.last_provider = provider
for attempt in range(self.max_retries):
try:
@ -191,7 +65,7 @@ class RetryProvider(IterListProvider):
print(f"Using {provider.__name__} provider (attempt {attempt + 1})")
for token in provider.create_completion(model, messages, stream, **kwargs):
yield token
started = True
started = True
if started:
return
except Exception as e:
@ -200,9 +74,25 @@ class RetryProvider(IterListProvider):
print(f"{provider.__name__}: {e.__class__.__name__}: {e}")
if started:
raise e
raise_exceptions(exceptions)
else:
yield from super().create_completion(model, messages, stream, **kwargs)
for provider in providers:
self.last_provider = provider
try:
if debug.logging:
print(f"Using {provider.__name__} provider")
for token in provider.create_completion(model, messages, stream, **kwargs):
yield token
started = True
if started:
return
except Exception as e:
exceptions[provider.__name__] = e
if debug.logging:
print(f"{provider.__name__}: {e.__class__.__name__}: {e}")
if started:
raise e
raise_exceptions(exceptions)
async def create_async(
self,
@ -220,10 +110,14 @@ class RetryProvider(IterListProvider):
Raises:
Exception: Any exception encountered during the asynchronous completion process.
"""
providers = self.providers
if self.shuffle:
random.shuffle(providers)
exceptions = {}
if self.single_provider_retry:
provider = self.providers[0]
if self.single_provider_retry and len(providers) == 1:
provider = providers[0]
self.last_provider = provider
for attempt in range(self.max_retries):
try:
@ -237,9 +131,22 @@ class RetryProvider(IterListProvider):
exceptions[provider.__name__] = e
if debug.logging:
print(f"{provider.__name__}: {e.__class__.__name__}: {e}")
raise_exceptions(exceptions)
else:
return await super().create_async(model, messages, **kwargs)
for provider in providers:
self.last_provider = provider
try:
if debug.logging:
print(f"Using {provider.__name__} provider")
return await asyncio.wait_for(
provider.create_async(model, messages, **kwargs),
timeout=kwargs.get("timeout", 60),
)
except Exception as e:
exceptions[provider.__name__] = e
if debug.logging:
print(f"{provider.__name__}: {e.__class__.__name__}: {e}")
raise_exceptions(exceptions)
class IterProvider(BaseRetryProvider):
__name__ = "IterProvider"

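Reviewer note: with `single_provider_retry` and `max_retries` folded back into `RetryProvider`, a single flaky provider can be retried in place instead of falling through a list. A usage sketch (provider and model choices illustrative):

```python
# Retry the same provider up to max_retries before giving up.
from g4f import ChatCompletion
from g4f.Provider import RetryProvider, OpenaiChat

provider = RetryProvider(
    providers=[OpenaiChat],
    shuffle=False,
    single_provider_retry=True,  # retry in place rather than iterating a list
    max_retries=3,
)
response = ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "ping"}],
    provider=provider,
)
print(response)
```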
@ -14,7 +14,7 @@ else:
SHA256 = NewType('sha_256_hash', str)
CreateResult = Iterator[str]
AsyncResult = AsyncIterator[str]
Messages = List[Dict[str, Union[str,List[Dict[str,Union[str,Dict[str,str]]]]]]]
Messages = List[Dict[str, str]]
Cookies = Dict[str, str]
ImageType = Union[str, bytes, IO, Image, None]

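Reviewer note: narrowing `Messages` to `List[Dict[str, str]]` drops the nested content-part shape that the main branch's wider Union admits. Both shapes side by side; the part layout follows the common OpenAI-style convention and is illustrative:

```python
# Flat form: satisfies List[Dict[str, str]] (the 0.3.0.9 type).
flat_messages = [
    {"role": "user", "content": "Describe this image."},
]

# Nested form: content is a list of typed parts (e.g. image URLs) and needs
# the wider Union type from the main branch.
nested_messages = [
    {
        "role": "user",
        "content": [
            {"type": "text", "text": "Describe this image."},
            {"type": "image_url", "image_url": {"url": "https://example.com/cat.png"}},
        ],
    },
]
```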
@ -1,5 +1,5 @@
cp -r * /var/win/shared/
cp -r projects/windows/* /var/win/shared/
cp -r windows/* /var/win/shared/
cp setup.py /var/win/shared/
cp README.md /var/win/shared/
#git clone https://github.com/pyinstaller/pyinstaller/ /var/win/shared/pyinstaller

@ -8,12 +8,15 @@ ssl.create_default_context = partial(
cafile=certifi.where()
)
from g4f.gui.run import run_gui_args, gui_parser
from g4f.gui.webview import run_webview
from g4f.gui.run import gui_parser
import g4f.debug
g4f.debug.version_check = False
g4f.debug.version = "0.3.1.7"
g4f.debug.version = "0.2.8.0"
if __name__ == "__main__":
parser = gui_parser()
args = parser.parse_args()
run_gui_args(args)
if args.debug:
g4f.debug.logging = True
run_webview(args.debug)

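Reviewer note: the `ssl.create_default_context` patch at the top of this file pins certificate verification to certifi's CA bundle, which matters for the frozen Windows build where no system certificate store may be reachable. The pattern in isolation:

```python
# Every default SSL context created after this patch verifies against
# certifi's bundled CA certificates.
import ssl
from functools import partial

import certifi

ssl.create_default_context = partial(
    ssl.create_default_context,
    cafile=certifi.where(),
)
```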
@ -6,7 +6,7 @@ a = Analysis(
pathex=[],
binaries=[],
datas=[],
hiddenimports=[],
hiddenimports=['plyer.platforms.linux.filechooser', 'plyer.platforms.win.filechooser'],
hookspath=[],
hooksconfig={},
runtime_hooks=[],
@ -42,4 +42,4 @@ coll = COLLECT(
upx=True,
upx_exclude=[],
name='g4f',
)
)