community[patch]: fix model initialization bug for deepinfra (#25727)

### Description
Adds an init method to ChatDeepInfra that sets the model_name attribute
according to the argument.
### Issue
Currently, the model_name specified by the user when initializing the
ChatDeepInfra class is never set, so the class always falls back to the
default model (meta-llama/Llama-2-70b-chat-hf; since that model is
deprecated, it probably ends up using meta-llama/Llama-3-70b-Instruct).
We stumbled across this issue and fixed it as proposed in this pull
request. Feel free to adapt the fix to your coding guidelines and style;
this is just a proposal, and we mainly want to draw attention to the
problem.
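The failure mode can be sketched with plain-Python stand-ins (the class and default names below are hypothetical simplifications; the real ChatDeepInfra is a pydantic model):

```python
DEFAULT_MODEL_NAME = "meta-llama/Llama-2-70b-chat-hf"


class BuggyChatSketch:
    """Sketch of the pre-fix behavior: the constructor ignores the
    user-supplied name, so the default model is always used."""

    def __init__(self, **kwargs) -> None:
        # kwargs is accepted but model_name is never read from it.
        self.model_name = DEFAULT_MODEL_NAME


class FixedChatSketch:
    """Sketch of the fix: set model_name from the argument if given,
    otherwise fall back to the default."""

    def __init__(self, **kwargs) -> None:
        self.model_name = kwargs.get("model_name", DEFAULT_MODEL_NAME)
```

With the fixed version, `FixedChatSketch(model_name="foo").model_name` is `"foo"`, whereas the buggy version still reports the default regardless of the argument.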
### Dependencies
No additional dependencies required.

Feel free to contact me or @timo282 and @finitearth if you have any
questions.

---------

Co-authored-by: Bagatur <baskaryan@gmail.com>
Co-authored-by: Bagatur <22008038+baskaryan@users.noreply.github.com>
Moritz Schlager 2024-08-28 11:02:35 +02:00 committed by GitHub
parent a052173b55
commit 555f97becb
2 changed files with 16 additions and 0 deletions


```diff
@@ -222,6 +222,11 @@ class ChatDeepInfra(BaseChatModel):
     streaming: bool = False
     max_retries: int = 1
 
+    class Config:
+        """Configuration for this pydantic object."""
+
+        allow_population_by_field_name = True
+
     @property
     def _default_params(self) -> Dict[str, Any]:
         """Get the default parameters for calling OpenAI API."""
```


```diff
@@ -0,0 +1,11 @@
+from langchain_community.chat_models import ChatDeepInfra
+
+
+def test_deepinfra_model_name_param() -> None:
+    llm = ChatDeepInfra(model_name="foo")  # type: ignore[call-arg]
+    assert llm.model_name == "foo"
+
+
+def test_deepinfra_model_param() -> None:
+    llm = ChatDeepInfra(model="foo")
+    assert llm.model_name == "foo"
```