langchain/libs/experimental/tests/unit_tests/chat_models
Nuno Campos 8329f81072
Use pytest asyncio auto mode (#13643)
<!-- Thank you for contributing to LangChain!

Replace this entire comment with:
  - **Description:** a description of the change,
  - **Issue:** the issue # it fixes (if applicable),
  - **Dependencies:** any dependencies required for this change,
  - **Tag maintainer:** for a quicker response, tag the relevant maintainer (see below),
  - **Twitter handle:** we announce bigger features on Twitter. If your PR gets announced, and you'd like a mention, we'll gladly shout you out!

Please make sure your PR is passing linting and testing before
submitting. Run `make format`, `make lint` and `make test` to check this
locally.

See contribution guidelines for more information on how to write/run
tests, lint, etc:

https://github.com/langchain-ai/langchain/blob/master/.github/CONTRIBUTING.md

If you're adding a new integration, please include:
1. a test for the integration, preferably unit tests that do not rely on
network access,
2. an example notebook showing its use. It lives in `docs/extras`
directory.

If no one reviews your PR within a few days, please @-mention one of
@baskaryan, @eyurtsev, @hwchase17.
 -->
2023-11-21 15:00:13 +00:00
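The commit above switches these tests to pytest-asyncio's "auto" mode, in which async test functions are collected and run without an explicit `@pytest.mark.asyncio` marker on each one. A minimal sketch of what that looks like, assuming the mode is enabled via `asyncio_mode = "auto"` under `[tool.pytest.ini_options]` in `pyproject.toml` (the helper `fake_agenerate` below is hypothetical, standing in for a chat model's async call):

```python
import asyncio


async def fake_agenerate(prompt: str) -> str:
    # Hypothetical stand-in for a chat model's async generate call.
    await asyncio.sleep(0)
    return f"echo: {prompt}"


async def test_agenerate() -> None:
    # Under asyncio_mode = "auto", pytest-asyncio runs this coroutine test
    # directly; no @pytest.mark.asyncio decorator is needed.
    result = await fake_agenerate("hi")
    assert result == "echo: hi"


if __name__ == "__main__":
    # Outside pytest, the coroutine can be driven manually.
    asyncio.run(test_agenerate())
```

Removing the per-test marker is the point of the change: the marker boilerplate disappears from every async test file in the directory listing below.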
__init__.py | EXPERIMENTAL Generic LLM wrapper to support chat model interface with configurable chat prompt format (#8295) | 2023-11-17 16:32:13 -08:00
test_llm_wrapper_llama2chat.py | Use pytest asyncio auto mode (#13643) | 2023-11-21 15:00:13 +00:00
test_llm_wrapper_orca.py | EXPERIMENTAL Generic LLM wrapper to support chat model interface with configurable chat prompt format (#8295) | 2023-11-17 16:32:13 -08:00
test_llm_wrapper_vicuna.py | EXPERIMENTAL Generic LLM wrapper to support chat model interface with configurable chat prompt format (#8295) | 2023-11-17 16:32:13 -08:00
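The wrapper commit (#8295) referenced by these test files adds a generic layer that renders a chat-style message list into a single prompt string using a model-specific format (Llama2Chat, Orca, Vicuna). This is not LangChain's actual API, but a minimal sketch of the idea for the Llama-2 chat format; the `Message` dataclass and `to_llama2_prompt` function are hypothetical names introduced here for illustration:

```python
from dataclasses import dataclass


@dataclass
class Message:
    role: str  # "system", "user", or "assistant"
    content: str


def to_llama2_prompt(messages: list[Message]) -> str:
    """Render messages in the Llama-2 chat style: the system prompt is
    wrapped in <<SYS>> tags inside the first [INST] block."""
    system = ""
    parts: list[str] = []
    for m in messages:
        if m.role == "system":
            # Held until the next user turn, then inlined into its [INST] block.
            system = f"<<SYS>>\n{m.content}\n<</SYS>>\n\n"
        elif m.role == "user":
            parts.append(f"[INST] {system}{m.content} [/INST]")
            system = ""
        elif m.role == "assistant":
            parts.append(f" {m.content} ")
    return "".join(parts)
```

Swapping in a different renderer (e.g. Vicuna's `USER:`/`ASSISTANT:` turns) is then a matter of configuration rather than a new chat-model class, which is what the per-format test files above exercise.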