langchain/.gitignore

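# Editor and IDE settings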
.vs/
.vscode/
.idea/
# Byte-compiled / optimized / DLL files
__pycache__/
*.py[cod]
*$py.class
# C extensions
*.so
# Distribution / packaging
.Python
build/
develop-eggs/
dist/
downloads/
eggs/
.eggs/
lib/
lib64/
parts/
sdist/
var/
wheels/
pip-wheel-metadata/
share/python-wheels/
*.egg-info/
.installed.cfg
*.egg
MANIFEST
# PyInstaller
# Usually these files are written by a python script from a template
# before PyInstaller builds the exe, so as to inject date/other info into it.
*.manifest
*.spec
# Installer logs
pip-log.txt
pip-delete-this-directory.txt
# Unit test / coverage reports
htmlcov/
.tox/
.nox/
.coverage
.coverage.*
.cache
nosetests.xml
coverage.xml
*.cover
*.py,cover
.hypothesis/
.pytest_cache/
# Translations
*.mo
*.pot
# Django stuff:
*.log
local_settings.py
db.sqlite3
db.sqlite3-journal
# Flask stuff:
instance/
.webassets-cache
# Scrapy stuff:
.scrapy
# Sphinx documentation
docs/_build/
# PyBuilder
target/
# Jupyter Notebook
.ipynb_checkpoints
notebooks/
# IPython
profile_default/
ipython_config.py
# pyenv
.python-version
# pipenv
# According to pypa/pipenv#598, it is recommended to include Pipfile.lock in version control.
# However, when collaborating, if you have platform-specific dependencies or dependencies
# without cross-platform support, pipenv may install dependencies that don't work, or fail
# to install all needed dependencies.
#Pipfile.lock
# PEP 582; used by e.g. github.com/David-OConnor/pyflow
__pypackages__/
# Celery stuff
celerybeat-schedule
celerybeat.pid
# SageMath parsed files
*.sage.py
# Environments
.env
.envrc
.venv
.venvs
env/
venv/
ENV/
env.bak/
venv.bak/
# Spyder project settings
.spyderproject
.spyproject
# Rope project settings
.ropeproject
# mkdocs documentation
/site
# mypy
.mypy_cache/
.dmypy.json
dmypy.json
# Pyre type checker
.pyre/
# macOS display setting files
.DS_Store
# Wandb directory
wandb/
# asdf tool versions
.tool-versions
/.ruff_cache/
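# Pickled/binary artifacts generated by example notebooks (added in #3142)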
*.pkl
*.bin
# integration test artifacts
data_map*
\[('_type', 'fake'), ('stop', None)]
# Replit files
*replit*