- The content covered the basics of LLMs and how they are used in everyday practice.
MAIN POINTS:
1. LLMs are large language models that typically use the transformer architecture.
2. LLMs were once used mainly for story generation, but they now power many AI applications.
3. They are prone to hallucination if not configured correctly, so treat their output with care.
TAKEAWAYS:
1. It's possible to use LLMs for multiple AI use cases.
2. It's important to validate that the results you're receiving are correct.
3. The field of AI is moving faster than ever as a result of GenAI breakthroughs.
```
# Server Mode
1. Running `fabric --server --domain [domain] --port [port]` starts a Gunicorn server running the Python Flask app, allowing you to create a personal instance of Fabric (example invocation below). The server exposes both traditional API endpoints and WebSockets. Use cases include iPhone Shortcuts and providing an API for your own website.
2. Update the JWT in the `config.yaml` file.
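For example, using placeholder values for the domain and port (substitute your own):

```
fabric --server --domain fabric.example.com --port 5000
```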
# Remote Mode
1. If you create a `config.yaml` file in the directory root, the tool switches to remote mode: instead of querying OpenAI directly, it queries a remote Fabric server (including your own server, if you have one configured).
2. NOTE: if the server you are accessing is behind SSL (HTTPS), you need to change `self.summarizestream = f"ws://{domain}:{port}"` to `self.summarizestream = f"wss://{domain}:{port}"` (see the sketch below).
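For illustration only, here is a minimal sketch of how that scheme choice could be made conditional instead of edited by hand; `domain`, `port`, and `use_ssl` are hypothetical names, not variables from the actual client code:

```python
# Hypothetical sketch: choose the WebSocket scheme based on whether the
# remote Fabric server is behind SSL. These names are illustrative only.
def summarize_stream_url(domain: str, port: int, use_ssl: bool) -> str:
    scheme = "wss" if use_ssl else "ws"
    return f"{scheme}://{domain}:{port}"

# Example: summarize_stream_url("fabric.example.com", 443, use_ssl=True)
# returns "wss://fabric.example.com:443"
```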
## Contributing
We welcome contributions to Fabric, including improvements and feature additions to this client.
## Credits
The `fabric` client was created by Jonathan Dunn (@xssdoctor).