trustllms

pull/357/head
Elvis Saravia 4 months ago
parent d6449e7f35
commit 39821b9155

Binary file not shown.

After

Width:  |  Height:  |  Size: 97 KiB

Binary file not shown.

After

Width:  |  Height:  |  Size: 281 KiB

Binary file not shown.

After

Width:  |  Height:  |  Size: 158 KiB

@ -5,6 +5,7 @@
"applications": "Applications",
"models": "Models",
"risks": "Risks & Misuses",
"research": "LLM Research Findings",
"papers": "Papers",
"tools": "Tools",
"notebooks": "Notebooks",

@ -1,4 +1,4 @@
# Models
# Model Prompting Guides
import { Callout } from 'nextra-theme-docs'
import {Cards, Card} from 'nextra-theme-docs'

@ -0,0 +1,30 @@
# LLM Research Findings
In this section, we regularly highlight miscellaneous and interesting research findings about how to better work with large language models (LLMs). It include new tips, insights and developments around important LLM research areas such as scaling, agents, efficiency, hallucination, architectures, prompt injection, and much more.
LLM research and AI research in general is moving fast so we hope that this resource can help both researchers and developers stay ahead of important developments. We also welcome contributions to this section if you would like to highlight an exciting finding about your research or experiments.
## Sleeper Agents
This is just a random text.
Date:
Reference:
## Tipping ChatGPT
This is just a random text.
Date:
Reference:
## Sleeper Agents
This is just a random text.
Date:
Reference:

@ -0,0 +1,3 @@
{
"trustworthiness-in-llms": "Trustworthiness in LLMs"
}

@ -0,0 +1,63 @@
# Trustworthiness in LLMs
import {Screenshot} from 'components/screenshot'
import TRUSTLLM from '../../img/llms/trustllm.png'
import TRUSTLLM2 from '../../img/llms/trust-dimensions.png'
import TRUSTLLM3 from '../../img/llms/truthfulness-leaderboard.png'
Trustworthy LLMs are important to build applications in high-stake domains like health and finance. While LLMs like ChatGPT are very capable of producing human readable responses they don't guarantee trustworthy responses across dimensions like truthfulness, safety, and privacy, among others.
[Sun et al., 2024](https://arxiv.org/abs/2401.05561) recently proposes a comprehensive study of trustworthiness in LLMs, discussing challenges, benchmarks, evaluation, analysis of approaches, and future directions.
One of the greater challenges of taking current LLMs into production is trustworthiness. Their survey proposes a set of principles for trustworthy LLMs that span 8 dimensions, including a benchmark across 6 dimensions (truthfulness, safety, fairness, robustness, privacy, and machine ethics).
The author proposed the following benchmark to evaluate the trustworthiness of LLMs on six aspects:
<Screenshot src={TRUSTLLM} alt="A benchmark of trustworthy large language models" />
Below are the definitions of the eight identified dimensions of trustworthy LLMs.
<Screenshot src={TRUSTLLM2} alt="Dimensions of Trustworthy LLMs" />
## Findings
This work also presents a study evaluating 16 mainstream LLMs in TrustLLM, consisting of over 30 datasets. Below are the main findings from the evaluation:
- While proprietary LLMs generally outperform most open-source counterparts in terms of trustworthiness, there are a few open-source models that are closing the gap.
- Models like GPT-4 and Llama 2 can reliably reject stereotypical statements and show enhanced resilience to adversarial attacks.
- Open-source models like Llama 2 perform closely to proprietary ones on trustworthiness without using any type of special moderation tool. It's also stated in the paper that some models, such as Llama 2, are overly calibrated towards trustworthiness which at times compromises their utility on several tasks and mistakenly treats benign prompts as harmful inputs to the model.
## Key Insights
Over the different trustworthiness dimensions investigated in the paper, here are the reported key insights:
- **Truthfulness**: LLMs often struggle with truthfulness due to training data noise, misinformation, or outdated information. LLMs with access to external knowledge sources show improved performance in truthfulness.
- **Safety**: Open-source LLMs generally lag behind proprietary models in safety aspects like jailbreak, toxicity, and misuse. There is a challenge in balancing safety measures without being overly cautious.
- **Fairness**: Most LLMs perform unsatisfactorily in recognizing stereotypes. Even advanced models like GPT-4 have only about 65% accuracy in this area.
- **Robustness**: There is significant variability in the robustness of LLMs, especially in open-ended and out-of-distribution tasks.
- **Privacy**: LLMs are aware of privacy norms, but their understanding and handling of private information vary widely. As an example, some models have shown information leakage when tested on the Enron Email Dataset.
- **Machine Ethics**: LLMs demonstrate a basic understanding of moral principles. However, they fall short in complex ethical scenarios.
## Trustworthiness Leaderboard for LLMs
The authors have also published a leaderboard [here](https://trustllmbenchmark.github.io/TrustLLM-Website/leaderboard.html). For example, the table below shows how the different models measure on the truthfulness dimension. As mentioned on their website, "More trustworthy LLMs are expected to have a higher value of the metrics with ↑ and a lower value with ↓".
<Screenshot src={TRUSTLLM3} alt="Trustworthiness Leaderboard for LLMs" />
## Code
You can also find a GitHub repository with a complete evaluation kit for testing the trustworthiness of LLMs across the different dimensions.
Code: https://github.com/HowieHwong/TrustLLM
## References
Image Source / Paper: [TrustLLM: Trustworthiness in Large Language Models](https://arxiv.org/abs/2401.05561) (10 Jan 2024)
Loading…
Cancel
Save