mirror of https://github.com/hwchase17/langchain
Add verbose parameter for llamacpp (#7253)
**Title:** Add verbose parameter for llamacpp

**Description:** This pull request adds a `verbose` parameter to the llamacpp module. The parameter is a boolean; when set to `True` (the default), it enables detailed logging to stderr during execution of the Llama model, which aids in debugging and in understanding the module's internal behavior. It can be set to `False` when less output is desired.

The new parameter is added to the `validate_environment` method of the `LlamaCpp` class, which initializes the `llama_cpp.Llama` API:

```python
class LlamaCpp(LLM):
    ...

    @root_validator()
    def validate_environment(cls, values: Dict) -> Dict:
        ...
        model_param_names = [
            ...
            "verbose",  # New verbose parameter added
        ]
        ...
        values["client"] = Llama(model_path, **model_params)
        ...
```

---------

Signed-off-by: teleprint-me <77757836+teleprint-me@users.noreply.github.com>
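As a minimal, self-contained sketch of the pattern the PR relies on (using a hypothetical `FakeLlama` stand-in instead of the real `llama_cpp.Llama`, and an illustrative subset of parameter names), the flag flows from the validated field values into the client constructor like this:

```python
from typing import Any, Dict


class FakeLlama:
    """Stand-in for llama_cpp.Llama; just records the kwargs it receives."""

    def __init__(self, model_path: str, **kwargs: Any) -> None:
        self.model_path = model_path
        self.kwargs = kwargs


def validate_environment(values: Dict) -> Dict:
    # Mirrors the pattern in LlamaCpp.validate_environment: collect the
    # field names that map to Llama constructor arguments (here a small
    # illustrative subset), then pass them through to the client.
    model_param_names = ["n_ctx", "seed", "verbose"]  # "verbose" newly added
    model_params = {k: values[k] for k in model_param_names if k in values}
    values["client"] = FakeLlama(values["model_path"], **model_params)
    return values


values = validate_environment(
    {"model_path": "model.bin", "n_ctx": 512, "verbose": False}
)
```

After validation, `values["client"]` has been constructed with `verbose=False`, showing how toggling the field suppresses the model's stderr logging without any other code changes.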
parent: 34a2755a54 · commit: c9a0f24646