diff --git a/MAINTAINERS.md b/MAINTAINERS.md index 0a16234f..040d2dff 100644 --- a/MAINTAINERS.md +++ b/MAINTAINERS.md @@ -70,3 +70,8 @@ Riccardo Giovanetti ([@Harvester62](https://github.com/Harvester62))
E-mail: riccardo.giovanetti@gmail.com
Discord: `@harvester62` - it\_IT translation + +Jack ([@wuodoo](https://github.com/wuodoo))
+E-mail: 2296103047@qq.com
+Discord: `@mikage` +- zh\_CN translation diff --git a/gpt4all-chat/CMakeLists.txt b/gpt4all-chat/CMakeLists.txt index 70166b99..efdc5f87 100644 --- a/gpt4all-chat/CMakeLists.txt +++ b/gpt4all-chat/CMakeLists.txt @@ -235,6 +235,10 @@ if (GPT4ALL_TRANSLATIONS) ${CMAKE_SOURCE_DIR}/translations/gpt4all_en.ts ${CMAKE_SOURCE_DIR}/translations/gpt4all_es_MX.ts ${CMAKE_SOURCE_DIR}/translations/gpt4all_zh_CN.ts + ${CMAKE_SOURCE_DIR}/translations/gpt4all_zh_TW.ts + ${CMAKE_SOURCE_DIR}/translations/gpt4all_ro_RO.ts + ${CMAKE_SOURCE_DIR}/translations/gpt4all_it_IT.ts + ${CMAKE_SOURCE_DIR}/translations/gpt4all_pt_BR.ts ) endif() diff --git a/gpt4all-chat/translations/gpt4all_en.ts b/gpt4all-chat/translations/gpt4all_en.ts index f67221b1..e74a45e9 100644 --- a/gpt4all-chat/translations/gpt4all_en.ts +++ b/gpt4all-chat/translations/gpt4all_en.ts @@ -79,316 +79,350 @@ AddModelView - - + + ← Existing Models - - + + Explore Models - - + + Discover and download models by keyword search... - - + + Text field for discovering and filtering downloadable models - - + + Initiate model discovery and filtering - - + + Triggers discovery and filtering of models - - + + Default - - + + Likes - - + + Downloads - - + + Recent - - + + Asc - - + + Desc - - + + None - - + + Searching · %1 - - + + Sort by: %1 - - + + Sort dir: %1 - - + + Limit: %1 - - + + Network error: could not retrieve %1 - - - - + + + + Busy indicator - - + + Displayed when the models request is ongoing - - + + Model file - - + + Model file to be downloaded - - + + Description - - + + File description - - + + Cancel - - + + Resume - - + + Download - - + + Stop/restart/start the download - - + + Remove - - + + Remove model from filesystem - - - - + + + + Install - - + + Install online model - - + + <strong><font size="2">WARNING: Not recommended for your hardware. Model requires more memory (%1 GB) than your system has available (%2).</strong></font> - - + + + ERROR: $API_KEY is empty. + + + + + + ERROR: $BASE_URL is empty. 
+ + + + + + enter $BASE_URL + + + + + + ERROR: $MODEL_NAME is empty. + + + + + + enter $MODEL_NAME + + + + + %1 GB - - - - + + + + ? - - + + Describes an error that occurred when downloading - - + + <strong><font size="1"><a href="#error">Error</a></strong></font> - - + + Error for incompatible hardware - - + + Download progressBar - - + + Shows the progress made in the download - - + + Download speed - - + + Download speed in bytes/kilobytes/megabytes per second - - + + Calculating... - - + + + + + + Whether the file hash is being calculated - - + + Displayed when the file hash is being calculated - - + + enter $API_KEY - - + + File size - - + + RAM required - - + + Parameters - - + + Quant - - + + Type @@ -456,224 +490,238 @@ - - + + Dark - - + + Light - - + + LegacyDark - - + + Font Size - - + + The size of text in the application. - - + + Small - - + + Medium - - + + Large - - + + Language and Locale - - + + The language and locale you wish to use. - - + + + System Locale + + + + + Device - - - The compute device used for text generation. "Auto" uses Vulkan or Metal. + + + The compute device used for text generation. + + + + + + + + Application default - - + + Default Model - - + + The preferred model for new chats. Also used as the local server fallback. - - + + Suggestion Mode - - + + Generate suggested follow-up questions at the end of responses. - - + + When chatting with LocalDocs - - + + Whenever possible - - + + Never - - + + Download Path - - + + Where to store local models and the LocalDocs database. - - + + Browse - - + + Choose where to save model files - - + + Enable Datalake - - + + Send chats and feedback to the GPT4All Open-Source Datalake. - - + + Advanced - - + + CPU Threads - - + + The number of CPU threads used for inference and embedding. - - + + Save Chat Context - - + + Save the chat model's state to disk for faster loading. WARNING: Uses ~2GB per chat. - - + + Enable Local Server - - + + Expose an OpenAI-Compatible server to localhost. 
WARNING: Results in increased resource usage. - - + + API Server Port - - + + The port to use for the local server. Requires restart. - - + + Check For Updates - - + + Manually check for an update to GPT4All. - - + + Updates @@ -692,6 +740,19 @@ + + ChatAPIWorker + + + ERROR: Network error occurred while connecting to the API server + + + + + ChatAPIWorker::handleFinished got HTTP Error %1 %2 + + + ChatDrawer @@ -900,9 +961,9 @@ - + - + LocalDocs @@ -919,191 +980,191 @@ - - + + Load the default model - - + + Loads the default model which can be changed in settings - - + + No Model Installed - - + + GPT4All requires that you install at least one model to get started - - + + Install a Model - - + + Shows the add model view - - + + Conversation with the model - - + + prompt / response pairs from the conversation - - + + GPT4All - - + + You - - + + recalculating context ... - - + + response stopped ... - - + + processing ... - - + + generating response ... - - + + generating questions ... - - - - + + + + Copy - - + + Copy Message - - + + Disable markdown - - + + Enable markdown - - + + Thumbs up - - + + Gives a thumbs up to the response - - + + Thumbs down - - + + Opens thumbs down dialog - - + + %1 Sources - - + + Suggested follow-ups - - + + Erase and reset chat session - - + + Copy chat session to clipboard - - + + Redo last chat response - - + + Stop generating - - + + Stop the current response generation - - + + Reloads the model @@ -1115,9 +1176,9 @@ model to get started - + - + Reload · %1 @@ -1128,68 +1189,68 @@ model to get started - - + + Load · %1 (default) → - - + + retrieving localdocs: %1 ... - - + + searching localdocs: %1 ... - - + + Send a message... - - + + Load a model to continue... 
- - + + Send messages/prompts to the model - - + + Cut - - + + Paste - - + + Select All - - + + Send message - - + + Sends the message/prompt contained in textfield to the model @@ -1197,14 +1258,14 @@ model to get started CollectionsDrawer - - + + Warning: searching collections while indexing can return incomplete results - - + + %n file(s) @@ -1212,8 +1273,8 @@ model to get started - - + + %n word(s) @@ -1221,24 +1282,62 @@ model to get started - - + + Updating - - + + + Add Docs - - + + Select a collection to make it available to the chat model. + + Download + + + Model "%1" is installed successfully. + + + + + ERROR: $MODEL_NAME is empty. + + + + + ERROR: $API_KEY is empty. + + + + + ERROR: $BASE_URL is invalid. + + + + + ERROR: Model "%1 (%2)" is conflict. + + + + + Model "%1 (%2)" is installed successfully. + + + + + Model "%1" is removed. + + + HomeView @@ -1506,122 +1605,122 @@ model to get started - - - ERROR: The LocalDocs database is not valid. + + + <h3>ERROR: The LocalDocs database cannot be accessed or is not valid.</h3><br><i>Note: You will need to restart after trying any of the following suggested fixes.</i><br><ul><li>Make sure that the folder set as <b>Download Path</b> exists on the file system.</li><li>Check ownership as well as read and write permissions of the <b>Download Path</b>.</li><li>If there is a <b>localdocs_v2.db</b> file, check its ownership and read/write permissions, too.</li></ul><br>If the problem persists and there are any 'localdocs_v*.db' files present, as a last resort you can<br>try backing them up and removing them. You will have to recreate your collections, however. 
- - + + No Collections Installed - - + + Install a collection of local documents to get started using this feature - - + + + Add Doc Collection - - + + Shows the add model view - - + + Indexing progressBar - - + + Shows the progress made in the indexing - - + + ERROR - - + + INDEXING - - + + EMBEDDING - - + + REQUIRES UPDATE - - + + READY - - + + INSTALLING - - + + Indexing in progress - - + + Embedding in progress - - + + This collection requires an update after version change - - + + Automatically reindexes upon changes to the folder - - + + Installation in progress - - + + % - - + + %n file(s) @@ -1629,8 +1728,8 @@ model to get started - - + + %n word(s) @@ -1638,32 +1737,32 @@ model to get started - - + + Remove - - + + Rebuild - - + + Reindex this folder from scratch. This is slow and usually not needed. - - + + Update - - + + Update the collection to the new version. This is a slow operation. @@ -1671,47 +1770,67 @@ model to get started ModelList - + <ul><li>Requires personal OpenAI API key.</li><li>WARNING: Will send your chats to OpenAI!</li><li>Your API key will be stored on disk</li><li>Will only be used to communicate with OpenAI</li><li>You can apply for an API key <a href="https://platform.openai.com/account/api-keys">here.</a></li> - + <strong>OpenAI's ChatGPT model GPT-3.5 Turbo</strong><br> %1 - + <strong>OpenAI's ChatGPT model GPT-4</strong><br> %1 %2 - + <strong>Mistral Tiny model</strong><br> %1 - + <strong>Mistral Small model</strong><br> %1 - + <strong>Mistral Medium model</strong><br> %1 - + <br><br><i>* Even if you pay OpenAI for ChatGPT-4 this does not guarantee API key access. Contact OpenAI for more info. 
- + + %1 (%2) + + + + + <strong>OpenAI-Compatible API Model</strong><br><ul><li>API Key: %1</li><li>Base URL: %2</li><li>Model Name: %3</li></ul> + + + + <ul><li>Requires personal Mistral API key.</li><li>WARNING: Will send your chats to Mistral!</li><li>Your API key will be stored on disk</li><li>Will only be used to communicate with Mistral</li><li>You can apply for an API key <a href="https://console.mistral.ai/user/api-keys">here</a>.</li> - + + <ul><li>Requires personal API key and the API base URL.</li><li>WARNING: Will send your chats to the OpenAI-compatible API Server you specified!</li><li>Your API key will be stored on disk</li><li>Will only be used to communicate with the OpenAI-compatible API Server</li> + + + + + <strong>Connect to OpenAI-compatible API server</strong><br> %1 + + + + <strong>Created by %1.</strong><br><ul><li>Published on %2.<li>This model has %3 likes.<li>This model has %4 downloads.<li>More info can be found <a href="https://huggingface.co/%5">here.</a></ul> @@ -1797,181 +1916,181 @@ model to get started - - + + Suggested FollowUp Prompt - - + + Prompt used to generate suggested follow-up questions. - - + + Context Length - - + + Number of input and output tokens the model sees. - - + + Maximum combined prompt/response tokens before information is lost. Using more context than the model was trained on will yield poor results. NOTE: Does not take effect until you reload the model. - - + + Temperature - - + + Randomness of model output. Higher -> more variation. - - + + Temperature increases the chances of choosing less likely tokens. NOTE: Higher temperature gives more creative but less predictable outputs. - - + + Top-P - - + + Nucleus Sampling factor. Lower -> more predicatable. - - + + Only the most likely tokens up to a total probability of top_p can be chosen. NOTE: Prevents choosing highly unlikely tokens. - - + + Min-P - - + + Minimum token probability. Higher -> more predictable. 
- - + + Sets the minimum relative probability for a token to be considered. - - + + Top-K - - + + Size of selection pool for tokens. - - + + Only the top K most likely tokens will be chosen from. - - + + Max Length - - + + Maximum response length, in tokens. - - + + Prompt Batch Size - - + + The batch size used for prompt processing. - - + + Amount of prompt tokens to process at once. NOTE: Higher values can speed up reading prompts but will use more RAM. - - + + Repeat Penalty - - + + Repetition penalty factor. Set to 1 to disable. - - + + Repeat Penalty Tokens - - + + Number of previous tokens used for penalty. - - + + GPU Layers - - + + Number of model layers to load into VRAM. - - + + How many model layers to load into VRAM. Decrease this if GPT4All runs out of VRAM while loading this model. Lower values increase CPU load and RAM usage, and make inference slower. NOTE: Does not take effect until you reload the model. @@ -1981,230 +2100,264 @@ NOTE: Does not take effect until you reload the model. ModelsView - - + + No Models Installed - - + + Install a model to get started using GPT4All - - - - + + + + + Add Model - - + + Shows the add model view - - + + Installed Models - - + + Locally installed chat models - - + + Model file - - + + Model file to be downloaded - - + + Description - - + + File description - - + + Cancel - - + + Resume - - + + Stop/restart/start the download - - + + Remove - - + + Remove model from filesystem - - - - + + + + Install - - + + Install online model - - + + <strong><font size="1"><a href="#error">Error</a></strong></font> - - + + <strong><font size="2">WARNING: Not recommended for your hardware. Model requires more memory (%1 GB) than your system has available (%2).</strong></font> - - + + + ERROR: $API_KEY is empty. + + + + + + ERROR: $BASE_URL is empty. + + + + + + enter $BASE_URL + + + + + + ERROR: $MODEL_NAME is empty. + + + + + + enter $MODEL_NAME + + + + + %1 GB - - + + ? 
- - + + Describes an error that occurred when downloading - - + + Error for incompatible hardware - - + + Download progressBar - - + + Shows the progress made in the download - - + + Download speed - - + + Download speed in bytes/kilobytes/megabytes per second - - + + Calculating... - - + + + + + + Whether the file hash is being calculated - - + + Busy indicator - - + + Displayed when the file hash is being calculated - - + + enter $API_KEY - - + + File size - - + + RAM required - - + + Parameters - - + + Quant - - + + Type @@ -2363,14 +2516,6 @@ NOTE: By turning on this feature, you will be sending your data to the GPT4All O - - QObject - - - Default - - - SettingsView diff --git a/gpt4all-chat/translations/gpt4all_es_MX.ts b/gpt4all-chat/translations/gpt4all_es_MX.ts index 4a057166..e1cdb7c3 100644 --- a/gpt4all-chat/translations/gpt4all_es_MX.ts +++ b/gpt4all-chat/translations/gpt4all_es_MX.ts @@ -85,210 +85,210 @@ AddModelView - - + + ← Existing Models ← Modelos existentes - - + + Explore Models Explorar modelos - - + + Discover and download models by keyword search... Descubre y descarga modelos mediante búsqueda por palabras clave... 
- - + + Text field for discovering and filtering downloadable models Campo de texto para descubrir y filtrar modelos descargables - - + + Initiate model discovery and filtering Iniciar descubrimiento y filtrado de modelos - - + + Triggers discovery and filtering of models Activa el descubrimiento y filtrado de modelos - - + + Default Predeterminado - - + + Likes Me gusta - - + + Downloads Descargas - - + + Recent Reciente - - + + Asc Asc - - + + Desc Desc - - + + None Ninguno - - + + Searching · %1 Buscando · %1 - - + + Sort by: %1 Ordenar por: %1 - - + + Sort dir: %1 Dirección de ordenamiento: %1 - - + + Limit: %1 Límite: %1 - - + + Network error: could not retrieve %1 Error de red: no se pudo recuperar %1 - - - - + + + + Busy indicator Indicador de ocupado - - + + Displayed when the models request is ongoing Se muestra cuando la solicitud de modelos está en curso - - + + Model file Archivo del modelo - - + + Model file to be downloaded Archivo del modelo a descargar - - + + Description Descripción - - + + File description Descripción del archivo - - + + Cancel Cancelar - - + + Resume Reanudar - - + + Download Descargar - - + + Stop/restart/start the download Detener/reiniciar/iniciar la descarga - - + + Remove Eliminar - - + + Remove model from filesystem Eliminar modelo del sistema de archivos - - - - + + + + Install Instalar - - + + Install online model Instalar modelo en línea - - + + <strong><font size="1"><a href="#error">Error</a></strong></font> - - + + <strong><font size="2">WARNING: Not recommended for your hardware. Model requires more memory (%1 GB) than your system has available (%2).</strong></font> @@ -302,22 +302,22 @@ (%2).</strong></font> - - + + %1 GB %1 GB - - - - + + + + ? ? 
- - + + Describes an error that occurred when downloading Describe un error que ocurrió durante la descarga @@ -328,88 +328,122 @@ href="#error">Error</a></strong></font> - - + + Error for incompatible hardware Error por hardware incompatible - - + + Download progressBar Barra de progreso de descarga - - + + Shows the progress made in the download Muestra el progreso realizado en la descarga - - + + Download speed Velocidad de descarga - - + + Download speed in bytes/kilobytes/megabytes per second Velocidad de descarga en bytes/kilobytes/megabytes por segundo - - + + Calculating... Calculando... - - + + + + + + Whether the file hash is being calculated Si se está calculando el hash del archivo - - + + Displayed when the file hash is being calculated Se muestra cuando se está calculando el hash del archivo - - + + + ERROR: $API_KEY is empty. + + + + + enter $API_KEY ingrese $API_KEY - - + + + ERROR: $BASE_URL is empty. + + + + + + enter $BASE_URL + + + + + + ERROR: $MODEL_NAME is empty. + + + + + + enter $MODEL_NAME + + + + + File size Tamaño del archivo - - + + RAM required RAM requerida - - + + Parameters Parámetros - - + + Quant Cuantificación - - + + Type Tipo @@ -485,38 +519,44 @@ El esquema de colores de la aplicación. - - + + Dark Oscuro - - + + Light Claro - - + + LegacyDark Oscuro legado - - + + Font Size Tamaño de fuente - - + + The size of text in the application. El tamaño del texto en la aplicación. - - + + + System Locale + + + + + Device Dispositivo @@ -539,153 +579,161 @@ - - + + Small - - + + Medium - - + + Large - - + + Language and Locale - - + + The language and locale you wish to use. - - - The compute device used for text generation. "Auto" uses Vulkan or Metal. + + + The compute device used for text generation. + + + + + + + + Application default - - + + Default Model Modelo predeterminado - - + + The preferred model for new chats. Also used as the local server fallback. El modelo preferido para nuevos chats. 
También se utiliza como respaldo del servidor local. - - + + Suggestion Mode Modo de sugerencia - - + + Generate suggested follow-up questions at the end of responses. Generar preguntas de seguimiento sugeridas al final de las respuestas. - - + + When chatting with LocalDocs Al chatear con LocalDocs - - + + Whenever possible Siempre que sea posible - - + + Never Nunca - - + + Download Path Ruta de descarga - - + + Where to store local models and the LocalDocs database. Dónde almacenar los modelos locales y la base de datos de LocalDocs. - - + + Browse Explorar - - + + Choose where to save model files Elegir dónde guardar los archivos del modelo - - + + Enable Datalake Habilitar Datalake - - + + Send chats and feedback to the GPT4All Open-Source Datalake. Enviar chats y comentarios al Datalake de código abierto de GPT4All. - - + + Advanced Avanzado - - + + CPU Threads Hilos de CPU - - + + The number of CPU threads used for inference and embedding. El número de hilos de CPU utilizados para inferencia e incrustación. - - + + Save Chat Context Guardar contexto del chat - - + + Save the chat model's state to disk for faster loading. WARNING: Uses ~2GB per chat. - - + + Expose an OpenAI-Compatible server to localhost. WARNING: Results in increased resource usage. @@ -696,8 +744,8 @@ ADVERTENCIA: Usa ~2GB por chat. - - + + Enable Local Server Habilitar servidor local @@ -708,32 +756,32 @@ en un mayor uso de recursos. - - + + API Server Port Puerto del servidor API - - + + The port to use for the local server. Requires restart. El puerto a utilizar para el servidor local. Requiere reinicio. - - + + Check For Updates Buscar actualizaciones - - + + Manually check for an update to GPT4All. Buscar manualmente una actualización para GPT4All. 
- - + + Updates Actualizaciones @@ -752,6 +800,19 @@ Chat del servidor + + ChatAPIWorker + + + ERROR: Network error occurred while connecting to the API server + + + + + ChatAPIWorker::handleFinished got HTTP Error %1 %2 + + + ChatDrawer @@ -960,9 +1021,9 @@ - + - + LocalDocs DocumentosLocales @@ -979,20 +1040,20 @@ agregar colecciones de documentos al chat - - + + Load the default model Cargar el modelo predeterminado - - + + Loads the default model which can be changed in settings Carga el modelo predeterminado que se puede cambiar en la configuración - - + + No Model Installed No hay modelo instalado @@ -1009,173 +1070,173 @@ - - + + GPT4All requires that you install at least one model to get started - - + + Install a Model Instalar un modelo - - + + Shows the add model view Muestra la vista de agregar modelo - - + + Conversation with the model Conversación con el modelo - - + + prompt / response pairs from the conversation pares de pregunta / respuesta de la conversación - - + + GPT4All GPT4All - - + + You - - + + recalculating context ... recalculando contexto ... - - + + response stopped ... respuesta detenida ... - - + + processing ... procesando ... - - + + generating response ... generando respuesta ... - - + + generating questions ... generando preguntas ... 
- - - - + + + + Copy Copiar - - + + Copy Message Copiar mensaje - - + + Disable markdown Desactivar markdown - - + + Enable markdown Activar markdown - - + + Thumbs up Me gusta - - + + Gives a thumbs up to the response Da un me gusta a la respuesta - - + + Thumbs down No me gusta - - + + Opens thumbs down dialog Abre el diálogo de no me gusta - - + + %1 Sources %1 Fuentes - - + + Suggested follow-ups Seguimientos sugeridos - - + + Erase and reset chat session Borrar y reiniciar sesión de chat - - + + Copy chat session to clipboard Copiar sesión de chat al portapapeles - - + + Redo last chat response Rehacer última respuesta del chat - - + + Stop generating Detener generación - - + + Stop the current response generation Detener la generación de la respuesta actual - - + + Reloads the model Recarga el modelo @@ -1212,9 +1273,9 @@ model to get started - + - + Reload · %1 Recargar · %1 @@ -1225,68 +1286,68 @@ model to get started Cargando · %1 - - + + Load · %1 (default) → Cargar · %1 (predeterminado) → - - + + retrieving localdocs: %1 ... recuperando documentos locales: %1 ... - - + + searching localdocs: %1 ... buscando en documentos locales: %1 ... - - + + Send a message... Enviar un mensaje... - - + + Load a model to continue... Carga un modelo para continuar... 
- - + + Send messages/prompts to the model Enviar mensajes/indicaciones al modelo - - + + Cut Cortar - - + + Paste Pegar - - + + Select All Seleccionar todo - - + + Send message Enviar mensaje - - + + Sends the message/prompt contained in textfield to the model Envía el mensaje/indicación contenido en el campo de texto al modelo @@ -1294,15 +1355,15 @@ model to get started CollectionsDrawer - - + + Warning: searching collections while indexing can return incomplete results Advertencia: buscar en colecciones mientras se indexan puede devolver resultados incompletos - - + + %n file(s) %n archivo @@ -1310,8 +1371,8 @@ model to get started - - + + %n word(s) %n palabra @@ -1319,24 +1380,62 @@ model to get started - - + + Updating Actualizando - - + + + Add Docs + Agregar documentos - - + + Select a collection to make it available to the chat model. Seleccione una colección para hacerla disponible al modelo de chat. + + Download + + + Model "%1" is installed successfully. + + + + + ERROR: $MODEL_NAME is empty. + + + + + ERROR: $API_KEY is empty. + + + + + ERROR: $BASE_URL is invalid. + + + + + ERROR: Model "%1 (%2)" is conflict. + + + + + Model "%1 (%2)" is installed successfully. + + + + + Model "%1" is removed. + + + HomeView @@ -1659,123 +1758,127 @@ model to get started + Agregar colección - - ERROR: The LocalDocs database is not valid. - ERROR: La base de datos de DocumentosLocales no es válida. + ERROR: La base de datos de DocumentosLocales no es válida. 
+ + + + + <h3>ERROR: The LocalDocs database cannot be accessed or is not valid.</h3><br><i>Note: You will need to restart after trying any of the following suggested fixes.</i><br><ul><li>Make sure that the folder set as <b>Download Path</b> exists on the file system.</li><li>Check ownership as well as read and write permissions of the <b>Download Path</b>.</li><li>If there is a <b>localdocs_v2.db</b> file, check its ownership and read/write permissions, too.</li></ul><br>If the problem persists and there are any 'localdocs_v*.db' files present, as a last resort you can<br>try backing them up and removing them. You will have to recreate your collections, however. + - - + + No Collections Installed No hay colecciones instaladas - - + + Install a collection of local documents to get started using this feature Instala una colección de documentos locales para comenzar a usar esta función - - + + + Add Doc Collection + Agregar colección de documentos - - + + Shows the add model view Muestra la vista de agregar modelo - - + + Indexing progressBar Barra de progreso de indexación - - + + Shows the progress made in the indexing Muestra el progreso realizado en la indexación - - + + ERROR ERROR - - + + INDEXING INDEXANDO - - + + EMBEDDING INCRUSTANDO - - + + REQUIRES UPDATE REQUIERE ACTUALIZACIÓN - - + + READY LISTO - - + + INSTALLING INSTALANDO - - + + Indexing in progress Indexación en progreso - - + + Embedding in progress Incrustación en progreso - - + + This collection requires an update after version change Esta colección requiere una actualización después del cambio de versión - - + + Automatically reindexes upon changes to the folder Reindexación automática al cambiar la carpeta - - + + Installation in progress Instalación en progreso - - + + % % - - + + %n file(s) %n archivo @@ -1783,8 +1886,8 @@ model to get started - - + + %n word(s) %n palabra @@ -1792,33 +1895,33 @@ model to get started - - + + Remove Eliminar - - + + Rebuild Reconstruir - - + + Reindex this 
folder from scratch. This is slow and usually not needed. Reindexar esta carpeta desde cero. Esto es lento y generalmente no es necesario. - - + + Update Actualizar - - + + Update the collection to the new version. This is a slow operation. Actualizar la colección a la nueva versión. Esta es una operación lenta. @@ -1848,47 +1951,67 @@ model to get started OpenAI</strong><br> %1 - + + %1 (%2) + + + + + <strong>OpenAI-Compatible API Model</strong><br><ul><li>API Key: %1</li><li>Base URL: %2</li><li>Model Name: %3</li></ul> + + + + <ul><li>Requires personal OpenAI API key.</li><li>WARNING: Will send your chats to OpenAI!</li><li>Your API key will be stored on disk</li><li>Will only be used to communicate with OpenAI</li><li>You can apply for an API key <a href="https://platform.openai.com/account/api-keys">here.</a></li> - + <strong>OpenAI's ChatGPT model GPT-3.5 Turbo</strong><br> %1 - + <br><br><i>* Even if you pay OpenAI for ChatGPT-4 this does not guarantee API key access. Contact OpenAI for more info. 
- + <strong>OpenAI's ChatGPT model GPT-4</strong><br> %1 %2 <strong>Modelo ChatGPT GPT-4 de OpenAI</strong><br> %1 %2 - + <ul><li>Requires personal Mistral API key.</li><li>WARNING: Will send your chats to Mistral!</li><li>Your API key will be stored on disk</li><li>Will only be used to communicate with Mistral</li><li>You can apply for an API key <a href="https://console.mistral.ai/user/api-keys">here</a>.</li> - + <strong>Mistral Tiny model</strong><br> %1 <strong>Modelo Mistral Tiny</strong><br> %1 - + <strong>Mistral Small model</strong><br> %1 <strong>Modelo Mistral Small</strong><br> %1 - + <strong>Mistral Medium model</strong><br> %1 <strong>Modelo Mistral Medium</strong><br> %1 - + + <ul><li>Requires personal API key and the API base URL.</li><li>WARNING: Will send your chats to the OpenAI-compatible API Server you specified!</li><li>Your API key will be stored on disk</li><li>Will only be used to communicate with the OpenAI-compatible API Server</li> + + + + + <strong>Connect to OpenAI-compatible API server</strong><br> %1 + + + + <strong>Created by %1.</strong><br><ul><li>Published on %2.<li>This model has %3 likes.<li>This model has %4 downloads.<li>More info can be found <a href="https://huggingface.co/%5">here.</a></ul> @@ -2007,26 +2130,26 @@ model to get started Indicación utilizada para generar automáticamente nombres de chat. - - + + Suggested FollowUp Prompt Indicación de seguimiento sugerida - - + + Prompt used to generate suggested follow-up questions. Indicación utilizada para generar preguntas de seguimiento sugeridas. - - + + Context Length Longitud del contexto - - + + Number of input and output tokens the model sees. Número de tokens de entrada y salida que el modelo ve. @@ -2041,14 +2164,14 @@ model to get started NOTA: No tiene efecto hasta que recargues el modelo. - - + + Temperature Temperatura - - + + Randomness of model output. Higher -> more variation. Aleatoriedad de la salida del modelo. Mayor -> más variación. 
@@ -2059,14 +2182,14 @@ model to get started NOTA: Una temperatura más alta da resultados más creativos pero menos predecibles. - - + + Top-P Top-P - - + + Nucleus Sampling factor. Lower -> more predicatable. Factor de muestreo de núcleo. Menor -> más predecible. @@ -2090,98 +2213,98 @@ model to get started - - + + Maximum combined prompt/response tokens before information is lost. Using more context than the model was trained on will yield poor results. NOTE: Does not take effect until you reload the model. - - + + Temperature increases the chances of choosing less likely tokens. NOTE: Higher temperature gives more creative but less predictable outputs. - - + + Only the most likely tokens up to a total probability of top_p can be chosen. NOTE: Prevents choosing highly unlikely tokens. - - + + Min-P Min-P - - + + Minimum token probability. Higher -> more predictable. Probabilidad mínima del token. Mayor -> más predecible. - - + + Sets the minimum relative probability for a token to be considered. Establece la probabilidad relativa mínima para que un token sea considerado. - - + + Top-K Top-K - - + + Size of selection pool for tokens. Tamaño del grupo de selección para tokens. - - + + Only the top K most likely tokens will be chosen from. Solo se elegirán los K tokens más probables. - - + + Max Length Longitud máxima - - + + Maximum response length, in tokens. Longitud máxima de respuesta, en tokens. - - + + Prompt Batch Size Tamaño del lote de indicaciones - - + + The batch size used for prompt processing. El tamaño del lote utilizado para el procesamiento de indicaciones. - - + + Amount of prompt tokens to process at once. NOTE: Higher values can speed up reading prompts but will use more RAM. - - + + How many model layers to load into VRAM. Decrease this if GPT4All runs out of VRAM while loading this model. Lower values increase CPU load and RAM usage, and make inference slower. NOTE: Does not take effect until you reload the model. 
@@ -2195,38 +2318,38 @@ NOTE: Does not take effect until you reload the model. RAM. - - + + Repeat Penalty Penalización por repetición - - + + Repetition penalty factor. Set to 1 to disable. Factor de penalización por repetición. Establecer a 1 para desactivar. - - + + Repeat Penalty Tokens Tokens de penalización por repetición - - + + Number of previous tokens used for penalty. Número de tokens anteriores utilizados para la penalización. - - + + GPU Layers Capas de GPU - - + + Number of model layers to load into VRAM. Número de capas del modelo a cargar en la VRAM. @@ -2245,108 +2368,108 @@ NOTE: Does not take effect until you reload the model. ModelsView - - + + No Models Installed No hay modelos instalados - - + + Install a model to get started using GPT4All Instala un modelo para empezar a usar GPT4All - - - - + + + + + Add Model + Agregar modelo - - + + Shows the add model view Muestra la vista de agregar modelo - - + + Installed Models Modelos instalados - - + + Locally installed chat models Modelos de chat instalados localmente - - + + Model file Archivo del modelo - - + + Model file to be downloaded Archivo del modelo a descargar - - + + Description Descripción - - + + File description Descripción del archivo - - + + Cancel Cancelar - - + + Resume Reanudar - - + + Stop/restart/start the download Detener/reiniciar/iniciar la descarga - - + + Remove Eliminar - - + + Remove model from filesystem Eliminar modelo del sistema de archivos - - - - + + + + Install Instalar - - + + Install online model Instalar modelo en línea @@ -2365,124 +2488,158 @@ NOTE: Does not take effect until you reload the model. disponible (%2).</strong></font> - - + + %1 GB %1 GB - - + + ? ? - - + + Describes an error that occurred when downloading Describe un error que ocurrió durante la descarga - - + + <strong><font size="1"><a href="#error">Error</a></strong></font> - - + + <strong><font size="2">WARNING: Not recommended for your hardware. 
Model requires more memory (%1 GB) than your system has available (%2).</strong></font> - - + + Error for incompatible hardware Error por hardware incompatible - - + + Download progressBar Barra de progreso de descarga - - + + Shows the progress made in the download Muestra el progreso realizado en la descarga - - + + Download speed Velocidad de descarga - - + + Download speed in bytes/kilobytes/megabytes per second Velocidad de descarga en bytes/kilobytes/megabytes por segundo - - + + Calculating... Calculando... - - + + + + + + Whether the file hash is being calculated Si se está calculando el hash del archivo - - + + Busy indicator Indicador de ocupado - - + + Displayed when the file hash is being calculated Se muestra cuando se está calculando el hash del archivo - - + + + ERROR: $API_KEY is empty. + + + + + enter $API_KEY ingrese $API_KEY - - + + + ERROR: $BASE_URL is empty. + + + + + + enter $BASE_URL + + + + + + ERROR: $MODEL_NAME is empty. + + + + + + enter $MODEL_NAME + + + + + File size Tamaño del archivo - - + + RAM required RAM requerida - - + + Parameters Parámetros - - + + Quant Cuantificación - - + + Type Tipo @@ -2678,9 +2835,8 @@ NOTE: By turning on this feature, you will be sending your data to the GPT4All O QObject - Default - Predeterminado + Predeterminado diff --git a/gpt4all-chat/translations/gpt4all_it.ts b/gpt4all-chat/translations/gpt4all_it_IT.ts similarity index 80% rename from gpt4all-chat/translations/gpt4all_it.ts rename to gpt4all-chat/translations/gpt4all_it_IT.ts index 3b09b115..16c2e802 100644 --- a/gpt4all-chat/translations/gpt4all_it.ts +++ b/gpt4all-chat/translations/gpt4all_it_IT.ts @@ -79,316 +79,350 @@ AddModelView - - + + ← Existing Models ← Modelli esistenti - - + + Explore Models Esplora modelli - - + + Discover and download models by keyword search... Scopri e scarica i modelli tramite ricerca per parole chiave... 
- - + + Text field for discovering and filtering downloadable models Campo di testo per scoprire e filtrare i modelli scaricabili - - + + Initiate model discovery and filtering Avvia rilevamento e filtraggio dei modelli - - + + Triggers discovery and filtering of models Attiva la scoperta e il filtraggio dei modelli - - + + Default Predefinito - - + + Likes Mi piace - - + + Downloads Scaricamenti - - + + Recent Recenti - - + + Asc Asc - - + + Desc Disc - - + + None Niente - - + + Searching · %1 Ricerca · %1 - - + + Sort by: %1 Ordina per: %1 - - + + Sort dir: %1 Direzione ordinamento: %1 - - + + Limit: %1 Limite: %1 - - + + Network error: could not retrieve %1 Errore di rete: impossibile recuperare %1 - - - - + + + + Busy indicator Indicatore di occupato - - + + Displayed when the models request is ongoing Visualizzato quando la richiesta dei modelli è in corso - - + + Model file File del modello - - + + Model file to be downloaded File del modello da scaricare - - + + Description Descrizione - - + + File description Descrizione del file - - + + Cancel Annulla - - + + Resume Riprendi - - + + Download Scarica - - + + Stop/restart/start the download Arresta/riavvia/avvia il download - - + + Remove Rimuovi - - + + Remove model from filesystem Rimuovi il modello dal sistema dei file - - - - + + + + Install Installa - - + + Install online model Installa il modello online - - + + <strong><font size="2">WARNING: Not recommended for your hardware. Model requires more memory (%1 GB) than your system has available (%2).</strong></font> <strong><font size="2">AVVERTENZA: non consigliato per il tuo hardware. Il modello richiede più memoria (%1 GB) di quella disponibile nel sistema (%2).</strong></font> - - + + + ERROR: $API_KEY is empty. + + + + + + ERROR: $BASE_URL is empty. + + + + + + enter $BASE_URL + + + + + + ERROR: $MODEL_NAME is empty. + + + + + + enter $MODEL_NAME + + + + + %1 GB - - - - + + + + ? 
- - + + Describes an error that occurred when downloading Descrive un errore che si è verificato durante lo scaricamento - - + + <strong><font size="1"><a href="#error">Error</a></strong></font> <strong><font size="1"><a href="#error">Errore</a></strong></font> - - + + Error for incompatible hardware Errore per hardware incompatibile - - + + Download progressBar barra di avanzamento dello scaricamento - - + + Shows the progress made in the download Mostra lo stato di avanzamento dello scaricamento - - + + Download speed Velocità di scaricamento - - + + Download speed in bytes/kilobytes/megabytes per second Velocità di scaricamento in byte/kilobyte/megabyte al secondo - - + + Calculating... Calcolo in corso... - - + + + + + + Whether the file hash is being calculated Se viene calcolato l'hash del file - - + + Displayed when the file hash is being calculated Visualizzato durante il calcolo dell'hash del file - - + + enter $API_KEY Inserire $API_KEY - - + + File size Dimensione del file - - + + RAM required RAM richiesta - - + + Parameters Parametri - - + + Quant Quant - - + + Type Tipo @@ -456,195 +490,243 @@ La combinazione di colori dell'applicazione. - - + + Dark Scuro - - + + Light Chiaro - - + + LegacyDark Scuro Legacy - - + + Font Size Dimensioni del Font - - + + The size of text in the application. La dimensione del testo nell'applicazione. - - + + + Small + + + + + + Medium + + + + + + Large + + + + + + Language and Locale + + + + + + The language and locale you wish to use. + + + + + + System Locale + + + + + Device Dispositivo - - The compute device used for text generation. "Auto" uses Vulkan or Metal. - Il dispositivo di calcolo utilizzato per la generazione del testo. "Auto" utilizza Vulkan o Metal. + Il dispositivo di calcolo utilizzato per la generazione del testo. "Auto" utilizza Vulkan o Metal. + + + + + The compute device used for text generation. 
+ + + + + + + + Application default + - - + + Default Model Modello predefinito - - + + The preferred model for new chats. Also used as the local server fallback. Il modello preferito per le nuove chat. Utilizzato anche come ripiego del server locale. - - + + Suggestion Mode Modalità suggerimento - - + + Generate suggested follow-up questions at the end of responses. Genera domande di approfondimento suggerite alla fine delle risposte. - - + + When chatting with LocalDocs Quando chatti con LocalDocs - - + + Whenever possible Quando possibile - - + + Never Mai - - + + Download Path Percorso di scarico - - + + Where to store local models and the LocalDocs database. Dove archiviare i modelli locali e il database LocalDocs. - - + + Browse Esplora - - + + Choose where to save model files Scegli dove salvare i file del modello - - + + Enable Datalake Abilita Datalake - - + + Send chats and feedback to the GPT4All Open-Source Datalake. Invia chat e commenti al Datalake open source GPT4All. - - + + Advanced Avanzate - - + + CPU Threads Thread della CPU Tread CPU - - + + The number of CPU threads used for inference and embedding. Il numero di thread della CPU utilizzati per l'inferenza e l'incorporamento. - - + + Save Chat Context Salva il contesto della chat - - + + Save the chat model's state to disk for faster loading. WARNING: Uses ~2GB per chat. Salva lo stato del modello di chat su disco per un caricamento più rapido. ATTENZIONE: utilizza circa 2 GB per chat. - - + + Enable Local Server Abilita server locale - - + + Expose an OpenAI-Compatible server to localhost. WARNING: Results in increased resource usage. Esporre un server compatibile con OpenAI a localhost. ATTENZIONE: comporta un maggiore utilizzo delle risorse. - - + + API Server Port Porta del server API - - + + The port to use for the local server. Requires restart. La porta da utilizzare per il server locale. Richiede il riavvio. 
- - + + Check For Updates Controlla gli aggiornamenti - - + + Manually check for an update to GPT4All. Verifica manualmente la presenza di un aggiornamento a GPT4All. - - + + Updates Aggiornamenti @@ -663,6 +745,19 @@ Chatta del server + + ChatAPIWorker + + + ERROR: Network error occurred while connecting to the API server + + + + + ChatAPIWorker::handleFinished got HTTP Error %1 %2 + + + ChatDrawer @@ -871,9 +966,9 @@ - + - + LocalDocs @@ -890,192 +985,192 @@ aggiungi raccolte di documenti alla chat - - + + Load the default model Carica il modello predefinito - - + + Loads the default model which can be changed in settings Carica il modello predefinito che può essere modificato nelle impostazioni - - + + No Model Installed Nessun modello installato - - + + GPT4All requires that you install at least one model to get started GPT4All richiede l'installazione di almeno un modello per iniziare - - + + Install a Model Installa un modello - - + + Shows the add model view Mostra la vista aggiungi modello - - + + Conversation with the model Conversazione con il modello - - + + prompt / response pairs from the conversation coppie prompt/risposta dalla conversazione - - + + GPT4All - - + + You Tu - - + + recalculating context ... ricalcolo contesto ... - - + + response stopped ... risposta interrotta ... - - + + processing ... elaborazione ... - - + + generating response ... generazione risposta ... - - + + generating questions ... generarzione domande ... 
- - - - + + + + Copy Copia - - + + Copy Message Copia messaggio - - + + Disable markdown Disabilita Markdown - - + + Enable markdown Abilita Markdown - - + + Thumbs up Mi piace - - + + Gives a thumbs up to the response Dà un mi piace alla risposta - - + + Thumbs down Non mi piace - - + + Opens thumbs down dialog Apre la finestra di dialogo "Non mi piace" - - + + %1 Sources %1 Fonti - - + + Suggested follow-ups Seguiti suggeriti - - + + Erase and reset chat session Cancella e ripristina la sessione di chat - - + + Copy chat session to clipboard Copia la sessione di chat negli appunti - - + + Redo last chat response Riesegui l'ultima risposta della chat - - + + Stop generating Interrompi la generazione - - + + Stop the current response generation Arresta la generazione della risposta corrente - - + + Reloads the model Ricarica il modello @@ -1087,9 +1182,9 @@ modello per iniziare - + - + Reload · %1 Ricarica · %1 @@ -1100,68 +1195,68 @@ modello per iniziare Caricamento · %1 - - + + Load · %1 (default) → Carica · %1 (predefinito) → - - + + retrieving localdocs: %1 ... recupero documenti locali: %1 ... - - + + searching localdocs: %1 ... ricerca in documenti locali: %1 ... - - + + Send a message... Manda un messaggio... - - + + Load a model to continue... Carica un modello per continuare... 
- - + + Send messages/prompts to the model Invia messaggi/prompt al modello - - + + Cut Taglia - - + + Paste Incolla - - + + Select All Seleziona tutto - - + + Send message Invia messaggio - - + + Sends the message/prompt contained in textfield to the model Invia il messaggio/prompt contenuto nel campo di testo al modello @@ -1169,14 +1264,14 @@ modello per iniziare CollectionsDrawer - - + + Warning: searching collections while indexing can return incomplete results Attenzione: la ricerca nelle raccolte durante l'indicizzazione può restituire risultati incompleti - - + + %n file(s) %n file @@ -1184,8 +1279,8 @@ modello per iniziare - - + + %n word(s) @@ -1193,24 +1288,62 @@ modello per iniziare - - + + Updating In aggiornamento - - + + + Add Docs + Aggiungi documenti - - + + Select a collection to make it available to the chat model. Seleziona una raccolta per renderla disponibile al modello in chat. + + Download + + + Model "%1" is installed successfully. + + + + + ERROR: $MODEL_NAME is empty. + + + + + ERROR: $API_KEY is empty. + + + + + ERROR: $BASE_URL is invalid. + + + + + ERROR: Model "%1 (%2)" is conflict. + + + + + Model "%1 (%2)" is installed successfully. + + + + + Model "%1" is removed. + + + HomeView @@ -1479,122 +1612,126 @@ modello per iniziare + Aggiungi raccolta - - ERROR: The LocalDocs database is not valid. - ERRORE: il database di LocalDocs non è valido. + ERRORE: il database di LocalDocs non è valido. 
- - + + + <h3>ERROR: The LocalDocs database cannot be accessed or is not valid.</h3><br><i>Note: You will need to restart after trying any of the following suggested fixes.</i><br><ul><li>Make sure that the folder set as <b>Download Path</b> exists on the file system.</li><li>Check ownership as well as read and write permissions of the <b>Download Path</b>.</li><li>If there is a <b>localdocs_v2.db</b> file, check its ownership and read/write permissions, too.</li></ul><br>If the problem persists and there are any 'localdocs_v*.db' files present, as a last resort you can<br>try backing them up and removing them. You will have to recreate your collections, however. + + + + + No Collections Installed Nessuna raccolta installata - - + + Install a collection of local documents to get started using this feature Installa una raccolta di documenti locali per iniziare a utilizzare questa funzionalità - - + + + Add Doc Collection + Aggiungi raccolta di documenti - - + + Shows the add model view Mostra la vista aggiungi modello - - + + Indexing progressBar Barra di avanzamento dell'indicizzazione - - + + Shows the progress made in the indexing Mostra lo stato di avanzamento dell'indicizzazione - - + + ERROR ERRORE - - + + INDEXING INDICIZZAZIONE - - + + EMBEDDING INCORPORAMENTO - - + + REQUIRES UPDATE RICHIEDE AGGIORNAMENTO - - + + READY PRONTO - - + + INSTALLING INSTALLAZIONE - - + + Indexing in progress Indicizzazione in corso - - + + Embedding in progress Incorporamento in corso - - + + This collection requires an update after version change Questa raccolta richiede un aggiornamento dopo il cambio di versione - - + + Automatically reindexes upon changes to the folder Reindicizza automaticamente in caso di modifiche alla cartella - - + + Installation in progress Installazione in corso - - + + % % - - + + %n file(s) %n file @@ -1602,8 +1739,8 @@ modello per iniziare - - + + %n word(s) %n parola @@ -1611,32 +1748,32 @@ modello per iniziare - - + + Remove Rimuovi - - + + 
Rebuild Ricostruisci - - + + Reindex this folder from scratch. This is slow and usually not needed. Reindicizzare questa cartella da zero. Lento e di solito non necessario. - - + + Update Aggiorna - - + + Update the collection to the new version. This is a slow operation. Aggiorna la raccolta alla nuova versione. Questa è un'operazione lenta. @@ -1644,47 +1781,67 @@ modello per iniziare ModelList - + <ul><li>Requires personal OpenAI API key.</li><li>WARNING: Will send your chats to OpenAI!</li><li>Your API key will be stored on disk</li><li>Will only be used to communicate with OpenAI</li><li>You can apply for an API key <a href="https://platform.openai.com/account/api-keys">here.</a></li> <ul><li>Richiede una chiave API OpenAI personale.</li><li>ATTENZIONE: invierà le tue chat a OpenAI!</li><li>La tua chiave API verrà archiviata su disco</li><li> Verrà utilizzato solo per comunicare con OpenAI</li><li>Puoi richiedere una chiave API <a href="https://platform.openai.com/account/api-keys">qui.</a> </li> - + <strong>OpenAI's ChatGPT model GPT-3.5 Turbo</strong><br> %1 - + <strong>OpenAI's ChatGPT model GPT-4</strong><br> %1 %2 - + <strong>Mistral Tiny model</strong><br> %1 - + <strong>Mistral Small model</strong><br> %1 - + <strong>Mistral Medium model</strong><br> %1 - + <br><br><i>* Even if you pay OpenAI for ChatGPT-4 this does not guarantee API key access. Contact OpenAI for more info. <br><br><i>* Anche se paghi OpenAI per ChatGPT-4 questo non garantisce l'accesso alla chiave API. Contatta OpenAI per maggiori informazioni. 
- + + %1 (%2) + + + + + <strong>OpenAI-Compatible API Model</strong><br><ul><li>API Key: %1</li><li>Base URL: %2</li><li>Model Name: %3</li></ul> + + + + <ul><li>Requires personal Mistral API key.</li><li>WARNING: Will send your chats to Mistral!</li><li>Your API key will be stored on disk</li><li>Will only be used to communicate with Mistral</li><li>You can apply for an API key <a href="https://console.mistral.ai/user/api-keys">here</a>.</li> <ul><li>Richiede una chiave API Mistral personale.</li><li>ATTENZIONE: invierà le tue chat a Mistral!</li><li>La tua chiave API verrà archiviata su disco</li><li> Verrà utilizzato solo per comunicare con Mistral</li><li>Puoi richiedere una chiave API <a href="https://console.mistral.ai/user/api-keys">qui</a>. </li> - + + <ul><li>Requires personal API key and the API base URL.</li><li>WARNING: Will send your chats to the OpenAI-compatible API Server you specified!</li><li>Your API key will be stored on disk</li><li>Will only be used to communicate with the OpenAI-compatible API Server</li> + + + + + <strong>Connect to OpenAI-compatible API server</strong><br> %1 + + + + <strong>Created by %1.</strong><br><ul><li>Published on %2.<li>This model has %3 likes.<li>This model has %4 downloads.<li>More info can be found <a href="https://huggingface.co/%5">here.</a></ul> <strong>Creato da %1.</strong><br><ul><li>Pubblicato il %2.<li>Questo modello ha %3 Mi piace.<li>Questo modello ha %4 download.<li>Altro informazioni possono essere trovate <a href="https://huggingface.co/%5">qui.</a></ul> @@ -1770,32 +1927,32 @@ modello per iniziare Prompt utilizzato per generare automaticamente nomi di chat. - - + + Suggested FollowUp Prompt Prompt di proseguimento suggerito - - + + Prompt used to generate suggested follow-up questions. Prompt utilizzato per generare domande di proseguimento suggerite. - - + + Context Length Lunghezza del contesto - - + + Number of input and output tokens the model sees. 
Numero di token di input e output visualizzati dal modello. - - + + Maximum combined prompt/response tokens before information is lost. Using more context than the model was trained on will yield poor results. NOTE: Does not take effect until you reload the model. @@ -1804,152 +1961,152 @@ L'utilizzo di un contesto maggiore rispetto a quello su cui è stato addest NOTA: non ha effetto finché non si ricarica il modello. - - + + Temperature Temperatura - - + + Randomness of model output. Higher -> more variation. Casualità dell'uscita del modello. Più alto -> più variazione. - - + + Temperature increases the chances of choosing less likely tokens. NOTE: Higher temperature gives more creative but less predictable outputs. La temperatura aumenta le possibilità di scegliere token meno probabili. NOTA: una temperatura più elevata offre risultati più creativi ma meno prevedibili. - - + + Top-P - - + + Nucleus Sampling factor. Lower -> more predicatable. Fattore di campionamento del nucleo. Inferiore -> più prevedibile. - - + + Only the most likely tokens up to a total probability of top_p can be chosen. NOTE: Prevents choosing highly unlikely tokens. Possono essere scelti solo i token più probabili fino ad una probabilità totale di top_p. NOTA: impedisce la scelta di token altamente improbabili. - - + + Min-P - - + + Minimum token probability. Higher -> more predictable. Probabilità minima del token. Più alto -> più prevedibile. - - + + Sets the minimum relative probability for a token to be considered. Imposta la probabilità relativa minima che un token venga considerato. - - + + Top-K - - + + Size of selection pool for tokens. Dimensione del pool di selezione per i token. - - + + Only the top K most likely tokens will be chosen from. Solo i token Top-K più probabili verranno scelti. - - + + Max Length Lunghezza massima - - + + Maximum response length, in tokens. Lunghezza massima della risposta, in token. 
- - + + Prompt Batch Size Dimensioni del lotto di prompt - - + + The batch size used for prompt processing. La dimensione del lotto usata per l'elaborazione dei prompt. - - + + Amount of prompt tokens to process at once. NOTE: Higher values can speed up reading prompts but will use more RAM. Quantità di token del prompt da elaborare contemporaneamente. NOTA: valori più alti possono velocizzare la lettura dei prompt ma utilizzeranno più RAM. - - + + Repeat Penalty Penalità di ripetizione - - + + Repetition penalty factor. Set to 1 to disable. Fattore di penalità di ripetizione. Impostare su 1 per disabilitare. - - + + Repeat Penalty Tokens Token di penalità ripetizione - - + + Number of previous tokens used for penalty. Numero di token precedenti utilizzati per la penalità. - - + + GPU Layers Livelli GPU - - + + Number of model layers to load into VRAM. Numero di livelli del modello da caricare nella VRAM. - - + + How many model layers to load into VRAM. Decrease this if GPT4All runs out of VRAM while loading this model. Lower values increase CPU load and RAM usage, and make inference slower. NOTE: Does not take effect until you reload the model. @@ -1961,230 +2118,264 @@ NOTA: non ha effetto finché non si ricarica il modello. 
ModelsView - - + + No Models Installed Nessun modello installato - - + + Install a model to get started using GPT4All Installa un modello per iniziare a utilizzare GPT4All - - - - + + + + + Add Model + Aggiungi Modello - - + + Shows the add model view Mostra la vista aggiungi modello - - + + Installed Models Modelli installati - - + + Locally installed chat models Modelli per chat installati localmente - - + + Model file File del modello - - + + Model file to be downloaded File del modello da scaricare - - + + Description Descrizione - - + + File description Descrizione del file - - + + Cancel Annulla - - + + Resume Riprendi - - + + Stop/restart/start the download Arresta/riavvia/avvia il download - - + + Remove Rimuovi - - + + Remove model from filesystem Rimuovi il modello dal sistema dei file - - - - + + + + Install Installa - - + + Install online model Installa il modello online - - + + <strong><font size="1"><a href="#error">Error</a></strong></font> <strong><font size="1"><a href="#error">Errore</a></strong></font> - - + + <strong><font size="2">WARNING: Not recommended for your hardware. Model requires more memory (%1 GB) than your system has available (%2).</strong></font> <strong><font size="2">AVVERTENZA: non consigliato per il tuo hardware. Il modello richiede più memoria (%1 GB) di quella disponibile nel sistema (%2).</strong></font> - - + + + ERROR: $API_KEY is empty. + + + + + + ERROR: $BASE_URL is empty. + + + + + + enter $BASE_URL + + + + + + ERROR: $MODEL_NAME is empty. + + + + + + enter $MODEL_NAME + + + + + %1 GB - - + + ? 
- - + + Describes an error that occurred when downloading Descrive un errore che si è verificato durante lo scaricamento - - + + Error for incompatible hardware Errore per hardware incompatibile - - + + Download progressBar barra di avanzamento dello scaricamento - - + + Shows the progress made in the download Mostra lo stato di avanzamento dello scaricamento - - + + Download speed Velocità di scaricamento - - + + Download speed in bytes/kilobytes/megabytes per second Velocità di scaricamento in byte/kilobyte/megabyte al secondo - - + + Calculating... Calcolo in corso... - - + + + + + + Whether the file hash is being calculated Se viene calcolato l'hash del file - - + + Busy indicator Indicatore di occupato - - + + Displayed when the file hash is being calculated Visualizzato durante il calcolo dell'hash del file - - + + enter $API_KEY Inserire $API_KEY - - + + File size Dimensione del file - - + + RAM required RAM richiesta - - + + Parameters Parametri - - + + Quant Quant - - + + Type Tipo diff --git a/gpt4all-chat/translations/gpt4all_pt_BR.ts b/gpt4all-chat/translations/gpt4all_pt_BR.ts index 97d81216..ffd8c52c 100644 --- a/gpt4all-chat/translations/gpt4all_pt_BR.ts +++ b/gpt4all-chat/translations/gpt4all_pt_BR.ts @@ -79,316 +79,350 @@ AddModelView - - + + ← Existing Models ← Meus Modelos - - + + Explore Models Descobrir Modelos - - + + Discover and download models by keyword search... Pesquisar modelos... 
- - + + Text field for discovering and filtering downloadable models Campo de texto para descobrir e filtrar modelos para download - - + + Initiate model discovery and filtering Pesquisar e filtrar modelos - - + + Triggers discovery and filtering of models Aciona a descoberta e filtragem de modelos - - + + Default Padrão - - + + Likes Curtidas - - + + Downloads Downloads - - + + Recent Recentes - - + + Asc Asc - - + + Desc Desc - - + + None Nenhum - - + + Searching · %1 Pesquisando · %1 - - + + Sort by: %1 Ordenar por: %1 - - + + Sort dir: %1 Ordenar diretório: %1 - - + + Limit: %1 Limite: %1 - - + + Network error: could not retrieve %1 Erro de rede: não foi possível obter %1 - - - - + + + + Busy indicator Indicador de processamento - - + + Displayed when the models request is ongoing xibido enquanto os modelos estão sendo carregados - - + + Model file Arquivo do modelo - - + + Model file to be downloaded Arquivo do modelo a ser baixado - - + + Description Descrição - - + + File description Descrição do arquivo - - + + Cancel Cancelar - - + + Resume Retomar - - + + Download Baixar - - + + Stop/restart/start the download Parar/reiniciar/iniciar o download - - + + Remove Remover - - + + Remove model from filesystem Remover modelo do sistema - - - - + + + + Install Instalar - - + + Install online model Instalar modelo online - - + + <strong><font size="2">WARNING: Not recommended for your hardware. Model requires more memory (%1 GB) than your system has available (%2).</strong></font> <strong><font size="2">ATENÇÃO: Este modelo não é recomendado para seu hardware. Ele exige mais memória (%1 GB) do que seu sistema possui (%2).</strong></font> - - + + + ERROR: $API_KEY is empty. + + + + + + ERROR: $BASE_URL is empty. + + + + + + enter $BASE_URL + + + + + + ERROR: $MODEL_NAME is empty. + + + + + + enter $MODEL_NAME + + + + + %1 GB %1 GB - - - - + + + + ? ? 
- - + + Describes an error that occurred when downloading Mostra informações sobre o erro no download - - + + <strong><font size="1"><a href="#error">Error</a></strong></font> <strong><font size="1"><a href="#error">Erro</a></strong></font> - - + + Error for incompatible hardware Aviso: Hardware não compatível - - + + Download progressBar Progresso do download - - + + Shows the progress made in the download Mostra o progresso do download - - + + Download speed Velocidade de download - - + + Download speed in bytes/kilobytes/megabytes per second Velocidade de download em bytes/kilobytes/megabytes por segundo - - + + Calculating... Calculando... - - + + + + + + Whether the file hash is being calculated Quando o hash do arquivo está sendo calculado - - + + Displayed when the file hash is being calculated Exibido durante o cálculo do hash do arquivo - - + + enter $API_KEY inserir $API_KEY - - + + File size Tamanho do arquivo - - + + RAM required RAM necessária - - + + Parameters Parâmetros - - + + Quant Quant - - + + Type Tipo @@ -462,224 +496,242 @@ Esquema de cores. - - + + Dark Modo Escuro - - + + Light Modo Claro - - + + LegacyDark Modo escuro (legado) - - + + Font Size Tamanho da Fonte - - + + The size of text in the application. Tamanho do texto. - - + + Small Pequena - - + + Medium Médio - - + + Large Grande - - + + Language and Locale Idioma e Região - - + + The language and locale you wish to use. Selecione seu idioma e região. - - + + + System Locale + + + + + Device Processador - - The compute device used for text generation. "Auto" uses Vulkan or Metal. - Processador usado para gerar texto. (Automático: Vulkan ou Metal). + Processador usado para gerar texto. (Automático: Vulkan ou Metal). - - + + + The compute device used for text generation. + + + + + + + + Application default + + + + + Default Model Modelo Padrão - - + + The preferred model for new chats. Also used as the local server fallback. 
Modelo padrão para novos chats e em caso de falha do modelo principal. - - + + Suggestion Mode Modo de sugestões - - + + Generate suggested follow-up questions at the end of responses. Sugerir perguntas após as respostas. - - + + When chatting with LocalDocs Ao conversar com o LocalDocs - - + + Whenever possible Sempre que possível - - + + Never Nunca - - + + Download Path Diretório de Download - - + + Where to store local models and the LocalDocs database. Pasta para modelos e banco de dados do LocalDocs. - - + + Browse Procurar - - + + Choose where to save model files Local para armazenar os modelos - - + + Enable Datalake Habilitar Datalake - - + + Send chats and feedback to the GPT4All Open-Source Datalake. Contribua para o Datalake de código aberto do GPT4All. - - + + Advanced Avançado - - + + CPU Threads Threads de CPU - - + + The number of CPU threads used for inference and embedding. Quantidade de núcleos (threads) do processador usados para processar e responder às suas perguntas. - - + + Save Chat Context Salvar Histórico do Chat - - + + Save the chat model's state to disk for faster loading. WARNING: Uses ~2GB per chat. Salvar histórico do chat para carregamento mais rápido. (Usa aprox. 2GB por chat). - - + + Enable Local Server Ativar Servidor Local - - + + Expose an OpenAI-Compatible server to localhost. WARNING: Results in increased resource usage. Ativar servidor local compatível com OpenAI (uso de recursos elevado). - - + + API Server Port Porta da API - - + + The port to use for the local server. Requires restart. Porta de acesso ao servidor local. (requer reinicialização). - - + + Check For Updates Procurar por Atualizações - - + + Manually check for an update to GPT4All. Verifica se há novas atualizações para o GPT4All. 
- - + + Updates Atualizações @@ -698,6 +750,19 @@ Chat com o Servidor + + ChatAPIWorker + + + ERROR: Network error occurred while connecting to the API server + + + + + ChatAPIWorker::handleFinished got HTTP Error %1 %2 + + + ChatDrawer @@ -906,9 +971,9 @@ - + - + LocalDocs LocalDocs @@ -925,192 +990,192 @@ Adicionar Coleção de Documentos - - + + Load the default model Carregar o modelo padrão - - + + Loads the default model which can be changed in settings Carrega o modelo padrão (personalizável nas configurações) - - + + No Model Installed Nenhum Modelo Instalado - - + + GPT4All requires that you install at least one model to get started O GPT4All precisa de pelo menos um modelo modelo instalado para funcionar - - + + Install a Model Instalar um Modelo - - + + Shows the add model view Mostra a visualização para adicionar modelo - - + + Conversation with the model Conversa com o modelo - - + + prompt / response pairs from the conversation Pares de pergunta/resposta da conversa - - + + GPT4All GPT4All - - + + You Você - - + + recalculating context ... recalculando contexto... - - + + response stopped ... resposta interrompida... - - + + processing ... processando... - - + + generating response ... gerando resposta... - - + + generating questions ... gerando perguntas... 
- - - - + + + + Copy Copiar - - + + Copy Message Copiar Mensagem - - + + Disable markdown Desativar markdown - - + + Enable markdown Ativar markdown - - + + Thumbs up Resposta boa - - + + Gives a thumbs up to the response Curte a resposta - - + + Thumbs down Resposta ruim - - + + Opens thumbs down dialog Abrir diálogo de joinha para baixo - - + + %1 Sources %1 Origens - - + + Suggested follow-ups Perguntas relacionadas - - + + Erase and reset chat session Apagar e redefinir sessão de chat - - + + Copy chat session to clipboard Copiar histórico da conversa - - + + Redo last chat response Refazer última resposta - - + + Stop generating Parar de gerar - - + + Stop the current response generation Parar a geração da resposta atual - - + + Reloads the model Recarrega modelo @@ -1122,9 +1187,9 @@ modelo instalado para funcionar - + - + Reload · %1 Recarregar · %1 @@ -1135,68 +1200,68 @@ modelo instalado para funcionar Carregando · %1 - - + + Load · %1 (default) → Carregar · %1 (padrão) → - - + + retrieving localdocs: %1 ... Recuperando dados em LocalDocs: %1 ... - - + + searching localdocs: %1 ... Buscando em LocalDocs: %1 ... - - + + Send a message... Enviar uma mensagem... - - + + Load a model to continue... Carregue um modelo para continuar... 
- - + + Send messages/prompts to the model Enviar mensagens/prompts para o modelo - - + + Cut Recortar - - + + Paste Colar - - + + Select All Selecionar tudo - - + + Send message Enviar mensagem - - + + Sends the message/prompt contained in textfield to the model Envia a mensagem/prompt contida no campo de texto para o modelo @@ -1204,14 +1269,14 @@ modelo instalado para funcionar CollectionsDrawer - - + + Warning: searching collections while indexing can return incomplete results Aviso: pesquisar coleções durante a indexação pode retornar resultados incompletos - - + + %n file(s) %n arquivo(s) @@ -1219,8 +1284,8 @@ modelo instalado para funcionar - - + + %n word(s) %n palavra(s) @@ -1228,24 +1293,62 @@ modelo instalado para funcionar - - + + Updating Atualizando - - + + + Add Docs + Adicionar Documentos - - + + Select a collection to make it available to the chat model. Selecione uma coleção para disponibilizá-la ao modelo de chat. + + Download + + + Model "%1" is installed successfully. + + + + + ERROR: $MODEL_NAME is empty. + + + + + ERROR: $API_KEY is empty. + + + + + ERROR: $BASE_URL is invalid. + + + + + ERROR: Model "%1 (%2)" is conflict. + + + + + Model "%1 (%2)" is installed successfully. + + + + + Model "%1" is removed. + + + HomeView @@ -1513,122 +1616,126 @@ modelo instalado para funcionar + Adicionar Coleção - - ERROR: The LocalDocs database is not valid. - ERRO: O banco de dados do LocalDocs não é válido. + ERRO: O banco de dados do LocalDocs não é válido. 
+ + + + + <h3>ERROR: The LocalDocs database cannot be accessed or is not valid.</h3><br><i>Note: You will need to restart after trying any of the following suggested fixes.</i><br><ul><li>Make sure that the folder set as <b>Download Path</b> exists on the file system.</li><li>Check ownership as well as read and write permissions of the <b>Download Path</b>.</li><li>If there is a <b>localdocs_v2.db</b> file, check its ownership and read/write permissions, too.</li></ul><br>If the problem persists and there are any 'localdocs_v*.db' files present, as a last resort you can<br>try backing them up and removing them. You will have to recreate your collections, however. + - - + + No Collections Installed Nenhuma Coleção Instalada - - + + Install a collection of local documents to get started using this feature Instale uma coleção de documentos locais para começar a usar este recurso - - + + + Add Doc Collection + Adicionar Coleção de Documentos - - + + Shows the add model view Mostra a visualização para adicionar modelo - - + + Indexing progressBar Barra de progresso de indexação - - + + Shows the progress made in the indexing Mostra o progresso da indexação - - + + ERROR ERRO - - + + INDEXING INDEXANDO - - + + EMBEDDING INCORPORANDO - - + + REQUIRES UPDATE REQUER ATUALIZAÇÃO - - + + READY PRONTO - - + + INSTALLING INSTALANDO - - + + Indexing in progress Indexação em andamento - - + + Embedding in progress Incorporação em andamento - - + + This collection requires an update after version change Esta coleção requer uma atualização após a mudança de versão - - + + Automatically reindexes upon changes to the folder Reindexa automaticamente após alterações na pasta - - + + Installation in progress Instalação em andamento - - + + % % - - + + %n file(s) %n arquivo(s) @@ -1636,8 +1743,8 @@ modelo instalado para funcionar - - + + %n word(s) %n palavra(s) @@ -1645,32 +1752,32 @@ modelo instalado para funcionar - - + + Remove Remover - - + + Rebuild Reconstruir - - + + Reindex this 
folder from scratch. This is slow and usually not needed. Reindexar esta pasta do zero. Esta operação é muito lenta e geralmente não é necessária. - - + + Update Atualizar - - + + Update the collection to the new version. This is a slow operation. Atualizar a coleção para a nova versão. Esta operação pode demorar. @@ -1678,47 +1785,67 @@ modelo instalado para funcionar ModelList - + <ul><li>Requires personal OpenAI API key.</li><li>WARNING: Will send your chats to OpenAI!</li><li>Your API key will be stored on disk</li><li>Will only be used to communicate with OpenAI</li><li>You can apply for an API key <a href="https://platform.openai.com/account/api-keys">here.</a></li> <ul><li>É necessária uma chave de API da OpenAI.</li><li>AVISO: Seus chats serão enviados para a OpenAI!</li><li>Sua chave de API será armazenada localmente</li><li>Ela será usada apenas para comunicação com a OpenAI</li><li>Você pode solicitar uma chave de API <a href="https://platform.openai.com/account/api-keys">aqui.</a></li> - + <strong>OpenAI's ChatGPT model GPT-3.5 Turbo</strong><br> %1 <strong>Modelo ChatGPT GPT-3.5 Turbo da OpenAI</strong><br> %1 - + <strong>OpenAI's ChatGPT model GPT-4</strong><br> %1 %2 <strong>Modelo ChatGPT GPT-4 da OpenAI</strong><br> %1 %2 - + <strong>Mistral Tiny model</strong><br> %1 <strong>Modelo Mistral Tiny</strong><br> %1 - + <strong>Mistral Small model</strong><br> %1 <strong>Modelo Mistral Small</strong><br> %1 - + <strong>Mistral Medium model</strong><br> %1 <strong>Modelo Mistral Medium</strong><br> %1 - + <br><br><i>* Even if you pay OpenAI for ChatGPT-4 this does not guarantee API key access. Contact OpenAI for more info. <br><br><i>* Mesmo que você pague pelo ChatGPT-4 da OpenAI, isso não garante acesso à chave de API. Contate a OpenAI para mais informações. 
- + + %1 (%2) + + + + + <strong>OpenAI-Compatible API Model</strong><br><ul><li>API Key: %1</li><li>Base URL: %2</li><li>Model Name: %3</li></ul> + + + + <ul><li>Requires personal Mistral API key.</li><li>WARNING: Will send your chats to Mistral!</li><li>Your API key will be stored on disk</li><li>Will only be used to communicate with Mistral</li><li>You can apply for an API key <a href="https://console.mistral.ai/user/api-keys">here</a>.</li> <ul><li>É necessária uma chave de API da Mistral.</li><li>AVISO: Seus chats serão enviados para a Mistral!</li><li>Sua chave de API será armazenada localmente</li><li>Ela será usada apenas para comunicação com a Mistral</li><li>Você pode solicitar uma chave de API <a href="https://console.mistral.ai/user/api-keys">aqui</a>.</li> - + + <ul><li>Requires personal API key and the API base URL.</li><li>WARNING: Will send your chats to the OpenAI-compatible API Server you specified!</li><li>Your API key will be stored on disk</li><li>Will only be used to communicate with the OpenAI-compatible API Server</li> + + + + + <strong>Connect to OpenAI-compatible API server</strong><br> %1 + + + + <strong>Created by %1.</strong><br><ul><li>Published on %2.<li>This model has %3 likes.<li>This model has %4 downloads.<li>More info can be found <a href="https://huggingface.co/%5">here.</a></ul> <strong>Criado por %1.</strong><br><ul><li>Publicado em %2.<li>Este modelo tem %3 curtidas.<li>Este modelo tem %4 downloads.<li>Mais informações podem ser encontradas <a href="https://huggingface.co/%5">aqui.</a></ul> @@ -1804,32 +1931,32 @@ modelo instalado para funcionar Prompt usado para gerar automaticamente nomes de chats. - - + + Suggested FollowUp Prompt Prompt de Sugestão de Acompanhamento - - + + Prompt used to generate suggested follow-up questions. Prompt usado para gerar sugestões de perguntas. - - + + Context Length Tamanho do Contexto - - + + Number of input and output tokens the model sees. Tamanho da Janela de Contexto. 
- - + + Maximum combined prompt/response tokens before information is lost. Using more context than the model was trained on will yield poor results. NOTE: Does not take effect until you reload the model. @@ -1838,152 +1965,152 @@ Usar mais contexto do que o modelo foi treinado pode gerar resultados ruins. Obs.: Só entrará em vigor após recarregar o modelo. - - + + Temperature Temperatura - - + + Randomness of model output. Higher -> more variation. Aleatoriedade das respostas. Quanto maior, mais variadas. - - + + Temperature increases the chances of choosing less likely tokens. NOTE: Higher temperature gives more creative but less predictable outputs. Aumenta a chance de escolher tokens menos prováveis. Obs.: Uma temperatura mais alta gera resultados mais criativos, mas menos previsíveis. - - + + Top-P Top-P - - + + Nucleus Sampling factor. Lower -> more predicatable. Amostragem por núcleo. Menor valor, respostas mais previsíveis. - - + + Only the most likely tokens up to a total probability of top_p can be chosen. NOTE: Prevents choosing highly unlikely tokens. Apenas tokens com probabilidade total até o valor de top_p serão escolhidos. Obs.: Evita tokens muito improváveis. - - + + Min-P Min-P - - + + Minimum token probability. Higher -> more predictable. Probabilidade mínima do token. Quanto maior -> mais previsível. - - + + Sets the minimum relative probability for a token to be considered. Define a probabilidade relativa mínima para um token ser considerado. - - + + Top-K Top-K - - + + Size of selection pool for tokens. Número de tokens considerados na amostragem. - - + + Only the top K most likely tokens will be chosen from. Serão escolhidos apenas os K tokens mais prováveis. - - + + Max Length Comprimento Máximo - - + + Maximum response length, in tokens. Comprimento máximo da resposta, em tokens. - - + + Prompt Batch Size Tamanho do Lote de Processamento - - + + The batch size used for prompt processing. Tokens processados por lote. 
- - + + Amount of prompt tokens to process at once. NOTE: Higher values can speed up reading prompts but will use more RAM. Quantidade de tokens de prompt para processar de uma vez. OBS.: Valores mais altos podem acelerar a leitura dos prompts, mas usarão mais RAM. - - + + Repeat Penalty Penalidade de Repetição - - + + Repetition penalty factor. Set to 1 to disable. Penalidade de Repetição (1 para desativar). - - + + Repeat Penalty Tokens Tokens para penalizar repetição - - + + Number of previous tokens used for penalty. Número de tokens anteriores usados para penalidade. - - + + GPU Layers Camadas na GPU - - + + Number of model layers to load into VRAM. Camadas Carregadas na GPU. - - + + How many model layers to load into VRAM. Decrease this if GPT4All runs out of VRAM while loading this model. Lower values increase CPU load and RAM usage, and make inference slower. NOTE: Does not take effect until you reload the model. @@ -1995,230 +2122,264 @@ Obs.: Só entrará em vigor após recarregar o modelo. 
ModelsView - - + + No Models Installed Nenhum Modelo Instalado - - + + Install a model to get started using GPT4All Instale um modelo para começar a usar o GPT4All - - - - + + + + + Add Model + Adicionar Modelo - - + + Shows the add model view Mostra a visualização para adicionar modelo - - + + Installed Models Modelos Instalados - - + + Locally installed chat models Modelos de chat instalados localmente - - + + Model file Arquivo do modelo - - + + Model file to be downloaded Arquivo do modelo a ser baixado - - + + Description Descrição - - + + File description Descrição do arquivo - - + + Cancel Cancelar - - + + Resume Retomar - - + + Stop/restart/start the download Parar/reiniciar/iniciar o download - - + + Remove Remover - - + + Remove model from filesystem Remover modelo do sistema de arquivos - - - - + + + + Install Instalar - - + + Install online model Instalar modelo online - - + + <strong><font size="1"><a href="#error">Error</a></strong></font> <strong><font size="1"><a href="#error">Erro</a></strong></font> - - + + <strong><font size="2">WARNING: Not recommended for your hardware. Model requires more memory (%1 GB) than your system has available (%2).</strong></font> <strong><font size="2">AVISO: Não recomendado para seu hardware. O modelo requer mais memória (%1 GB) do que seu sistema tem disponível (%2).</strong></font> - - + + + ERROR: $API_KEY is empty. + + + + + + ERROR: $BASE_URL is empty. + + + + + + enter $BASE_URL + + + + + + ERROR: $MODEL_NAME is empty. + + + + + + enter $MODEL_NAME + + + + + %1 GB %1 GB - - + + ? ? 
- - + + Describes an error that occurred when downloading Descreve um erro que ocorreu durante o download - - + + Error for incompatible hardware Erro para hardware incompatível - - + + Download progressBar Barra de progresso do download - - + + Shows the progress made in the download Mostra o progresso do download - - + + Download speed Velocidade de download - - + + Download speed in bytes/kilobytes/megabytes per second Velocidade de download em bytes/kilobytes/megabytes por segundo - - + + Calculating... Calculando... - - + + + + + + Whether the file hash is being calculated Se o hash do arquivo está sendo calculado - - + + Busy indicator Indicador de ocupado - - + + Displayed when the file hash is being calculated Exibido quando o hash do arquivo está sendo calculado - - + + enter $API_KEY inserir $API_KEY - - + + File size Tamanho do arquivo - - + + RAM required RAM necessária - - + + Parameters Parâmetros - - + + Quant Quant - - + + Type Tipo @@ -2384,9 +2545,8 @@ OBS.: Ao ativar este recurso, você estará enviando seus dados para o Datalake QObject - Default - Padrão + Padrão diff --git a/gpt4all-chat/translations/gpt4all_ro_RO.ts b/gpt4all-chat/translations/gpt4all_ro_RO.ts index 7c14a72a..0137cfc9 100644 --- a/gpt4all-chat/translations/gpt4all_ro_RO.ts +++ b/gpt4all-chat/translations/gpt4all_ro_RO.ts @@ -1,572 +1,474 @@ - - - AddCollectionView - - - - ← Existing Collections - ← Colecţiile curente - - - - - Add Document Collection - Adauga o Colecţie de documente - - - - - Add a folder containing plain text files, PDFs, or Markdown. Configure + + + AddCollectionView + + + + ← Existing Collections + ← Colecţiile curente + + + + + Add Document Collection + Adauga o Colecţie de documente + + + Add a folder containing plain text files, PDFs, or Markdown. Configure additional extensions in Settings. - Adaugă un folder care conţine fişiere in format text-simplu, PDF sau Markdown. - Extensii suplimentare pot fi specificate în Configurare. 
- - - - - Please choose a directory - Selectează un folder/director - - - - - Name - Denumire - - - - - Collection name... - Denumirea Colecţiei... - - - - - Name of the collection to add (Required) - Denumirea Colecţiei de adăugat (necesar) - - - - - Folder - Folder - - - - - Folder path... - Calea spre folder... - - - - - Folder path to documents (Required) - Calea spre documente (necesar) - - - - - Browse - Căutare - - - - - Create Collection - Creează o Colecţie - - - - AddModelView - - - - ← Existing Models - ← Modele existente/accesibile - - - - - Explore Models - Caută modele - - - - - Discover and download models by keyword search... - Caută şi descarcă modele după un cuvant-cheie... - - - - - Text field for discovering and filtering downloadable models - Câmp-text pentru căutarea şi filtrarea modelelor ce pot fi descărcate - - - - - Initiate model discovery and filtering - Initiază căutarea şi filtrarea modelelor - - - - - Triggers discovery and filtering of models - Activează căutarea şi filtrarea modelelor - - - - - Default - Implicit - - - - - Likes - Likes (Îmi Place) - - - - - Downloads - Download-uri - - - - - Recent - Recent/e - - - - - Asc - Asc (A->Z) - - - - - Desc - Desc (Z->A) - - - - - None - Niciunul - - - - - Searching · %1 - Căutare · %1 - - - - - Sort by: %1 - Ordonare după: %1 - - - - - Sort dir: %1 - Sensul ordonării: %1 - - - - - Limit: %1 - Límită: %1 - - - - - Network error: could not retrieve %1 - Eroare de reţea: nu se poate prelua %1 - - - - - - - Busy indicator - Indicator de activitate - - - - - Displayed when the models request is ongoing - Se afişează în timpul solicitării modelului - - - - - Model file - Fişierul modelului - - - - - Model file to be downloaded - Fişierul modelului de descărcat - - - - - Description - Descriere - - - - - File description - Descrierea fişierului - - - - - Cancel - Anulare - - - - - Resume - Continuare - - - - - Download - Download - - - - - Stop/restart/start the download - 
Opreşte/Reporneşte/Începe descărcarea - - - - - Remove - Elimină - - - - - Remove model from filesystem - Şterge modelul din sistemul de fişiere - - - - - - - Install - Instalare - - - - - Install online model - Instalează un model prin reţea - - - - - <strong><font size="2">WARNING: Not recommended for your + Adaugă un folder care conţine fişiere in format text-simplu, PDF sau Markdown. + Extensii suplimentare pot fi specificate în Configurare. + + + + + Add a folder containing plain text files, PDFs, or Markdown. Configure additional extensions in Settings. + + + + + + Please choose a directory + Selectează un folder/director + + + + + Name + Denumire + + + + + Collection name... + Denumirea Colecţiei... + + + + + Name of the collection to add (Required) + Denumirea Colecţiei de adăugat (necesar) + + + + + Folder + Folder + + + + + Folder path... + Calea spre folder... + + + + + Folder path to documents (Required) + Calea spre documente (necesar) + + + + + Browse + Căutare + + + + + Create Collection + Creează o Colecţie + + + + AddModelView + + + + ← Existing Models + ← Modele existente/accesibile + + + + + Explore Models + Caută modele + + + + + Discover and download models by keyword search... + Caută şi descarcă modele după un cuvant-cheie... 
+ + + + + Text field for discovering and filtering downloadable models + Câmp-text pentru căutarea şi filtrarea modelelor ce pot fi descărcate + + + + + Initiate model discovery and filtering + Initiază căutarea şi filtrarea modelelor + + + + + Triggers discovery and filtering of models + Activează căutarea şi filtrarea modelelor + + + + + Default + Implicit + + + + + Likes + Likes (Îmi Place) + + + + + Downloads + Download-uri + + + + + Recent + Recent/e + + + + + Asc + Asc (A->Z) + + + + + Desc + Desc (Z->A) + + + + + None + Niciunul + + + + + Searching · %1 + Căutare · %1 + + + + + Sort by: %1 + Ordonare după: %1 + + + + + Sort dir: %1 + Sensul ordonării: %1 + + + + + Limit: %1 + Límită: %1 + + + + + Network error: could not retrieve %1 + Eroare de reţea: nu se poate prelua %1 + + + + + + + Busy indicator + Indicator de activitate + + + + + Displayed when the models request is ongoing + Se afişează în timpul solicitării modelului + + + + + Model file + Fişierul modelului + + + + + Model file to be downloaded + Fişierul modelului de descărcat + + + + + Description + Descriere + + + + + File description + Descrierea fişierului + + + + + Cancel + Anulare + + + + + Resume + Continuare + + + + + Download + Download + + + + + Stop/restart/start the download + Opreşte/Reporneşte/Începe descărcarea + + + + + Remove + Elimină + + + + + Remove model from filesystem + Şterge modelul din sistemul de fişiere + + + + + + + Install + Instalare + + + + + Install online model + Instalează un model prin reţea + + + + + <strong><font size="1"><a href="#error">Error</a></strong></font> + + + + + + <strong><font size="2">WARNING: Not recommended for your hardware. Model requires more memory (%1 GB) than your system has available (%2).</strong></font> + + + + <strong><font size="2">WARNING: Not recommended for your hardware. Model requires more memory (%1 GB) than your system has available (%2).</strong></font> - <strong><font size="2">ATENŢIE: Nerecomandat - pentru acest hardware. 
Modelul necesită mai multă memorie (%1 GB) decât cea disponibilă în sistem + <strong><font size="2">ATENŢIE: Nerecomandat + pentru acest hardware. Modelul necesită mai multă memorie (%1 GB) decât cea disponibilă în sistem (%2).</strong></font> - - - - - %1 GB - %1 GB - - - - - - - ? - ? - - - - - Describes an error that occurred when downloading - Descrie eroarea apăruta in timpul descărcarii - - - - - <strong><font size="1"><a + + + + + %1 GB + %1 GB + + + + + + + ? + ? + + + + + Describes an error that occurred when downloading + Descrie eroarea apăruta in timpul descărcarii + + + <strong><font size="1"><a href="#eroare">Eroare</a></strong></font> - <strong><font size="1"><a + <strong><font size="1"><a href="#eroare">Eroare</a></strong></font> - - - - - Error for incompatible hardware - Eroare: hardware incompatibil - - - - - Download progressBar - Progresia descărcării - - - - - Shows the progress made in the download - Afişează progresia descărcarii - - - - - Download speed - Viteza de download/descărcare - - - - - Download speed in bytes/kilobytes/megabytes per second - Viteza de download/descărcare în bytes/kilobytes/megabytes pe secundă - - - - - Calculating... - Calculare... 
- - - - - - - Whether the file hash is being calculated - Dacă se calculează hash-ul fişierului - - - - - Displayed when the file hash is being calculated - Se afişează când se calculează hash-ul fişierului - - - - - enter $API_KEY - introdu cheia $API_KEY - - - - - File size - Dimensiunea fişierului - - - - - RAM required - RAM necesară - - - - - Parameters - Parametri - - - - - Quant - Quant(ificare) - - - - - Type - Tip - - - - ApplicationSettings - - - - Application - Aplicaţie/Program - - - - - Network dialog - Reţea - - - - - opt-in to share feedback/conversations - optional: partajarea (share) de comentarii/conversatii - - - - - ERROR: Update system could not find the MaintenanceTool used<br> + + + + + Error for incompatible hardware + Eroare: hardware incompatibil + + + + + Download progressBar + Progresia descărcării + + + + + Shows the progress made in the download + Afişează progresia descărcarii + + + + + Download speed + Viteza de download/descărcare + + + + + Download speed in bytes/kilobytes/megabytes per second + Viteza de download/descărcare în bytes/kilobytes/megabytes pe secundă + + + + + Calculating... + Calculare... + + + + + + + + + + + Whether the file hash is being calculated + Dacă se calculează hash-ul fişierului + + + + + Displayed when the file hash is being calculated + Se afişează când se calculează hash-ul fişierului + + + + + ERROR: $API_KEY is empty. + + + + + + enter $API_KEY + introdu cheia $API_KEY + + + + + ERROR: $BASE_URL is empty. + + + + + + enter $BASE_URL + + + + + + ERROR: $MODEL_NAME is empty. 
+ + + + + + enter $MODEL_NAME + + + + + + File size + Dimensiunea fişierului + + + + + RAM required + RAM necesară + + + + + Parameters + Parametri + + + + + Quant + Quant(ificare) + + + + + Type + Tip + + + + ApplicationSettings + + + + Application + Aplicaţie/Program + + + + + Network dialog + Reţea + + + + + opt-in to share feedback/conversations + optional: partajarea (share) de comentarii/conversatii + + + ERROR: Update system could not find the MaintenanceTool used<br> to check for updates!<br><br> Did you install this application using the online installer? If so,<br> the MaintenanceTool executable should be located one directory<br> @@ -574,826 +476,766 @@ If you can't start it manually, then I'm afraid you'll have to<br> reinstall. - EROARE: Sistemul de actualizare nu poate găsi componenta MaintenanceTool<br> - necesară căutării de versiuni noi!<br><br> - Ai instalat acest program folosind kitul online? Dacă da,<br> - atunci MaintenanceTool trebuie să fie un nivel mai sus de folderul<br> + EROARE: Sistemul de actualizare nu poate găsi componenta MaintenanceTool<br> + necesară căutării de versiuni noi!<br><br> + Ai instalat acest program folosind kitul online? Dacă da,<br> + atunci MaintenanceTool trebuie să fie un nivel mai sus de folderul<br> unde ai instalat programul.<br><br> - Dacă nu poate fi lansata manual, atunci programul trebuie reinstalat. - - - - - Error dialog - Eroare - - - - - Application Settings - Configurarea programului - - - - - General - General - - - - - Theme - Tema pentru interfaţă - - - - - The application color scheme. - Schema de culori a programului. - - - - - Dark - Întunecat - - - - - Light - Luminos - - - - - LegacyDark - Întunecat-vechi - - - - - Font Size - Dimensiunea textului - - - - - The size of text in the application. - Dimensiunea textului în program. - - - - - Device - Dispozitiv/Device - - - - - The compute device used for text generation. 
"Auto" uses Vulkan or + Dacă nu poate fi lansata manual, atunci programul trebuie reinstalat. + + + + + Error dialog + Eroare + + + + + Application Settings + Configurarea programului + + + + + General + General + + + + + Theme + Tema pentru interfaţă + + + + + The application color scheme. + Schema de culori a programului. + + + + + Dark + Întunecat + + + + + Light + Luminos + + + + + LegacyDark + Întunecat-vechi + + + + + Font Size + Dimensiunea textului + + + + + The size of text in the application. + Dimensiunea textului în program. + + + + + Device + Dispozitiv/Device + + + The compute device used for text generation. "Auto" uses Vulkan or Metal. - Dispozitivul de calcul utilizat pentru generarea de text. - "Auto" apelează la Vulkan sau la Metal. - - - - - Default Model - Modelul implicit - - - - - The preferred model for new chats. Also used as the local server fallback. - Modelul preferat pentru noile conversaţii. De asemenea, folosit ca rezervă pentru serverul local. - - - - - Suggestion Mode - Modul de sugerare - - - - - Generate suggested follow-up questions at the end of responses. - Generarea de întrebări pentru continuare, la finalul replicilor. - - - - - When chatting with LocalDocs - Când se discută cu LocalDocs - - - - - Whenever possible - Oricând este posibil - - - - - Never - Niciodată - - - - - Download Path - Calea pentru download/descărcare - - - - - Where to store local models and the LocalDocs database. - Unde să fie plasate modelele şi baza de date LocalDocs. - - - - - Browse - Căutare - - - - - Choose where to save model files - Selectează locul unde vor fi plasate fişierele modelelor - - - - - Enable Datalake - Activează DataLake - - - - - Send chats and feedback to the GPT4All Open-Source Datalake. - Trimite conversaţii şi comentarii către componenta Open-source DataLake a GPT4All. - - - - - Advanced - Avansate - - - - - CPU Threads - Thread-uri CPU - - - - - The number of CPU threads used for inference and embedding. 
- Numărul de thread-uri CPU utilizate pentru inferenţă şi embedding. - - - - - Save Chat Context - Salvarea contextului conversaţiei - - - - - Save the chat model's state to disk for faster loading. WARNING: Uses ~2GB + Dispozitivul de calcul utilizat pentru generarea de text. + "Auto" apelează la Vulkan sau la Metal. + + + + + ERROR: Update system could not find the MaintenanceTool used<br> + to check for updates!<br><br> + Did you install this application using the online installer? If so,<br> + the MaintenanceTool executable should be located one directory<br> + above where this application resides on your filesystem.<br><br> + If you can't start it manually, then I'm afraid you'll have to<br> + reinstall. + + + + + + Small + + + + + + Medium + + + + + + Large + + + + + + Language and Locale + + + + + + The language and locale you wish to use. + + + + + + System Locale + + + + + + The compute device used for text generation. + + + + + + + + Application default + + + + + + Default Model + Modelul implicit + + + + + The preferred model for new chats. Also used as the local server fallback. + Modelul preferat pentru noile conversaţii. De asemenea, folosit ca rezervă pentru serverul local. + + + + + Suggestion Mode + Modul de sugerare + + + + + Generate suggested follow-up questions at the end of responses. + Generarea de întrebări pentru continuare, la finalul replicilor. + + + + + When chatting with LocalDocs + Când se discută cu LocalDocs + + + + + Whenever possible + Oricând este posibil + + + + + Never + Niciodată + + + + + Download Path + Calea pentru download/descărcare + + + + + Where to store local models and the LocalDocs database. + Unde să fie plasate modelele şi baza de date LocalDocs. + + + + + Browse + Căutare + + + + + Choose where to save model files + Selectează locul unde vor fi plasate fişierele modelelor + + + + + Enable Datalake + Activează DataLake + + + + + Send chats and feedback to the GPT4All Open-Source Datalake. 
+ Trimite conversaţii şi comentarii către componenta Open-source DataLake a GPT4All. + + + + + Advanced + Avansate + + + + + CPU Threads + Thread-uri CPU + + + + + The number of CPU threads used for inference and embedding. + Numărul de thread-uri CPU utilizate pentru inferenţă şi embedding. + + + + + Save Chat Context + Salvarea contextului conversaţiei + + + + + Save the chat model's state to disk for faster loading. WARNING: Uses ~2GB per chat. + + + + + + Expose an OpenAI-Compatible server to localhost. WARNING: Results in increased resource usage. + + + + Save the chat model's state to disk for faster loading. WARNING: Uses ~2GB per chat. - Salvează pe disc starea modelului pentru încărcare mai rapidă. - ATENŢIE: Consumă ~2GB/conversaţie. - - - - - Enable Local Server - Activează Serverul local - - - - - Expose an OpenAI-Compatible server to localhost. WARNING: Results in increased + Salvează pe disc starea modelului pentru încărcare mai rapidă. + ATENŢIE: Consumă ~2GB/conversaţie. + + + + + Enable Local Server + Activează Serverul local + + + Expose an OpenAI-Compatible server to localhost. WARNING: Results in increased resource usage. - Activează pe localhost un Server compatibil cu Open-AI. ATENŢIE: Creşte + Activează pe localhost un Server compatibil cu Open-AI. ATENŢIE: Creşte consumul de resurse. - - - - - API Server Port - Portul Serverului API - - - - - The port to use for the local server. Requires restart. - Portul utilizat pentru Serverul local. Necesită repornirea programului. - - - - - Check For Updates - Caută update-uri - - - - - Manually check for an update to GPT4All. - Caută manual update-uri pentru GPT4All. 
- - - - - Updates - Update-uri/Actualizări - - - - Chat - - - - New Chat - Chat/Conversaţie Nouă - - - - Server Chat - Chat/Conversaţie cu Serverul - - - - ChatDrawer - - - - Drawer - Sertar - - - - - Main navigation drawer - Sertarul principal de navigare - - - - - + New Chat - + Chat/Conversaţie nouă - - - - - Create a new chat - Creează o Conversaţie nouă - - - - - Select the current chat or edit the chat when in edit mode - Selectează conversaţia curentă sau editează conversaţia cand eşti în modul editare - - - - - Edit chat name - Editează denumirea conversaţiei - - - - - Save chat name - Salveazăa denumirea conversaţiei - - - - - Delete chat - Şterge conversaţia - - - - - Confirm chat deletion - CONFIRMĂ ştergerea conversaţiei - - - - - Cancel chat deletion - ANULEAZĂ ştergerea conversaţiei - - - - - List of chats - Lista conversaţiilor - - - - - List of chats in the drawer dialog - Lista conversaţiilor în secţiunea-sertar - - - - ChatListModel - - - TODAY - ASTĂZI - - - - THIS WEEK - SĂPTĂMÂNA ACEASTA - - - - THIS MONTH - LUNA ACEASTA - - - - LAST SIX MONTHS - ULTIMELE ŞASE LUNI - - - - THIS YEAR - ANUL ACESTA - - - - LAST YEAR - ANUL TRECUT - - - - ChatView - - - - <h3>Warning</h3><p>%1</p> - <h3>Atenţie</h3><p>%1</p> - - - - - Switch model dialog - Schimbarea modelului - - - - - Warn the user if they switch models, then context will be erased - Avertizează utilizatorul că la schimbarea modelului va fi şters contextul - - - - - Conversation copied to clipboard. - Conversaţia a fost plasată în Clipboard. - - - - - Code copied to clipboard. - Codul a fost plasat în Clipboard. - - - - - Chat panel - Secţiunea de chat - - - - - Chat panel with options - Secţiunea de chat cu opţiuni - - - - - Reload the currently loaded model - Reîncarca modelul curent - - - - - Eject the currently loaded model - Ejectează modelul curent - - - - - No model installed. - Niciun model instalat. - - - - - Model loading error. - Eroare la încarcarea modelului. 
- - - - - Waiting for model... - Se aşteaptă modelul... - - - - - Switching context... - Se schimbă contextul... - - - - - Choose a model... - Selectează un model... - - - - - Not found: %1 - Absent: %1 - - - - - The top item is the current model - Primul element este modelul curent - - - - - - - LocalDocs - LocalDocs - Documente Locale - - - - - Add documents - Adaugă documente - - - - - add collections of documents to the chat - adaugă Colecţii de documente la conversaţie - - - - - Load the default model - Încarcă modelul implicit - - - - - Loads the default model which can be changed in settings - Încarcă modelul implicit care poate fi stabilit în Configurare - - - - - No Model Installed - Niciun model instalat - - - - - GPT4All requires that you install at least one + + + + + API Server Port + Portul Serverului API + + + + + The port to use for the local server. Requires restart. + Portul utilizat pentru Serverul local. Necesită repornirea programului. + + + + + Check For Updates + Caută update-uri + + + + + Manually check for an update to GPT4All. + Caută manual update-uri pentru GPT4All. 
+ + + + + Updates + Update-uri/Actualizări + + + + Chat + + + + New Chat + Chat/Conversaţie Nouă + + + + Server Chat + Chat/Conversaţie cu Serverul + + + + ChatAPIWorker + + + ERROR: Network error occurred while connecting to the API server + + + + + ChatAPIWorker::handleFinished got HTTP Error %1 %2 + + + + + ChatDrawer + + + + Drawer + Sertar + + + + + Main navigation drawer + Sertarul principal de navigare + + + + + + New Chat + + Chat/Conversaţie nouă + + + + + Create a new chat + Creează o Conversaţie nouă + + + + + Select the current chat or edit the chat when in edit mode + Selectează conversaţia curentă sau editează conversaţia cand eşti în modul editare + + + + + Edit chat name + Editează denumirea conversaţiei + + + + + Save chat name + Salveazăa denumirea conversaţiei + + + + + Delete chat + Şterge conversaţia + + + + + Confirm chat deletion + CONFIRMĂ ştergerea conversaţiei + + + + + Cancel chat deletion + ANULEAZĂ ştergerea conversaţiei + + + + + List of chats + Lista conversaţiilor + + + + + List of chats in the drawer dialog + Lista conversaţiilor în secţiunea-sertar + + + + ChatListModel + + + TODAY + ASTĂZI + + + + THIS WEEK + SĂPTĂMÂNA ACEASTA + + + + THIS MONTH + LUNA ACEASTA + + + + LAST SIX MONTHS + ULTIMELE ŞASE LUNI + + + + THIS YEAR + ANUL ACESTA + + + + LAST YEAR + ANUL TRECUT + + + + ChatView + + + + <h3>Warning</h3><p>%1</p> + <h3>Atenţie</h3><p>%1</p> + + + + + Switch model dialog + Schimbarea modelului + + + + + Warn the user if they switch models, then context will be erased + Avertizează utilizatorul că la schimbarea modelului va fi şters contextul + + + + + Conversation copied to clipboard. + Conversaţia a fost plasată în Clipboard. + + + + + Code copied to clipboard. + Codul a fost plasat în Clipboard. 
+ + + + + Chat panel + Secţiunea de chat + + + + + Chat panel with options + Secţiunea de chat cu opţiuni + + + + + Reload the currently loaded model + Reîncarca modelul curent + + + + + Eject the currently loaded model + Ejectează modelul curent + + + + + No model installed. + Niciun model instalat. + + + + + Model loading error. + Eroare la încarcarea modelului. + + + + + Waiting for model... + Se aşteaptă modelul... + + + + + Switching context... + Se schimbă contextul... + + + + + Choose a model... + Selectează un model... + + + + + Not found: %1 + Absent: %1 + + + + + The top item is the current model + Primul element este modelul curent + + + + + + + LocalDocs + LocalDocs - Documente Locale + + + + + Add documents + Adaugă documente + + + + + add collections of documents to the chat + adaugă Colecţii de documente la conversaţie + + + + + Load the default model + Încarcă modelul implicit + + + + + Loads the default model which can be changed in settings + Încarcă modelul implicit care poate fi stabilit în Configurare + + + + + No Model Installed + Niciun model instalat + + + GPT4All requires that you install at least one model to get started - GPT4All necesită cel puţin un + GPT4All necesită cel puţin un model pentru a putea rula - - - - - Install a Model - Instalează un model - - - - - Shows the add model view - Afisează secţiunea de adăugare a unui model - - - - - Conversation with the model - Conversaţie cu modelul - - - - - prompt / response pairs from the conversation - perechi prompt/replică din conversaţie - - - - - GPT4All - GPT4All - - - - - You - Tu - - - - - recalculating context ... - se reface contextul... - - - - - response stopped ... - replică întreruptă... - - - - - processing ... - procesare... - - - - - generating response ... - se generează replica... - - - - - generating questions ... - se generează întrebari... 
- - - - - - - Copy - Copiere - - - - - Copy Message - Copiez mesajul - - - - - Disable markdown - Dezactivez markdown - - - - Enable markdown - Activez markdown - - - - - Thumbs up - Îmi Place - - - - - Gives a thumbs up to the response - Da un Îmi Place acestei replici - - - - - Thumbs down - Nu Îmi Place - - - - - Opens thumbs down dialog - Deschide reacţia Nu Îmi Place - - - - - %1 Sources - %1 Surse - - - - - Suggested follow-ups - Continuări sugerate - - - - - Erase and reset chat session - Şterge şi resetează sesiunea de chat - - - - - Copy chat session to clipboard - Copiez sesiunea de chat în Clipboard - - - - - Redo last chat response - Refacerea ultimei replici - - - - - Stop generating - Opreşte generarea - - - - - Stop the current response generation - Opreşte generarea replicii curente - - - - - Reloads the model - Reîncarc modelul - - - - - <h3>Encountered an error loading + + + + + <h3>Encountered an error loading model:</h3><br><i>"%1"</i><br><br>Model loading failures can happen for a variety of reasons, but the most common causes include a bad file format, an incomplete or corrupted download, the wrong file type, not enough system RAM or an incompatible model type. 
Here are some suggestions for resolving the problem:<br><ul><li>Ensure the model file has a compatible format and type<li>Check the model file is complete in the download folder<li>You can find the download folder in the settings dialog<li>If you've sideloaded the model ensure the file is not corrupt by checking md5sum<li>Read more about what models are supported in our <a href="https://docs.gpt4all.io/">documentation</a> for the gui<li>Check out our <a href="https://discord.gg/4M2QFmTt2k">discord channel</a> for help + + + + + + GPT4All requires that you install at least one +model to get started + + + + + + Install a Model + Instalează un model + + + + + Shows the add model view + Afisează secţiunea de adăugare a unui model + + + + + Conversation with the model + Conversaţie cu modelul + + + + + prompt / response pairs from the conversation + perechi prompt/replică din conversaţie + + + + + GPT4All + GPT4All + + + + + You + Tu + + + + + recalculating context ... + se reface contextul... + + + + + response stopped ... + replică întreruptă... + + + + + processing ... + procesare... + + + + + generating response ... + se generează replica... + + + + + generating questions ... + se generează întrebari... 
+ + + + + + + Copy + Copiere + + + + + Copy Message + Copiez mesajul + + + + + Disable markdown + Dezactivez markdown + + + + + Enable markdown + Activez markdown + + + + + Thumbs up + Îmi Place + + + + + Gives a thumbs up to the response + Da un Îmi Place acestei replici + + + + + Thumbs down + Nu Îmi Place + + + + + Opens thumbs down dialog + Deschide reacţia Nu Îmi Place + + + + + %1 Sources + %1 Surse + + + + + Suggested follow-ups + Continuări sugerate + + + + + Erase and reset chat session + Şterge şi resetează sesiunea de chat + + + + + Copy chat session to clipboard + Copiez sesiunea de chat în Clipboard + + + + + Redo last chat response + Refacerea ultimei replici + + + + + Stop generating + Opreşte generarea + + + + + Stop the current response generation + Opreşte generarea replicii curente + + + + + Reloads the model + Reîncarc modelul + + + <h3>Encountered an error loading model:</h3><br><i>"%1"</i><br><br>Model loading failures can happen for a variety of reasons, but the most common causes include a bad file format, an incomplete or corrupted download, the wrong file type, @@ -1406,1570 +1248,1436 @@ href="https://docs.gpt4all.io/">documentation</a> for the gui<li>Check out our <a href="https://discord.gg/4M2QFmTt2k">discord channel</a> for help - <h3>EROARE la încărcarea + <h3>EROARE la încărcarea modelului:</h3><br><i>"%1"</i><br><br>Astfel - de erori pot apărea din mai multe cauze, dintre care cele mai comune - includ un format inadecvat al fişierului, un download incomplet sau întrerupt, - un tip inadecvat de fişier, RAM insuficientă, sau un tip incompatibil de model. 
- Sugestii pentru rezolvarea problemei: verifică dacă fişierul modelului are - un format si un tip compatibile; verifică dacă fişierul modelului este complet - în folderul dedicat - acest folder este afişat în secţiunea Configurare; - dacă ai descărcat modelul dinafara programului, asigură-te că fişierul nu e corupt - după ce îi verifici amprenta MD5 (md5sum)<li>Află mai multe despre care modele sunt compatibile - în pagina unde am plasat <a - href="https://docs.gpt4all.io/">documentaţia</a> pentru - interfaţa gráfică<li>poti parcurge <a + de erori pot apărea din mai multe cauze, dintre care cele mai comune + includ un format inadecvat al fişierului, un download incomplet sau întrerupt, + un tip inadecvat de fişier, RAM insuficientă, sau un tip incompatibil de model. + Sugestii pentru rezolvarea problemei: verifică dacă fişierul modelului are + un format şi un tip compatibile; verifică dacă fişierul modelului este complet + în folderul dedicat - acest folder este afişat în secţiunea Configurare; + dacă ai descărcat modelul dinafara programului, asigură-te că fişierul nu e corupt + după ce îi verifici amprenta MD5 (md5sum)<li>Află mai multe despre care modele sunt compatibile + în pagina unde am plasat <a + href="https://docs.gpt4all.io/">documentaţia</a> pentru + interfaţa grafică<li>poţi parcurge <a href="https://discord.gg/4M2QFmTt2k">canalul nostru Discord</a> unde - se oferă ajutor - - - - - - - Reload · %1 - Reincarcare · %1 - - - - - Loading · %1 - Incarcare · %1 - - - - - Load · %1 (default) → - Incarca · %1 (implicit) → - - - - - retrieving localdocs: %1 ... - se preiau LocalDocs/Documentele Locale: %1 ... - - - - - searching localdocs: %1 ... - se cauta in LocalDocs: %1 ... - - - - - Send a message... - Trimite un mesaj... - - - - - Load a model to continue... - Incarca un model pentru a continua...
- - - - - Send messages/prompts to the model - Trimite mesaje/prompt-uri către model - - - - - Cut - Decupare (Cut) - - - - - Paste - Alipire (Paste) - - - - - Select All - Selectez tot - - - - - Send message - Trimit mesajul - - - - - Sends the message/prompt contained in textfield to the model - Trimite modelului mesajul/prompt-ul din campul-text - - - - CollectionsDrawer - - - - Warning: searching collections while indexing can return incomplete results - Atenţie: căutarea în Colecţii in timp ce sunt Indexate poate cauza rezultate incomplete - - - - - %n file(s) - - %n fişier - %n fişiere - - - - - - %n word(s) - - %n cuvânt - %n cuvinte - - - - - - Updating - Actualizare - - - - - + Add Docs - + Adaug documente - - - - - Select a collection to make it available to the chat model. - Selectează o Colectie pentru ca modelul să o poată accesa. - - - - HomeView - - - - Welcome to GPT4All - Bun venit in GPT4All - - - - - The privacy-first LLM chat application - Programul care prioritizeaza intimitatea (privacy) - - - - - Start chatting - Începe o conversaţie - - - - - Start Chatting - Începe o conversaţie - - - - - Chat with any LLM - Dialoghează cu un LLM - - - - - LocalDocs - LocalDocs/Documente Locale - - - - - Chat with your local files - Dialoghează cu fişierele tale locale - - - - - Find Models - Caută modele - - - - - Explore and download models - Explorează şi descarcă modele - - - - - Latest news - Ultimele ştiri - - - - - Latest news from GPT4All - Ultimele ştiri de la GPT4All - - - - - Release Notes - Despre această versiune - - - - - Documentation - Documentaţie - - - - - Discord - Discord - - - - - X (Twitter) - X (Twitter) - - - - - Github - Github - - - - - GPT4All.io - GPT4All.io - - - - - Subscribe to Newsletter - Abonare la Newsletter - - - - LocalDocsSettings - - - - LocalDocs - LocalDocs/Documente Locale - - - - - LocalDocs Settings - Configurarea LocalDocs - - - - - Indexing - Indexare - - - - - Allowed File Extensions - Extensii compatibile de 
fişier - - - - - Comma-separated list. LocalDocs will only attempt to process files with these + se oferă ajutor + + + + + + + Reload · %1 + Reincarcare · %1 + + + + + Loading · %1 + Incarcare · %1 + + + + + Load · %1 (default) → + Incarca · %1 (implicit) → + + + + + retrieving localdocs: %1 ... + se preiau LocalDocs/Documentele Locale: %1 ... + + + + + searching localdocs: %1 ... + se cauta in LocalDocs: %1 ... + + + + + Send a message... + Trimite un mesaj... + + + + + Load a model to continue... + Incarca un model pentru a continua... + + + + + Send messages/prompts to the model + Trimite mesaje/prompt-uri către model + + + + + Cut + Decupare (Cut) + + + + + Paste + Alipire (Paste) + + + + + Select All + Selectez tot + + + + + Send message + Trimit mesajul + + + + + Sends the message/prompt contained in textfield to the model + Trimite modelului mesajul/prompt-ul din campul-text + + + + CollectionsDrawer + + + + Warning: searching collections while indexing can return incomplete results + Atenţie: căutarea în Colecţii in timp ce sunt Indexate poate cauza rezultate incomplete + + + + + %n file(s) + + %n fişier + %n fişiere + + + + + + + %n word(s) + + %n cuvânt + %n cuvinte + + + + + + + Updating + Actualizare + + + + + + Add Docs + + Adaug documente + + + + + Select a collection to make it available to the chat model. + Selectează o Colectie pentru ca modelul să o poată accesa. + + + + Download + + + Model "%1" is installed successfully. + + + + + ERROR: $MODEL_NAME is empty. + + + + + ERROR: $API_KEY is empty. + + + + + ERROR: $BASE_URL is invalid. + + + + + ERROR: Model "%1 (%2)" is conflict. + + + + + Model "%1 (%2)" is installed successfully. + + + + + Model "%1" is removed. 
+ + + + + HomeView + + + + Welcome to GPT4All + Bun venit in GPT4All + + + + + The privacy-first LLM chat application + Programul care prioritizeaza intimitatea (privacy) + + + + + Start chatting + Începe o conversaţie + + + + + Start Chatting + Începe o conversaţie + + + + + Chat with any LLM + Dialoghează cu un LLM + + + + + LocalDocs + LocalDocs/Documente Locale + + + + + Chat with your local files + Dialoghează cu fişierele tale locale + + + + + Find Models + Caută modele + + + + + Explore and download models + Explorează şi descarcă modele + + + + + Latest news + Ultimele ştiri + + + + + Latest news from GPT4All + Ultimele ştiri de la GPT4All + + + + + Release Notes + Despre această versiune + + + + + Documentation + Documentaţie + + + + + Discord + Discord + + + + + X (Twitter) + X (Twitter) + + + + + Github + Github + + + + + GPT4All.io + GPT4All.io + + + + + Subscribe to Newsletter + Abonare la Newsletter + + + + LocalDocsSettings + + + + LocalDocs + LocalDocs/Documente Locale + + + + + LocalDocs Settings + Configurarea LocalDocs + + + + + Indexing + Indexare + + + + + Allowed File Extensions + Extensii compatibile de fişier + + + Comma-separated list. LocalDocs will only attempt to process files with these extensions. - Elemente (extensii) separate prin virgulă. LocalDocs va încerca procesarea - numai a fişierelor cu aceste extensii. - - - - - Embedding - Embedding - - - - - Use Nomic Embed API - Folosesc Nomic Embed API - - - - - Embed documents using the fast Nomic API instead of a private local model. + Elemente (extensii) separate prin virgulă. LocalDocs va încerca procesarea + numai a fişierelor cu aceste extensii. + + + + + Embedding + Embedding + + + + + Use Nomic Embed API + Folosesc Nomic Embed API + + + Embed documents using the fast Nomic API instead of a private local model. Requires restart. - Embedding pe documente folosind API de la Nomic în locul unui model local. - Necesită repornire. 
- - - - - Nomic API Key - Cheia API Nomic - - - - - API key to use for Nomic Embed. Get one from the Atlas <a + Embedding pe documente folosind API de la Nomic în locul unui model local. + Necesită repornire. + + + + + Nomic API Key + Cheia API Nomic + + + API key to use for Nomic Embed. Get one from the Atlas <a href="https://atlas.nomic.ai/cli-login">API keys page</a>. Requires restart. - Cheia API de utilizat cu Nomic Embed. Obţine o cheie prin Atlas: <a + Cheia API de utilizat cu Nomic Embed. Obţine o cheie prin Atlas: <a href="https://atlas.nomic.ai/cli-login">pagina cheilor API</a> - Necesită repornire. - - - - - Embeddings Device - Dispozitivul pentru Embeddings - - - - - The compute device used for embeddings. "Auto" uses the CPU. Requires + Necesită repornire. + + + + + Embeddings Device + Dispozitivul pentru Embeddings + + + The compute device used for embeddings. "Auto" uses the CPU. Requires restart. - Dispozitivul pentru Embeddings. - "Auto" apelează la CPU. Necesită repornire - - - - - Display - Visualizare - - - - - Show Sources - Afişarea Surselor - - - - - Display the sources used for each response. - Afişează Sursele utilizate pentru fiecare replică. - - - - - Advanced - Avansat - - - - - Warning: Advanced usage only. - Atenţie: Numai pentru utilizare avansată. - - - - - + Dispozitivul pentru Embeddings. + "Auto" apelează la CPU. Necesită repornire + + + + + Comma-separated list. LocalDocs will only attempt to process files with these extensions. + + + + + + Embed documents using the fast Nomic API instead of a private local model. Requires restart. + + + + + + API key to use for Nomic Embed. Get one from the Atlas <a href="https://atlas.nomic.ai/cli-login">API keys page</a>. Requires restart. + + + + + + The compute device used for embeddings. "Auto" uses the CPU. Requires restart. + + + + + + Display + Visualizare + + + + + Show Sources + Afişarea Surselor + + + + + Display the sources used for each response. 
+ Afişează Sursele utilizate pentru fiecare replică. + + + + + Advanced + Avansat + + + + + Warning: Advanced usage only. + Atenţie: Numai pentru utilizare avansată. + + + + + Values too large may cause localdocs failure, extremely slow responses or failure to respond at all. Roughly speaking, the {N chars x N snippets} are added to the model's context window. More info <a href="https://docs.gpt4all.io/gpt4all_desktop/localdocs.html">here</a>. + + + + + + Number of characters per document snippet. Larger numbers increase likelihood of factual responses, but also result in slower generation. + + + + + + Max best N matches of retrieved document snippets to add to the context for prompt. Larger numbers increase likelihood of factual responses, but also result in slower generation. + + + + Values too large may cause localdocs failure, extremely slow responses or failure to respond at all. Roughly speaking, the {N chars x N snippets} are added to the model's context window. More info <a href="https://docs.gpt4all.io/gpt4all_desktop/localdocs.html">here</a>. - + Valori prea mari pot cauza erori cu LocalDocs, replici lente sau - absenţa lor completă. În mare, numărul {N caractere x N citate} este adăugat - la Context Window/Size/Length a modelului. Mai multe informaţii: <a + absenţa lor completă. În mare, numărul {N caractere x N citate} este adăugat + la Context Window/Size/Length a modelului. Mai multe informaţii: <a href="https://docs.gpt4all.io/gpt4all_desktop/localdocs.html">aici</a>. + + + + + Document snippet size (characters) + Lungimea (în caractere) a citatelor din documente + + + Number of characters per document snippet. Larger numbers increase likelihood of factual responses, but also result in slower generation. - Numarul caracterelor din fiecare citat.
Numere mari amplifică probabilitatea - unor replici corecte, dar de asemenea pot cauza generare lentă. - - - - - Max document snippets per prompt - Numărul maxim de citate per prompt - - - - - Max best N matches of retrieved document snippets to add to the context for + Numarul caracterelor din fiecare citat. Numere mari amplifică probabilitatea + unor replici corecte, dar de asemenea pot cauza generare lentă. + + + + + Max document snippets per prompt + Numărul maxim de citate per prompt + + + Max best N matches of retrieved document snippets to add to the context for prompt. Larger numbers increase likelihood of factual responses, but also result in slower generation. - Numărul maxim al citatelor ce corespund şi care vor fi adăugate la contextul - pentru prompt. Numere mari amplifică probabilitatea - unor replici corecte, dar de asemenea pot cauza generare lentă. - - - - LocalDocsView - - - - LocalDocs - LocalDocs/Documente Locale - - - - - Chat with your local files - Dialoghează cu fişierele tale locale - - - - - + Add Collection - + Adaugă o Colecţie - - - - - ERROR: The LocalDocs database is not valid. - EROARE: Baza de date LocalDocs nu e validă. - - - - - No Collections Installed - Nu există Colecţii instalate - - - - - Install a collection of local documents to get started using this feature - Instalează o Colecţie de documente pentru a putea utiliza funcţionalitatea aceasta - - - - - + Add Doc Collection - + Adaugă o Colecţie de documente - - - - - Shows the add model view - Afişează sectiunea de adăugare a unui model - - - - - Indexing progressBar - Bara de progresie a Indexării - - - - - Shows the progress made in the indexing - Afişează progresia Indexării - - - - - ERROR - EROARE - - - - - INDEXING - ...SE INDEXEAZĂ... - - - - - EMBEDDING - ...EMBEDDINGs... - - - - - REQUIRES UPDATE - NECESITĂ UPDATE - - - - - READY - GATA - - - - - INSTALLING - ...INSTALARE... - - - - - Indexing in progress - Se Indexează... 
- - - - - Embedding in progress - ...Se calculează Embeddings... - - - - - This collection requires an update after version change - Această Colecţie necesită update după schimbarea versiunii - - - - - Automatically reindexes upon changes to the folder - Se reindexează automat după schimbări ale folderului - - - - - Installation in progress - ...Instalare în curs... - - - - - % - % - - - - - %n file(s) - - %n fişier - %n fişiere - - - - - - %n word(s) - - %n cuvânt - %n cuvinte - - - - - - Remove - Elimină - - - - - Rebuild - Reconstrucţie - - - - - Reindex this folder from scratch. This is slow and usually not needed. - Reindexează de la zero acest folder. Procesul e lent şi de obicei inutil. - - - - - Update - Update/Actualizare - - - - - Update the collection to the new version. This is a slow operation. - Actualizează Colectia la noua versiune. Această procedură e lentă. - - - - ModelList - - - + Numărul maxim al citatelor ce corespund şi care vor fi adăugate la contextul + pentru prompt. Numere mari amplifică probabilitatea + unor replici corecte, dar de asemenea pot cauza generare lentă. + + + + LocalDocsView + + + + LocalDocs + LocalDocs/Documente Locale + + + + + Chat with your local files + Dialoghează cu fişierele tale locale + + + + + + Add Collection + + Adaugă o Colecţie + + + ERROR: The LocalDocs database is not valid. + EROARE: Baza de date LocalDocs nu e validă. 
+ + + + + <h3>ERROR: The LocalDocs database cannot be accessed or is not valid.</h3><br><i>Note: You will need to restart after trying any of the following suggested fixes.</i><br><ul><li>Make sure that the folder set as <b>Download Path</b> exists on the file system.</li><li>Check ownership as well as read and write permissions of the <b>Download Path</b>.</li><li>If there is a <b>localdocs_v2.db</b> file, check its ownership and read/write permissions, too.</li></ul><br>If the problem persists and there are any 'localdocs_v*.db' files present, as a last resort you can<br>try backing them up and removing them. You will have to recreate your collections, however. + + + + + + No Collections Installed + Nu există Colecţii instalate + + + + + Install a collection of local documents to get started using this feature + Instalează o Colecţie de documente pentru a putea utiliza funcţionalitatea aceasta + + + + + + Add Doc Collection + + Adaugă o Colecţie de documente + + + + + Shows the add model view + Afişează sectiunea de adăugare a unui model + + + + + Indexing progressBar + Bara de progresie a Indexării + + + + + Shows the progress made in the indexing + Afişează progresia Indexării + + + + + ERROR + EROARE + + + + + INDEXING + ...SE INDEXEAZĂ... + + + + + EMBEDDING + ...EMBEDDINGs... + + + + + REQUIRES UPDATE + NECESITĂ UPDATE + + + + + READY + GATA + + + + + INSTALLING + ...INSTALARE... + + + + + Indexing in progress + Se Indexează... + + + + + Embedding in progress + ...Se calculează Embeddings... + + + + + This collection requires an update after version change + Această Colecţie necesită update după schimbarea versiunii + + + + + Automatically reindexes upon changes to the folder + Se reindexează automat după schimbări ale folderului + + + + + Installation in progress + ...Instalare în curs... 
+ + + + + % + % + + + + + %n file(s) + + %n fişier + %n fişiere + + + + + + + %n word(s) + + %n cuvânt + %n cuvinte + + + + + + + Remove + Elimină + + + + + Rebuild + Reconstrucţie + + + + + Reindex this folder from scratch. This is slow and usually not needed. + Reindexează de la zero acest folder. Procesul e lent şi de obicei inutil. + + + + + Update + Update/Actualizare + + + + + Update the collection to the new version. This is a slow operation. + Actualizează Colectia la noua versiune. Această procedură e lentă. + + + + ModelList + + <ul><li>Requires personal OpenAI API key.</li><li>WARNING: Will send your chats to OpenAI!</li><li>Your API key will be stored on disk</li><li>Will only be used to communicate with OpenAI</li><li>You can apply for an API key <a href="https://platform.openai.com/account/api-keys">here.</a></li> - - <ul><li>Necesită o cheie API OpenAI personală. - </li><li>ATENŢIE: Conversaţiile tale vor fi trimise la OpenAI! - </li><li>Cheia ta API va fi stocată pe disc (local) - </li><li>Va fi utilizată numai pentru comunicarea cu - OpenAI</li><li>Poţi solicita o cheie API aici: <a + + <ul><li>Necesită o cheie API OpenAI personală. + </li><li>ATENŢIE: Conversaţiile tale vor fi trimise la OpenAI! 
+ </li><li>Cheia ta API va fi stocată pe disc (local) + </li><li>Va fi utilizată numai pentru comunicarea cu + OpenAI</li><li>Poţi solicita o cheie API aici: <a href="https://platform.openai.com/account/api-keys">aici.</a></li> + + + <strong>OpenAI's ChatGPT model GPT-3.5 Turbo</strong><br> %1 + <strong>Modelul ChatGPT GPT-3.5 Turbo al OpenAI</strong><br> %1 + + + + %1 (%2) + + + + + <strong>OpenAI-Compatible API Model</strong><br><ul><li>API Key: %1</li><li>Base URL: %2</li><li>Model Name: %3</li></ul> + + + + + <ul><li>Requires personal OpenAI API key.</li><li>WARNING: Will send your chats to OpenAI!</li><li>Your API key will be stored on disk</li><li>Will only be used to communicate with OpenAI</li><li>You can apply for an API key <a href="https://platform.openai.com/account/api-keys">here.</a></li> + + + + + <strong>OpenAI's ChatGPT model GPT-3.5 Turbo</strong><br> %1 + + + + + <br><br><i>* Even if you pay OpenAI for ChatGPT-4 this does not guarantee API key access. Contact OpenAI for more info.
+ + + + + <strong>OpenAI's ChatGPT model GPT-4</strong><br> %1 %2 + <strong>Modelul ChatGPT GPT-4 al OpenAI</strong><br> %1 %2 + + + + <ul><li>Requires personal Mistral API key.</li><li>WARNING: Will send your chats to Mistral!</li><li>Your API key will be stored on disk</li><li>Will only be used to communicate with Mistral</li><li>You can apply for an API key <a href="https://console.mistral.ai/user/api-keys">here</a>.</li> + + + + + <strong>Mistral Tiny model</strong><br> %1 + <strong>Modelul Mistral Tiny</strong><br> %1 + + + + <strong>Mistral Small model</strong><br> %1 + <strong>Modelul Mistral Small</strong><br> %1 + + + + <strong>Mistral Medium model</strong><br> %1 + <strong>Modelul Mistral Medium</strong><br> %1 + + + + <ul><li>Requires personal API key and the API base URL.</li><li>WARNING: Will send your chats to the OpenAI-compatible API Server you specified!</li><li>Your API key will be stored on disk</li><li>Will only be used to communicate with the OpenAI-compatible API Server</li> + + + + + <strong>Connect to OpenAI-compatible API server</strong><br> %1 + + + + + <strong>Created by %1.</strong><br><ul><li>Published on %2.<li>This model has %3 likes.<li>This model has %4 downloads.<li>More info can be found <a href="https://huggingface.co/%5">here.</a></ul> + + + + <br><br><i>* Even if you pay OpenAI for ChatGPT-4 this does not guarantee API key access. Contact OpenAI for more info. - <br><br><i>* Chiar dacă plăteşti la OpenAI pentru ChatGPT-4, aceasta nu - garantează accesul la cheia API. Contactează OpenAI pentru mai multe informaţii. - - - - + <br><br><i>* Chiar dacă plăteşti la OpenAI pentru ChatGPT-4, aceasta nu + garantează accesul la cheia API. Contactează OpenAI pentru mai multe informaţii. 
+ + <ul><li>Requires personal Mistral API key.</li><li>WARNING: Will send your chats to Mistral!</li><li>Your API key will be stored on disk</li><li>Will only be used to communicate with Mistral</li><li>You can apply for an API key <a href="https://console.mistral.ai/user/api-keys">here</a>.</li> + + <ul><li>Necesită cheia personală Mistral API. + </li><li>ATENŢIE: Conversaţiile tale vor fi trimise la + Mistral!</li><li>Cheia ta API va fi stocată + pe disc (local)</li><li>Va fi utilizată numai pentru comunicarea cu + Mistral</li><li>Poţi solicita o cheie API aici: <a href="https://console.mistral.ai/user/api-keys">aici</a>.</li> + + + <strong>Created by %1.</strong><br><ul><li>Published on %2.<li>This model has %3 likes.<li>This model has %4 downloads.<li>More info can be found <a href="https://huggingface.co/%5">here.</a></ul> + <strong>Creat de către %1.</strong><br><ul><li>Publicat în: %2.<li>Acest model are %3 Likes (Îmi Place).<li>Acest model are %4 download-uri.<li>Mai multe informaţii + pot fi găsite la: <a href="https://huggingface.co/%5">aici.</a></ul> + + + + ModelSettings + + + + Model + Model + + + + + Model Settings + Configurarea modelului + + + + + Clone + Clonez + + + + + Remove + Elimin + + + + + Name + Denumire + + + + + Model File + Fişierul modelului + + + + + System Prompt + System Prompt + + + Prefixed at the beginning of every conversation.
Must contain the appropriate + + + + ModelSettings + + + + Model + Model + + + + + Model Settings + Configurarea modelului + + + + + Clone + Clonez + + + + + Remove + Elimin + + + + + Name + Denumire + + + + + Model File + Fisierul modelului + + + + + System Prompt + System Prompt + + + Prefixed at the beginning of every conversation. Must contain the appropriate framing tokens. - Plasat la începutul fiecărei conversaţii. Trebuie să conţină - token-uri(le) adecvate de încadrare. - - - - - Prompt Template - Prompt Template - - - - - The template that wraps every prompt. - Standardul de formulare a fiecărui prompt. - - - - - Must contain the string "%1" to be replaced with the user's + Plasat la începutul fiecărei conversaţii. Trebuie să conţină + token-uri(le) adecvate de încadrare. + + + + + Prompt Template + Prompt Template + + + + + The template that wraps every prompt. + Standardul de formulare a fiecărui prompt. + + + Must contain the string "%1" to be replaced with the user's input. - Trebuie să conţină textul "%1" care va fi înlocuit cu ceea ce scrie + Trebuie să conţină textul "%1" care va fi înlocuit cu ceea ce scrie utilizatorul. - - - - - Chat Name Prompt - Denumirea conversaţiei - - - - - Prompt used to automatically generate chat names. - Standardul de formulare a denumirii conversaţiilor. - - - - - Suggested FollowUp Prompt - Prompt-ul sugerat pentru a continua - - - - - Prompt used to generate suggested follow-up questions. - Prompt-ul folosit pentru generarea întrebărilor de continuare. - - - - - Context Length - Lungimea Contextului - - - - - Number of input and output tokens the model sees. - Numărul token-urilor de input şi de output văzute de model. - - - - - Maximum combined prompt/response tokens before information is lost. + + + + + Chat Name Prompt + Denumirea conversaţiei + + + + + Prompt used to automatically generate chat names. + Standardul de formulare a denumirii conversaţiilor. 
+ + + + + Suggested FollowUp Prompt + Prompt-ul sugerat pentru a continua + + + + + Prompt used to generate suggested follow-up questions. + Prompt-ul folosit pentru generarea întrebărilor de continuare. + + + + + Context Length + Lungimea Contextului + + + + + Number of input and output tokens the model sees. + Numărul token-urilor de input şi de output văzute de model. + + + Maximum combined prompt/response tokens before information is lost. Using more context than the model was trained on will yield poor results. NOTE: Does not take effect until you reload the model. - Numărul maxim combinat al token-urilor în prompt+replică înainte de a se pierde informaţie. - Utilizarea unui context mai mare decât cel cu care a fost instruit modelul va determina rezultate mai slabe. - NOTĂ: Nu are efect până la reincărcarea modelului. - - - - - Temperature - Temperatura - - - - - Randomness of model output. Higher -> more variation. - Libertatea/Confuzia din replica modelului. Mai mare -> mai multă libertate. - - - - - Temperature increases the chances of choosing less likely tokens. + Numărul maxim combinat al token-urilor în prompt+replică înainte de a se pierde informaţie. + Utilizarea unui context mai mare decât cel cu care a fost instruit modelul va determina rezultate mai slabe. + NOTĂ: Nu are efect până la reincărcarea modelului. + + + + + Temperature + Temperatura + + + + + Randomness of model output. Higher -> more variation. + Libertatea/Confuzia din replica modelului. Mai mare -> mai multă libertate. + + + Temperature increases the chances of choosing less likely tokens. NOTE: Higher temperature gives more creative but less predictable outputs. - Temperatura creşte probabilitatea de alegere a unor token-uri puţin probabile. - NOTĂ: O temperatură tot mai înaltă determinî replici tot mai creative şi mai puţin predictibile. - - - - - Top-P - Top-P - - - - - Nucleus Sampling factor. Lower -> more predicatable. - Factorul de Nucleus Sampling. 
Mai mic -> predictibilitate mai mare. - - - - - Only the most likely tokens up to a total probability of top_p can be chosen. + Temperatura creşte probabilitatea de alegere a unor token-uri puţin probabile. + NOTĂ: O temperatură tot mai înaltă determinî replici tot mai creative şi mai puţin predictibile. + + + + + Top-P + Top-P + + + + + Nucleus Sampling factor. Lower -> more predicatable. + Factorul de Nucleus Sampling. Mai mic -> predictibilitate mai mare. + + + Only the most likely tokens up to a total probability of top_p can be chosen. NOTE: Prevents choosing highly unlikely tokens. - Pot fi alese numai cele mai probabile token-uri a căror probabilitate totală este Top-P. - NOTĂ: Se evită selectarea token-urilor foarte improbabile. - - - - - Min-P - Min-P - - - - - Minimum token probability. Higher -> more predictable. - Probabilitatea mínimă a unui token. Mai mare -> mai predictibil. - - - - - Sets the minimum relative probability for a token to be considered. - Stabileşte probabilitatea minimă relativă a unui token de luat în considerare. - - - - - Top-K - Top-K - - - - - Size of selection pool for tokens. - Dimensiunea setului de token-uri. - - - - - Only the top K most likely tokens will be chosen from. - Se va alege numai din cele mai probabile K token-uri. - - - - - Max Length - Lungimea maximă - - - - - Maximum response length, in tokens. - Lungimea maximă - in token-uri - a replicii. - - - - - Prompt Batch Size - Prompt Batch Size - - - - - The batch size used for prompt processing. - Dimensiunea setului de token-uri citite simultan din prompt. - - - - - Amount of prompt tokens to process at once. + Pot fi alese numai cele mai probabile token-uri a căror probabilitate totală este Top-P. + NOTĂ: Se evită selectarea token-urilor foarte improbabile. + + + + + Prefixed at the beginning of every conversation. Must contain the appropriate framing tokens. + + + + + + Must contain the string "%1" to be replaced with the user's input. 
+ + + + + + Maximum combined prompt/response tokens before information is lost. +Using more context than the model was trained on will yield poor results. +NOTE: Does not take effect until you reload the model. + + + + + + Temperature increases the chances of choosing less likely tokens. +NOTE: Higher temperature gives more creative but less predictable outputs. + + + + + + Only the most likely tokens up to a total probability of top_p can be chosen. +NOTE: Prevents choosing highly unlikely tokens. + + + + + + Min-P + Min-P + + + + + Minimum token probability. Higher -> more predictable. + Probabilitatea mínimă a unui token. Mai mare -> mai predictibil. + + + + + Sets the minimum relative probability for a token to be considered. + Stabileşte probabilitatea minimă relativă a unui token de luat în considerare. + + + + + Top-K + Top-K + + + + + Size of selection pool for tokens. + Dimensiunea setului de token-uri. + + + + + Only the top K most likely tokens will be chosen from. + Se va alege numai din cele mai probabile K token-uri. + + + + + Max Length + Lungimea maximă + + + + + Maximum response length, in tokens. + Lungimea maximă - in token-uri - a replicii. + + + + + Prompt Batch Size + Prompt Batch Size + + + + + The batch size used for prompt processing. + Dimensiunea setului de token-uri citite simultan din prompt. + + + + + Amount of prompt tokens to process at once. +NOTE: Higher values can speed up reading prompts but will use more RAM. + + + + + + How many model layers to load into VRAM. Decrease this if GPT4All runs out of VRAM while loading this model. +Lower values increase CPU load and RAM usage, and make inference slower. +NOTE: Does not take effect until you reload the model. + + + + Amount of prompt tokens to process at once. NOTE: Higher values can speed up reading prompts but will use more RAM. - Numarul token-urilor procesate simultan. - NOTĂ: Valori tot mai mari pot accelera citirea prompt-urilor, dar şi utiliza mai multă RAM. 
- - - - Repeat Penalty - Penalizarea pentru repetare - - - - - Repetition penalty factor. Set to 1 to disable. - Factorul de penalizare a repetării ce se dezactivează cu valoarea 1. - - - - - Repeat Penalty Tokens - Token-uri pentru penalzare a repetării - - - - - Number of previous tokens used for penalty. - Numărul token-urilor anterioare considerate pentru penalizare. - - - - - GPU Layers - Layere în GPU - - - - - Number of model layers to load into VRAM. - Numărul layerelor modelului ce vor fi încărcate în VRAM. - - - - - How many model layers to load into VRAM. Decrease this if GPT4All runs out of + Numarul token-urilor procesate simultan. + NOTĂ: Valori tot mai mari pot accelera citirea prompt-urilor, dar şi utiliza mai multă RAM. + + + + + Repeat Penalty + Penalizarea pentru repetare + + + + + Repetition penalty factor. Set to 1 to disable. + Factorul de penalizare a repetării ce se dezactivează cu valoarea 1. + + + + + Repeat Penalty Tokens + Token-uri pentru penalzare a repetării + + + + + Number of previous tokens used for penalty. + Numărul token-urilor anterioare considerate pentru penalizare. + + + + + GPU Layers + Layere în GPU + + + + + Number of model layers to load into VRAM. + Numărul layerelor modelului ce vor fi încărcate în VRAM. + + + How many model layers to load into VRAM. Decrease this if GPT4All runs out of VRAM while loading this model. Lower values increase CPU load and RAM usage, and make inference slower. NOTE: Does not take effect until you reload the model. - Cât de multe layere ale modelului să fie încărcate în VRAM. - Valori mici trebuie folosite dacă GPT4All rămâne fără VRAM în timp ce încarcă modelul. - Valorile tot mai mici cresc utilizarea CPU şi a RAM şi încetinesc inferenţa. - NOTĂ: Nu are efect până la reîncărcarea modelului. 
- - - - ModelsView - - - - No Models Installed - Nu există modele instalate - - - - - Install a model to get started using GPT4All - Instalează un model pentru a începe să foloseşti GPT4All - - - - - - - + Add Model - + Adaugă un model - - - - - Shows the add model view - Afişează secţiunea de adăugare a unui model - - - - - Installed Models - Modele instalate - - - - - Locally installed chat models - Modele conversaţionale instalate local - - - - - Model file - Fişierul modelului - - - - - Model file to be downloaded - Fişierul modelului ce va fi descărcat - - - - - Description - Descriere - - - - - File description - Descrierea fişierului - - - - - Cancel - Anulare - - - - - Resume - Continuare - - - - - Stop/restart/start the download - Oprirea/Repornirea/Initierea descărcării - - - - - Remove - Elimină - - - - - Remove model from filesystem - Elimină modelul din sistemul de fişiere - - - - - - - Install - Instalează - - - - - Install online model - Instalează un model din online - - - - - <strong><font size="1"><a + Cât de multe layere ale modelului să fie încărcate în VRAM. + Valori mici trebuie folosite dacă GPT4All rămâne fără VRAM în timp ce încarcă modelul. + Valorile tot mai mici cresc utilizarea CPU şi a RAM şi încetinesc inferenţa. + NOTĂ: Nu are efect până la reîncărcarea modelului. 
+ + + + ModelsView + + + + No Models Installed + Nu există modele instalate + + + + + Install a model to get started using GPT4All + Instalează un model pentru a începe să foloseşti GPT4All + + + + + + + + Add Model + + Adaugă un model + + + + + Shows the add model view + Afişează secţiunea de adăugare a unui model + + + + + Installed Models + Modele instalate + + + + + Locally installed chat models + Modele conversaţionale instalate local + + + + + Model file + Fişierul modelului + + + + + Model file to be downloaded + Fişierul modelului ce va fi descărcat + + + + + Description + Descriere + + + + + File description + Descrierea fişierului + + + + + Cancel + Anulare + + + + + Resume + Continuare + + + + + Stop/restart/start the download + Oprirea/Repornirea/Initierea descărcării + + + + + Remove + Elimină + + + + + Remove model from filesystem + Elimină modelul din sistemul de fişiere + + + + + + + Install + Instalează + + + + + Install online model + Instalează un model din online + + + <strong><font size="1"><a href="#error">Error</a></strong></font> - <strong><font size="1"><a + <strong><font size="1"><a href="#eroare">Eroare</a></strong></font> - - - - - <strong><font size="2">WARNING: Not recommended for your + + + <strong><font size="2">WARNING: Not recommended for your hardware. Model requires more memory (%1 GB) than your system has available (%2).</strong></font> - <strong><font size="2">ATENŢIE: Nerecomandat pentru - acest hardware. Modelul necesită mai multă memorie (%1 GB) decât are sistemul tău + <strong><font size="2">ATENŢIE: Nerecomandat pentru + acest hardware. Modelul necesită mai multă memorie (%1 GB) decât are sistemul tău (%2).</strong></font> - - - - - %1 GB - %1 GB - - - - - ? - ? 
- - - - - Describes an error that occurred when downloading - Descrie o eroare apărută în timpul descărcării - - - - - Error for incompatible hardware - Eroare - hardware incompatibil - - - - - Download progressBar - Bara de progresie a descărcării - - - - - Shows the progress made in the download - Afişează progresia descărcării - - - - - Download speed - Viteza de download - - - - - Download speed in bytes/kilobytes/megabytes per second - Viteza de download în bytes/kilobytes/megabytes pe secundă - - - - - Calculating... - ...Se calculeaza... - - - - - - - Whether the file hash is being calculated - Dacă se va calcula hash-ul fişierului - - - - - Busy indicator - Indicator de activitate - - - - - Displayed when the file hash is being calculated - Afişat când se calculează hash-ul unui fişier - - - - - enter $API_KEY - introdu cheia $API_KEY - - - - - File size - Dimensiunea fişierului - - - - - RAM required - RAM necesară - - - - - Parameters - Parametri - - - - - Quant - Quant(ificare) - - - - - Type - Tip - - - - MyFancyLink - - - - Fancy link - Link haios - - - - - A stylized link - Un link cu stil - - - - MySettingsStack - - - - Please choose a directory - Selectează un director (folder) - - - - MySettingsTab - - - - Restore Defaults - Restaurează valorile implicite - - - - - Restores settings dialog to a default state - Restaurează secţiunea de configurare la starea sa implicită - - - - NetworkDialog - - - - Contribute data to the GPT4All Opensource Datalake. - Contribuie cu date/informaţii la componenta Open-source DataLake a GPT4All. - - - - - By enabling this feature, you will be able to participate in the democratic + + + + + %1 GB + %1 GB + + + + + ? + ? + + + + + Describes an error that occurred when downloading + Descrie o eroare apărută în timpul descărcării + + + + + <strong><font size="1"><a href="#error">Error</a></strong></font> + + + + + + <strong><font size="2">WARNING: Not recommended for your hardware. 
Model requires more memory (%1 GB) than your system has available (%2).</strong></font> + + + + + + Error for incompatible hardware + Eroare - hardware incompatibil + + + + + Download progressBar + Bara de progresie a descărcării + + + + + Shows the progress made in the download + Afişează progresia descărcării + + + + + Download speed + Viteza de download + + + + + Download speed in bytes/kilobytes/megabytes per second + Viteza de download în bytes/kilobytes/megabytes pe secundă + + + + + Calculating... + ...Se calculeaza... + + + + + + + + + + + Whether the file hash is being calculated + Dacă se va calcula hash-ul fişierului + + + + + Busy indicator + Indicator de activitate + + + + + Displayed when the file hash is being calculated + Afişat când se calculează hash-ul unui fişier + + + + + ERROR: $API_KEY is empty. + + + + + + enter $API_KEY + introdu cheia $API_KEY + + + + + ERROR: $BASE_URL is empty. + + + + + + enter $BASE_URL + + + + + + ERROR: $MODEL_NAME is empty. + + + + + + enter $MODEL_NAME + + + + + + File size + Dimensiunea fişierului + + + + + RAM required + RAM necesară + + + + + Parameters + Parametri + + + + + Quant + Quant(ificare) + + + + + Type + Tip + + + + MyFancyLink + + + + Fancy link + Link haios + + + + + A stylized link + Un link cu stil + + + + MySettingsStack + + + + Please choose a directory + Selectează un director (folder) + + + + MySettingsTab + + + + Restore Defaults + Restaurează valorile implicite + + + + + Restores settings dialog to a default state + Restaurează secţiunea de configurare la starea sa implicită + + + + NetworkDialog + + + + Contribute data to the GPT4All Opensource Datalake. + Contribuie cu date/informaţii la componenta Open-source DataLake a GPT4All. + + + By enabling this feature, you will be able to participate in the democratic process of training a large language model by contributing data for future model improvements. @@ -2985,240 +2693,194 @@ used by Nomic AI to improve future GPT4All models. 
Nomic AI will retain all attribution information attached to your data and you will be credited as a contributor to any GPT4All model release that uses your data! - Dacă activezi această funcţionalitate, vei participa la procesul democratic - de instruire a unui model LLM prin contribuţia ta cu date la îmbunătăţirea modelului. + Dacă activezi această funcţionalitate, vei participa la procesul democratic + de instruire a unui model LLM prin contribuţia ta cu date la îmbunătăţirea modelului. - Când un model în GPT4All îţi răspunde şi îi accepţi replica, atunci conversaţia va fi - trimisă la componenta Open-source DataLake a GPT4All. Mai mult - îi poţi aprecia replica, - Dacă răspunsul Nu Îti Place, poţi sugera unul alternativ. - Aceste date vor fi colectate şi agregate în componenta DataLake a GPT4All. + Când un model în GPT4All îţi răspunde şi îi accepţi replica, atunci conversaţia va fi + trimisă la componenta Open-source DataLake a GPT4All. Mai mult - îi poţi aprecia replica, + Dacă răspunsul Nu Îti Place, poţi sugera unul alternativ. + Aceste date vor fi colectate şi agregate în componenta DataLake a GPT4All. - NOTĂ: Dacă activezi această funcţionalitate, vei trimite datele tale la componenta - DataLake a GPT4All. Atunci nu te vei putea aştepta la intimitatea (privacy) conversaţiei dacă activezi - această funcţionalitate. Totuşi, te poţi aştepta la a beneficia de apreciere - opţional, dacă doreşti. - Datele din conversaţie vor fi disponibile pentru oricine vrea să le descarce şi vor fi - utilizate de către Nomic AI pentru a imbunătăţi modele viitoare în GPT4All. Nomic AI va păstra - toate informaţiile despre atribuire asociate datelor tale şi vei fi menţionat ca - participant contribuitor la orice lansare a unui model GPT4All care foloseşte datele tale! 
- - - - - Terms for opt-in - Termenii pentru participare - - - - - Describes what will happen when you opt-in - Descrie ce se întâmplă când participi - - - - - Please provide a name for attribution (optional) - Specifică o denumire pentru această apreciere (optional) - - - - - Attribution (optional) - Apreciere (opţional) - - - - - Provide attribution - Apreciază - - - - - Enable - Activează - - - - - Enable opt-in - Activează participarea - - - - - Cancel - Anulare - - - - - Cancel opt-in - Anulează participarea - - - - NewVersionDialog - - - - New version is available - O nouă versiune disponibilă! - - - - - Update - Update/Actualizare - - - - - Update to new version - Actualizează la noua versiune - - - - PopupDialog - - - - Reveals a shortlived help balloon - Afisează un mesaj scurt de asistenţă - - - - - Busy indicator - Indicator de activitate - - - - - Displayed when the popup is showing busy - Se afişează când procedura este în desfăşurare - - - - SettingsView - - - - - - Settings - Configurare - - - - - Contains various application settings - Conţine setări ale programului - - - - - Application - Program - - - - - Model - Model - - - - - LocalDocs - LocalDocs/Documente Locale - - - - StartupDialog - - - - Welcome! - Bun venit! - - - - - ### Release notes + NOTĂ: Dacă activezi această funcţionalitate, vei trimite datele tale la componenta + DataLake a GPT4All. Atunci nu te vei putea aştepta la intimitatea (privacy) conversaţiei dacă activezi + această funcţionalitate. Totuşi, te poţi aştepta la a beneficia de apreciere - opţional, dacă doreşti. + Datele din conversaţie vor fi disponibile pentru oricine vrea să le descarce şi vor fi + utilizate de către Nomic AI pentru a imbunătăţi modele viitoare în GPT4All. Nomic AI va păstra + toate informaţiile despre atribuire asociate datelor tale şi vei fi menţionat ca + participant contribuitor la orice lansare a unui model GPT4All care foloseşte datele tale! 
+ + + + + By enabling this feature, you will be able to participate in the democratic process of training a large language model by contributing data for future model improvements. + +When a GPT4All model responds to you and you have opted-in, your conversation will be sent to the GPT4All Open Source Datalake. Additionally, you can like/dislike its response. If you dislike a response, you can suggest an alternative response. This data will be collected and aggregated in the GPT4All Datalake. + +NOTE: By turning on this feature, you will be sending your data to the GPT4All Open Source Datalake. You should have no expectation of chat privacy when this feature is enabled. You should; however, have an expectation of an optional attribution if you wish. Your chat data will be openly available for anyone to download and will be used by Nomic AI to improve future GPT4All models. Nomic AI will retain all attribution information attached to your data and you will be credited as a contributor to any GPT4All model release that uses your data! + + + + + + Terms for opt-in + Termenii pentru participare + + + + + Describes what will happen when you opt-in + Descrie ce se întâmplă când participi + + + + + Please provide a name for attribution (optional) + Specifică o denumire pentru această apreciere (optional) + + + + + Attribution (optional) + Apreciere (opţional) + + + + + Provide attribution + Apreciază + + + + + Enable + Activează + + + + + Enable opt-in + Activează participarea + + + + + Cancel + Anulare + + + + + Cancel opt-in + Anulează participarea + + + + NewVersionDialog + + + + New version is available + O nouă versiune disponibilă! 
+ + + + + Update + Update/Actualizare + + + + + Update to new version + Actualizează la noua versiune + + + + PopupDialog + + + + Reveals a shortlived help balloon + Afisează un mesaj scurt de asistenţă + + + + + Busy indicator + Indicator de activitate + + + + + Displayed when the popup is showing busy + Se afişează când procedura este în desfăşurare + + + + SettingsView + + + + + + Settings + Configurare + + + + + Contains various application settings + Conţine setări ale programului + + + + + Application + Program + + + + + Model + Model + + + + + LocalDocs + LocalDocs/Documente Locale + + + + StartupDialog + + + + Welcome! + Bun venit! + + + ### Release notes %1### Contributors %2 - ### Despre versiune + ### Despre versiune %1### Contributori %2 - - - - - Release notes - Despre versiune - - - - - Release notes for this version - Despre această versiune - - - - - ### Opt-ins for anonymous usage analytics and datalake + + + + + Release notes + Despre versiune + + + + + Release notes for this version + Despre această versiune + + + ### Opt-ins for anonymous usage analytics and datalake By enabling these features, you will be able to participate in the democratic process of training a large language model by contributing data for future model improvements. @@ -3241,249 +2903,221 @@ attribution information attached to your data and you will be credited as a contributor to any GPT4All model release that uses your data! - ### Acceptul pentru analizarea utilizării anonime şi pentru DataLake - Activând aceste functionalităţi vei putea participa la procesul democratic + ### Acceptul pentru analizarea utilizării anonime şi pentru DataLake + Activând aceste functionalităţi vei putea participa la procesul democratic de instruire a unui - model conversaţional prin contribuirea cu date/informaţii pentru îmbunătăţirea unor modele. - Cand un model în GPT4All îţi răspunde şi îi accepţi răspunsul, conversaţia este - trimisă la componenta - Open-source DataLake a GPT4All. 
Mai mult - poti aprecia (Like/Dislike) răspunsul. Dacă - un răspuns Nu Îţi Place. poţi - sugera un răspuns alternativ. Aceste date vor fi colectate şi agregate în + model conversaţional prin contribuirea cu date/informaţii pentru îmbunătăţirea unor modele. + Cand un model în GPT4All îţi răspunde şi îi accepţi răspunsul, conversaţia este + trimisă la componenta + Open-source DataLake a GPT4All. Mai mult - poti aprecia (Like/Dislike) răspunsul. Dacă + un răspuns Nu Îţi Place. poţi + sugera un răspuns alternativ. Aceste date vor fi colectate şi agregate în componenta DataLake a GPT4All. - NOTĂ: Dacă activezi această funcţionalitate, vei trimite datele tale la componenta + NOTĂ: Dacă activezi această funcţionalitate, vei trimite datele tale la componenta DataLake a GPT4All. - Atunci nu te vei putea aştepta la intimitatea (privacy) conversaţiei dacă activezi această funcţionalitate. - Totuşi, te poţi aştepta la a beneficia de apreciere - - opţional, dacă doreşti. Datele din conversaţie vor fi disponibile - pentru oricine vrea să le descarce şi vor fi utilizate de către Nomic AI - pentru a îmbunătăţi modele viitoare în GPT4All. - Nomic AI va păstra - toate informaţiile despre atribuire asociate datelor tale şi vei fi menţionat ca + Atunci nu te vei putea aştepta la intimitatea (privacy) conversaţiei dacă activezi această funcţionalitate. + Totuşi, te poţi aştepta la a beneficia de apreciere - + opţional, dacă doreşti. Datele din conversaţie vor fi disponibile + pentru oricine vrea să le descarce şi vor fi utilizate de către Nomic AI + pentru a îmbunătăţi modele viitoare în GPT4All. + Nomic AI va păstra + toate informaţiile despre atribuire asociate datelor tale şi vei fi menţionat ca participant contribuitor la orice lansare a unui model GPT4All - care foloseşte datele tale! 
- - - - - Terms for opt-in - Termenii pentru participare - - - - - Describes what will happen when you opt-in - Descrie ce se întâmplă când accepţi - - - - - - - Opt-in for anonymous usage statistics - Acceptă colectarea de statistici despre utilizare anonmă - - - - - - - Yes - Da - - - - - Allow opt-in for anonymous usage statistics - Acceptă colectarea de statistici despre utilizare anonimă - - - - - - - No - Nu - - - - - Opt-out for anonymous usage statistics - Anuleaza colectarea de statistici despre utilizare anonimă - - - - - Allow opt-out for anonymous usage statistics - Permite anularea colectării de statistici despre utilizare anonimă - - - - - - - Opt-in for network - Acceptă pentru reţea - - - - - Allow opt-in for network - Permite acceptarea pentru reţea - - - - - Allow opt-in anonymous sharing of chats to the GPT4All Datalake - Permite partajarea (share) anonimă a conversaţiilor către DataLake a GPT4All - - - - - Opt-out for network - Refuz pentru reţea - - - - - Allow opt-out anonymous sharing of chats to the GPT4All Datalake - Permite anularea partajării (share) anonime a conversaţiilor către DataLake a GPT4All - - - - SwitchModelDialog - - - - <b>Warning:</b> changing the model will erase the current + care foloseşte datele tale! + + + + + ### Release notes +%1### Contributors +%2 + + + + + + ### Opt-ins for anonymous usage analytics and datalake +By enabling these features, you will be able to participate in the democratic process of training a +large language model by contributing data for future model improvements. + +When a GPT4All model responds to you and you have opted-in, your conversation will be sent to the GPT4All +Open Source Datalake. Additionally, you can like/dislike its response. If you dislike a response, you +can suggest an alternative response. This data will be collected and aggregated in the GPT4All Datalake. + +NOTE: By turning on this feature, you will be sending your data to the GPT4All Open Source Datalake. 
+You should have no expectation of chat privacy when this feature is enabled. You should; however, have +an expectation of an optional attribution if you wish. Your chat data will be openly available for anyone +to download and will be used by Nomic AI to improve future GPT4All models. Nomic AI will retain all +attribution information attached to your data and you will be credited as a contributor to any GPT4All +model release that uses your data! + + + + + + Terms for opt-in + Termenii pentru participare + + + + + Describes what will happen when you opt-in + Descrie ce se întâmplă când accepţi + + + + + + + Opt-in for anonymous usage statistics + Acceptă colectarea de statistici despre utilizare anonmă + + + + + + + Yes + Da + + + + + Allow opt-in for anonymous usage statistics + Acceptă colectarea de statistici despre utilizare anonimă + + + + + + + No + Nu + + + + + Opt-out for anonymous usage statistics + Anuleaza colectarea de statistici despre utilizare anonimă + + + + + Allow opt-out for anonymous usage statistics + Permite anularea colectării de statistici despre utilizare anonimă + + + + + + + Opt-in for network + Acceptă pentru reţea + + + + + Allow opt-in for network + Permite acceptarea pentru reţea + + + + + Allow opt-in anonymous sharing of chats to the GPT4All Datalake + Permite partajarea (share) anonimă a conversaţiilor către DataLake a GPT4All + + + + + Opt-out for network + Refuz pentru reţea + + + + + Allow opt-out anonymous sharing of chats to the GPT4All Datalake + Permite anularea partajării (share) anonime a conversaţiilor către DataLake a GPT4All + + + + SwitchModelDialog + + <b>Warning:</b> changing the model will erase the current conversation. Do you wish to continue? - <b>Atenţie:</b> schimbarea modelului va sterge conversaţia - curentă. Confirmi aceasta? 
- - - - - Continue - Continuă - - - - - Continue with model loading - Continuă cu încărcarea modelului - - - - - - - Cancel - Anulare - - - - ThumbsDownDialog - - - - Please edit the text below to provide a better response. (optional) - Te rog, editează textul de mai jos pentru a oferi o replică mai bună (opţional). - - - - - Please provide a better response... - Te rog, oferă o replică mai bună... - - - - - Submit - Trimite - - - - - Submits the user's response - Trimite răspunsul dat de utilizator - - - - - Cancel - Anulare - - - - - Closes the response dialog - Închide dialogul răspunsului - - - - main - - - - + <b>Atenţie:</b> schimbarea modelului va sterge conversaţia + curentă. Confirmi aceasta? + + + + + <b>Warning:</b> changing the model will erase the current conversation. Do you wish to continue? + + + + + + Continue + Continuă + + + + + Continue with model loading + Continuă cu încărcarea modelului + + + + + + + Cancel + Anulare + + + + ThumbsDownDialog + + + + Please edit the text below to provide a better response. (optional) + Te rog, editează textul de mai jos pentru a oferi o replică mai bună (opţional). + + + + + Please provide a better response... + Te rog, oferă o replică mai bună... + + + + + Submit + Trimite + + + + + Submits the user's response + Trimite răspunsul dat de utilizator + + + + + Cancel + Anulare + + + + + Closes the response dialog + Închide dialogul răspunsului + + + + main + + <h3>Encountered an error starting up:</h3><br><i>"Incompatible hardware detected."</i><br><br>Unfortunately, your CPU does not meet @@ -3492,199 +3126,183 @@ model. The only solution at this time is to upgrade your hardware to a more modern CPU.<br><br>See here for more information: <a href="https://en.wikipedia.org/wiki/Advanced_Vector_Extensions">https://en.wikipedia.org/wiki/Advanced_Vector_Extensions</a> - - <h3>A apărut o eroare la iniţializare:; + + <h3>A apărut o eroare la iniţializare:; </h3><br><i>"Hardware incompatibil. 
- "</i><br><br>Din păcate, procesorul (CPU) nu întruneşte - condiţiile minime pentru a rula acest program. În particular, nu suportă - instrucţiunile AVX pe care programul le necesită pentru a integra un model - conversaţional modern. În acest moment, unica solutie este să îţi aduci la zi sistemul hardware - cu un CPU mai recent.<br><br>Aici sunt mai multe informaţii: + "</i><br><br>Din păcate, procesorul (CPU) nu întruneşte + condiţiile minime pentru a rula acest program. În particular, nu suportă + instrucţiunile AVX pe care programul le necesită pentru a integra un model + conversaţional modern. În acest moment, unica solutie este să îţi aduci la zi sistemul hardware + cu un CPU mai recent.<br><br>Aici sunt mai multe informaţii: <a href="https://en.wikipedia.org/wiki/Advanced_Vector_Extensions">https://en.wikipedia.org/wiki/Advanced_Vector_Extensions</a> - - - - - GPT4All v%1 - GPT4All v%1 - - - - - <h3>Encountered an error starting + + + + + GPT4All v%1 + GPT4All v%1 + + + <h3>Encountered an error starting up:</h3><br><i>"Inability to access settings file."</i><br><br>Unfortunately, something is preventing the program from accessing the settings file. This could be caused by incorrect permissions in the local app config directory where the settings file is located. Check out our <a href="https://discord.gg/4M2QFmTt2k">discord channel</a> for help. - <h3>A apărut o eroare la iniţializare:; - </h3><br><i>"Nu poate fi accesat fişierul de configurare - a programului."</i><br><br>Din păcate, ceva împiedică - programul în a accesa acel fişier. Cauza poate fi un set de permisiuni - incorecte în/pe directorul/folderul local de configurare unde se află acel fişier. + <h3>A apărut o eroare la iniţializare:; + </h3><br><i>"Nu poate fi accesat fişierul de configurare + a programului."</i><br><br>Din păcate, ceva împiedică + programul în a accesa acel fişier. 
Cauza poate fi un set de permisiuni + incorecte în/pe directorul/folderul local de configurare unde se află acel fişier. Poti vizita canalul nostru <a href="https://discord.gg/4M2QFmTt2k">Discord</a> unde - vei putea primi asistenţă. - - - - - Connection to datalake failed. - Conectarea la DataLake a eşuat. - - - - - Saving chats. - Se salvează conversaţiile. - - - - - Network dialog - Dialogul despre reţea - - - - - opt-in to share feedback/conversations - acceptă partajarea (share) de comentarii/conversaţii - - - - - Home view - Secţiunea de început - - - - - Home view of application - Secţiunea de început a programului - - - - - Home - Prima pagină - - - - - Chat view - Secţiunea conversaţiilor - - - - - Chat view to interact with models - Secţiunea de chat pentru interacţiune cu modele - - - - - Chats - Conversaţii/Chat-uri - - - - - - - Models - Modele - - - - - Models view for installed models - Secţiunea modelelor instalate - - - - - - - LocalDocs - LocalDocs/Documente Locale - - - - - LocalDocs view to configure and use local docs - Secţiunea LocalDocs de configurare şi folosire a documentelor locale - - - - - - - Settings - Configurare - - - - - Settings view for application configuration - Secţiunea de configurare a programului - - - - - The datalake is enabled - DataLake: ACTIV - - - - - Using a network model - Se foloseşte un model pe reţea - - - - - Server mode is enabled - Modul Server: ACTIV - - - - - Installed models - Modele instalate - - - - - View of installed models - Secţiunea modelelor instalate - - - \ No newline at end of file + vei putea primi asistenţă. + + + + + <h3>Encountered an error starting up:</h3><br><i>"Incompatible hardware detected."</i><br><br>Unfortunately, your CPU does not meet the minimal requirements to run this program. In particular, it does not support AVX intrinsics which this program requires to successfully run a modern large language model. 
The only solution at this time is to upgrade your hardware to a more modern CPU.<br><br>See here for more information: <a href="https://en.wikipedia.org/wiki/Advanced_Vector_Extensions">https://en.wikipedia.org/wiki/Advanced_Vector_Extensions</a> + + + + + + <h3>Encountered an error starting up:</h3><br><i>"Inability to access settings file."</i><br><br>Unfortunately, something is preventing the program from accessing the settings file. This could be caused by incorrect permissions in the local app config directory where the settings file is located. Check out our <a href="https://discord.gg/4M2QFmTt2k">discord channel</a> for help. + + + + + + Connection to datalake failed. + Conectarea la DataLake a eşuat. + + + + + Saving chats. + Se salvează conversaţiile. + + + + + Network dialog + Dialogul despre reţea + + + + + opt-in to share feedback/conversations + acceptă partajarea (share) de comentarii/conversaţii + + + + + Home view + Secţiunea de început + + + + + Home view of application + Secţiunea de început a programului + + + + + Home + Prima pagină + + + + + Chat view + Secţiunea conversaţiilor + + + + + Chat view to interact with models + Secţiunea de chat pentru interacţiune cu modele + + + + + Chats + Conversaţii/Chat-uri + + + + + + + Models + Modele + + + + + Models view for installed models + Secţiunea modelelor instalate + + + + + + + LocalDocs + LocalDocs/Documente Locale + + + + + LocalDocs view to configure and use local docs + Secţiunea LocalDocs de configurare şi folosire a documentelor locale + + + + + + + Settings + Configurare + + + + + Settings view for application configuration + Secţiunea de configurare a programului + + + + + The datalake is enabled + DataLake: ACTIV + + + + + Using a network model + Se foloseşte un model pe reţea + + + + + Server mode is enabled + Modul Server: ACTIV + + + + + Installed models + Modele instalate + + + + + View of installed models + Secţiunea modelelor instalate + + + diff --git 
a/gpt4all-chat/translations/gpt4all_zh_CN.ts b/gpt4all-chat/translations/gpt4all_zh_CN.ts index 743a3289..23f60492 100644 --- a/gpt4all-chat/translations/gpt4all_zh_CN.ts +++ b/gpt4all-chat/translations/gpt4all_zh_CN.ts @@ -79,26 +79,26 @@ AddModelView - - + + ← Existing Models ← 存在的模型 - - + + Explore Models 发现模型 - - + + Discover and download models by keyword search... 通过关键词查找并下载模型 ... - - + + Text field for discovering and filtering downloadable models 用于发现和筛选可下载模型的文本字段 @@ -107,38 +107,38 @@ 搜索中 - - + + Initiate model discovery and filtering 启动模型发现和过滤 - - + + Triggers discovery and filtering of models 触发模型的发现和筛选 - - + + Default 默认 - - + + Likes 喜欢 - - + + Downloads 下载 - - + + Recent 近期 @@ -147,14 +147,14 @@ 排序: - - + + Asc 升序 - - + + Desc 倒序 @@ -163,8 +163,8 @@ 排序目录: - - + + None @@ -177,146 +177,176 @@ 网络问题:无法访问 http://gpt4all.io/models/models3.json - - + + Searching · %1 搜索中 · %1 - - + + Sort by: %1 排序: %1 - - + + Sort dir: %1 排序目录: %1 - - + + Limit: %1 数量: %1 - - + + Network error: could not retrieve %1 网络错误:无法检索 %1 - - - - + + + + Busy indicator 繁忙程度 - - + + Displayed when the models request is ongoing 在模型请求进行中时显示 - - + + Model file 模型文件 - - + + Model file to be downloaded 待下载模型 - - + + Description 描述 - - + + File description 文件描述 - - + + Cancel 取消 - - + + Resume 继续 - - + + Download 下载 - - + + Stop/restart/start the download 停止/重启/开始下载 - - + + Remove 删除 - - + + Remove model from filesystem 从系统中删除模型 - - - - + + + + Install 安装 - - + + Install online model 安装在线模型 - - + + <strong><font size="1"><a href="#error">Error</a></strong></font> <strong><font size="1"><a href="#error">错误</a></strong></font> - - + + <strong><font size="2">WARNING: Not recommended for your hardware. Model requires more memory (%1 GB) than your system has available (%2).</strong></font> <strong><font size="2">警告: 你的设备硬件不推荐 ,模型需要的内存 (%1 GB)比你的系统还要多 (%2).</strong></font> - - + + + ERROR: $API_KEY is empty. + + + + + + ERROR: $BASE_URL is empty. 
+ + + + + + enter $BASE_URL + + + + + + ERROR: $MODEL_NAME is empty. + + + + + + enter $MODEL_NAME + + + + + %1 GB %1 GB - - - - + + + + ? @@ -325,8 +355,8 @@ <a href="#error">错误</a> - - + + Describes an error that occurred when downloading 描述下载过程中发生的错误 @@ -343,70 +373,74 @@ 你的系统需要 ( - - + + Error for incompatible hardware 硬件不兼容的错误 - - + + Download progressBar 下载进度 - - + + Shows the progress made in the download 显示下载进度 - - + + Download speed 下载速度 - - + + Download speed in bytes/kilobytes/megabytes per second 下载速度 b/kb/mb /s - - + + Calculating... 计算中 - - + + + + + + Whether the file hash is being calculated 是否正在计算文件哈希 - - + + Displayed when the file hash is being calculated 在计算文件哈希时显示 - - + + enter $API_KEY 输入$API_KEY - - + + File size 文件大小 - - + + RAM required RAM 需要 @@ -415,20 +449,20 @@ GB - - + + Parameters 参数 - - + + Quant 量化 - - + + Type 类型 @@ -499,224 +533,242 @@ 应用的主题颜色 - - + + Dark 深色 - - + + Light 亮色 - - + + LegacyDark LegacyDark - - + + Font Size 字体大小 - - + + The size of text in the application. 应用中的文本大小。 - - + + Small - - + + Medium - - + + Large - - + + Language and Locale 语言和本地化 - - + + The language and locale you wish to use. 你想使用的语言 - - + + + System Locale + + + + + Device 设备 - - The compute device used for text generation. "Auto" uses Vulkan or Metal. - 用于文本生成的计算设备. "自动" 使用 Vulkan or Metal. + 用于文本生成的计算设备. "自动" 使用 Vulkan or Metal. - - + + + The compute device used for text generation. + + + + + + + + Application default + + + + + Default Model 默认模型 - - + + The preferred model for new chats. Also used as the local server fallback. 新聊天的首选模式。也用作本地服务器回退。 - - + + Suggestion Mode 建议模式 - - + + Generate suggested follow-up questions at the end of responses. 在答复结束时生成建议的后续问题。 - - + + When chatting with LocalDocs 本地文档检索 - - + + Whenever possible 只要有可能 - - + + Never 从不 - - + + Download Path 下载目录 - - + + Where to store local models and the LocalDocs database. 
本地模型和本地文档数据库存储目录 - - + + Browse 查看 - - + + Choose where to save model files 模型下载目录 - - + + Enable Datalake 开启数据湖 - - + + Send chats and feedback to the GPT4All Open-Source Datalake. 发送对话和反馈给GPT4All 的开源数据湖。 - - + + Advanced 高级 - - + + CPU Threads CPU线程 - - + + The number of CPU threads used for inference and embedding. 用于推理和嵌入的CPU线程数 - - + + Save Chat Context 保存对话上下文 - - + + Save the chat model's state to disk for faster loading. WARNING: Uses ~2GB per chat. 保存模型's 状态以提供更快加载速度. 警告: 需用 ~2GB 每个对话. - - + + Enable Local Server 开启本地服务 - - + + Expose an OpenAI-Compatible server to localhost. WARNING: Results in increased resource usage. 将OpenAI兼容服务器暴露给本地主机。警告:导致资源使用量增加。 - - + + API Server Port API 服务端口 - - + + The port to use for the local server. Requires restart. 使用本地服务的端口,需要重启 - - + + Check For Updates 检查更新 - - + + Manually check for an update to GPT4All. 手动检查更新 - - + + Updates 更新 @@ -743,6 +795,19 @@ 响应: + + ChatAPIWorker + + + ERROR: Network error occurred while connecting to the API server + + + + + ChatAPIWorker::handleFinished got HTTP Error %1 %2 + + + ChatDrawer @@ -971,9 +1036,9 @@ - + - + LocalDocs 本地文档 @@ -998,63 +1063,63 @@ (默认) → - - + + Load the default model 载入默认模型 - - + + Loads the default model which can be changed in settings 加载默认模型,可以在设置中更改 - - + + No Model Installed 没有下载模型 - - + + GPT4All requires that you install at least one model to get started GPT4All要求您至少安装一个模型才能开始 - - + + Install a Model 下载模型 - - + + Shows the add model view 查看添加的模型 - - + + Conversation with the model 使用此模型对话 - - + + prompt / response pairs from the conversation 对话中的提示/响应对 - - + + GPT4All GPT4All - - + + You @@ -1067,14 +1132,14 @@ model to get started 模型在思考 - - + + recalculating context ... 重新生成上下文... - - + + response stopped ... 响应停止... @@ -1087,118 +1152,118 @@ model to get started 检索本地文档: - - + + processing ... 处理中 - - + + generating response ... 响应中... - - + + generating questions ... 
生成响应 - - - - + + + + Copy 复制 - - + + Copy Message 复制内容 - - + + Disable markdown 不允许markdown - - + + Enable markdown 允许markdown - - + + Thumbs up 点赞 - - + + Gives a thumbs up to the response 点赞响应 - - + + Thumbs down 点踩 - - + + Opens thumbs down dialog 打开点踩对话框 - - + + %1 Sources %1 资源 - - + + Suggested follow-ups 建议的后续行动 - - + + Erase and reset chat session 擦除并重置聊天会话 - - + + Copy chat session to clipboard 复制对话到剪切板 - - + + Redo last chat response 重新生成上个响应 - - + + Stop generating 停止生成 - - + + Stop the current response generation 停止当前响应 - - + + Reloads the model 重载模型 @@ -1210,9 +1275,9 @@ model to get started - + - + Reload · %1 重载 · %1 @@ -1223,68 +1288,68 @@ model to get started 载入中 · %1 - - + + Load · %1 (default) → 载入 · %1 (默认) → - - + + retrieving localdocs: %1 ... 检索本地文档: %1 ... - - + + searching localdocs: %1 ... 搜索本地文档: %1 ... - - + + Send a message... 发送消息... - - + + Load a model to continue... 选择模型并继续 - - + + Send messages/prompts to the model 发送消息/提示词给模型 - - + + Cut 剪切 - - + + Paste 粘贴 - - + + Select All 全选 - - + + Send message 发送消息 - - + + Sends the message/prompt contained in textfield to the model 将文本框中包含的消息/提示发送给模型 @@ -1292,46 +1357,84 @@ model to get started CollectionsDrawer - - + + Warning: searching collections while indexing can return incomplete results 提示: 索引时搜索集合可能会返回不完整的结果 - - + + %n file(s) - - + + %n word(s) - - + + Updating 更新中 - - + + + Add Docs + 添加文档 - - + + Select a collection to make it available to the chat model. 选择一个集合,使其可用于聊天模型。 + + Download + + + Model "%1" is installed successfully. + + + + + ERROR: $MODEL_NAME is empty. + + + + + ERROR: $API_KEY is empty. + + + + + ERROR: $BASE_URL is invalid. + + + + + ERROR: Model "%1 (%2)" is conflict. + + + + + Model "%1 (%2)" is installed successfully. + + + + + Model "%1" is removed. + + + HomeView @@ -1599,162 +1702,166 @@ model to get started + 添加集合 - - ERROR: The LocalDocs database is not valid. - 错误: 本地文档数据库错误. + 错误: 本地文档数据库错误. 
+ + + + + <h3>ERROR: The LocalDocs database cannot be accessed or is not valid.</h3><br><i>Note: You will need to restart after trying any of the following suggested fixes.</i><br><ul><li>Make sure that the folder set as <b>Download Path</b> exists on the file system.</li><li>Check ownership as well as read and write permissions of the <b>Download Path</b>.</li><li>If there is a <b>localdocs_v2.db</b> file, check its ownership and read/write permissions, too.</li></ul><br>If the problem persists and there are any 'localdocs_v*.db' files present, as a last resort you can<br>try backing them up and removing them. You will have to recreate your collections, however. + - - + + No Collections Installed 没有集合 - - + + Install a collection of local documents to get started using this feature 安装一组本地文档以开始使用此功能 - - + + + Add Doc Collection + 添加文档集合 - - + + Shows the add model view 查看添加的模型 - - + + Indexing progressBar 索引进度 - - + + Shows the progress made in the indexing 显示索引进度 - - + + ERROR 错误 - - + + INDEXING 索引 - - + + EMBEDDING EMBEDDING - - + + REQUIRES UPDATE 需更新 - - + + READY 准备 - - + + INSTALLING 安装中 - - + + Indexing in progress 构建索引中 - - + + Embedding in progress Embedding进度 - - + + This collection requires an update after version change 此集合需要在版本更改后进行更新 - - + + Automatically reindexes upon changes to the folder 在文件夹变动时自动重新索引 - - + + Installation in progress 安装进度 - - + + % % - - + + %n file(s) %n 文件 - - + + %n word(s) %n 词 - - + + Remove 删除 - - + + Rebuild 重新构建 - - + + Reindex this folder from scratch. This is slow and usually not needed. 从头开始重新索引此文件夹。这个过程较慢,通常情况下不需要。 - - + + Update 更新 - - + + Update the collection to the new version. This is a slow operation. 
将集合更新为新版本。这是一个缓慢的操作。 @@ -1762,41 +1869,61 @@ model to get started ModelList - + + %1 (%2) + + + + + <strong>OpenAI-Compatible API Model</strong><br><ul><li>API Key: %1</li><li>Base URL: %2</li><li>Model Name: %3</li></ul> + + + + <ul><li>Requires personal OpenAI API key.</li><li>WARNING: Will send your chats to OpenAI!</li><li>Your API key will be stored on disk</li><li>Will only be used to communicate with OpenAI</li><li>You can apply for an API key <a href="https://platform.openai.com/account/api-keys">here.</a></li> <ul><li>需要个人 OpenAI API 密钥。</li><li>警告:将把您的聊天内容发送给 OpenAI!</li><li>您的 API 密钥将存储在磁盘上</li><li>仅用于与 OpenAI 通信</li><li>您可以在此处<a href="https://platform.openai.com/account/api-keys">申请 API 密钥。</a></li> - + <strong>OpenAI's ChatGPT model GPT-3.5 Turbo</strong><br> %1 <strong>OpenAI's ChatGPT model GPT-3.5 Turbo</strong><br> %1 - + <strong>OpenAI's ChatGPT model GPT-4</strong><br> %1 %2 <strong>OpenAI's ChatGPT model GPT-4</strong><br> %1 %2 - + <strong>Mistral Tiny model</strong><br> %1 <strong>Mistral Tiny model</strong><br> %1 - + <strong>Mistral Small model</strong><br> %1 <strong>Mistral Small model</strong><br> %1 - + <strong>Mistral Medium model</strong><br> %1 <strong>Mistral Medium model</strong><br> %1 + + + <ul><li>Requires personal API key and the API base URL.</li><li>WARNING: Will send your chats to the OpenAI-compatible API Server you specified!</li><li>Your API key will be stored on disk</li><li>Will only be used to communicate with the OpenAI-compatible API Server</li> + + + + + <strong>Connect to OpenAI-compatible API server</strong><br> %1 + + <strong>OpenAI's ChatGPT model GPT-3.5 Turbo</strong><br> <strong>OpenAI's ChatGPT model GPT-3.5 Turbo</strong><br> - + <br><br><i>* Even if you pay OpenAI for ChatGPT-4 this does not guarantee API key access. Contact OpenAI for more info. 
<br><br><i>* 即使您为ChatGPT-4向OpenAI付款,这也不能保证API密钥访问。联系OpenAI获取更多信息。 @@ -1805,7 +1932,7 @@ model to get started <strong>OpenAI's ChatGPT model GPT-4</strong><br> - + <ul><li>Requires personal Mistral API key.</li><li>WARNING: Will send your chats to Mistral!</li><li>Your API key will be stored on disk</li><li>Will only be used to communicate with Mistral</li><li>You can apply for an API key <a href="https://console.mistral.ai/user/api-keys">here</a>.</li> <ul><li>Requires personal Mistral API key.</li><li>WARNING: Will send your chats to Mistral!</li><li>Your API key will be stored on disk</li><li>Will only be used to communicate with Mistral</li><li>You can apply for an API key <a href="https://console.mistral.ai/user/api-keys">here</a>.</li> @@ -1822,7 +1949,7 @@ model to get started <strong>Mistral Medium model</strong><br> - + <strong>Created by %1.</strong><br><ul><li>Published on %2.<li>This model has %3 likes.<li>This model has %4 downloads.<li>More info can be found <a href="https://huggingface.co/%5">here.</a></ul> <strong>Created by %1.</strong><br><ul><li>Published on %2.<li>This model has %3 likes.<li>This model has %4 downloads.<li>More info can be found <a href="https://huggingface.co/%5">here.</a></ul> @@ -1913,32 +2040,32 @@ optional image 用于自动生成聊天名称的提示。 - - + + Suggested FollowUp Prompt 建议的后续提示 - - + + Prompt used to generate suggested follow-up questions. 用于生成建议的后续问题的提示。 - - + + Context Length 上下文长度 - - + + Number of input and output tokens the model sees. 模型看到的输入和输出令牌的数量。 - - + + Maximum combined prompt/response tokens before information is lost. Using more context than the model was trained on will yield poor results. NOTE: Does not take effect until you reload the model. @@ -1947,152 +2074,152 @@ NOTE: Does not take effect until you reload the model. 注意:在重新加载模型之前不会生效。 - - + + Temperature 温度 - - + + Randomness of model output. Higher -> more variation. 模型输出的随机性。更高->更多的变化。 - - + + Temperature increases the chances of choosing less likely tokens. 
NOTE: Higher temperature gives more creative but less predictable outputs. 温度增加了选择不太可能的token的机会。 注:温度越高,输出越有创意,但预测性越低。 - - + + Top-P Top-P - - + + Nucleus Sampling factor. Lower -> more predicatable. 核子取样系数。较低->更具可预测性。 - - + + Only the most likely tokens up to a total probability of top_p can be chosen. NOTE: Prevents choosing highly unlikely tokens. 只能选择总概率高达top_p的最有可能的令牌。 注意:防止选择极不可能的token。 - - + + Min-P Min-P - - + + Minimum token probability. Higher -> more predictable. 最小令牌概率。更高 -> 更可预测。 - - + + Sets the minimum relative probability for a token to be considered. 设置被考虑的标记的最小相对概率。 - - + + Top-K Top-K - - + + Size of selection pool for tokens. 令牌选择池的大小。 - - + + Only the top K most likely tokens will be chosen from. 仅从最可能的前 K 个标记中选择 - - + + Max Length 最大长度 - - + + Maximum response length, in tokens. 最大响应长度(以令牌为单位) - - + + Prompt Batch Size 提示词大小 - - + + The batch size used for prompt processing. 用于快速处理的批量大小。 - - + + Amount of prompt tokens to process at once. NOTE: Higher values can speed up reading prompts but will use more RAM. 一次要处理的提示令牌数量。 注意:较高的值可以加快读取提示,但会使用更多的RAM。 - - + + Repeat Penalty 重复惩罚 - - + + Repetition penalty factor. Set to 1 to disable. 重复处罚系数。设置为1可禁用。 - - + + Repeat Penalty Tokens 重复惩罚数 - - + + Number of previous tokens used for penalty. 用于惩罚的先前令牌数量。 - - + + GPU Layers GPU 层 - - + + Number of model layers to load into VRAM. 要加载到VRAM中的模型层数。 - - + + How many model layers to load into VRAM. Decrease this if GPT4All runs out of VRAM while loading this model. Lower values increase CPU load and RAM usage, and make inference slower. NOTE: Does not take effect until you reload the model. @@ -2104,132 +2231,162 @@ NOTE: Does not take effect until you reload the model. 
ModelsView - - + + No Models Installed 无模型 - - + + Install a model to get started using GPT4All 安装模型并开始使用 - - - - + + + + + Add Model + 添加模型 - - + + Shows the add model view 查看增加到模型 - - + + Installed Models 已安装的模型 - - + + Locally installed chat models 本地安装的聊天 - - + + Model file 模型文件 - - + + Model file to be downloaded 待下载的模型 - - + + Description 描述 - - + + File description 文件描述 - - + + Cancel 取消 - - + + Resume 继续 - - + + Stop/restart/start the download 停止/重启/开始下载 - - + + Remove 删除 - - + + Remove model from filesystem 从系统中删除模型 - - - - + + + + Install 按照 - - + + Install online model 安装在线模型 - - + + <strong><font size="1"><a href="#error">Error</a></strong></font> <strong><font size="1"><a href="#error">Error</a></strong></font> - - + + <strong><font size="2">WARNING: Not recommended for your hardware. Model requires more memory (%1 GB) than your system has available (%2).</strong></font> <strong><font size="2">WARNING: Not recommended for your hardware. Model requires more memory (%1 GB) than your system has available (%2).</strong></font> - - + + + ERROR: $API_KEY is empty. + + + + + + ERROR: $BASE_URL is empty. + + + + + + enter $BASE_URL + + + + + + ERROR: $MODEL_NAME is empty. + + + + + + enter $MODEL_NAME + + + + + %1 GB %1 GB - - + + ? @@ -2238,8 +2395,8 @@ NOTE: Does not take effect until you reload the model. <a href="#错误">错误</a> - - + + Describes an error that occurred when downloading 描述下载时发生的错误 @@ -2256,76 +2413,80 @@ NOTE: Does not take effect until you reload the model. GB) 你的系统需要 ( - - + + Error for incompatible hardware 硬件不兼容的错误 - - + + Download progressBar 下载进度 - - + + Shows the progress made in the download 显示下载进度 - - + + Download speed 下载速度 - - + + Download speed in bytes/kilobytes/megabytes per second 下载速度 b/kb/mb /s - - + + Calculating... 计算中... 
- - + + + + + + Whether the file hash is being calculated 是否正在计算文件哈希 - - + + Busy indicator 繁忙程度 - - + + Displayed when the file hash is being calculated 在计算文件哈希时显示 - - + + enter $API_KEY 输入 $API_KEY - - + + File size 文件大小 - - + + RAM required 需要 RAM @@ -2334,20 +2495,20 @@ NOTE: Does not take effect until you reload the model. GB - - + + Parameters 参数 - - + + Quant 量化 - - + + Type 类型 @@ -2513,9 +2674,8 @@ NOTE: By turning on this feature, you will be sending your data to the GPT4All O QObject - Default - 默认 + 默认 diff --git a/gpt4all-chat/translations/gpt4all_zh_TW.ts b/gpt4all-chat/translations/gpt4all_zh_TW.ts index e4a90fe3..808a1253 100644 --- a/gpt4all-chat/translations/gpt4all_zh_TW.ts +++ b/gpt4all-chat/translations/gpt4all_zh_TW.ts @@ -79,321 +79,375 @@ AddModelView - - + + ← Existing Models ← 現有模型 - - + + Explore Models 探索模型 - - + + Discover and download models by keyword search... 透過關鍵字搜尋探索並下載模型...... - - + + Text field for discovering and filtering downloadable models 用於探索與過濾可下載模型的文字字段 - - + + Searching · %1 搜尋 · %1 - - + + Initiate model discovery and filtering 探索與過濾模型 - - + + Triggers discovery and filtering of models 觸發探索與過濾模型 - - + + Default 預設 - - + + Likes - - + + Downloads 下載次數 - - + + Recent 最新 - - + + Sort by: %1 排序依據:%1 - - + + Asc 升序 - - + + Desc 降序 - - + + Sort dir: %1 排序順序:%1 - - + + None - - + + Limit: %1 上限:%1 - - + + + Network error: could not retrieve %1 + + + + + + <strong><font size="1"><a href="#error">Error</a></strong></font> + + + + + + <strong><font size="2">WARNING: Not recommended for your hardware. Model requires more memory (%1 GB) than your system has available (%2).</strong></font> + + + + + + %1 GB + + + + + + + + ? 
+ + + Network error: could not retrieve http://gpt4all.io/models/models3.json - 網路錯誤:無法取得 http://gpt4all.io/models/models3.json + 網路錯誤:無法取得 http://gpt4all.io/models/models3.json - - - - + + + + Busy indicator 參考自 https://terms.naer.edu.tw 忙線指示器 - - + + Displayed when the models request is ongoing 當模型請求正在進行時顯示 - - + + Model file 模型檔案 - - + + Model file to be downloaded 即將下載的模型檔案 - - + + Description 描述 - - + + File description 檔案描述 - - + + Cancel 取消 - - + + Resume 恢復 - - + + Download 下載 - - + + Stop/restart/start the download 停止/重啟/開始下載 - - + + Remove 移除 - - + + Remove model from filesystem 從檔案系統移除模型 - - - - + + + + Install 安裝 - - + + Install online model 安裝線上模型 - - <a href="#error">Error</a> - <a href="#error">錯誤</a> + <a href="#error">錯誤</a> - - + + Describes an error that occurred when downloading 解釋下載時發生的錯誤 - - <strong><font size="2">WARNING: Not recommended for your hardware. - <strong><font size="2">警告:不推薦在您的硬體上運作。 + <strong><font size="2">警告:不推薦在您的硬體上運作。 - - Model requires more memory ( - 模型需要比較多的記憶體( + 模型需要比較多的記憶體( - - GB) than your system has available ( - GB),但您的系統記憶體空間不足( + GB),但您的系統記憶體空間不足( - - + + Error for incompatible hardware 錯誤,不相容的硬體 - - + + Download progressBar 下載進度條 - - + + Shows the progress made in the download 顯示下載進度 - - + + Download speed 下載速度 - - + + Download speed in bytes/kilobytes/megabytes per second 下載速度每秒 bytes/kilobytes/megabytes - - + + Calculating... 計算中...... - - - - + + + + + + + + Whether the file hash is being calculated 是否正在計算檔案雜湊 - - + + Displayed when the file hash is being calculated 計算檔案雜湊值時顯示 - - + + + ERROR: $API_KEY is empty. + + + + + enter $API_KEY 輸入 $API_KEY - - + + + ERROR: $BASE_URL is empty. + + + + + + enter $BASE_URL + + + + + + ERROR: $MODEL_NAME is empty. 
+ + + + + + enter $MODEL_NAME + + + + + File size 檔案大小 - - + + RAM required 所需的記憶體 - - GB - GB + GB - - + + Parameters 參數 - - + + Quant 量化 - - + + Type 類型 @@ -465,224 +519,242 @@ 應用程式的配色方案。 - - + + Dark 暗色 - - + + Light 亮色 - - + + LegacyDark 傳統暗色 - - + + Font Size 字體大小 - - + + The size of text in the application. 應用程式中的字體大小。 - - + + Small - - + + Medium - - + + Large - - + + Language and Locale 語言與區域設定 - - + + The language and locale you wish to use. 您希望使用的語言與區域設定。 - - + + + System Locale + + + + + Device 裝置 - - The compute device used for text generation. "Auto" uses Vulkan or Metal. - 用於生成文字的計算設備。 「Auto」將自動使用 Vulkan 或 Metal。 + 用於生成文字的計算設備。 「Auto」將自動使用 Vulkan 或 Metal。 - - + + Default Model 預設模型 - - + + The preferred model for new chats. Also used as the local server fallback. 用於新交談的預設模型。也用於作為本機伺服器後援使用。 - - + + Suggestion Mode 建議模式 - - + + When chatting with LocalDocs 當使用「我的文件」交談時 - - + + Whenever possible 視情況允許 - - + + Never 永不 - - + + Generate suggested follow-up questions at the end of responses. 在回覆末尾生成後續建議的問題。 - - + + + The compute device used for text generation. + + + + + + + + Application default + + + + + Download Path 下載路徑 - - + + Where to store local models and the LocalDocs database. 儲存本機模型與「我的文件」資料庫的位置。 - - + + Browse 瀏覽 - - + + Choose where to save model files 選擇儲存模型檔案的位置 - - + + Enable Datalake 啟用資料湖泊 - - + + Send chats and feedback to the GPT4All Open-Source Datalake. 將交談與回饋傳送到 GPT4All 開放原始碼資料湖泊。 - - + + Advanced 進階 - - + + CPU Threads 中央處理器(CPU)線程 - - + + The number of CPU threads used for inference and embedding. 用於推理與嵌入的中央處理器線程數。 - - + + Save Chat Context 儲存交談語境 - - + + Save the chat model's state to disk for faster loading. WARNING: Uses ~2GB per chat. 將交談模型的狀態儲存到磁碟以加快載入速度。警告:每次交談使用約 2GB。 - - + + Enable Local Server 啟用本機伺服器 - - + + Expose an OpenAI-Compatible server to localhost. WARNING: Results in increased resource usage. 將 OpenAI 相容伺服器公開給本機。警告:導致資源使用增加。 - - + + API Server Port API 伺服器埠口 - - + + The port to use for the local server. 
Requires restart. 用於本機伺服器的埠口。需要重新啟動。 - - + + Check For Updates 檢查更新 - - + + Manually check for an update to GPT4All. 手動檢查 GPT4All 的更新。 - - + + Updates 更新 @@ -690,7 +762,7 @@ Chat - + New Chat 新的交談 @@ -701,16 +773,25 @@ 伺服器交談 - - Prompt: - 提示詞: + 提示詞: - - Response: - 回覆: + 回覆: + + + + ChatAPIWorker + + + ERROR: Network error occurred while connecting to the API server + + + + + ChatAPIWorker::handleFinished got HTTP Error %1 %2 + @@ -824,16 +905,12 @@ ChatView - - <h3>Encountered an error loading model:</h3><br> - <h3>載入模型時發生錯誤:</h3><br> + <h3>載入模型時發生錯誤:</h3><br> - - <br><br>Model loading failures can happen for a variety of reasons, but the most common causes include a bad file format, an incomplete or corrupted download, the wrong file type, not enough system RAM or an incompatible model type. Here are some suggestions for resolving the problem:<br><ul><li>Ensure the model file has a compatible format and type<li>Check the model file is complete in the download folder<li>You can find the download folder in the settings dialog<li>If you've sideloaded the model ensure the file is not corrupt by checking md5sum<li>Read more about what models are supported in our <a href="https://docs.gpt4all.io/">documentation</a> for the gui<li>Check out our <a href="https://discord.gg/4M2QFmTt2k">discord channel</a> for help - <br><br>模型載入失敗的原因有很多,但最常見的原因包括檔案格式錯誤、下載不完整或損壞、檔案類型錯誤、系統主記憶體不足或模型類型不相容。以下是解決問題的一些建議:<br><ul><li>確保模型檔案具有相容的格式與類型<li>檢查下載資料夾中的模型檔案是否完整<li>您可以找到下載資料夾在設置對話方塊中<li>如果您已旁載入模型,請透過檢查md5sum 確保檔案未損壞<li>在我們的<a href="https://docs.gpt4all.io/ 中了解有關支援哪些模型的更多資訊">GUI 檔案</a><li>查看我們的<a href="https://discord.gg/4M2QFmTt2k">Discord 伺服器</a>尋求協助 + <br><br>模型載入失敗的原因有很多,但最常見的原因包括檔案格式錯誤、下載不完整或損壞、檔案類型錯誤、系統主記憶體不足或模型類型不相容。以下是解決問題的一些建議:<br><ul><li>確保模型檔案具有相容的格式與類型<li>檢查下載資料夾中的模型檔案是否完整<li>您可以找到下載資料夾在設置對話方塊中<li>如果您已旁載入模型,請透過檢查md5sum 確保檔案未損壞<li>在我們的<a href="https://docs.gpt4all.io/ 中了解有關支援哪些模型的更多資訊">GUI 檔案</a><li>查看我們的<a href="https://discord.gg/4M2QFmTt2k">Discord 伺服器</a>尋求協助 @@ -866,26 
+943,8 @@ 程式碼已複製到剪貼簿。 - - - - - - - - - - - - - - - - - - Response: - 回覆: + 回覆: @@ -949,9 +1008,9 @@ - + - + Reload · %1 重新載入 · %1 @@ -962,297 +1021,307 @@ 載入中 · %1 - - + + Load · %1 (default) → 載入 · %1 (預設) → - - + + The top item is the current model 最上面的那項是目前使用的模型 - - - - + + + + LocalDocs 我的文件 - - + + Add documents 新增文件 - - + + add collections of documents to the chat 將文件集合新增至交談中 - - Load · - 載入 · + 載入 · - - (default) → - (預設) → + (預設) → - - + + Load the default model 載入預設模型 - - + + Loads the default model which can be changed in settings 預設模型可於設定中變更 - - + + No Model Installed 沒有已安裝的模型 - - + + GPT4All requires that you install at least one model to get started GPT4All 要求您至少安裝一個 模型開始 - - + + Install a Model 安裝一個模型 - - + + Shows the add model view 顯示新增模型視圖 - - + + Conversation with the model 與模型對話 - - + + prompt / response pairs from the conversation 對話中的提示詞 / 回覆組合 - - + + GPT4All GPT4All - - + + You - - Busy indicator 參考自 https://terms.naer.edu.tw - 忙線指示器 + 忙線指示器 - - The model is thinking - 模型正在思考中 + 模型正在思考中 - - + + recalculating context ... 重新計算語境中...... - - + + response stopped ... 回覆停止...... - - + + retrieving localdocs: %1 ... 檢索本機文件中:%1 ...... - - + + searching localdocs: %1 ... 搜尋本機文件中:%1 ...... - - + + processing ... 處理中...... - - + + generating response ... 生成回覆...... - - - - + + + generating questions ... 
+ + + + + + + Copy 複製 - - + + Copy Message 複製訊息 - - + + Disable markdown 停用 Markdown - - + + Enable markdown 啟用 Markdown - - + + Thumbs up - - + + Gives a thumbs up to the response 對這則回覆比讚 - - + + Thumbs down 倒讚 - - + + Opens thumbs down dialog 開啟倒讚對話視窗 - - + + %1 Sources %1 來源 - - + + Suggested follow-ups 後續建議 - - + + Erase and reset chat session 刪除並重置交談會話 - - + + Copy chat session to clipboard 複製交談會議到剪貼簿 - - + + Redo last chat response 復原上一個交談回覆 - - + + + Stop generating + + + + + Stop the current response generation 停止當前回覆生成 - - + + Reloads the model 重新載入模型 - - + + + <h3>Encountered an error loading model:</h3><br><i>"%1"</i><br><br>Model loading failures can happen for a variety of reasons, but the most common causes include a bad file format, an incomplete or corrupted download, the wrong file type, not enough system RAM or an incompatible model type. Here are some suggestions for resolving the problem:<br><ul><li>Ensure the model file has a compatible format and type<li>Check the model file is complete in the download folder<li>You can find the download folder in the settings dialog<li>If you've sideloaded the model ensure the file is not corrupt by checking md5sum<li>Read more about what models are supported in our <a href="https://docs.gpt4all.io/">documentation</a> for the gui<li>Check out our <a href="https://discord.gg/4M2QFmTt2k">discord channel</a> for help + + + + + Send a message... 傳送一則訊息...... - - + + Load a model to continue... 載入模型以繼續...... 
- - + + Send messages/prompts to the model 向模型傳送訊息/提示詞 - - + + Cut 剪下 - - + + Paste 貼上 - - + + Select All 全選 - - + + Send message 傳送訊息 - - + + Sends the message/prompt contained in textfield to the model 將文字欄位中包含的訊息/提示詞傳送到模型 @@ -1260,46 +1329,84 @@ model to get started CollectionsDrawer - - + + Warning: searching collections while indexing can return incomplete results 警告:在索引時搜尋收藏可能會傳回不完整的結果 - - + + %n file(s) %n 個檔案 - - + + %n word(s) %n 個字 - - + + Updating 更新中 - - + + + Add Docs + 新增文件 - - + + Select a collection to make it available to the chat model. 選擇一個收藏以使其可供交談模型使用。 + + Download + + + Model "%1" is installed successfully. + + + + + ERROR: $MODEL_NAME is empty. + + + + + ERROR: $API_KEY is empty. + + + + + ERROR: $BASE_URL is invalid. + + + + + ERROR: Model "%1 (%2)" is conflict. + + + + + Model "%1 (%2)" is installed successfully. + + + + + Model "%1" is removed. + + + HomeView @@ -1369,44 +1476,44 @@ model to get started 從 GPT4All 來的最新消息 - - + + Release Notes 版本資訊 - - + + Documentation 文件 - - + + Discord Discord - - + + X (Twitter) X (Twitter) - - + + Github Github - - + + GPT4All.io GPT4All.io - - + + Subscribe to Newsletter 訂閱電子報 @@ -1567,162 +1674,166 @@ model to get started + 新增收藏 - - ERROR: The LocalDocs database is not valid. - 錯誤:「我的文件」資料庫已損壞。 + 錯誤:「我的文件」資料庫已損壞。 - - + + + <h3>ERROR: The LocalDocs database cannot be accessed or is not valid.</h3><br><i>Note: You will need to restart after trying any of the following suggested fixes.</i><br><ul><li>Make sure that the folder set as <b>Download Path</b> exists on the file system.</li><li>Check ownership as well as read and write permissions of the <b>Download Path</b>.</li><li>If there is a <b>localdocs_v2.db</b> file, check its ownership and read/write permissions, too.</li></ul><br>If the problem persists and there are any 'localdocs_v*.db' files present, as a last resort you can<br>try backing them up and removing them. You will have to recreate your collections, however. 
+ + + + + No Collections Installed 沒有已安裝的收藏 - - + + Install a collection of local documents to get started using this feature 安裝本機文件收藏以開始使用此功能 - - + + + Add Doc Collection + 新增文件收藏 - - + + Shows the add model view 查看新增的模型視圖 - - + + Indexing progressBar 索引進度條 - - + + Shows the progress made in the indexing 顯示索引進度 - - + + ERROR 錯誤 - - + + INDEXING 索引中 - - + + EMBEDDING 嵌入中 - - + + REQUIRES UPDATE 必須更新 - - + + READY 已就緒 - - + + INSTALLING 安裝中 - - + + Indexing in progress 正在索引 - - + + Embedding in progress 正在嵌入 - - + + This collection requires an update after version change 該收藏需要在版本變更後更新 - - + + Automatically reindexes upon changes to the folder 若資料夾有變動,會自動重新索引 - - + + Installation in progress 正在安裝中 - - + + % % - - + + %n file(s) %n 個檔案 - - + + %n word(s) %n 個字 - - + + Remove 移除 - - + + Rebuild 重建 - - + + Reindex this folder from scratch. This is slow and usually not needed. 重新索引該資料夾。這將會耗費許多時間並且通常不太需要這樣做。 - - + + Update 更新 - - + + Update the collection to the new version. This is a slow operation. 更新收藏。這將會耗費許多時間。 @@ -1730,47 +1841,67 @@ model to get started ModelList - + + %1 (%2) + + + + + <strong>OpenAI-Compatible API Model</strong><br><ul><li>API Key: %1</li><li>Base URL: %2</li><li>Model Name: %3</li></ul> + + + + <ul><li>Requires personal OpenAI API key.</li><li>WARNING: Will send your chats to OpenAI!</li><li>Your API key will be stored on disk</li><li>Will only be used to communicate with OpenAI</li><li>You can apply for an API key <a href="https://platform.openai.com/account/api-keys">here.</a></li> <ul><li>需要個人的 OpenAI API 金鑰。</li><li>警告:這將會傳送您的交談紀錄到 OpenAI</li><li>您的 API 金鑰將被儲存在硬碟上</li><li>它只被用於與 OpenAI 進行通訊</li><li>您可以在<a href="https://platform.openai.com/account/api-keys">此處</a>申請一個 API 金鑰。</li> - + <strong>OpenAI's ChatGPT model GPT-3.5 Turbo</strong><br> %1 <strong>OpenAI 的 ChatGPT 模型 GPT-3.5 Turbo</strong><br> %1 - + <br><br><i>* Even if you pay OpenAI for ChatGPT-4 this does not guarantee API key access. Contact OpenAI for more info. 
<br><br><i>* 即使您已向 OpenAI 付費購買了 ChatGPT 的 GPT-4 模型使用權,但這也不能保證您能擁有 API 金鑰的使用權限。請聯繫 OpenAI 以查閱更多資訊。 - + <strong>OpenAI's ChatGPT model GPT-4</strong><br> %1 %2 <strong>OpenAI 的 ChatGPT 模型 GPT-4</strong><br> %1 %2 - + <ul><li>Requires personal Mistral API key.</li><li>WARNING: Will send your chats to Mistral!</li><li>Your API key will be stored on disk</li><li>Will only be used to communicate with Mistral</li><li>You can apply for an API key <a href="https://console.mistral.ai/user/api-keys">here</a>.</li> <ul><li>需要個人的 Mistral API 金鑰。</li><li>警告:這將會傳送您的交談紀錄到 Mistral!</li><li>您的 API 金鑰將被儲存在硬碟上</li><li>它只被用於與 Mistral 進行通訊</li><li>您可以在<a href="https://console.mistral.ai/user/api-keys">此處</a>申請一個 API 金鑰。</li> - + <strong>Mistral Tiny model</strong><br> %1 <strong>Mistral 迷你模型</strong><br> %1 - + <strong>Mistral Small model</strong><br> %1 <strong>Mistral 小型模型</strong><br> %1 - + <strong>Mistral Medium model</strong><br> %1 <strong>Mistral 中型模型</strong><br> %1 - + + <ul><li>Requires personal API key and the API base URL.</li><li>WARNING: Will send your chats to the OpenAI-compatible API Server you specified!</li><li>Your API key will be stored on disk</li><li>Will only be used to communicate with the OpenAI-compatible API Server</li> + + + + + <strong>Connect to OpenAI-compatible API server</strong><br> %1 + + + + <strong>Created by %1.</strong><br><ul><li>Published on %2.<li>This model has %3 likes.<li>This model has %4 downloads.<li>More info can be found <a href="https://huggingface.co/%5">here.</a></ul> <strong>建立者:%1。</strong><br><ul><li>發行於:%2。<li>這個模型有 %3 個讚。<li>這個模型有 %4 次下載次數。<li>更多資訊請查閱<a href="https://huggingface.co/%5">此處</a>。</ul> @@ -1790,65 +1921,63 @@ model to get started 模型設定 - - + + Clone 複製 - - + + Remove 移除 - - + + Name 名稱 - - + + Model File 模型檔案 - - + + System Prompt 系統提示詞 - - + + Prefixed at the beginning of every conversation. Must contain the appropriate framing tokens. 
在每個對話的開頭加上前綴。必須包含適當的構建符元(framing tokens)。
- - + +
Prompt Template
提示詞模板
- - + +
The template that wraps every prompt.
包裝每個提示詞的模板。
- - + +
Must contain the string "%1" to be replaced with the user's input.
必須包含要替換為使用者輸入的字串「%1」。
- -
Add optional image
- 新增
可選圖片
@@ -1864,32 +1993,32 @@
optional image
用於自動生成交談名稱的提示詞。
- - + +
Suggested FollowUp Prompt
後續建議提示詞
- - + +
Prompt used to generate suggested follow-up questions.
用於生成後續建議問題的提示詞。
- - + +
Context Length
語境長度(Context Length)
- - + +
Number of input and output tokens the model sees.
模型看見的輸入與輸出的符元數量。
- - + +
Maximum combined prompt/response tokens before information is lost.
Using more context than the model was trained on will yield poor results.
NOTE: Does not take effect until you reload the model.
@@ -1898,152 +2027,152 @@
NOTE: Does not take effect until you reload the model.
注意:重新載入模型後才會生效。
- - + +
Temperature
語境溫度(Temperature)
- - + +
Randomness of model output. Higher -> more variation.
模型輸出的隨機性。更高 -> 更多變化。
- - + +
Temperature increases the chances of choosing less likely tokens.
NOTE: Higher temperature gives more creative but less predictable outputs.
語境溫度會提高選擇不容易出現的符元機率。
注意:較高的語境溫度會生成更多創意,但輸出的可預測性會相對較差。
- - + +
Top-P
核心採樣(Top-P)
- - + +
Nucleus Sampling factor. Lower -> more predicatable.
核心採樣因子。更低 -> 更可預測。
- - + +
Only the most likely tokens up to a total probability of top_p can be chosen.
NOTE: Prevents choosing highly unlikely tokens.
只會選擇累積機率不超過 top_p 的最有可能的符元。
注意:用於避免選擇不容易出現的符元。
- - + +
Min-P
最小符元機率(Min-P)
- - + +
Minimum token probability. Higher -> more predictable.
最小符元機率。更高 -> 更可預測。
- - + +
Sets the minimum relative probability for a token to be considered.
設定要考慮的符元的最小相對機率。
- - + +
Top-K
高頻率採樣機率(Top-K)
- - + +
Size of selection pool for tokens.
符元選擇池的大小。
- - + +
Only the top K most likely tokens will be chosen from.
只選擇前 K 個最有可能性的符元。
- - + +
Max Length
最大長度(Max Length)
- - + +
Maximum response length, in tokens.
最大回應長度(以符元為單位)。
- - + +
Prompt Batch Size
提示詞批次大小(Prompt Batch Size)
- - + +
The batch size used for prompt processing.
用於提示詞處理的批次大小。
- - + +
Amount of prompt tokens to process at once.
NOTE: Higher values can speed up reading prompts but will use more RAM.
一次處理的提示詞符元數量。
注意:較高的值可以加快讀取提示詞的速度,但會使用比較多的記憶體。
- - + +
Repeat Penalty
重複懲罰(Repeat Penalty)
- - + +
Repetition penalty factor. Set to 1 to disable.
重複懲罰因子。設定為 1 以停用。
- - + +
Repeat Penalty Tokens
重複懲罰符元(Repeat Penalty Tokens)
- - + +
Number of previous tokens used for penalty.
之前用於懲罰的符元數量。
- - + +
GPU Layers
圖形處理器負載層(GPU Layers)
- - + +
Number of model layers to load into VRAM.
要載入到顯示記憶體中的模型層數。
- - + +
How many model layers to load into VRAM. Decrease this if GPT4All runs out of VRAM while loading this model.
Lower values increase CPU load and RAM usage, and make inference slower.
NOTE: Does not take effect until you reload the model.
@@ -2055,237 +2184,285 @@
NOTE: Does not take effect until you reload the model.
ModelsView
- - + +
No Models Installed
沒有已安裝的模型
- - + +
Install a model to get started using GPT4All
安裝模型以開始使用 GPT4All
- - - - + + + + +
Add Model
+ 新增模型
- - + +
Shows the add model view
顯示新增模型視圖
- - + +
Installed Models
已安裝的模型
- - + +
Locally installed chat models
本機已安裝的交談模型
- - + +
Model file
模型檔案
- - + +
Model file to be downloaded
即將下載的模型檔案
- - + +
Description
描述
- - + +
File description
檔案描述
- - + +
Cancel
取消
- - + +
Resume
恢復
- - + +
Stop/restart/start the download
停止/重啟/開始下載
- - + +
Remove
移除
- - + +
Remove model from filesystem
從檔案系統移除模型
- - - - + + + +
Install
安裝
- - + +
Install online model
安裝線上模型
- - + + +
<strong><font size="1"><a href="#error">Error</a></strong></font>
+
+ + + +
<strong><font size="2">WARNING: Not recommended for your hardware. Model requires more memory (%1 GB) than your system has available (%2).</strong></font>
+
+ + + +
%1 GB
+
+ + + +
?
+ + + <a href="#error">Error</a> - <a href="#error">錯誤</a> + <a href="#error">錯誤</a> - - + + Describes an error that occurred when downloading 解釋下載時發生的錯誤 - - <strong><font size="2">WARNING: Not recommended for your hardware. - <strong><font size="2">警告:不推薦在您的硬體上運作。 + <strong><font size="2">警告:不推薦在您的硬體上運作。 - - Model requires more memory ( - 模型需要比較多的記憶體( + 模型需要比較多的記憶體( - - GB) than your system has available ( - GB),但您的系統記憶體空間不足( + GB),但您的系統記憶體空間不足( - - + + Error for incompatible hardware 錯誤,不相容的硬體 - - + + Download progressBar 下載進度條 - - + + Shows the progress made in the download 顯示下載進度 - - + + Download speed 下載速度 - - + + Download speed in bytes/kilobytes/megabytes per second 下載速度每秒 bytes/kilobytes/megabytes - - + + Calculating... 計算中...... - - - - + + + + + + + + Whether the file hash is being calculated 是否正在計算檔案雜湊 - - + + Busy indicator 參考自 https://terms.naer.edu.tw 忙線指示器 - - + + Displayed when the file hash is being calculated 計算檔案雜湊值時顯示 - - + + + ERROR: $API_KEY is empty. + + + + + enter $API_KEY 輸入 $API_KEY - - + + + ERROR: $BASE_URL is empty. + + + + + + enter $BASE_URL + + + + + + ERROR: $MODEL_NAME is empty. + + + + + + enter $MODEL_NAME + + + + + File size 檔案大小 - - + + RAM required 所需的記憶體 - - GB - GB + GB - - + + Parameters 參數 - - + + Quant 量化 - - + + Type 類型 @@ -2498,36 +2675,32 @@ Nomic AI 將保留附加在您的資料上的所有署名訊息,並且您將 歡迎使用! - - ### Release notes - ### 版本資訊 + ### 版本資訊 - - ### Contributors - ### 貢獻者 + ### 貢獻者 - - + + Release notes 版本資訊 - - + + Release notes for this version 這個版本的版本資訊 - - + + ### Opt-ins for anonymous usage analytics and datalake By enabling these features, you will be able to participate in the democratic process of training a large language model by contributing data for future model improvements. @@ -2555,88 +2728,96 @@ model release that uses your data! Nomic AI 將保留附加在您的資料上的所有署名訊息,並且您將被認可為任何使用您的資料的 GPT4All 模型版本的貢獻者! 
- - + + Terms for opt-in 計畫規範 - - + + Describes what will happen when you opt-in 解釋當您加入計畫後,會發生什麼事情 - - - - + + + + Yes - - - - + + + + No - - - - + + + + Opt-in for anonymous usage statistics 匿名使用統計計畫 - - + + + ### Release notes +%1### Contributors +%2 + + + + + Allow opt-in for anonymous usage statistics 加入匿名使用統計計畫 - - + + Opt-out for anonymous usage statistics 退出匿名使用統計計畫 - - + + Allow opt-out for anonymous usage statistics 終止並退出匿名使用統計計畫 - - - - + + + + Opt-in for network 資料湖泊計畫 - - + + Allow opt-in for network 加入資料湖泊計畫 - - + + Opt-out for network 退出資料湖泊計畫 - - + + Allow opt-in anonymous sharing of chats to the GPT4All Datalake 開始將交談內容匿名分享到 GPT4All 資料湖泊 - - + + Allow opt-out anonymous sharing of chats to the GPT4All Datalake 終止將交談內容匿名分享到 GPT4All 資料湖泊 @@ -2712,92 +2893,80 @@ Nomic AI 將保留附加在您的資料上的所有署名訊息,並且您將 main - - GPT4All v - GPT4All v + GPT4All v - - - - <h3>Encountered an error starting up:</h3><br> - <h3>啟動時發生錯誤:</h3><br> + <h3>啟動時發生錯誤:</h3><br> - - <i>"Incompatible hardware detected."</i> - <i>「偵測到不相容的硬體」</i> + <i>「偵測到不相容的硬體」</i> - - <br><br>Unfortunately, your CPU does not meet the minimal requirements to run - <br><br>糟糕,您的中央處理器不符合運行的最低要求 + <br><br>糟糕,您的中央處理器不符合運行的最低要求 - - this program. In particular, it does not support AVX intrinsics which this - 這個程式。特別是,它不支援 AVX 內在函數,這 + 這個程式。特別是,它不支援 AVX 內在函數,這 - - program requires to successfully run a modern large language model. - 程式需要成功運行現代大型語言模型。 + 程式需要成功運行現代大型語言模型。 - - The only solution at this time is to upgrade your hardware to a more modern CPU. 
- 此時唯一的解決方案是將硬體升級到更現代的中央處理器。
- -
<br><br>See here for more information: <a href="https://en.wikipedia.org/wiki/Advanced_Vector_Extensions">
中文網址造成 Linguist 會發出警告,請無視。
<br><br>更多資訊請查閱:<a href="https://zh.wikipedia.org/wiki/AVX%E6%8C%87%E4%BB%A4%E9%9B%86">
- -
https://en.wikipedia.org/wiki/Advanced_Vector_Extensions</a>
中文網址造成 Linguist 會發出警告,請無視。
https://zh.wikipedia.org/wiki/AVX%E6%8C%87%E4%BB%A4%E9%9B%86</a>
- -
<i>"Inability to access settings file."</i>
<i>「無法存取設定檔。」</i>
- -
<br><br>Unfortunately, something is preventing the program from accessing
<br><br>糟糕,有些東西正在阻止程式存取
- -
the settings file. This could be caused by incorrect permissions in the local
設定檔。這可能是本機權限設定不正確導致的
- -
app config directory where the settings file is located.
設定檔所在的 app config 目錄。
- -
Check out our <a href="https://discord.gg/4M2QFmTt2k">discord channel</a> for help.
請查看我們的<a href="https://discord.gg/4M2QFmTt2k">Discord 伺服器</a>尋求協助。
+ + + +
GPT4All v%1
+
+ + + +
<h3>Encountered an error starting up:</h3><br><i>"Incompatible hardware detected."</i><br><br>Unfortunately, your CPU does not meet the minimal requirements to run this program. In particular, it does not support AVX intrinsics which this program requires to successfully run a modern large language model.
The only solution at this time is to upgrade your hardware to a more modern CPU.<br><br>See here for more information: <a href="https://en.wikipedia.org/wiki/Advanced_Vector_Extensions">https://en.wikipedia.org/wiki/Advanced_Vector_Extensions</a> + + + + + + <h3>Encountered an error starting up:</h3><br><i>"Inability to access settings file."</i><br><br>Unfortunately, something is preventing the program from accessing the settings file. This could be caused by incorrect permissions in the local app config directory where the settings file is located. Check out our <a href="https://discord.gg/4M2QFmTt2k">discord channel</a> for help. +