"Specifically, `gradio-tools` is a Python library for converting `Gradio` apps into tools that can be leveraged by a large language model (LLM)-based agent to complete its task. For example, an LLM could use a `Gradio` tool to transcribe a voice recording it finds online and then summarize it for you. Or it could use a different `Gradio` tool to apply OCR to a document on your Google Drive and then answer questions about it.\n",
"It's very easy to create your own tool if you want to use a Space that's not one of the pre-built tools. Please see this section of the gradio-tools documentation for information on how to do that. All contributions are welcome!"
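The pattern underlying a tool can be sketched in plain Python. The class and helper names below are hypothetical stand-ins, not the library's real API: the idea is simply that each Gradio Space is exposed to the agent as a named tool with a description (which the LLM reads to decide when to use it) and a `run` method that forwards a text query to the Space and returns the result. A fake predict function replaces the real `gradio_client` call so the sketch runs offline.

```python
from dataclasses import dataclass
from typing import Callable

# Illustrative sketch of the tool pattern (hypothetical names, not the
# real gradio-tools API): an agent sees each Gradio Space as a named
# tool with a description and a run() method.
@dataclass
class SketchGradioTool:
    name: str          # tool name the agent refers to in its "Action:" step
    description: str   # tells the LLM what the tool does and when to use it
    predict: Callable[[str], str]  # stand-in for a gradio_client prediction call

    def run(self, query: str) -> str:
        # Forward the agent's "Action Input" to the Space and return the output.
        return self.predict(query)

# A fake "Space" client so this sketch is runnable without network access.
def fake_prompt_generator(prompt: str) -> str:
    return prompt + ", digital painting, sharp focus"

tool = SketchGradioTool(
    name="StableDiffusionPromptGenerator",
    description="Use this to improve a prompt for Stable Diffusion.",
    predict=fake_prompt_generator,
)
print(tool.run("A dog riding a skateboard"))
```

In the real library, the predict step calls the hosted Space (the `Loaded as API: …` lines below show those Spaces being connected to), and each tool is handed to the agent framework so the LLM can chain them, as the trace that follows demonstrates.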
"Loaded as API: https://gradio-client-demos-stable-diffusion.hf.space ✔\n",
"Loaded as API: https://taesiri-blip-2.hf.space ✔\n",
"Loaded as API: https://microsoft-promptist.hf.space ✔\n",
"Loaded as API: https://damo-vilab-modelscope-text-to-video-synthesis.hf.space ✔\n",
"\n",
"\n",
"\u001b[1m> Entering new AgentExecutor chain...\u001b[0m\n",
"\u001b[32;1m\u001b[1;3m\n",
"Thought: Do I need to use a tool? Yes\n",
"Action: StableDiffusionPromptGenerator\n",
"Action Input: A dog riding a skateboard\u001b[0m\n",
"Job Status: Status.STARTING eta: None\n",
"\n",
"Observation: \u001b[38;5;200m\u001b[1;3mA dog riding a skateboard, digital painting, artstation, concept art, smooth, sharp focus, illustration, art by artgerm and greg rutkowski and alphonse mucha\u001b[0m\n",
"Thought:\u001b[32;1m\u001b[1;3m Do I need to use a tool? Yes\n",
"Action: StableDiffusion\n",
"Action Input: A dog riding a skateboard, digital painting, artstation, concept art, smooth, sharp focus, illustration, art by artgerm and greg rutkowski and alphonse mucha\u001b[0m\n",
"Observation: \u001b[33;1m\u001b[1;3ma painting of a dog sitting on a skateboard\u001b[0m\n",
"Thought:\u001b[32;1m\u001b[1;3m Do I need to use a tool? Yes\n",
"Action: TextToVideo\n",
"Action Input: a painting of a dog sitting on a skateboard\u001b[0m\n",
"Job Status: Status.STARTING eta: None\n",
"Due to heavy traffic on this app, the prediction will take approximately 73 seconds. For faster predictions without waiting in queue, you may duplicate the space using: Client.duplicate(damo-vilab/modelscope-text-to-video-synthesis)\n",
"Due to heavy traffic on this app, the prediction will take approximately 42 seconds. For faster predictions without waiting in queue, you may duplicate the space using: Client.duplicate(damo-vilab/modelscope-text-to-video-synthesis)\n",