{"code":0,"data":{"plugins":[{"agent_strategy":{},"badges":[],"brief":{"en_US":"Ollama"},"category":"model","created_at":"2024-12-04T09:46:32Z","endpoint":{},"icon":"langgenius/packages/ollama/_assets/icon_s_en.svg","index_id":"langgenius___ollama","install_count":129728,"introduction":"## Overview\n\nOllama is a cross-platform inference framework client (macOS, Windows, Linux) designed for seamless deployment of large language models (LLMs) such as Llama 2, Mistral, Llava, and more. With its one-click setup, Ollama enables local execution of LLMs, providing enhanced data privacy and security by keeping your data on your own machine.\n\nDify supports integrating the LLM and Text Embedding capabilities of models deployed with Ollama.\n\n## Configure\n\n#### 1. Download Ollama\nVisit the [Ollama download page](https://ollama.com/download) to download the Ollama client for your system.\n\n#### 2. Run Ollama and Chat with Llama 3.2\n\n````\nollama run llama3.2\n````\n\nAfter a successful launch, Ollama starts an API service on local port 11434, which can be accessed at `http://localhost:11434`.\n\nFor other models, visit [Ollama Models](https://ollama.com/library) for more details.\n\n#### 3. Install the Ollama Plugin\nGo to the Dify Marketplace, search for Ollama, and install the plugin.\n\n#### 4. Integrate Ollama in Dify\n\nIn `Settings \u003e Model Providers \u003e Ollama`, fill in:\n\n- Model Name: `llama3.2`\n- Base URL: `http://\u003cyour-ollama-endpoint-domain\u003e:11434`\n- Enter the base URL where the Ollama service is accessible.\n- If Dify is deployed using Docker, use a local network IP address, e.g., `http://192.168.1.100:11434`, or `http://host.docker.internal:11434` to access the service.\n- For local source code deployment, use `http://localhost:11434`.\n- Model Type: `Chat`\n- Model Context Length: `4096`\n- The maximum context length of the model. 
If unsure, use the default value of 4096.\n- Maximum Token Limit: `4096`\n- The maximum number of tokens returned by the model. If the model has no specific requirement, this can match the model context length.\n- Support for Vision: `Yes`\n- Check this option if the model supports image understanding (multimodal), such as `llava`.\n\nAfter verifying that there are no errors, click \"Save\" to use the model in your application.\n\nEmbedding models are integrated in the same way as LLMs; simply change the model type to Text Embedding.\n\nFor more details, see [Dify's official documentation](https://docs.dify.ai/development/models-integration/ollama).\n","label":{"en_US":"Ollama"},"latest_package_identifier":"langgenius/ollama:0.0.5@cc38c90a58d4b4e43c9a821d352829b2c2a8d6d742de9fec9e61e6b34865b496","latest_version":"0.0.5","model":{"background":"#F9FAFB","configurate_methods":["customizable-model"],"description":{"en_US":"Ollama"},"help":{"title":{"en_US":"How to integrate with Ollama","zh_Hans":"如何集成 Ollama"},"url":{"en_US":"https://docs.dify.ai/tutorials/model-configuration/ollama"}},"icon_large":{"en_US":"icon_l_en.svg"},"icon_small":{"en_US":"icon_s_en.svg"},"label":{"en_US":"Ollama"},"model_credential_schema":{"credential_form_schemas":[{"default":null,"label":{"en_US":"Base URL","zh_Hans":"基础 URL"},"max_length":0,"options":[],"placeholder":{"en_US":"Base URL of Ollama server, e.g. 
http://192.168.1.100:11434","zh_Hans":"Ollama server 的基础 URL,例如 http://192.168.1.100:11434"},"required":true,"show_on":[],"type":"text-input","variable":"base_url"},{"default":"chat","label":{"en_US":"Completion mode","zh_Hans":"模型类型"},"max_length":0,"options":[{"label":{"en_US":"Completion","zh_Hans":"补全"},"show_on":[],"value":"completion"},{"label":{"en_US":"Chat","zh_Hans":"对话"},"show_on":[],"value":"chat"}],"placeholder":{"en_US":"Select completion mode","zh_Hans":"选择对话类型"},"required":true,"show_on":[{"value":"llm","variable":"__model_type"}],"type":"select","variable":"mode"},{"default":"4096","label":{"en_US":"Model context size","zh_Hans":"模型上下文长度"},"max_length":0,"options":[],"placeholder":{"en_US":"Enter your Model context size","zh_Hans":"在此输入您的模型上下文长度"},"required":true,"show_on":[],"type":"text-input","variable":"context_size"},{"default":"4096","label":{"en_US":"Upper bound for max tokens","zh_Hans":"最大 token 上限"},"max_length":0,"options":[],"placeholder":null,"required":true,"show_on":[{"value":"llm","variable":"__model_type"}],"type":"text-input","variable":"max_tokens"},{"default":"false","label":{"en_US":"Vision support","zh_Hans":"是否支持 Vision"},"max_length":0,"options":[{"label":{"en_US":"Yes","zh_Hans":"是"},"show_on":[],"value":"true"},{"label":{"en_US":"No","zh_Hans":"否"},"show_on":[],"value":"false"}],"placeholder":null,"required":false,"show_on":[{"value":"llm","variable":"__model_type"}],"type":"radio","variable":"vision_support"},{"default":"false","label":{"en_US":"Function call support","zh_Hans":"是否支持函数调用"},"max_length":0,"options":[{"label":{"en_US":"Y