io.quarkiverse.langchain4j.ollama.runtime.config.LangChain4jOllamaConfig$OllamaConfig.jdp
io.quarkiverse.langchain4j.ollama.runtime.config.LangChain4jOllamaConfig.OllamaConfig.baseUrl=Base URL where the Ollama server is running
io.quarkiverse.langchain4j.ollama.runtime.config.LangChain4jOllamaConfig.OllamaConfig.chatModel=Chat model related settings
io.quarkiverse.langchain4j.ollama.runtime.config.LangChain4jOllamaConfig.OllamaConfig.embeddingModel=Embedding model related settings
io.quarkiverse.langchain4j.ollama.runtime.config.LangChain4jOllamaConfig.OllamaConfig.enableIntegration=Whether to enable the integration. Defaults to {@code true}, which means requests are made to the Ollama\nprovider.\nSet to {@code false} to disable all requests.
io.quarkiverse.langchain4j.ollama.runtime.config.LangChain4jOllamaConfig.OllamaConfig.logRequests=Whether the Ollama client should log requests
io.quarkiverse.langchain4j.ollama.runtime.config.LangChain4jOllamaConfig.OllamaConfig.logResponses=Whether the Ollama client should log responses
io.quarkiverse.langchain4j.ollama.runtime.config.LangChain4jOllamaConfig.OllamaConfig.timeout=Timeout for Ollama calls
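These javadoc descriptions map to runtime configuration properties under the `quarkus.langchain4j.ollama` prefix. A minimal sketch of how they might appear in an `application.properties` file, assuming the standard Quarkus kebab-case key mapping and Duration format (the model name and host are illustrative values):

```properties
# Base URL where the Ollama server is running (illustrative host/port)
quarkus.langchain4j.ollama.base-url=http://localhost:11434

# Timeout for Ollama calls, in Quarkus Duration format
quarkus.langchain4j.ollama.timeout=10s

# Whether the Ollama client should log requests/responses
quarkus.langchain4j.ollama.log-requests=true
quarkus.langchain4j.ollama.log-responses=false

# Set to false to disable all requests to the Ollama provider
quarkus.langchain4j.ollama.enable-integration=true
```

Consult the extension's generated configuration reference for the exact keys nested under the chat-model and embedding-model settings.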