
io.quarkiverse.langchain4j.ollama.runtime.config.ChatModelConfig.jdp

io.quarkiverse.langchain4j.ollama.runtime.config.ChatModelConfig.format=The format to return a response in. Currently, the only accepted value is {@code json}
io.quarkiverse.langchain4j.ollama.runtime.config.ChatModelConfig.logRequests=Whether chat model requests should be logged
io.quarkiverse.langchain4j.ollama.runtime.config.ChatModelConfig.logResponses=Whether chat model responses should be logged
io.quarkiverse.langchain4j.ollama.runtime.config.ChatModelConfig.numPredict=Maximum number of tokens to predict when generating text
io.quarkiverse.langchain4j.ollama.runtime.config.ChatModelConfig.seed=With a static number the result is always the same. With a random number the result varies\nExample\:\n\n\n{@code\nRandom random \= new Random();\nint x \= random.nextInt(Integer.MAX_VALUE);\n}\n
io.quarkiverse.langchain4j.ollama.runtime.config.ChatModelConfig.stop=Sets the stop sequences to use. When this pattern is encountered the LLM will stop generating text and return
io.quarkiverse.langchain4j.ollama.runtime.config.ChatModelConfig.temperature=The temperature of the model. Increasing the temperature will make the model answer with\nmore variability. A lower temperature will make the model answer more conservatively.
io.quarkiverse.langchain4j.ollama.runtime.config.ChatModelConfig.topK=Reduces the probability of generating nonsense. A higher value (e.g. 100) will give more diverse answers, while a lower\nvalue (e.g. 10) will be more conservative
io.quarkiverse.langchain4j.ollama.runtime.config.ChatModelConfig.topP=Works together with top-k. A higher value (e.g., 0.95) will lead to more diverse text, while a lower value (e.g., 0.5)\nwill generate more focused and conservative text
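
These entries are the javadoc descriptions behind the chat-model configuration of the Quarkus LangChain4j Ollama extension. As a minimal sketch of how such options could be set in an application's application.properties, assuming the extension's usual quarkus.langchain4j.ollama.chat-model.* key prefix (the prefix and all values below are illustrative assumptions, not taken from this file):

# Illustrative only: key prefix assumed, values are examples
quarkus.langchain4j.ollama.chat-model.format=json
quarkus.langchain4j.ollama.chat-model.temperature=0.3
quarkus.langchain4j.ollama.chat-model.top-k=40
quarkus.langchain4j.ollama.chat-model.top-p=0.9
quarkus.langchain4j.ollama.chat-model.num-predict=256
quarkus.langchain4j.ollama.chat-model.seed=42
quarkus.langchain4j.ollama.chat-model.stop=</s>
quarkus.langchain4j.ollama.chat-model.log-requests=true
quarkus.langchain4j.ollama.chat-model.log-responses=true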



