
v0.1.20

@SimFG released this 26 Apr 15:11 · 161 commits to main since this release · 268e32c

🎉 Introduction to new functions of GPTCache

  1. Support the `temperature` parameter, like OpenAI

A non-negative sampling temperature; defaults to 0.
A higher temperature makes the output more random.
A lower temperature makes the output more deterministic and confident.
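
For context, here is a minimal sketch of passing the new parameter through GPTCache's OpenAI-compatible adapter; the model name and message are placeholder values:

```python
from gptcache import cache
from gptcache.adapter import openai

cache.init()            # initialize the global cache before the first request
cache.set_openai_key()  # picks up OPENAI_API_KEY from the environment

response = openai.ChatCompletion.create(
    model='gpt-3.5-turbo',
    messages=[{'role': 'user', 'content': 'What is GPTCache?'}],
    temperature=0.8,  # higher -> more random output, 0 -> deterministic
)
```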

  2. Add the llama adapter
```python
from gptcache.adapter.llama_cpp import Llama

# Load a local llama.cpp model through the caching adapter
llm = Llama('./models/7B/ggml-model.bin')
question = 'What is GPTCache?'
answer = llm(prompt=question)
```
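
As with the other adapters, repeated calls with the same prompt should then be served from the cache instead of re-running the model.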

Full Changelog: 0.1.19...0.1.20