Releases: ggml-org/llama.vscode

v0.0.8

08 Feb 15:03
55fa454

Overview

Fix a regression introduced in v0.0.7.

What's Changed

Full Changelog: v0.0.7...v0.0.8

v0.0.7

08 Feb 07:16
0e5371a

What's Changed

  • Fix the problem with cutting the lines of a suggestion by @igardev in #22
  • Fix manual trigger without cache and always accept on pressing Tab by @igardev in #25
  • Add an experimental OpenAI-compatible endpoint option by @ohmeow in #16 (see the sketch after this list)
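
For orientation, an OpenAI-compatible endpoint is one that accepts the standard /v1/completions request shape. The sketch below shows roughly what such a request looks like from TypeScript; the URL, model name, and parameter values are placeholders for illustration and are not taken from the extension's settings.

```typescript
// Illustrative sketch only: the shape of an OpenAI-compatible completion request.
// The URL, model name, and parameter values are placeholders, not llama.vscode settings.
async function requestCompletion(prompt: string): Promise<string> {
  const response = await fetch("http://localhost:8080/v1/completions", {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      // Many locally hosted OpenAI-compatible servers accept any non-empty key.
      "Authorization": "Bearer no-key",
    },
    body: JSON.stringify({
      model: "local-model", // placeholder model name
      prompt,               // text to complete
      max_tokens: 64,
      temperature: 0.1,
      stream: false,
    }),
  });
  const data = await response.json();
  // OpenAI-compatible servers return the generated text in choices[0].text.
  return data.choices?.[0]?.text ?? "";
}
```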

New Contributors

  • @ohmeow made their first contribution in #16

Full Changelog: v0.0.6...v0.0.7

v0.0.6

31 Jan 07:48
9407bdd

What's Changed

  • On copy and cut, postpone adding a chunk by 1000 ms by @igardev in #14 (sketched after this list)
  • Remove the repeating suffix of a suggestion by @igardev in #18
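
Postponing the chunk addition means the copied or cut text is stored only after a 1000 ms timer fires. A minimal sketch of the idea follows, with hypothetical names (addChunkToContext is a placeholder, not the extension's actual function); cancelling a previously pending addition is an assumption of this sketch, not something stated in the changelog.

```typescript
// Hypothetical sketch: defer storing a copied/cut chunk by 1000 ms.
// addChunkToContext is a placeholder for whatever records the text.
let pendingChunk: ReturnType<typeof setTimeout> | undefined;

function scheduleChunk(
  text: string,
  addChunkToContext: (chunk: string) => void
): void {
  // Assumption: a newer copy/cut replaces a still-pending addition.
  if (pendingChunk !== undefined) {
    clearTimeout(pendingChunk);
  }
  pendingChunk = setTimeout(() => addChunkToContext(text), 1000);
}
```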

Full Changelog: v0.0.5...v0.0.6

v0.0.5

23 Jan 13:55
7e6877b

What's Changed

  • feat: add status bar controls and completion toggles by @ChetanXpro in #6 (see the sketch below this list)
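
Status bar controls in a VS Code extension are created with the standard vscode.window.createStatusBarItem API. The sketch below shows one way a status bar item could toggle completions on and off; the command id, label text, and the completionsEnabled flag are made up for illustration and are not the extension's actual identifiers.

```typescript
import * as vscode from "vscode";

// Minimal sketch of a status bar item that toggles completions.
// The command id and the enabled flag are illustrative, not llama.vscode's own.
export function activate(context: vscode.ExtensionContext): void {
  let completionsEnabled = true;

  const item = vscode.window.createStatusBarItem(vscode.StatusBarAlignment.Right, 100);
  item.command = "example.toggleCompletions";
  item.text = "$(check) Completions: on";
  item.show();

  context.subscriptions.push(
    item,
    vscode.commands.registerCommand("example.toggleCompletions", () => {
      completionsEnabled = !completionsEnabled;
      item.text = completionsEnabled
        ? "$(check) Completions: on"
        : "$(circle-slash) Completions: off";
    })
  );
}
```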

New Contributors

  • @ChetanXpro made their first contribution in #6

Full Changelog: v0.0.4...v0.0.5

v0.0.4

22 Jan 17:40

What's Changed

Full Changelog: v0.0.3...v0.0.4

v0.0.3

22 Jan 16:56
28de776

Overview

Minor README updates:

  • Improved the instructions
  • Added an example

v0.0.2

22 Jan 16:53
57f7f38

Overview

Minor README updates:

  • Improved the instructions
  • Added an example

v0.0.1

21 Jan 12:54
1c1d020

Overview

Initial implementation of the llama-vscode extension. See the README for instructions.

Feedback is appreciated!