Search results
16 packages found
Run AI models locally on your machine with Node.js bindings for llama.cpp. Enforce a JSON schema on the model output at the generation level (see the sketch after the keyword list).
- llama
- llama-cpp
- llama.cpp
- bindings
- ai
- cmake
- cmake-js
- prebuilt-binaries
- llm
- gguf
- metal
- cuda
- vulkan
- grammar
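For reference, a minimal sketch of what generation-level JSON-schema enforcement looks like with these bindings, assuming the v3-style node-llama-cpp API (`getLlama`, `createGrammarForJsonSchema`) and a made-up model path; check the package docs before relying on exact names:

```ts
import {getLlama, LlamaChatSession} from "node-llama-cpp";

// Load a local GGUF model; the path is an assumption for this sketch.
const llama = await getLlama();
const model = await llama.loadModel({modelPath: "models/llama-3-8b.Q4_K_M.gguf"});
const context = await model.createContext();
const session = new LlamaChatSession({contextSequence: context.getSequence()});

// Build a grammar from a JSON schema so the output is valid by construction.
const grammar = await llama.createGrammarForJsonSchema({
    type: "object",
    properties: {
        answer: {type: "string"},
        confidence: {type: "number"},
    },
});

const response = await session.prompt("Summarize GGUF in one sentence.", {grammar});
console.log(grammar.parse(response)); // a typed object, not free-form text
```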
React Native bindings for llama.cpp
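A rough usage sketch, reconstructed from llama.rn's README as best recalled; treat `initLlama`'s option names and the `completion()` call shape as assumptions and verify against the package docs:

```ts
import {initLlama} from "llama.rn";

async function runLocalModel() {
    // The model path is hypothetical; ship or download a GGUF file first.
    const context = await initLlama({
        model: "file:///data/models/llama-3-8b.Q4_K_M.gguf",
        n_ctx: 2048,
    });

    // Second argument streams partial results as the model generates.
    const result = await context.completion(
        {prompt: "Hello from React Native!", n_predict: 64},
        (data) => console.log("token:", data.token),
    );
    console.log(result.text);
}
```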
llama.cpp GGUF file parser for JavaScript
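The parser's own API isn't shown here, so below is a plain-Node sketch of the fixed GGUF preamble such a parser reads (magic, version, tensor count, metadata KV count, per the public GGUF spec); the file path is made up:

```ts
import {openSync, readSync, closeSync} from "node:fs";

// Read just the fixed-size GGUF preamble: 4-byte magic, uint32 version,
// uint64 tensor count, uint64 metadata KV count (all little-endian).
function readGgufHeader(path: string) {
    const fd = openSync(path, "r");
    const buf = Buffer.alloc(24);
    readSync(fd, buf, 0, 24, 0);
    closeSync(fd);

    if (buf.toString("ascii", 0, 4) !== "GGUF") {
        throw new Error("Not a GGUF file");
    }
    return {
        version: buf.readUInt32LE(4),
        tensorCount: buf.readBigUInt64LE(8),
        metadataKvCount: buf.readBigUInt64LE(16),
    };
}

console.log(readGgufHeader("models/llama-3-8b.Q4_K_M.gguf"));
```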
Fork of llama.rn for ChatterUI
Libraries and a server for building AI applications. Adapters to various native bindings enable local inference. Integrate it into your application, or run it as a microservice (see the request sketch after the keyword list).
- local ai
- inference server
- model pool
- gpt4all
- node-llama-cpp
- transformers.js
- llama.cpp
- chatbot
- bot
- llm
- ai
- nlp
- openai api
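Given the "openai api" keyword, a hedged sketch of talking to such a local inference server through the official openai npm client; the base URL, port, and model name are assumptions for illustration:

```ts
import OpenAI from "openai";

// Point the official OpenAI client at a local, OpenAI-compatible server.
const client = new OpenAI({
    baseURL: "http://localhost:8080/v1", // assumed local endpoint
    apiKey: "not-needed-locally",        // local servers typically ignore this
});

const completion = await client.chat.completions.create({
    model: "local-model", // placeholder; use whatever the server's pool exposes
    messages: [{role: "user", content: "Hello from a local model pool!"}],
});

console.log(completion.choices[0].message.content);
```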
React Native bindings for llama.cpp
llama.cpp LLM Provider
llama.cpp LLM Provider
React Native bindings for llama.cpp
Run AI models locally on your machine with Node.js bindings for llama.cpp. Force a JSON schema on the model output at the generation level
- llama.cpp
- bindings
- ai
- llm
- metal
- cuda
- grammar
- json-grammar
llama.cpp LLM Provider - OpenAI Compatible
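llama.cpp's bundled llama-server exposes an OpenAI-compatible /v1/chat/completions endpoint, so a provider like this can be exercised with a plain fetch; the port below is llama-server's default but still an assumption here:

```ts
// Minimal OpenAI-compatible chat request against a local llama-server.
const res = await fetch("http://localhost:8080/v1/chat/completions", {
    method: "POST",
    headers: {"Content-Type": "application/json"},
    body: JSON.stringify({
        messages: [{role: "user", content: "Say hi in five words."}],
        temperature: 0.2,
    }),
});

const data = await res.json();
console.log(data.choices[0].message.content);
```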
Run AI models locally on your machine with Node.js bindings for llama.cpp. Force a JSON schema on the model output at the generation level
- llama
- llama-cpp
- llama.cpp
- bindings
- ai
- cmake
- cmake-js
- prebuilt-binaries
- llm
- gguf
- metal
- cuda
- grammar
- json-grammar
Node.js bindings for LlamaCPP, a C++ library for running language models.
Use `npm i --save llama.native.js` to run llama.cpp models on your local machine. Features a socket.io server and client that can run inference against the host of the model.
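A connection sketch only: socket.io-client's `io`/`on`/`emit` calls are real, but the server URL and the "prompt"/"token" event names are hypothetical placeholders, not this package's documented protocol:

```ts
import {io} from "socket.io-client";

// Connect to the inference server; URL and event names are hypothetical.
const socket = io("http://localhost:3000");

socket.on("connect", () => {
    socket.emit("prompt", {text: "Tell me about GGUF files."});
});

// Print streamed tokens as they arrive.
socket.on("token", (token: string) => process.stdout.write(token));
```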
A simple grammar builder compatible with GBNF (llama.cpp)
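For context, a tiny GBNF grammar (llama.cpp's grammar format) of the kind such a builder would emit, here just held in a TypeScript string:

```ts
// GBNF rules: each line is `name ::= expansion`; quoted strings are
// literals and `|` is alternation. This grammar forces the model to
// answer with exactly one of three words.
const gbnf = [
    'root ::= answer',
    'answer ::= "yes" | "no" | "maybe"',
].join("\n");

console.log(gbnf);
```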
Serve GGML 4/5-bit quantized LLMs based on Meta's LLaMA model over WebSocket with llama.cpp