The all-in-one Desktop & Docker AI application with built-in RAG, AI agents, and more. (JavaScript, updated Dec 21, 2024)
Harness LLMs with Multi-Agent Programming
A simple "Be My Eyes" web app with a llama.cpp/llava backend
Chrome extension to summarize or chat with web pages/local documents using locally running LLMs. Keep all of your data and conversations private. 🔐
Code with AI in VSCode, but you get to choose the AI.
Your fully proficient, AI-powered, local chatbot assistant 🤖
Text-to-speech API endpoint compatible with OpenAI's TTS API, using Microsoft Edge TTS to generate speech locally for free
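Because the server above mimics OpenAI's speech endpoint, any plain HTTP client can talk to it. A minimal sketch, assuming a hypothetical local address and port (the real path mirrors OpenAI's POST /v1/audio/speech; the model and voice names below are illustrative placeholders the server would map onto Edge TTS voices):

```python
import json
import urllib.request

# Hypothetical local server address; the real host/port depend on your deployment.
url = "http://localhost:5050/v1/audio/speech"

# Request body shaped like OpenAI's /v1/audio/speech payload.
payload = {
    "model": "tts-1",  # placeholder model name
    "input": "Hello from a local text-to-speech server.",
    "voice": "alloy",  # placeholder voice name
}

request = urllib.request.Request(
    url,
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)

# With the server actually running, this call would return audio bytes:
# audio = urllib.request.urlopen(request).read()
```

Since the wire format matches OpenAI's, existing OpenAI client libraries can also be pointed at the local URL instead of api.openai.com.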
A Python package for developing AI applications with local LLMs.
OpenAI-style, fast & lightweight local language model inference with documents
LLM story writer with a focus on high-quality long output based on a user-provided prompt.
Python library for instructing Large Language Models (LLMs) and reliably validating their structured JSON outputs, using Ollama and Pydantic, for deterministic work with LLMs.
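The validation half of that pattern can be sketched with Pydantic alone; the schema and the raw JSON reply below are hypothetical stand-ins for what an Ollama-served model might return after being prompted to emit JSON:

```python
from pydantic import BaseModel

# Hypothetical schema the model is instructed to fill; field names are illustrative.
class CityFact(BaseModel):
    city: str
    population: int

# Stand-in for a raw JSON reply from a local model.
raw_reply = '{"city": "Berlin", "population": 3700000}'

# Pydantic parses and type-checks the reply; a malformed reply raises
# ValidationError, which is the natural hook for re-prompting or retrying.
fact = CityFact.model_validate_json(raw_reply)
print(fact.city, fact.population)
```

Tying the schema to the prompt and retrying on ValidationError is what turns a free-form LLM reply into a dependable, typed result.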
Recipes for on-device voice AI and local LLM
Structured inference with Llama 2 in your browser
PalmHill.BlazorChat is a chat application and API built with Blazor WebAssembly, SignalR, and WebAPI, featuring real-time LLM conversations, markdown support, customizable settings, and a responsive design. This project supports Llama2 models and was tested with Orca2.
A lo-fi AI-first note taker running locally on-device
Run a local LLM from Hugging Face in React-Native or Expo using onnxruntime.
Infinite Craft, but in PySide6 and Python, with a local LLM (llama3 & others) using Ollama