Overview

A personal AI assistant that runs DeepSeek models locally, requires no internet connection, and works entirely on-device.

Why It Exists

Cloud-based AI services such as OpenAI's ChatGPT or Anthropic's Claude process every conversation on their providers' servers, collect usage data, and apply content moderation the user cannot control. I wanted a personal, 100% private AI companion that runs strictly on my own hardware and remains usable even without an internet connection.

Architecture & Decisions

Built on top of Ollama to run DeepSeek-R1 models efficiently on local hardware. The backend is written in Rust for low-latency communication with the local AI server. The frontend is a cross-platform Tauri application, which uses far less RAM than typical Electron-style web wrappers because it relies on the OS webview instead of bundling a browser.
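To make the backend-to-Ollama link concrete, here is a minimal sketch of how a Rust backend could build a request for Ollama's local HTTP API (Ollama listens on localhost:11434 by default, and /api/generate is its text-generation endpoint). The helper function and the model tag are illustrative, not the project's actual code; a real implementation would use a JSON library such as serde_json for proper escaping.

```rust
// Sketch: build the JSON body for Ollama's /api/generate endpoint.
// NOTE: naive string formatting here is for illustration only; it does
// not escape quotes in the prompt. Use serde_json in real code.
fn build_generate_request(model: &str, prompt: &str) -> String {
    // "stream": false asks Ollama to return a single JSON object
    // instead of a stream of chunks, keeping the client logic simple.
    format!(
        "{{\"model\":\"{}\",\"prompt\":\"{}\",\"stream\":false}}",
        model, prompt
    )
}

fn main() {
    let body = build_generate_request("deepseek-r1:8b", "Hello!");
    println!("{}", body);
    // POST this body to http://localhost:11434/api/generate with any
    // HTTP client (e.g. the reqwest crate) to receive a completion.
}
```

Because everything happens over loopback, no request ever leaves the machine, which is what makes the fully-offline guarantee possible.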

Key Features

  • 100% Offline execution (Zero cloud dependency)
  • Total privacy with zero data telemetry
  • Seamless integration with DeepSeek-R1
  • Hardware-accelerated local inference
  • Customizable system prompts and personas
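Custom system prompts and personas map naturally onto Ollama's Modelfile format. A minimal sketch (the model tag, persona wording, and parameter value are illustrative):

```
FROM deepseek-r1:8b
SYSTEM """You are a concise, privacy-focused personal assistant."""
PARAMETER temperature 0.7
```

Running `ollama create my-assistant -f Modelfile` registers the persona as a named local model, which the app can then select per conversation.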