Elephas Inbuilt Offline Models for 100% Privacy

Using Elephas Inbuilt Offline Models

Elephas comes with powerful built-in offline AI models, allowing you to chat and index files without an internet connection. Everything runs locally on your Mac — faster, private, and fully under your control.

Requirements

  • Apple Silicon (M1 or later)
  • macOS 14+
  • Paid Elephas plan
💡

No token limits when using offline models

 

Watch Video:

Creating a Brain with Offline Models

The primary way to leverage Elephas offline models is through Brain creation. Here's how to set up a Brain using offline models:

  1. Open the Create Brain window.
  2. Under Mode, select Offline.
Notion image
  3. Choose your models:
      • Indexing Model → Select from the offline embedding models (e.g., multilingual-e5-large). multilingual-e5-large gives better-quality results than the small variant.
      • Default Chat Model → Select from the offline chat models (e.g., llama3.2:3b). llama3.2:3b gives good-quality results and uses around 2 GB of memory while running.
Notion image
  4. Click Create Brain to finish.
 
💡

When you create a Brain with an offline model, Elephas takes care of the heavy lifting for you. If the model isn’t already on your machine, it will automatically download and start in the background — no extra clicks needed. Just set it and go!

Offline Model Indicator

The Offline Indicator helps you instantly confirm when Elephas is running fully offline.

When a Brain is created using offline models, a crossed-out Wi-Fi (wifi.slash) icon appears in the interface. This icon indicates that all AI processing is happening locally on your device, with no internet usage.

Where You’ll See the Offline Indicator

The offline icon appears in the following places:

  • Next to the Brain name in the dashboard
    • Notion image
  • In the chat header while viewing a conversation
    • Notion image
  • When starting a new chat with an offline Brain
    • Notion image
This confirms that all indexing and chatting are occurring fully offline, keeping your private data secure and local.

File Indexing in Offline Brains

Once your offline Brain is created, the next step is to index your files.

Let’s take Apple Notes as an example — one of the most sensitive apps on your Mac. With Elephas offline indexing, you can securely connect and search your notes, all processed privately on your device.

Here’s how easy it is:

Method 1: Using “Add Files or Folders”

  1. Click the Knowledge Base button in the header.
  2. In the Knowledge Base panel, click Add Files or Folders at the top.
    Notion image
  3. The Add Sources sheet opens.
    Notion image
  4. Switch to the Apps tab to view available integrations.
    Notion image
  5. Select Apple Notes (or Notion, Obsidian, etc.).
    Notion image
  6. Choose specific folders or individual notes.
    Notion image
  7. Click Connect Notes.
    Notion image
 

Elephas will immediately start indexing your content locally using the selected offline embedding model.

Notion image
Notion image

Method 2: Using Connected Sources

  1. Open the Knowledge Base for your Brain.
  2. Click the Integrations card under CONNECTED SOURCES. If no integrations are connected yet, a ➕ icon will be shown.
    Notion image
  3. The Add Sources sheet opens directly to the Apps tab.
    Notion image
  4. Select Apple Notes (or any other supported integration).
  5. Choose the folders or notes you want to include.
    Notion image
  6. Click Connect Notes.
 
Your private Apple Notes will never leave the system. Everything is processed locally—offline and secure. 
 

For a step-by-step walkthrough, refer to Integrate Apple Notes with Elephas Super Brain.
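Under the hood, indexing means converting each note into an embedding vector so that related passages can be found by similarity search, entirely on your device. Elephas handles all of this for you; the sketch below is only a conceptual illustration of the same idea, using the multilingual-e5-large model through the open-source sentence-transformers library (this is not Elephas's actual implementation, and the sample notes are made up):

```python
# Conceptual sketch only: shows what "offline indexing" means in principle.
# Elephas does this internally; nothing here is its real code.
from sentence_transformers import SentenceTransformer, util

# Downloads once, then runs fully on-device with no network calls.
model = SentenceTransformer("intfloat/multilingual-e5-large")

notes = [
    "Meeting notes: ship the quarterly report by Friday.",
    "Recipe: lemon pasta with garlic and parmesan.",
]

# E5 models expect "passage:" and "query:" prefixes.
note_vectors = model.encode(
    [f"passage: {n}" for n in notes], normalize_embeddings=True
)
query_vector = model.encode(
    "query: When is the report due?", normalize_embeddings=True
)

# Cosine similarity picks the most relevant note, locally.
scores = util.cos_sim(query_vector, note_vectors)
print(notes[int(scores.argmax())])
```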

Using Super Chat with Offline Brains

After your files are indexed, you can start chatting with your Brain — fully offline.

  • Open your offline Brain from the sidebar.
  • The chat interface will open by default, showing your recent conversations in the chat listing.
  • Click on an existing conversation to continue, or use the "New Chat" button to start a new conversation.
  • Ask questions based on your indexed files, and Elephas will respond using the offline chat model.
Notion image

For detailed instructions, refer to our Elephas Super Chat support documentation.
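Conceptually, chatting with an offline Brain means your question plus the most relevant indexed passages are handed to the local chat model, so nothing leaves your Mac. Elephas does this internally; the sketch below only illustrates the idea, running the same llama3.2:3b model through the open-source Ollama runtime (an assumption made for illustration, not how Elephas's inbuilt models actually run):

```python
# Conceptual sketch only: offline, retrieval-grounded chat with a local model.
# Uses the open-source Ollama runtime (run `ollama pull llama3.2:3b` first);
# Elephas ships its own runtime and does not expose this API.
import ollama

retrieved_note = "Meeting notes: ship the quarterly report by Friday."
question = "When is the report due?"

response = ollama.chat(
    model="llama3.2:3b",
    messages=[
        {"role": "system", "content": "Answer using only the provided note."},
        {"role": "user", "content": f"Note:\n{retrieved_note}\n\nQuestion: {question}"},
    ],
)

# Everything above ran locally; no data left the machine.
print(response["message"]["content"])
```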

Memory Management with Offline Models

The following model management guide is optional, as Elephas handles model download and loading automatically. It is only needed if you want to stop a model to save memory.

💡

By default, Elephas Offline runs only one indexing model and one chat model at a time. This is done to save memory.

If you are using an Elephas built-in offline model as your chat model, keep in mind that:

  • These models use a significant amount of memory.
  • Once your chat session is complete, you can turn off or stop the offline chat model from Elephas Preferences → Offline tab → Chat Models.
    • Notion image

This helps free up memory and ensures smoother performance for other tasks.
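If you want to check how much headroom you have before starting or after stopping a model, you can glance at available memory (llama3.2:3b needs roughly 2 GB while loaded, as noted earlier). The snippet below is an optional sketch using the psutil library; Activity Monitor shows the same information without any code:

```python
# Optional sketch: check free memory before running a large offline chat model.
# The ~2 GB figure is the llama3.2:3b estimate mentioned above.
import psutil

available_gb = psutil.virtual_memory().available / 1024 ** 3
print(f"Available memory: {available_gb:.1f} GB")

if available_gb < 2:
    print("Low headroom: consider stopping the offline chat model in "
          "Preferences → Offline tab → Chat Models.")
```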

Managing Offline Models from Preferences

Elephas automatically handles offline model downloads and startup for you.

However, if you want to start, stop, delete, or switch offline models manually (to manage memory or performance), you can do so from Preferences.

👉 For a complete step-by-step guide, see: Managing Offline Models from Preferences

Configuring in Model Settings

Once you’ve started an offline model, you can choose it in Elephas Model settings:

  1. Open Preferences → Model Settings.
    Notion image
  2. Select the model under the Elephas Offline provider.
Notion image
Notion image

Pick the model you want for:

  • Chat → choose from offline chat models (e.g., Llama 3.2 3B Instruct).
  • Indexing → choose from offline embedding models (e.g., Multilingual E5 Large).
 
💡

With inbuilt offline models, Elephas gives you the power of AI anytime, anywhere — completely private and under your control.

 