
Ollama Model

This section explains how to create, configure, and test an Ollama Model data source in DataDios. Ollama is an open-source tool that allows you to run large language models (LLMs) locally. By configuring Ollama as a data source, you can integrate local AI models into your DataDios workflows.


Prerequisites

Before configuring an Ollama data source, ensure:

  1. Ollama is installed and running on your local machine or a server accessible from DataDios
  2. At least one model is pulled (e.g., ollama pull mistral:7b-instruct-q4_0)
  3. The Ollama API is accessible via the configured host and port (default: http://localhost:11434); see the quick check below
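
To confirm the API is reachable before you start, you can run a quick check from the machine that hosts DataDios; the commands below assume the default host and port.

# List the models the server knows about; a JSON response confirms
# the API is reachable on the configured host and port.
curl http://localhost:11434/api/tags

# The equivalent check from the CLI on the Ollama host itself.
ollama list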

Steps to Create and Test an Ollama Data Source

Step 1: Navigate to Data Sources

  1. Navigate to the Data Sources tab in DataDios

  2. You will see a list of existing data sources organized by type

    [Screenshot: Data Sources List]


Step 2: Create a Data Source

  1. Click + CREATE SOURCE (or the + button)

  2. In the data source type dropdown, expand Gen AI Models

  3. Select Ollama Model

    [Screenshot: Choose Ollama Model]


Step 3: Fill Connection Details

In the Create Data Sources form, provide the required parameters:

[Screenshot: Ollama Connection Form]

  • Name: A unique name for your Ollama data source (e.g., QuickStartOllama)
  • project_name: Project identifier for organizing your models
  • model_name: Select the Ollama model to use (e.g., mistral:7b-instruct-q4_0)
  • embedding_model: Select an embedding model for vector operations (e.g., mxbai-embed-large)
  • host: The Ollama server URL (e.g., http://localhost:11434 or your server IP)
  • port: The port number where Ollama is running (default: 11434)
  • schedule_sync: (Optional) A schedule for metadata synchronization (see Metadata Timeline)

JSON Configuration

You can also configure the data source using JSON format by clicking the Json toggle:

{
  "project_name": "ollama_local",
  "model_name": "mistral:7b-instruct-q4_0",
  "embedding_model": "mxbai-embed-large",
  "host": "http://localhost:11434",
  "port": "11434",
  "schedule_sync": ""
}

Step 4: Test Connection

  1. After entering details, click TEST CONNECTION
  2. Ensure the connection is validated successfully
  3. If the test fails, verify the following (manual checks are sketched below):
    • The Ollama service is running (ollama serve)
    • The host and port are correct
    • The specified model is available (ollama list)
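
If the built-in test keeps failing, the commands below approximate the same checks manually; they assume the default endpoint and the mistral:7b-instruct-q4_0 model from the example configuration.

# Confirm the server process is up; it responds with "Ollama is running".
curl http://localhost:11434/

# Send a minimal non-streaming request to the configured model.
curl http://localhost:11434/api/generate -d '{
  "model": "mistral:7b-instruct-q4_0",
  "prompt": "Say hello in one word.",
  "stream": false
}'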

Step 5: Save Data Source

  1. If the test succeeds, click CREATE to save the data source
  2. You will be redirected to the Data Sources Listing Page, where the Ollama data source will appear under Gen AI Models

Step 6: Update Data Source (Optional)

To modify an existing Ollama data source:

  1. Click the edit icon (pencil) next to your Ollama data source

  2. Update the connection parameters as needed

    [Screenshot: Update Ollama Data Source]

  3. Click TEST CONNECTION to verify the updated configuration

  4. Click SAVE to apply changes


Available Ollama Models

Common models you can use with Ollama include:

| Model | Description | Use Case |
| --- | --- | --- |
| mistral:7b-instruct-q4_0 | Mistral 7B instruction-tuned | General purpose, chat, coding |
| llama2 | Meta's LLaMA 2 | General purpose NLP tasks |
| codellama | Code-specialized LLaMA | Code generation and analysis |
| llama3 | Meta's LLaMA 3 | Advanced reasoning and chat |
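
You can pull and inspect any of these models from the command line before referencing them in a data source; the sketch below uses llama3 as an example.

# Download the model into the local Ollama store.
ollama pull llama3

# Display the model's details (parameters, template, license).
ollama show llama3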

For embedding models:

| Model | Description |
| --- | --- |
| mxbai-embed-large | High-quality text embeddings |
| nomic-embed-text | Fast, efficient embeddings |
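
As a rough illustration of what an embedding model returns once configured, the request below asks Ollama's embeddings endpoint for a vector; it assumes mxbai-embed-large has already been pulled.

# Request a vector for a piece of text; the response contains an
# "embedding" array of floats suitable for similarity search.
curl http://localhost:11434/api/embeddings -d '{
  "model": "mxbai-embed-large",
  "prompt": "DataDios connects local models to your data."
}'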

Using Ollama with AI Studio

Once configured, you can use your Ollama data source in AI Studio for:

  • Natural language queries on your data
  • Text generation and completion
  • Document analysis and summarization
  • Code generation assistance
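
One way to see what such a request looks like outside DataDios is to call the Ollama API directly; the summarization sketch below uses the example model from this guide, and the prompt text is purely illustrative.

curl http://localhost:11434/api/generate -d '{
  "model": "mistral:7b-instruct-q4_0",
  "prompt": "Summarize in one sentence: Ollama runs large language models locally and exposes them over an HTTP API.",
  "stream": false
}'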

For detailed information, refer to the AI Studio documentation.


Best Practices

  1. Use Local Network for better performance when running Ollama on a local server
  2. Choose Appropriate Models based on your hardware capabilities (smaller quantized models for limited resources)
  3. Test Connection before saving to ensure the Ollama server is accessible
  4. Monitor Resource Usage as LLMs can be memory-intensive (see the commands after this list)
  5. Keep Models Updated by periodically pulling the latest versions (ollama pull <model>)
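
For the resource-monitoring practice above, recent Ollama releases include a CLI command that shows which models are loaded and how much memory they occupy; a quick sketch:

# Show currently loaded models, their size, and whether they run on CPU or GPU.
ollama ps

# Remove a model you no longer need to free disk space.
ollama rm mistral:7b-instruct-q4_0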

Troubleshooting

| Issue | Solution |
| --- | --- |
| Connection refused | Ensure Ollama is running with ollama serve |
| Model not found | Pull the model first: ollama pull <model_name> |
| Timeout errors | Check network connectivity and firewall settings |
| Out of memory | Use a smaller quantized model or increase system RAM |
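
A common cause of "Connection refused" when DataDios runs on a different machine is that Ollama binds to localhost by default; the sketch below exposes it on all interfaces via the OLLAMA_HOST environment variable (the IP address shown is a placeholder for your server's address).

# Bind Ollama to all interfaces so a remote DataDios instance can
# reach it; review firewall rules before exposing the port.
OLLAMA_HOST=0.0.0.0 ollama serve

# From the DataDios host, verify reachability (placeholder IP).
curl http://192.168.1.50:11434/api/tags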