LM Mini Documentation

LM Mini is a mobile chat client for LM Studio. It lets you chat with locally running large language models from your iOS or Android device.

🚧 Documentation in Progress

We're actively building out the documentation. Check back soon for detailed guides on every feature.

Quick Start

  1. Install LM Studio

    Download and install LM Studio on your computer. Load a language model.

  2. Start the API Server

    In LM Studio, go to the Developer tab and start the local server. The default address is localhost:1234.

  3. Install LM Mini

    Download LM Mini from the App Store or Google Play.

  4. Connect

    Open LM Mini and enter your computer's local IP address (e.g., 192.168.1.100:1234). Make sure both devices are on the same Wi-Fi network.

  5. Start Chatting

    Select a model and start a conversation. That's it!
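Because LM Studio's local server speaks an OpenAI-compatible API, the connection from steps 1–4 can also be exercised programmatically. A minimal sketch in Python (standard library only; the host address and model ID are placeholders for your own setup):

```python
import json
import urllib.request

def build_chat_request(model: str, prompt: str) -> dict:
    """Build an OpenAI-style chat completion payload."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }

def chat(host: str, model: str, prompt: str) -> str:
    """POST one chat turn to LM Studio's /v1/chat/completions endpoint."""
    req = urllib.request.Request(
        f"http://{host}/v1/chat/completions",
        data=json.dumps(build_chat_request(model, prompt)).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["choices"][0]["message"]["content"]

# Example (requires a running LM Studio server on the same network):
# print(chat("192.168.1.100:1234", "your-model-id", "Hello!"))
```

This is the same request LM Mini issues on your behalf, which is why both devices must be able to reach each other on the network.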

Requirements

Component    Requirement
LM Studio    v0.3.0 or later
iOS          iOS 15.5 or later
Android      Android 8.0 (API 26) or later
Network      Same Wi-Fi network (or use LM Connect for remote access)

Chat Interface

LM Mini provides a full-featured chat interface with:

  • Markdown rendering with syntax highlighting
  • LaTeX math equation support
  • Conversation folders and organization
  • Custom themes, avatars, and backgrounds
  • Adjustable font size
  • Export conversations to JSON or text

Detailed documentation coming soon.

Model Management

Browse, download, and manage models directly from LM Mini:

  • View all loaded and available models
  • Download new models from LM Studio's catalog
  • Switch models mid-conversation
  • Configure model load parameters (context length, flash attention, etc.)
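The model list comes from LM Studio's OpenAI-compatible /v1/models endpoint, which you can also query yourself. A sketch (the host is a placeholder; the response shape follows the OpenAI model-list convention):

```python
import json
import urllib.request

def parse_model_ids(listing: dict) -> list:
    """Extract model IDs from an OpenAI-style /v1/models response:
    {"object": "list", "data": [{"id": "..."}, ...]}"""
    return [entry["id"] for entry in listing.get("data", [])]

def list_models(host: str) -> list:
    """Fetch the list of available model IDs from a running LM Studio server."""
    with urllib.request.urlopen(f"http://{host}/v1/models") as resp:
        return parse_model_ids(json.load(resp))

# Example (requires a running LM Studio server):
# print(list_models("192.168.1.100:1234"))
```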

Detailed documentation coming soon.

Voice Chat

LM Mini supports hands-free voice conversations:

  • Speech-to-text input using native recognition
  • Text-to-speech output with adjustable speed and pitch
  • Continuous conversation mode
  • Multiple TTS providers

Detailed documentation coming soon.

Tool Calling & MCP

Extend your AI with real-world capabilities:

  • Web search integration
  • MCP (Model Context Protocol) server support
  • Ephemeral MCP servers via HTTP
  • Integrated MCPs from LM Studio's mcp.json
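For orientation, LM Studio's mcp.json follows the widely used mcpServers shape shown below; the server name, command, and package are illustrative placeholders only, so check LM Studio's own documentation for the exact schema:

```json
{
  "mcpServers": {
    "example-server": {
      "command": "npx",
      "args": ["-y", "@example/mcp-server"]
    }
  }
}
```

MCP servers declared here become available to LM Mini through LM Studio, alongside any ephemeral HTTP servers you add from the app.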

Detailed documentation coming soon.

Image Generation

Connect to AUTOMATIC1111 / Stable Diffusion WebUI to generate images:

  • Generate images from chat prompts
  • Full control over parameters (steps, CFG scale, sampler, etc.)
  • High-res upscaling support
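These controls map onto the WebUI's REST API (started with the --api flag). A rough Python sketch of a txt2img call; the field names follow AUTOMATIC1111's /sdapi/v1/txt2img endpoint, and the host is a placeholder:

```python
import base64
import json
import urllib.request

def build_txt2img_payload(prompt: str, steps: int = 20,
                          cfg_scale: float = 7.0,
                          sampler_name: str = "Euler a") -> dict:
    """Build a payload for AUTOMATIC1111's /sdapi/v1/txt2img endpoint."""
    return {
        "prompt": prompt,
        "steps": steps,
        "cfg_scale": cfg_scale,
        "sampler_name": sampler_name,
        "width": 512,
        "height": 512,
    }

def txt2img(host: str, prompt: str) -> bytes:
    """Generate one image and return it as raw PNG bytes."""
    req = urllib.request.Request(
        f"http://{host}/sdapi/v1/txt2img",
        data=json.dumps(build_txt2img_payload(prompt)).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    # The API returns generated images as base64-encoded strings.
    return base64.b64decode(body["images"][0])
```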

Detailed documentation coming soon.

Remote Access

Access your LM Studio server from anywhere using LM Mini Connect:

  • No port forwarding or VPN needed
  • Secure encrypted relay
  • Simple QR code pairing
  • Works across networks

See the LM Connect Setup Guide for details.

Cloud Providers

LM Mini Pro users can connect to cloud AI providers:

  • OpenAI (GPT-4o, o1, etc.)
  • Anthropic (Claude)
  • Google (Gemini)
  • Groq, Mistral, OpenRouter, and more

Detailed documentation coming soon.

LM Connect Setup Guide

  1. Download LM Mini Connect

    Download the desktop companion app for your platform (macOS, Windows, or Linux).

  2. Enter LM Studio Address

    Set the LM Studio address (usually localhost:1234). If authentication is enabled, enter the API token.

  3. Click Connect

    The app will connect to the relay server and show a QR code.

  4. Scan in LM Mini

    Open LM Mini on your phone → Settings → Remote Access → Scan QR Code.

  5. Done!

    Your phone is now connected to LM Studio via the secure relay. This works from any network.

Troubleshooting LM Connect

Connection fails immediately

Make sure LM Studio is running and the API server is started. The default address is localhost:1234. If you changed the port, update it in LM Mini Connect.

QR code scan doesn't work

Remote access, including QR code pairing, requires an LM Mini Pro subscription. Make sure your account is upgraded before scanning.

High latency

The relay server adds a small amount of latency. For the best experience, ensure both your computer and phone have a stable internet connection.

Model Parameters

Fine-tune model behavior with these parameters:

Parameter        Range        Description
Temperature      0.0 – 2.0    Controls randomness. Lower = more focused, higher = more creative.
Top P            0.0 – 1.0    Nucleus sampling. Limits to tokens with cumulative probability ≤ top_p.
Top K            1 – 100      Limits to the K most likely tokens at each step.
Min P            0.0 – 1.0    Minimum probability threshold relative to the most likely token.
Max Tokens       1 – 32768    Maximum number of tokens in the response.
Repeat Penalty   1.0 – 2.0    Penalizes repeated tokens to reduce loops.
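If you script against the server, the ranges in the table can be enforced before a request is sent. A small sketch; the snake_case parameter names follow common llama.cpp/OpenAI-style request fields and should be treated as assumptions, not LM Mini's internal names:

```python
# (min, max) bounds taken from the parameter table above
PARAM_RANGES = {
    "temperature": (0.0, 2.0),
    "top_p": (0.0, 1.0),
    "top_k": (1, 100),
    "min_p": (0.0, 1.0),
    "max_tokens": (1, 32768),
    "repeat_penalty": (1.0, 2.0),
}

def clamp_params(params: dict) -> dict:
    """Clamp each sampling parameter into its documented range."""
    clamped = {}
    for name, value in params.items():
        low, high = PARAM_RANGES[name]
        clamped[name] = min(max(value, low), high)
    return clamped
```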

System Prompts

Customize the AI's behavior with system prompts. You can create and save a library of prompts for different use cases.
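In OpenAI-style chat requests, a system prompt is simply the first message in the conversation with the role "system". A minimal sketch of assembling such a message list:

```python
def build_messages(system_prompt: str, user_message: str) -> list:
    """Prepend a system prompt to a single-turn conversation."""
    return [
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": user_message},
    ]

# Example:
# build_messages("You are a concise assistant.", "Summarize this article.")
```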

Detailed documentation coming soon.

Appearance

Customize LM Mini's look and feel:

  • Light, dark, or system theme
  • Custom chat background images
  • User and assistant avatars
  • Adjustable chat font size (10pt — 24pt)
  • Per-conversation bubble colors

Detailed documentation coming soon.