
The Toolbox

Core orchestration ships with Cohort. The full ecosystem adds communication, content, and monitoring tools for production deployments.

At a Glance

8 Built-In Tools -- all health-monitored
1 Dashboard -- start, stop, configure everything
$0 Default Cost -- local-first, API optional
0 Auto-Sends -- human approval gates on everything

Communication & Research

How your agents reach the outside world -- with you approving every outbound action.

Ecosystem tools -- available in full BOSS deployments, not included in pip install cohort
Email & Calendar (Enterprise)

Send emails via Resend, manage Google Calendar events, and post to social media (Twitter, LinkedIn, Facebook, Threads). Every outbound message goes through a human approval gate before sending -- nothing leaves without your explicit OK.

Approval workflow
Agent drafts --> Pending queue --> You approve --> Sent + logged
Full audit log: who requested it, what was sent, which channel, timestamps, approval status. Nothing is ever sent automatically.
YouTube (Enterprise)

Search YouTube videos by keyword, get detailed metadata (title, description, channel, view counts, duration), and auto-extract chapter timestamps from video descriptions. Agents use this for research, content analysis, and finding relevant tutorials.

Search -- keyword search with filters
Metadata -- title, views, duration, tags
Chapters -- auto-extract timestamps
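Chapter extraction typically keys off timestamp-prefixed lines in the video description. A minimal sketch of the idea -- the real tool's parsing rules may differ:

```python
import re

# Matches lines like "0:00 Intro" or "1:10:42 Deep dive"
CHAPTER_RE = re.compile(r"^(\d{1,2}:\d{2}(?::\d{2})?)\s+(.*)", re.MULTILINE)

def extract_chapters(description: str) -> list[tuple[str, str]]:
    """Pull (timestamp, title) pairs from a video description."""
    return CHAPTER_RE.findall(description)

desc = "0:00 Intro\n2:15 Setup\n10:42 Demo"
extract_chapters(desc)  # [('0:00', 'Intro'), ('2:15', 'Setup'), ('10:42', 'Demo')]
```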

Content & Intelligence

Automated pipelines that discover, analyze, and draft content -- on a schedule, with human approval.

Ecosystem tools -- available in full BOSS deployments, not included in pip install cohort
RSS & News Monitoring (Pro)

Automated tech intelligence pipeline. Monitors RSS feeds on a schedule, fetches new articles, scores them for relevance (0-10), and generates periodic briefings. Three stages: RSS Fetch, Analysis, and Weekly Digest.

Pipeline stages
RSS Fetch --> Analysis & Scoring --> Weekly Briefing
Configurable fetch windows (e.g., business hours only), per-day article caps, minimum relevance threshold. Scores: 7+ high, 4-6 moderate, <4 low priority.
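The scoring tiers above reduce to a simple threshold function. A sketch, assuming scores are numeric on the 0-10 scale:

```python
def priority(score: float) -> str:
    """Map a 0-10 relevance score to the tiers described above."""
    if score >= 7:
        return "high"
    if score >= 4:
        return "moderate"
    return "low"

priority(8.5)  # 'high'
priority(5)    # 'moderate'
priority(3.9)  # 'low'
```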
Social Media & Marketing (Pro)

Content marketing automation with multi-project support. Create multiple projects (brands/products), each with its own content strategy -- target audiences, content pillars, relevance keywords, and brand voice. AI drafts posts from high-scoring content. You approve before anything goes live.

Content pipeline
1. Fetch --> 2. Analyze --> 3. Draft --> 4. Pending --> 5. Approved
Sources: RSS feeds, Reddit, forums. Each project has its own strategy config. Daily limits prevent runaway automation. Nothing publishes without human approval.
Document Processing (Free)

Summarize and analyze documents using local AI. Three input methods: drag-and-drop files, paste a URL, or paste text directly. Four analysis modes: Summary, Outline, Key Points, and All.

Supported Formats

PDF Word Excel Images Video HTML CSV JSON Python JS/TS Markdown

Analysis Modes

Summary -- Concise paragraph of main ideas
Outline -- Hierarchical structure with headings
Key Points -- Bullet list of important facts
Image -- Describe visual content

Two processing backends: Ollama (local, free, private) or Claude API (cloud, paid, higher quality for complex documents). Toggle between them in the UI.
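The backend toggle amounts to routing the same request to one of two summarizers. A minimal dispatch sketch -- the function names are stand-ins, not Cohort's real interfaces:

```python
from typing import Callable

def local_summary(text: str) -> str:    # stand-in for an Ollama call
    return f"[local] {text[:40]}"

def cloud_summary(text: str) -> str:    # stand-in for a Claude API call
    return f"[cloud] {text[:40]}"

BACKENDS: dict[str, Callable[[str], str]] = {
    "ollama": local_summary,   # free, private, on-device
    "claude": cloud_summary,   # paid, stronger on complex documents
}

def summarize(text: str, backend: str = "ollama") -> str:
    """Route a document to the selected backend, defaulting to local."""
    try:
        return BACKENDS[backend](text)
    except KeyError:
        raise ValueError(f"unknown backend: {backend}") from None
```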

System Management

Monitor services, manage models, and keep everything running from one place.

System Health Monitor (Free)

Monitors all system services and lets you start, stop, and restart them from the dashboard. Click "Run All Checks" to scan every service, then use buttons to control individual services. Add new services by editing a YAML config file.
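A new service entry might look roughly like this -- the field names here are hypothetical, so consult the shipped YAML for the exact schema:

```yaml
# Illustrative entry only; field names are not the actual schema.
services:
  youtube:
    port: 8002
    health_endpoint: /health
    start_command: "python -m youtube_service"
    retries: 3        # health checks retry before marking a service Down
```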

Service status panel
Cohort Server (:5100) -- Healthy
Ollama (:11434) -- Healthy
Web Search (:8005) -- Healthy
Email & Calendar (:8001) -- Healthy
YouTube (:8002) -- Down
One-click Start / Stop / Restart per service. Health checks with automatic retry to avoid false positives. State persisted across restarts.
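The retry behavior is the important part: a single failed probe shouldn't flip a service to Down. A minimal sketch using only the standard library, with illustrative timings:

```python
import time
import urllib.request

def check_with_retry(url: str, retries: int = 3, delay: float = 1.0) -> bool:
    """Probe a health endpoint, retrying to avoid false 'Down' positives."""
    for attempt in range(retries):
        try:
            with urllib.request.urlopen(url, timeout=2):
                return True            # any HTTP response counts as healthy
        except OSError:
            if attempt < retries - 1:
                time.sleep(delay)      # brief pause before the next probe
    return False                       # only Down after every attempt failed
```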
LLM Manager (Free)

Manage local AI models via Ollama. View installed models, check GPU memory usage, pull new models, and remove ones you don't need. Shows real-time VRAM usage and which models are currently loaded in GPU memory.

92 tokens/sec -- qwen3.5:9b on RTX 3080
262K context window -- default model
6.6 GB VRAM usage -- fits consumer GPUs

Models loaded on-demand, unloaded after use. Quick-pull buttons for recommended models. GPU memory bar shows real-time VRAM allocation.

Work Queue (Free)

Priority-ordered task execution with single-active constraint. Tasks queue up, the highest-priority one runs, and results flow back to the requesting channel. No parallel execution surprises.

Task lifecycle
queued --> active --> completed
Priority levels: critical > high > medium > low. FIFO within each level. JSON-backed persistence -- survives restarts.
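The ordering rules above (priority first, FIFO within a level, one active task) fit naturally on a heap with a monotonic tie-breaker. A sketch of the idea, not Cohort's implementation:

```python
import heapq
import itertools

PRIORITY = {"critical": 0, "high": 1, "medium": 2, "low": 3}

class WorkQueue:
    """Single-active queue: highest priority first, FIFO within a level."""

    def __init__(self) -> None:
        self._heap: list[tuple[int, int, str]] = []
        self._seq = itertools.count()   # tie-breaker preserves FIFO order
        self.active: str | None = None

    def submit(self, task: str, priority: str = "medium") -> None:
        heapq.heappush(self._heap, (PRIORITY[priority], next(self._seq), task))

    def start_next(self) -> str:
        if self.active is not None:     # enforce the single-active constraint
            raise RuntimeError("a task is already active")
        _, _, self.active = heapq.heappop(self._heap)
        return self.active

    def complete(self) -> str:
        done, self.active = self.active, None
        return done
```

Persistence is omitted here; the real queue writes its state to JSON so it survives restarts.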

Setup & Onboarding

The easiest way for anyone -- not just developers -- to run local AI. One command handles everything.

Setup Wizard (Free)

One command. Cohort detects your GPU, picks the right AI model, installs everything, and gets you talking to your first agent in under 2 minutes. No configuration files. No terminal expertise. No prior AI experience.

3s -- hardware detected
46s -- wizard complete
44s -- first AI response
0 -- config files to edit

Real recording -- fresh install to first AI conversation. No cuts, no edits.

Response Modes (Free)

Three tiers of response quality. Toggle with one click in the dashboard. Default is free and local.

[S] Smart -- fast local inference, no thinking tokens, 4K budget. Free.
[S+] Smarter (default) -- thinking tokens on, 16K budget. Best balance. Free.
[S++] Smartest -- local Qwen preprocesses -> distills -> cloud API. 70% fewer tokens. Your API key.
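The three tiers amount to a small settings table. A hypothetical mapping -- the mode names and the 4K/16K budgets come from the list above, but the field names and the Smartest-tier budget are assumptions:

```python
# Hypothetical config table; not Cohort's actual settings schema.
MODES = {
    "smart":    {"thinking": False, "budget_tokens": 4_000,  "backend": "local"},
    "smarter":  {"thinking": True,  "budget_tokens": 16_000, "backend": "local"},
    # Budget for the cloud stage is not documented; assumed here.
    "smartest": {"thinking": True,  "budget_tokens": 16_000, "backend": "local+cloud"},
}
DEFAULT_MODE = "smarter"
```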

Ready to Build?

Core tools ship with pip install cohort. Ecosystem tools available in full deployments.