Iggy's Mission Control

Senior Business Analyst · Mac mini M2
Gateway Online · Updated 20 Mar 2026
Agent Roster
Live from openclaw.json · 6 active agents · Updated 2026-03-16
🎮 3 DISCORD · 💬 3 TELEGRAM · 2 PLANNED
🎮
Discord — Primary Team
DAILY DRIVER
CHIEF OF STAFF main
👑
Ben
Personal Assistant · Orchestrator
Opus 4-6 🎮 #ben 💬 @bossRing_bot
First point of contact. Delegates to Ralph and Charlie. Cannot be spawned by others.
ralph
🔧
Ralph
Foreman · QA Manager
Sonnet 4-6 🎮 #ralph
QA, sign-off, monitoring, demo recording. Approves or sends back.
Can spawn: charlie (peer spawn)
charlie
🤖
Charlie
Infrastructure · Automation
Sonnet 4-6 🎮 #charlie
Builds infrastructure, portals, scripts, automation, cron jobs.
Can spawn: ralph
Automated Sub-agents (cron-spawned)
CRON 08:00
🔬
Scout
AI Research Agent
Haiku 3.5 🎮 #research Daily 08:00
Searches trending AI news, vibe coding tools, and agent developments. Posts brief to #research every morning.
Top-down spawn (Ben → team)
Peer spawn (Ralph ↔ Charlie)
Automated cron sub-agent
Ben cannot be spawned
🗺️
Discord Channel Map
CONTEXT ROUTING
📡 Direct Lines
Each agent owns its own channel — go here to talk directly to that agent
#ben 👑 Ben (main)
#ralph 🔧 Ralph
#charlie 🤖 Charlie
⚡ CORE-0 — Work Context
Work-specific channels — any agent can post here via message tool
#2_gcc GCC Epic / VSRISK-919
#1_ksv_adb KSV / ADB Epic
#0_misc Misc work topics
#templates-context Epic / Story templates
💡 How Cross-Channel Routing Works
Yes — a Telegram message can trigger Discord output. All agents share the same gateway. Ben triggered via Telegram can spawn Charlie as a sub-agent, who can then post results to any Discord channel (e.g. #charlie or #2_gcc). The trigger channel doesn't limit the output channel.
Context-aware routing is possible. Any agent can use the message tool to post to a specific channel based on the topic — e.g. GCC Epic work automatically goes to #2_gcc, infrastructure output goes to #charlie. This is an opt-in behavior — needs explicit instruction or a routing rule.
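Context-aware routing of this kind can be expressed as a simple keyword-to-channel map. A minimal Python sketch, assuming a hypothetical `pick_channel` helper; the channel names come from the channel map above, but the rule format is illustrative, not the actual openclaw.json schema:

```python
# Hypothetical routing rule: map topic keywords to Discord channels.
# Channel names are from the dashboard; the dict format and function
# are illustrative, not the real openclaw.json routing config.
ROUTING_RULES = {
    "gcc": "#2_gcc",        # GCC Epic / VSRISK-919 work
    "ksv": "#1_ksv_adb",    # KSV / ADB Epic work
    "adb": "#1_ksv_adb",
    "infra": "#charlie",    # infrastructure output
}

def pick_channel(topic: str, default: str = "#0_misc") -> str:
    """Return the target channel for a message about `topic`."""
    topic = topic.lower()
    for keyword, channel in ROUTING_RULES.items():
        if keyword in topic:
            return channel
    return default
```

An agent would run something like this before calling the message tool, so that the output channel follows the topic rather than the trigger channel.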
🔗
Full Spawn Permission Matrix
main ──▶ ralph · charlie (QA review + infra tasks)
opus ──▶ ralph · charlie (QA review + infra tasks)
ralph ──▶ charlie (delegate infra work)
charlie ──▶ ralph (request QA / sign-off)
No spawn access: work · firma · main (unspawnable)
💬
Telegram — Specialist Agents
ON DEMAND / MOBILE
Used sparingly, or for specific isolated contexts. These agents have no Discord presence — Telegram only.
Active
opus
🔮
Ring Opus
Heavy Lifter · Deep Reasoning
Model
Claude Opus 4-6
Channel
💬 Telegram · @bossRingOpus_bot
When to use
Complex analysis, architecture decisions, epic deep-dives. When Sonnet isn't cutting it. Highest quality output — slower and more expensive.
Sub-agent Access
charlie
Active
work
💼
Work
BA Work Agent
Model
Claude Sonnet 4-6
Channel
💬 Telegram · @StndMain_bot
When to use
Dedicated work context on mobile: Epics, User Stories, Jira/Confluence prep, refinement docs, meeting notes. Isolated from personal context.
Active
firma
🏢
Firma
Company Admin
Model
Claude Sonnet 4-6
Channel
💬 Telegram · @firmaVamoTamo_bot
When to use
Admin for Iggy's two companies. Travel expenses (putni troškovi), invoices, contracts, company paperwork. Fully isolated context — nothing leaks to personal or work agents.
🔭
Planned — Not Yet Active
Planned
🖥️
RTX 5090 — Link
Local GPU Compute
Ollama on Link PC · Qwen 2.5 32B / DeepSeek-R1 · Image gen, Whisper STT, local inference. 32GB VRAM · 1,792 GB/s.
Planned
🍎
Mac mini — Local
Overnight Memory Agent
Ollama on Mac mini · Llama 3.2 3B / Qwen 2.5 7B · Overnight memory consolidation, lightweight background tasks.

💡 Team Vision

Ben is the daily interface — Discord #ben for everything. Charlie and Ralph are the operational backbone. Telegram agents are specialists: pull in Ring Opus for heavy thinking, Work for isolated BA context, Firma for company admin. Any agent can write to any channel — routing is flexible. Future: local models on Link + Mac mini for overnight processing and GPU-heavy tasks.

📁 Projects
📅 Schedule Overview
All cron jobs & recurring tasks · Europe/Vienna
Memory System
Long-term
MEMORY.md
Curated knowledge · updated periodically
Daily Notes
memory/YYYY-MM-DD.md
Raw session logs · one per day

🧠 How It Works

  • Each session starts fresh — files are continuity
  • Daily notes capture raw events, decisions, findings
  • MEMORY.md = distilled long-term knowledge
  • Periodic heartbeat reviews consolidate daily → long-term
  • All 4 Telegram bots share the same workspace + memory files

🌙 Planned: Overnight Memory Agent

Run a local model (Llama 3.2 / Qwen 7B) on the Mac mini overnight to review daily notes, consolidate MEMORY.md, and clean up stale entries — zero API cost.
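The consolidation step could be sketched as a small script that feeds MEMORY.md plus the day's notes to a local Ollama model. A hypothetical sketch, not the planned implementation: the function names are made up, and it assumes `ollama run` reading the prompt from stdin.

```python
import subprocess

def build_consolidation_prompt(daily_notes: str, memory: str) -> str:
    """Assemble the review prompt; file layout follows memory/YYYY-MM-DD.md
    plus the curated MEMORY.md described above."""
    return (
        "Review today's raw notes and update the long-term memory file.\n"
        "Keep durable facts, drop stale entries, stay concise.\n\n"
        f"--- MEMORY.md ---\n{memory}\n\n--- daily notes ---\n{daily_notes}\n"
    )

def consolidate(daily_notes: str, memory: str, model: str = "llama3.2") -> str:
    """Pipe the prompt through a local Ollama model (zero API cost)."""
    prompt = build_consolidation_prompt(daily_notes, memory)
    result = subprocess.run(["ollama", "run", model], input=prompt,
                            capture_output=True, text=True, check=True)
    return result.stdout
```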

Cron Jobs
Scheduled automation running on the Mac mini via OpenClaw.
✅ Active
📧 CDOTTZ Email Check
08:00 · 12:00 · 18:00
Checks inbox for emails from office@cdottz.com · Classifies by body keyword (i/u/up/p) or forwarded sender · Uploads to CDOTTZ folders on Drive
Schedule: 0 8,12,18 * * * · Europe/Vienna · Agent: Charlie (Haiku)
🔍 Tunnel Watchdog
Every 5 min
Checks iggy-portal, picfun-portal, kitchen-portal — restarts any dead server or tunnel automatically
Schedule: every 5 min · Agent: Charlie (Haiku) · Silent unless restart needed
☀️ Morning Brief
Daily 08:00
Weather (Vienna + Vrbnik), AI news, workload summary → Telegram (Sonnet)
Schedule: 0 8 * * * · Europe/Vienna · Channel: Telegram
📋 Daily Digest
Daily 07:00
24h agent activity recap — completed tasks, in-progress, waiting on Iggy → Discord #daily-digest
Schedule: 0 7 * * * · Europe/Vienna · Channel: Discord
📝 Auto Load — loadHere.md (12:00)
Daily 12:00
Update loadHere.md with work activity (noon run) — work-only entries → Discord #general
Schedule: 0 12 * * * · Europe/Vienna · Channel: Discord
📝 Auto Load — loadHere.md (18:00)
Daily 18:00
Update loadHere.md with work activity (evening run) — work-only entries → Discord #general
Schedule: 0 18 * * * · Europe/Vienna · Channel: Discord
🚀 GitHub Push — agent branch
Daily 18:00
Push unpushed commits on agent branch to GitHub → Discord #general
Schedule: 0 18 * * * · Europe/Vienna · Channel: Discord
🔍 Google Docs API Tabs Check
Every 2 weeks
Web search for updates on Google Docs API tab creation support — community + official docs → Discord #general
Schedule: every 14 days · Next: ~Mar 16 · Channel: Discord
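For the multi-hour daily schedules above (e.g. `0 8,12,18 * * *` for the CDOTTZ check), the next run time can be computed in a few lines. An illustrative sketch using naive local times; a real scheduler would also handle the Europe/Vienna timezone, and `next_run` is not part of OpenClaw:

```python
from datetime import datetime, timedelta

RUN_HOURS = (8, 12, 18)  # from the CDOTTZ schedule: 0 8,12,18 * * *

def next_run(now: datetime, hours=RUN_HOURS) -> datetime:
    """Next top-of-hour run for a daily multi-hour cron schedule (minute 0)."""
    for h in hours:
        candidate = now.replace(hour=h, minute=0, second=0, microsecond=0)
        if candidate > now:
            return candidate
    # All of today's slots have passed -> first slot tomorrow.
    first = now.replace(hour=hours[0], minute=0, second=0, microsecond=0)
    return first + timedelta(days=1)
```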
🔧 Planned / Backlog
🌙 Nightly Memory Consolidation
Run a local/cheap model overnight to review daily notes, distill key learnings into MEMORY.md, and clean stale entries. Zero API cost.
🔍 Tunnel Watchdog
Check iggy-portal, picfun-portal, and kitchen-portal tunnels every few minutes — restart any that are down. Assign to local model when RTX 5090 is available.
📊 Weekly Work Summary
Every Friday: pull this week's loadHere.md entries and generate a structured weekly summary for hour reporting.
🖥️ Mac Mini Remote Access — Tailscale + Screen Sharing
Goal: View & control Mac mini from Link (Win PC) and MacBook — local + remote (travel).

Plan: macOS Screen Sharing + Tailscale mesh VPN

What's done: Nothing yet — the two Mac mini steps (Screen Sharing toggle + Tailscale install) require sudo/admin and couldn't run headlessly.

TODO (Iggy does manually):
1. System Settings → General → Sharing → toggle Screen Sharing ON
2. Install Tailscale on Mac mini — Mac App Store (search "Tailscale") OR Terminal: brew install --cask tailscale
3. Open Tailscale → Log in (tailscale.com, free account)
4. Install Tailscale on MacBook + Link PC (same account)
5. From MacBook: use built-in Screen Sharing app → connect to Mac mini's Tailscale IP
6. From Link (Windows): install RealVNC Viewer (free) → connect to Mac mini's Tailscale IP
🔍 Monitoring Dashboard
Job health · Error tracking · Last run status · Updated 2026-03-16
● 8 HEALTHY ⚠ 3 ERRORS
🚨
Jobs With Errors
📋 Daily Digest TIMEOUT
Schedule: 0 7 * * * · Europe/Vienna
Last run: timed out (120s limit exceeded)
Consecutive errors: 1
Delivery: → Discord #daily-digest
💡 Fix: Increase timeout or simplify the digest prompt
📝 Auto Load (12:00) DELIVERY FAILED
Schedule: 0 12 * * * · Europe/Vienna
Last run: completed but delivery failed
Consecutive errors: 2
Delivery: → Discord #general
💡 Fix: Check Discord #general channel permissions / delivery target
🚀 GitHub Push 9 CONSECUTIVE ERRORS
Schedule: 0 18 * * * · Europe/Vienna
Last run: delivery failed repeatedly
Consecutive errors: 9
Delivery: → Discord #general
💡 Fix: Same delivery issue as Auto Load — #general channel target may be misconfigured
Healthy Jobs
Job · Schedule · Model · Last Run Duration · Status
🔍 Tunnel Watchdog · Every 5 min · Haiku 3.5 · ~12s · ● OK
☀️ Morning Brief · Daily 08:00 · Sonnet · ~285s · ● OK
📧 CDOTTZ Email (08:00) · Daily 08:00 · Haiku 3.5 · ~11s · ● OK
📧 CDOTTZ Email (12:00) · Daily 12:00 · Haiku 3.5 · ~13s · ● OK
📧 CDOTTZ Email (18:00) · Daily 18:00 · Haiku 3.5 · ~12s · ● OK
📝 Auto Load (18:00) · Daily 18:00 · Sonnet · ~48s · ● OK
🔍 Ralph Repo Monitor · Mon 09:00 · Sonnet · ~18s · ● OK
🔍 Google Docs API Check · Every 14 days · Sonnet · ~119s · ● OK
Daily Schedule Timeline (Europe/Vienna)
07:00 📋 Daily Digest → Discord #daily-digest ⚠ ERROR
08:00 ☀️ Morning Brief → Telegram  ·  📧 CDOTTZ Email → Discord
09:00 🔍 Charlie Monitoring Sweep → Discord #charlie (Mon-Fri)
12:00 📧 CDOTTZ Email → Discord  ·  📝 Auto Load → Discord #general ⚠ ERROR
18:00 📧 CDOTTZ Email → Discord  ·  📝 Auto Load → Discord  ·  🚀 GitHub Push ⚠ ERROR
24/7 🔍 Tunnel Watchdog — every 5 min (silent unless restart needed)
📅
Weekly / Periodic
🔍 Ralph Repo Monitor
Every Monday 09:00 · Checks snarktank/ralph for updates
🔍 Google Docs API Check
Every 14 days · Checks if tab creation API is available yet

🤖 Charlie's Daily Monitoring Sweep

Every weekday at 09:00, Charlie runs a monitoring sweep: lists all cron jobs, checks for errors, compares against the dashboard calendar, and reports any discrepancies. If everything is healthy, he stays silent. If something's broken, he alerts in #charlie.

🗂️ Kanban Board
📋 Backlog 0
⚡ In Progress 0
🛠️ In Development 0
🧪 Testing 0
✅ Done 0
Ideas Backlog
Local AI — Hardware & Models Investigation
🔬 Models to Watch
Qwen 3 · Kimi K2 · DeepSeek-R1 · Llama 3.1 405B
MoE models are game-changers — 1T params but only 32B active per token
Frontier Open Models
🧠 Qwen 3 (Alibaba) MoE
235B total / 22B active · 128 experts, 8 active per token · 128K context
Thinking + non-thinking modes · Agentic tool use · 100+ languages
VRAM: ~14GB active (Q4) — fits RTX 5090 easily · ~130GB full model load for Mac Studio
🌙 Kimi K2 (Moonshot AI) MoE
1T total / 32B active · Trained with Muon optimizer on 15.5T tokens
Best-in-class agentic capabilities · Tool use, reasoning, autonomous problem-solving
VRAM: ~20GB active (Q4) — fits RTX 5090 · ~550GB full model → needs 512GB Mac Studio
🐋 DeepSeek-R1 MoE
671B total / 37B active · Chain-of-thought reasoning
Competitive with o1 on math/code benchmarks
VRAM: ~22GB active (Q4) — fits RTX 5090 · ~370GB full → fits 512GB Mac Studio
🦙 Llama 3.1 405B (Meta) Dense
405B dense — ALL parameters active every token (no MoE)
GPT-4 class · Largest truly open-weight dense model
VRAM: ~230GB Q4 — needs Mac Studio 512GB (won't fit any single GPU)
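The VRAM figures above follow a rough rule of thumb: Q4 quantization stores about 0.5 bytes per parameter, plus overhead for KV cache and quantization metadata. A sketch of that estimate; the ~15% overhead factor is an assumption, and real footprints vary by quant format:

```python
def q4_gb(params_billion: float, overhead: float = 1.15) -> float:
    """Rough Q4 footprint in GB: ~0.5 bytes/param plus ~15% overhead.
    The overhead factor is a guess; real numbers vary by quant format."""
    return params_billion * 0.5 * overhead

# Sanity check against the cards above (all in the right ballpark):
#   Qwen 3 235B      -> ~135 GB (listed: ~130 GB)
#   DeepSeek-R1 671B -> ~386 GB (listed: ~370 GB)
#   Llama 3.1 405B   -> ~233 GB (listed: ~230 GB)
```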

💡 Why MoE Changes Everything

MoE (Mixture of Experts) models have huge total parameter counts but only activate a fraction per token. This means:

  • Speed: Only 22-37B params compute per token → fast inference
  • Quality: 1T total params = the model has vastly more knowledge stored
  • Catch: ALL parameters must be loaded into memory, even though only a few activate. So you need the RAM to hold the full model.
  • RTX 5090 (32GB): Too small to hold a full MoE model — the active experts change from token to token, so offloading the rest to system RAM kills speed. In practice it's limited to models whose full weights fit in 32GB.
  • Mac Studio 512GB: Loads the ENTIRE model → full quality at moderate speed
Hardware Comparison
🟢 RTX 5090
You already own this
VRAM: 32GB GDDR7
Bandwidth: 1,792 GB/s
Price: ~€2,000
Max model: Qwen 2.5 32B (Q4), Qwen 3 22B active
Speed: ⚡⚡⚡⚡⚡ Fastest
Best for: Interactive tasks, image gen, Whisper
🟣 Mac Studio M3 Ultra 512GB
Investigation target
Memory: 512GB unified (= VRAM)
Bandwidth: 819 GB/s
Price: ~€10,000+
Max model: Kimi K2 1T, Llama 405B, DeepSeek-R1 671B
Speed: ⚡⚡⚡ Moderate
Best for: Overnight batch, full MoE models, zero API cost
🟡 NVIDIA RTX 6000 Ada
Workstation GPU (current gen)
VRAM: 48GB GDDR6
Bandwidth: 960 GB/s
Price: ~€7,000
Max model: 32B Q8 or 70B Q3
Speed: ⚡⚡⚡⚡ Fast
Best for: Bigger models than 5090, still fast
🔵 PC: 2× RTX 5090
Dual GPU setup
VRAM: 64GB combined
Bandwidth: ~3,584 GB/s combined
Price: ~€4,500 (GPUs) + PC
Max model: 70B Q4 across both GPUs
Speed: ⚡⚡⚡⚡ Fast (with tensor parallelism)
Best for: 70B models at high speed

📊 What Makes Sense?

Use Case · Best Hardware · Why
Interactive daily use · RTX 5090 ✅ · Already own it, fastest for ≤32B
Overnight memory agent (light) · Mac mini ✅ · Already own it, 7B model is enough
Overnight deep reasoning (70B+) · Mac Studio 512GB · Silent, low power, huge model capacity
Full Kimi K2 (1T params) · Mac Studio 512GB · Only option that holds 550GB+ model
Fast 70B interactive · 2× RTX 5090 · 3.5 TB/s bandwidth, 70B fits in 64GB
Image gen / Whisper · RTX 5090 ✅ · CUDA optimized, fastest option
Mac Studio 512GB — What Would You Run?

🍎 The 512GB Sweet Spot

  • Kimi K2 (1T/32B active) — Best agentic model, but ~550GB at Q4 exceeds 512GB; it needs tighter quantization to fit.
  • DeepSeek-R1 (671B/37B active) — Best open reasoning model. Fits comfortably at Q4 (~370GB).
  • Llama 3.1 405B — Dense, all params active. ~230GB Q4. Fits easily. GPT-4 class.
  • Qwen 3 235B (full load) — ~130GB Q4. Could run alongside Whisper + other tools simultaneously.
  • Multiple models at once: Load Qwen 3 + Whisper + embedding model all in memory

⚡ Speed Reality Check

819 GB/s bandwidth on M3 Ultra. For a 405B Q4 model (~230GB):

  • ~3.5 tokens/sec — readable but not instant
  • Good enough for: overnight batch processing, long analysis, memory consolidation
  • Not great for: interactive chat, real-time responses
  • Waiting for M4 Ultra? Likely ~1,000+ GB/s → ~4.5 tok/sec, a meaningful improvement
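The speed estimates above come from a simple bound: every decoded token must stream the active weights from memory once, so tokens/sec is at most bandwidth divided by bytes read per token (full weights for dense models, active-expert weights for MoE). A sketch of that arithmetic:

```python
def max_tok_per_sec(bandwidth_gb_s: float, weights_read_gb: float) -> float:
    """Upper bound on decode speed: each token streams the active weights
    from memory once. Ignores KV cache and compute, so real speed is lower."""
    return bandwidth_gb_s / weights_read_gb

# M3 Ultra (819 GB/s) on dense Llama 405B Q4 (~230 GB read per token):
#   -> ~3.6 tok/s, matching the ~3.5 tok/s estimate above.
# Same machine on a MoE model with ~22 GB of active weights per token:
#   -> ~37 tok/s ceiling, which is why MoE matters so much here.
```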

💰 Cost Comparison (once, not recurring)

  • Mac Studio M3 Ultra 512GB: ~€10,000-12,000
  • 2× RTX 5090 PC build: ~€6,000-7,000 (but only 64GB VRAM, can't run 405B)
  • RTX 6000 Ada (48GB): ~€7,000 for GPU alone (still only 48GB)
  • API cost equivalent: At ~$15/M tokens (Opus), €10k buys ~600M tokens. If you burn 1M tokens/day = 600 days of API. Mac Studio pays for itself in ~2 years of heavy use.
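The break-even arithmetic can be made explicit. A sketch that, like the estimate above, treats EUR and USD as roughly 1:1:

```python
def breakeven_days(hardware_eur: float, usd_per_m_tokens: float,
                   tokens_per_day_m: float, eur_per_usd: float = 1.0) -> float:
    """Days of API spend until the hardware cost is recovered.
    eur_per_usd = 1.0 treats EUR ~ USD, as the estimate above does."""
    cost_per_day_eur = tokens_per_day_m * usd_per_m_tokens * eur_per_usd
    return hardware_eur / cost_per_day_eur

# €10,000 Mac Studio vs Opus at ~$15/M tokens, burning 1M tokens/day:
#   -> ~667 days (~1.8 years), in line with the "~2 years" figure above.
```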
Skills Inventory
7 Ready · 1 Blocked · 43 Not Installed
🕹️ Agent HQ — Live View
Pixel art office · agent status · live activity
Ben 🦁 · Orchestrating team...
Ralph 🐱 · Reviewing outputs...
Charlie 🐶
Scout 🤖
Live Activity Last hour
Ben · team update
Updated Team dashboard with org chart + Scout agent
just now
Ben · config change
Upgraded to Opus 4-6, set spawn permissions
15 min ago
Ralph · comms check
Confirmed sub-agent communication working
20 min ago
Charlie · monitoring
Daily monitoring sweep job created
30 min ago
Scout · scheduled
AI research brief set for 08:00 daily
30 min ago
Ben · portal update
Monitoring dashboard deployed with error tracking
35 min ago
🦁 Ben
Working
Opus 4-6 · Chief of Staff
🐱 Ralph
Idle
Sonnet 4-6 · QA Manager
🐶 Charlie
Idle
Sonnet 4-6 · Infrastructure
🤖 Scout
Next: 08:00
Haiku 3.5 · AI Research
Copilot
🎯 Vision
Real-time meeting intelligence — live transcript with speaker diarization, rolling summaries, action items, and sentiment analysis. Switchable between cloud and local models. Works with or without the Link PC.
Operating Modes
📡
Full Mode
Video + Audio · Link PC required
Planned
Elgato video capture via OBS + audio from meetings. Periodic screenshots for screen content analysis (slides, shared docs). Requires Link PC online.
🎤
Audio-Only Mode
Mic capture · Any device
Phase 0
Captures audio via browser mic on any device (Mac Mini, work laptop, phone). Speaker on loud so mic picks up both sides. No Link PC dependency.
🔁
Replay Mode
Upload recording · Post-meeting
Planned
Upload a meeting recording for post-meeting analysis. Full transcript, summary, and action item extraction.
Architecture
Layer 1
🎙️ Capture
Browser mic (standalone) · OBS + Elgato (Full mode) · File upload (Replay)
Layer 2
🔊 Speech-to-Text
Cloud: Deepgram Nova-3 · Local: NVIDIA Parakeet v3 on RTX 5090
Layer 3
🧠 Intelligence
Cloud: Claude · Local: LLM on RTX 5090 · Rolling summaries, action items, decisions
Layer 4
🖥️ Dashboard
Web UI · Live transcript with speaker colors · Summary panel · Controls & switches
STT Models
☁️
Deepgram Nova-3
Cloud · ~$0.0043/min
Phase 0
Sub-second latency. Built-in diarization. WebSocket streaming. Best cloud option for real-time transcription.
🟢
NVIDIA Parakeet v3
Local · RTX 5090 · Free
Phase 3
TensorRT optimized for NVIDIA GPUs. Native diarization + timestamps. Via NeMo toolkit. Best local option for RTX hardware.
Distil-Whisper
Local · Fallback
Backup
6x faster than Whisper Large with accuracy within ~1% WER of it. Good fallback if Parakeet has compatibility issues. No native diarization.
Config Switches
🎙️ Audio Source
Browser mic · Elgato · Phone · File
📹 Video Capture
On / Off
🔊 STT Engine
Deepgram (cloud) · Parakeet (local)
🧠 Analysis Model
Cloud (Claude) · Local (LLM)
⏯️ Live Mode
Real-time · Post-meeting
😊 Sentiment & Tone
On / Off · Default: Off
✅ Action Items
On / Off · Default: On
📝 Rolling Summary
On / Off · Default: On
🔗 Context Linking
Link to epics/Jira · Default: Off
👥 Speaker Diarization
On / Off · Default: On
🖼️ Screenshot Analysis
On / Off (video mode) · Default: On
📄 Auto-Report
Post-meeting report · Default: On
Presets
Fast
Transcript + Action Items only
No sentiment, no context linking, no screenshots. Minimal latency. Best for quick standups.
📊
Full Analysis
Everything on
All switches enabled. Sentiment, context linking, screenshots, auto-report. Best for important workshops/refinements.
🎯
Custom
Pick and choose
Toggle individual switches. Save custom presets for different meeting types.
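The presets above are just bundles of switch values. A hypothetical encoding as Python dicts; the switch names mirror the Config Switches list, but the format is illustrative, not the copilot's real config schema:

```python
# Defaults per the Config Switches section above (names are illustrative).
BASE = {
    "sentiment": False, "action_items": True, "rolling_summary": True,
    "context_linking": False, "diarization": True,
    "screenshot_analysis": True, "auto_report": True,
}

PRESETS = {
    # Fast: transcript + action items only, minimal latency.
    "fast": {**BASE, "rolling_summary": False,
             "screenshot_analysis": False, "auto_report": False},
    # Full Analysis: everything on.
    "full_analysis": {k: True for k in BASE},
}

def make_config(preset: str, **overrides) -> dict:
    """'Custom' = start from a preset (or the defaults) and toggle switches."""
    cfg = dict(PRESETS.get(preset, BASE))
    cfg.update(overrides)
    return cfg
```

Saving a custom preset would then just mean storing the resulting dict under a new key in `PRESETS`.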
Phased Rollout
🎤 Phase 0 — Cloud Audio MVP Up Next
Browser-based audio recorder (work laptop or phone) → stream to Mac Mini → Deepgram Nova-3 STT → Simple web UI with live transcript + speaker labels. No Link PC needed.
Storage: ~/Documents/meeting-copilot/ on Mac Mini
🧠 Phase 1 — Add Intelligence Planned
Rolling summaries via Claude. Action item extraction. Post-meeting report generation. Context linking to epics. Sentiment/tone as optional toggle.
📡 Phase 2 — Link PC + Video Planned
OBS on Link PC captures Elgato video + audio. Periodic screenshots → vision model reads screen content (slides, shared docs). Screen content alongside transcript.
🖥️ Phase 3 — Local Models Planned
NVIDIA Parakeet v3 on RTX 5090 for local STT. Local LLM for summaries. Cloud ↔ local toggle in UI. Fallback: Distil-Whisper.
🔬 Phase 4 — Advanced Analysis Planned
Speaker enrollment (recognize voices). "What did they mean" intent analysis. Cross-meeting trend tracking. Meeting type auto-detection.
🗂️ Backlog
🎤 Speaker Identification / Enrollment
Voice enrollment — short clips of each team member to build speaker profiles. Without this, transcript only shows "Speaker 1, 2…"
🔊 Better Audio Routing
Extract higher quality audio directly from Teams call (virtual audio cable / VB-Cable / BlackHole) instead of relying on speaker + mic.
⏱️ Latency Fine-Tuning
Optimize end-to-end latency (STT + diarization + summarization). Target: minimize delay between spoken words and transcript appearing.
💾 Storage Strategy
Currently: ~/Documents/meeting-copilot/ on Mac Mini. Long-term: auto-cleanup policy, retention rules, archival for large recordings.
🖥️ Dedicated Dashboard
Move out of Mission Control into its own portal with dedicated URL. Link from Portals section.
System Setup
✅ Running — 4 bots
Telegram
Ring · Ring Opus · Work · Firma
✅ Running
Anthropic / Claude
Sonnet 4-6 (default) · Opus 4-6
✅ LaunchAgent
Gateway
Port 18789 · Loopback only
✅ ON
macOS Firewall
Enabled Feb 2026
🐙 GitHub — Agent Account Done
benmachinanode-create · ben.machinanode@gmail.com
Collaborator on think-ai-link/0_CORE · Agent branch only
Git identity: ben.machina / ben.machinanode@gmail.com
📧 Agent Email Done
ben.machina@agentmail.to — Send-only
Used for morning briefs, reports, service registrations.
🖥️ Hardware
Mac mini M2 — 16GB / 256GB · OpenClaw home base
Link (Private PC) — RTX 5090 · OBS · Cursor · Elgato
Work laptop — Jira · Confluence
🔐 Tailscale — Secure Remote Access Planned
Private VPN mesh between devices. No port forwarding needed.
Free for personal use. Not yet configured.
📧 CDOTTZ — Email Processing
📨 Source
From: office@cdottz.com
Forwarded to: ben.machina@agentmail.to
Trigger: Sender match + has attachment
📁 Destination
Drive: CDOTTZ / {folder}
Naming: {prefix}_YYYYMMDD.pdf
Prefixes: izvod / ira / ura / ura_priv / putni
Active Rules
Trusted Sender · Drive Folder · Type · Naming · Status
kdi@pbz.hr · CDOTTZ / 0_izvodi · Bankovni izvodi (bank statements) · izvod_YYYY_NNN.pdf · ✅ Live
office@cdottz.com + body: i · CDOTTZ / 01_IRA · IRA — izlazni računi (outgoing invoices) · ira_YYYYMMDD.pdf · ✅ Live
office@cdottz.com + body: u · CDOTTZ / 02_URA · URA — ulazni računi (incoming invoices) · ura_YYYYMMDD.pdf · ✅ Live
office@cdottz.com + body: p · CDOTTZ / 03_putniNalog · Putni nalozi (travel orders) · putni_YYYYMMDD.pdf · ✅ Live
office@cdottz.com + body: up · CDOTTZ / 02_URA_PRIV · URA PRIV — private incoming invoices · ura_priv_YYYYMMDD.pdf · ✅ Live
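The rules in this table amount to a small lookup from body keyword to Drive folder and filename prefix. A minimal sketch of that classification step; `classify` and the exact-match-on-body assumption are illustrative, not the actual cron job's code (which also handles the kdi@pbz.hr sender rule):

```python
import datetime

# Body keyword -> (Drive folder, filename prefix), per the Active Rules table.
BODY_RULES = {
    "i":  ("CDOTTZ/01_IRA", "ira"),
    "u":  ("CDOTTZ/02_URA", "ura"),
    "up": ("CDOTTZ/02_URA_PRIV", "ura_priv"),
    "p":  ("CDOTTZ/03_putniNalog", "putni"),
}

def classify(body: str, received: datetime.date):
    """Classify a forwarded office@cdottz.com email by body keyword and
    build the {prefix}_YYYYMMDD.pdf filename. Unknown keywords return None,
    which would land in the dashboard's 'Unknown' bucket."""
    keyword = body.strip().lower()
    if keyword not in BODY_RULES:
        return None
    folder, prefix = BODY_RULES[keyword]
    return folder, f"{prefix}_{received:%Y%m%d}.pdf"
```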
Activity Log
Latest first · Checked 3× daily (08:00 · 12:00 · 18:00) · Agent: Charlie
0 Uploaded · 0 Unknown · 0 Failed · Last Check
Received · Processed · Subject · Folder · File · Status · Drive