OpenCoworkAI/open-codesign is ranked by live GitHub momentum and cross-source evidence. It gained 34 stars in the last 24 hours, for a momentum score of 24.4.
+34 stars 24h | +286 7d
0 mentions in 24h | 0 sources
0/5 channels firing
no linked package yet
last commit 1d ago
// KNOWN REPO · PACKAGE · LAUNCH · SITE SURFACES
Each channel contributes 0-1. Per-channel tiers:
- GitHub: breakout 1.0 / hot 0.7 / rising 0.4
- HN: front page 1.0 / ≥3 mentions 0.7 / 1-2 mentions 0.4
- Bluesky: ≥5 mentions 1.0 / 2-4 mentions 0.7 / 1 mention 0.4
- dev.to: ≥3 articles 1.0 / 2 articles 0.7 / 1 article 0.4
- Reddit: corpus-normalized 48h velocity
* Reddit bar shows a per-repo velocity proxy (raw score / 100); the score formula uses the corpus-normalized version, so a single repo's bar may not match its contribution to the corpus-wide ranking.
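To make the tier table concrete, here is a minimal sketch of how the five channel contributions could combine into the "x/5 channels firing" figure. The thresholds and values come straight from the tiers above; the function and field names are illustrative, not the site's actual code.

```typescript
// Illustrative input shape; field names are hypothetical, not the site's API.
interface ChannelSignals {
  githubTier: "breakout" | "hot" | "rising" | null;
  hnFrontPage: boolean;
  hnMentions: number;        // HN mentions in the window
  blueskyMentions: number;
  devtoArticles: number;
  redditVelocity48h: number; // corpus-normalized, assumed to land in [0, 1]
}

// Each channel contributes 0-1, so the sum ranges over 0-5.
// Tier values are taken directly from the table above.
function channelScore(s: ChannelSignals): number {
  const github =
    s.githubTier === "breakout" ? 1.0 :
    s.githubTier === "hot"      ? 0.7 :
    s.githubTier === "rising"   ? 0.4 : 0;
  const hn =
    s.hnFrontPage     ? 1.0 :
    s.hnMentions >= 3 ? 0.7 :
    s.hnMentions >= 1 ? 0.4 : 0;
  const bluesky =
    s.blueskyMentions >= 5 ? 1.0 :
    s.blueskyMentions >= 2 ? 0.7 :
    s.blueskyMentions >= 1 ? 0.4 : 0;
  const devto =
    s.devtoArticles >= 3  ? 1.0 :
    s.devtoArticles === 2 ? 0.7 :
    s.devtoArticles === 1 ? 0.4 : 0;
  const reddit = Math.min(Math.max(s.redditVelocity48h, 0), 1); // clamp to 0-1
  return github + hn + bluesky + devto + reddit;
}

// Example: a "hot" GitHub tier, 2 HN mentions, 1 Bluesky mention,
// no dev.to articles, 0.2 normalized Reddit velocity:
// 0.7 + 0.4 + 0.4 + 0 + 0.2 = 1.7
channelScore({
  githubTier: "hot", hnFrontPage: false, hnMentions: 2,
  blueskyMentions: 1, devtoArticles: 0, redditVelocity48h: 0.2,
});
```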
No mentions on this channel in the last 7 days.
// QUIET HERE DOESN'T MEAN THE REPO IS DEAD — CHECK OTHER TABS
From a goal to a task DAG, automatically. TypeScript-native multi-agent orchestration with MCP and live tracing. Three runtime dependencies.
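As a sketch of the core idea only, assuming nothing about the project's real API: a task DAG is nodes with dependency edges, and execution starts each node once its dependencies resolve, so independent branches run concurrently.

```typescript
// Hypothetical task-DAG shape; names are illustrative, not the project's API.
interface Task {
  id: string;
  deps: string[];              // ids of tasks that must finish first
  run: () => Promise<string>;  // the agent work for this node
}

// Execute an (assumed acyclic) DAG: each task starts as soon as all of
// its dependencies resolve, so independent branches run in parallel.
async function executeDag(tasks: Task[]): Promise<Map<string, string>> {
  const byId = new Map<string, Task>(tasks.map(t => [t.id, t] as [string, Task]));
  const started = new Map<string, Promise<string>>();

  const start = (t: Task): Promise<string> => {
    let p = started.get(t.id);
    if (!p) {
      p = Promise.all(t.deps.map(d => start(byId.get(d)!))).then(() => t.run());
      started.set(t.id, p);
    }
    return p;
  };

  tasks.forEach(start);
  const results = new Map<string, string>();
  for (const [id, p] of started) results.set(id, await p);
  return results;
}
```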
AI gateway written in Go. Lightweight unified OpenAI-compatible API for OpenAI, Anthropic, Gemini, Groq, xAI & Ollama. LiteLLM alternative with observability, guardrails, streaming, and cost and usage tracking.
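Because the gateway is OpenAI-compatible, any stock OpenAI client should be able to target it by overriding the base URL. A minimal sketch; the host, port, path, and model id are placeholders, not the project's documented defaults:

```typescript
import OpenAI from "openai";

// Point a standard OpenAI client at the gateway instead of api.openai.com.
// The URL here is hypothetical; check the gateway's own docs.
const client = new OpenAI({
  baseURL: "http://localhost:8080/v1",
  apiKey: process.env.GATEWAY_API_KEY ?? "unused",
});

// The gateway routes the request to the configured upstream provider
// (OpenAI, Anthropic, Gemini, Groq, xAI, or Ollama) and records
// cost and usage along the way.
const completion = await client.chat.completions.create({
  model: "claude-sonnet-4", // hypothetical model id
  messages: [{ role: "user", content: "Hello" }],
});

console.log(completion.choices[0].message.content);
```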
The fastest local AI engine for Apple Silicon. 4.2x faster than Ollama, 0.08s cached TTFT (time to first token), 100% tool calling. 17 tool parsers, prompt cache, reasoning separation, cloud routing. Drop-in OpenAI replacement. Works with Claude Code, Cursor, Aider.
Run Claude Code 100% on-device with local AI on Apple Silicon. MLX-native Anthropic-API server, 65 tok/s Qwen 3.5 122B, Llama 3.3 70B, Gemma 4 31B. Private, offline, airgap-ready. Built for NDA / legal / healthcare workflows.
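The same pattern applies on the Anthropic side: since the server speaks the Anthropic Messages API, an Anthropic client can be pointed at it locally. A minimal sketch, where the URL and model id are placeholders; Claude Code and similar tools can presumably be redirected the same way via a base-URL override, as the description implies.

```typescript
import Anthropic from "@anthropic-ai/sdk";

// Point the Anthropic client at the local MLX server instead of
// api.anthropic.com. URL and model id are hypothetical placeholders.
const client = new Anthropic({
  baseURL: "http://localhost:8000",
  apiKey: "not-needed-locally",
});

// Requests never leave the machine, which is the point of the
// offline / airgapped workflows the description mentions.
const message = await client.messages.create({
  model: "local-model",       // whatever model the server exposes
  max_tokens: 1024,
  messages: [{ role: "user", content: "Summarize this NDA clause..." }],
});

console.log(message.content);
```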
LLM inference in C/C++
Hundreds of models & providers. One command to find what runs on your hardware.