03:59 · Dec 16, 2025 · Tue #AI #MCP Multi-model visual understanding MCP server. Supports GLM-4.5V, DeepSeek-OCR (free), and Qwen3-VL-Flash. Provides visual processing capabilities for AI coding models that do not support image understanding. https://github.com/JochenYang/luma-mcp
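For readers unfamiliar with how an MCP server like this gets wired up: MCP-compatible clients typically register servers in a JSON config under an `mcpServers` key, with a launch command and environment variables. The snippet below is only a sketch of that general shape — the actual launch command, package name, and key variable names for luma-mcp are not stated in the post, so `npx -y luma-mcp` and `API_KEY` are placeholders, not documented values.

```json
{
  "mcpServers": {
    "luma-mcp": {
      "command": "npx",
      "args": ["-y", "luma-mcp"],
      "env": {
        "API_KEY": "<your-vision-provider-key>"
      }
    }
  }
}
```

Check the repository's README for the real install command and configuration keys.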