virtual-insanity

260314 X (11 items)

budding aggregate 2026-03-14

260314 X (Twitter) roundup

RT by @hwchase17: I'll be talking about agentic AI and autonomous scientific dis


I'll be talking about agentic AI and autonomous scientific discovery at GTC next week ahead of Jensen's keynote, on Monday. Should be fun.

With @steipete, @vincentweisser, @hwchase17, @saranormous, @Alfred_Lin

Source: https://nitter.net/SGRodriques/status/2032442508152910060#m


There's a very cool "pregame" for GTC which has some awesome unscripted live con


There's a very cool "pregame" for GTC which has some awesome unscripted live conversations - am on one with a great group of folks, come by and watch

nvidia.com/gtc/pregame/


Sam Rodriques (@SGRodriques)

I'll be talking about agentic AI and autonomous scientific discovery at GTC next week ahead of Jensen's keynote, on Monday. Should be fun.

With @steipete, @vincentweisser, @hwchase17, @saranormous, @Alfred_Lin

Source: https://nitter.net/hwchase17/status/2032480152425583002#m


RT by @hwchase17: Will AI models eat agent frameworks?

OR

Will agent framework


Will AI models eat agent frameworks?

OR

Will agent frameworks be where the true value lies, on top of commoditized AI models?

-- @hwchase17 (see 6:26 onwards in the video for full version)


Video


Matt Turck (@mattturck)

Everything Gets Rebuilt: my conversation with Harrison Chase, CEO of @LangChain about agent harnesses, evals, runtimes, sandboxes, MCP and the future of the agent stack

00:00 Intro - meet @hwchase17 - at the Chase Center for the @daytonaio Compute conference

01:32 What changed in agents over the last year

03:57 Why coding agents are ahead

06:26 Do models commoditize the framework layer?

08:27 Harnesses, in plain English

10:11 Why system prompts matter so much

13:11 The upside — and downside — of subagents

15:31 Why a useful agent needs a filesystem

18:13 Additional core primitives of modern agents

19:12 Skills: the new primitive

20:19 What context compaction actually means

23:02 How memory works in agents

25:16 One mega-agent or many specialized agents?

27:46 The future of MCP

29:38 Why agents need sandboxes

32:35 How sandboxes help with security

33:32 How Harrison Chase started LangChain

37:24 LangChain vs LangGraph vs Deep Agents

40:17 Why observability matters more for agents

41:48 Evals, no-code, and continuous improvement

44:41 What LangChain is building next

45:29 Where the real moat in AI lives


Video

Source: https://nitter.net/mattturck/status/2032528435600396650#m


RT by @hwchase17: If you can imagine it, you can render it!

Great gen-ui exampl


If you can imagine it, you can render it!

Great gen-ui example by @CopilotKit built with @LangChain 🚀


CopilotKit🪁 (@CopilotKit)

Introducing Open Generative UI Repo 🌟

We built an open-source version of @claudeai's new feature for your own AI agents!

It's a template for building rich, interactive AI-generated UI with CopilotKit and @LangChain LangGraph.

Ask the agent to visualize algorithms, create 3D animations, render charts, or generate interactive diagrams — all rendered as live HTML/SVG inside a sandboxed iframe.

👾 Check it out: github.com/CopilotKit/OpenGe…


Video

Source: https://github.com/CopilotKit/OpenGenerativeUI
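The sandboxed-iframe idea mentioned above can be sketched roughly like this (a minimal illustration, not the CopilotKit implementation; `sandboxedFrame` is an invented helper): model-generated HTML is wrapped in an iframe whose `sandbox` attribute lets its scripts run while keeping them cut off from the host page.

```typescript
// Minimal sketch: wrap AI-generated HTML in a sandboxed iframe.
// "allow-scripts" lets the generated code execute; omitting
// "allow-same-origin" keeps it on an opaque origin, so it cannot
// touch the host page's DOM, cookies, or storage.
function sandboxedFrame(generatedHtml: string): string {
  // Escape & first, then ", so the markup survives inside srcdoc="...".
  const escaped = generatedHtml
    .replace(/&/g, "&amp;")
    .replace(/"/g, "&quot;");
  return `<iframe sandbox="allow-scripts" srcdoc="${escaped}"></iframe>`;
}
```

In a real app the returned markup would be rendered into the page (or the iframe built via `document.createElement`), but the security property is the same: the generated code runs, the host stays isolated.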


RT by @hwchase17: We're building with cutting edge tools to solve a critical mis


We're building with cutting edge tools to solve a critical mission. Security teams are falling behind, as hackers have 100x-ed the speed and scale of attacks through leveraging AI.

The only way to fight fire is with fire. That's why our entire stack is AI-native.


LangChain (@LangChain)

🚀 LangSmith for Startups Spotlight: @cogent_security

Cogent is building AI agents that protect the world's largest organizations from cyberattacks. One of the hardest problems in cybersecurity is going from finding a vulnerability to actually fixing it. Cogent is automating that entire process from end-to-end.

Cogent is already working with dozens of Fortune 1000 and Global 2000 enterprise customers such as major universities, hospitality brands, and consumer retailers.

Cogent uses LangSmith for production tracing and monitoring of their agents. The team leverages execution traces for usage insight and use-case categorization, self-refinement loops to diagnose eval failures, and online evaluators to flag undesired behavior.

Join their team if you want to build frontier AI for mission-critical problems 🤝 cogent.com/careers


Video

Source: https://nitter.net/cogent_security/status/2032145427378983161#m


RT by @hwchase17: AI is about to have a very big week.

Excited to co-host GTC P


AI is about to have a very big week.

Excited to co-host GTC Pregame Live this Monday with the one and only @saranormous.

We’re diving into three shifts that matter:

- Open models powering the ecosystem
- The agentic AI inflection point
- AI entering the physical world

With builders from @cohere, @MistralAI, @perplexity_ai, @LangChain, @openclaw and more.

If you care about where AI is actually heading, join us live starting at 8:00am PT: nvidia.com/gtc/pregame


NVIDIA (@nvidia)

x.com/i/article/203222730803…

Source: https://nitter.net/Alfred_Lin/status/2032495041969844634#m


RT by @hwchase17: Build your agent UI the way YOU like‼️

Connect it to any compo


Build your agent UI the way YOU like‼️
Connect it to any component library 📦 or bring your own components 🚀

Here's an example of how to use @LangChain_JS with AI Elements by @vercel 👇


LangChain JS (@LangChain_JS)

Built a full streaming AI chat UI 🤔 with reasoning tokens, live tool calls, and shimmer loading states 🤯 in under 50 lines of React‼️

@langchain/react + AI Elements (elements.ai-sdk.dev/ by @vercel) is the combo you didn't know you needed. 🧵

Source: https://nitter.net/bromann/status/2032488137461731723#m


RT by @jerryjliu0: Choosing between Skills and MCP tools for your AI agents? Her


Choosing between Skills and MCP tools for your AI agents? Here's an overview from @itsclelia and @tuanacelik

🔧 MCP tools offer deterministic API calls with fixed schemas - perfect for precise, predictable operations but require dev knowledge and introduce network latency
📝 Skills use natural language instructions stored locally - minimal setup required but open to LLM misinterpretation and hallucinations
⚖️ The real decision factor: how fast your domain evolves. Fast-changing environments favor MCP's single source of truth, while stable domains benefit from Skills' lightweight approach
🏗️ In practice, we found our documentation MCP provided better, always up-to-date context than custom skills for our coding agent use case

Read our full analysis of when to use each approach: llamaindex.ai/blog/skills-vs…
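As a rough sketch of the contrast above (invented names, not the llama_index or MCP SDK code): an MCP tool is a fixed, machine-validated schema, while a skill is just local prose the model interprets.

```typescript
// Illustration only: the two shapes being compared.

// An MCP-style tool declares a fixed JSON Schema, so calls are
// deterministic and can be validated before they hit any API.
const searchDocsTool = {
  name: "search_docs",
  description: "Search the product documentation index.",
  inputSchema: {
    type: "object",
    properties: {
      query: { type: "string" },
      limit: { type: "number" },
    },
    required: ["query"],
  },
};

// A skill is natural-language instructions stored locally: no schema,
// so the model may misread it, but it is trivial to write and edit.
const searchDocsSkill = `When the user asks about the product,
open docs/index.md, pick the most relevant page, and quote the
exact section rather than paraphrasing.`;
```

This is why the blog's "how fast does your domain evolve" criterion matters: the tool's schema must be kept in sync by a developer, while the skill is cheap to update but only as reliable as the model's reading of it.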

Source: https://nitter.net/llama_index/status/2032487366129233950#m


RT by @hwchase17: Finally @googlechrome v146 is out with web MCP support. I can


Finally @googlechrome v146 is out with web MCP support. I can now have a @LangChain_JS Deep Agent constantly browse my @X feed in the background and update a daily summary that I review at the end of the day instead of constantly scrolling through the app 🙌

Check out: github.com/christian-bromann…


Petr Baudis (@xpasky)

It took another two months but Chrome 146 is out since yesterday! And *that* means: with a single toggle, you can expose your current live browsing session via MCP and have your CLI agent do things in it.

Aaand I have been waiting to deal with my LI connects until this moment.

Source: https://github.com/christian-bromann/deepagent-x-feed-monitoring


RT by @jerryjliu0: Claude code is addicting because it’s like playing GTA with c


Claude code is addicting because it’s like playing GTA with cheat codes as a kid


cat (@_catwu)

my three favorite claude code shortcuts:

1. `!` prefix runs bash inline. the command + output land in context
2. `ctrl+s` stashes your draft. type something else, submit, and it pops back
3. `ctrl+g` opens the prompt (or plan) in $EDITOR for bigger edits

Source: https://nitter.net/disiok/status/2032644787799732617#m


literally the only thing preventing claude code taking over the world was the au


literally the only thing preventing claude code taking over the world was the auto-compaction at 200k

agi is here folks


Claude (@claudeai)

1 million context window: Now generally available for Claude Opus 4.6 and Claude Sonnet 4.6.

Source: https://nitter.net/jerryjliu0/status/2032577425864085779#m


Related notes

  • [[NVIDIA]]