260323 reddit collection
[r/ClaudeAI] Claude, realizing protests are going on right outside his office:
Claude, realizing protests are going on right outside his office:
Source: https://www.reddit.com/r/ClaudeAI/comments/1s0nxji/claude_realizing_protests_are_going_on_right/
[r/ClaudeAI] What a Claude Max weekly limit is actually worth in API dollars
I tracked 80 autonomous coding tasks and correlated per-task API costs against the weekly utilisation percentage to calculate the dollar value of a full weekly limit.
Results:
- Max 5x ($100/mo): weekly limit worth ~$523 in API pricing — about 20x what you pay
- Max 20x ($200/mo): weekly limit worth ~$1,100 — about 22x
- The $200 plan gives ~2x the weekly budget, not 4x (the 4x only applies to the 5-hour burst window)
Per-task costs (API-equivalent):
- Implementation stage: $2.66 avg
- Code review stage: $0.57 avg
- Median task: $4-5 total
These are lower-bound estimates — I also used Claude Code interactively on the same account during the measurement period, which consumed utilisation without appearing in the task cost data.
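The extrapolation behind these numbers can be sketched in a few lines. This is a hedged reconstruction of the stated method (divide tracked API-equivalent spend by the fraction of the weekly limit it consumed); the task costs and utilisation figure below are illustrative placeholders, not the post's raw data.

```python
# Estimate the API-dollar value of a full weekly limit by extrapolating
# tracked per-task costs against the utilisation they consumed.
# Illustrative sketch of the post's method; all numbers are placeholders.

def weekly_limit_value(task_costs_usd, utilisation_consumed):
    """task_costs_usd: API-equivalent cost of each tracked task.
    utilisation_consumed: fraction (0-1] of the weekly limit those tasks used."""
    if not 0 < utilisation_consumed <= 1:
        raise ValueError("utilisation must be a fraction in (0, 1]")
    return sum(task_costs_usd) / utilisation_consumed

# e.g. 50 implementation + 50 review stages that consumed 32.3% of the limit
tasks = [2.66, 0.57] * 50
print(f"full weekly limit ≈ ${weekly_limit_value(tasks, 0.323):,.0f}")
# → full weekly limit ≈ $500
```

Because interactive use also consumed utilisation without being tracked, this kind of estimate is a lower bound, as the post notes.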
Methodology and full breakdown: https://botfarm.run/blog/claude-max-true-price/
Source: https://www.reddit.com/r/ClaudeAI/comments/1s0n5bf/what_a_claude_max_weekly_limit_is_actually_worth/
[r/ObsidianMD] I built an Obsidian plugin that runs Claude Code with academic re…
My wife is an academic and writes all her notes for papers in Obsidian. I kept watching her jump between the editor, terminal, browser, and academic databases over and over while working on a single paper, so I built this for her.
It’s called KatmerCode: Claude Code inside Obsidian as a sidebar chat.
It uses the Agent SDK, so you get the same core workflow as the terminal version, but next to your manuscript: tools, MCP servers, subagents, /compact, streaming, session resume, all of it.
The part I’m happiest with is the academic workflow. It ships with slash-command skills like:
- /peer-review — evaluates a manuscript across 8 criteria and generates an HTML review report
- /cite-verify — checks references against CrossRef, Semantic Scholar, and OpenAlex
- /research-gap — searches the literature and identifies promising gaps
- /journal-match — suggests target journals based on your manuscript and references
- /lit-search, /citation-network, /abstract too
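To give a sense of what a check like /cite-verify involves (the plugin's actual implementation isn't shown in the post, so this is a hypothetical sketch): look a cited title up in CrossRef's public works API and fuzzy-compare the best hit's title against the citation. The endpoint is CrossRef's real REST API; the function names and threshold are my own.

```python
# Hypothetical sketch of a citation check: query CrossRef's public works
# API for a cited title, then fuzzy-compare the top hit's title.
# The endpoint is CrossRef's real REST API; names/threshold are illustrative.
import json
import urllib.parse
import urllib.request
from difflib import SequenceMatcher

def titles_match(cited: str, found: str, threshold: float = 0.9) -> bool:
    """Case-insensitive fuzzy comparison of two paper titles."""
    return SequenceMatcher(None, cited.lower(), found.lower()).ratio() >= threshold

def crossref_best_match(title: str) -> dict:
    """Return CrossRef's top bibliographic match for a cited title."""
    url = ("https://api.crossref.org/works?rows=1&query.bibliographic="
           + urllib.parse.quote(title))
    with urllib.request.urlopen(url, timeout=10) as resp:
        return json.load(resp)["message"]["items"][0]

# A verifier would flag references where the best hit's title diverges:
print(titles_match("Attention Is All You Need",
                   "Attention is all you need"))  # → True
```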
It also does inline diff editing inside Obsidian, so when Claude changes a file you see word-level track changes with accept/undo controls instead of blind overwrites.
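Word-level track changes of this kind can be sketched with the standard library's diff machinery. This is an illustration of the concept only, not the plugin's actual renderer, and the `[-…-]`/`{+…+}` markers are an arbitrary choice.

```python
# Minimal sketch of word-level track changes: diff old vs new text
# token-by-token and tag deletions/insertions. Illustrative only;
# the plugin's real diff renderer is not shown in the post.
from difflib import SequenceMatcher

def word_diff(old: str, new: str) -> str:
    """Render word-level changes as [-deleted-] and {+inserted+} markers."""
    a, b = old.split(), new.split()
    out = []
    for op, i1, i2, j1, j2 in SequenceMatcher(None, a, b).get_opcodes():
        if op == "equal":
            out += a[i1:i2]
        if op in ("delete", "replace"):
            out += [f"[-{w}-]" for w in a[i1:i2]]
        if op in ("insert", "replace"):
            out += [f"{{+{w}+}}" for w in b[j1:j2]]
    return " ".join(out)

print(word_diff("the quick brown fox", "the slow brown fox jumps"))
# → the [-quick-] {+slow+} brown fox {+jumps+}
```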
The reports are one of my favorite parts: they open inside Obsidian or in the browser, with charts, tables, badges, and a consistent academic-style design.
Honest caveat: these are research aids, not oracles. Database coverage is imperfect, and a skill like /peer-review is not a substitute for a real reviewer. The value is in catching issues early and surfacing things you might otherwise miss.
This is still early (v0.1.0) and definitely has rough edges, but it’s already useful in a real workflow here.
Open source (MIT): https://github.com/hkcanan/katmer-code
If you write in Obsidian, especially for academic work, I’d genuinely love feedback on what would make something like this actually useful.
Source: https://www.reddit.com/r/ObsidianMD/comments/1s0njnb/i_built_an_obsidian_plugin_that_runs_claude_code/
[r/ObsidianMD] Electron (the framework used by Obsidian) has gotten much better support for modern desktop Linux recently
Electron (the framework used by Obsidian) has gotten much better support for modern desktop Linux recently
Source: https://www.reddit.com/r/ObsidianMD/comments/1s0meul/electron_the_framework_used_by_obsidian_has/
[r/ObsidianMD] I built Quilden — Free Obsidian sync plugin with E2E encryption, …
Hey r/ObsidianMD,
I built Quilden: sync your Obsidian vault with GitHub (setup in ~1 min) across any device, with end-to-end encryption, full history, and access from any browser (no install).
All-in-one:
- Easy GitHub sync (auto/manual): runs through GitHub's web API, so no extra libraries or installs are needed; it works on any device that Obsidian supports.
- End-to-end encryption (zero-knowledge): notes are stored encrypted on the server, so only the user can read them.
- Full vault + per-file history (restore anytime via Git)
- Conflict handling
- Browser access to your vault (work/school devices, no admin rights or third party software installs)
- Full device support (iOS, Android, Mac, Win, Linux), plus any device with a browser.
- Optional AI tools (a small one-time price that also supports the project)
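The zero-knowledge claim in the encryption bullet boils down to: the key is derived from the user's passphrase on-device, so the sync remote (GitHub, here) only ever stores ciphertext. A toy standard-library sketch of the key-derivation half follows; this is NOT Quilden's actual scheme and not production-grade crypto.

```python
# Toy illustration of the zero-knowledge idea: derive the key from the
# passphrase on-device, so the sync remote only ever sees ciphertext.
# NOT Quilden's actual scheme; do not use as real crypto.
import hashlib
import hmac
import os

def derive_key(passphrase: str, salt: bytes) -> bytes:
    """Slow KDF: the raw passphrase never leaves the device."""
    return hashlib.pbkdf2_hmac("sha256", passphrase.encode(), salt, 200_000)

salt = os.urandom(16)          # stored alongside the ciphertext
key = derive_key("correct horse battery staple", salt)
# An HMAC tag lets the client detect server-side tampering on pull:
tag = hmac.new(key, b"<note ciphertext>", "sha256").hexdigest()
print(len(key), len(tag))  # → 32 64
```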
Why: I kept getting locked out of my vault on restricted machines. This fixes that and gives a proper encrypted sync setup.
More: https://quilden.com
The plugin code and install instructions can be found here:
https://github.com/AA1labs/quilden-sync
The web editor can be accessed from the landing page above.
Some of you may have seen LiveMarker.site. This is a rebrand and a major upgrade: I was originally considering releasing two separate products, but decided to combine them into one.
Would love to hear your feedback.
(Please note this is a first release, so backing up your vault or testing on a duplicate vault first is advised.)
Source: https://www.reddit.com/r/ObsidianMD/comments/1s0kmie/i_built_quilden_free_obsidian_sync_plugin_with/
[r/LocalLLaMA] [[Alibaba]] confirms they are committed to continuously open-sourcing…
Source: https://x.com/ModelScope2022/status/2035652120729563290
Source: https://www.reddit.com/r/LocalLLaMA/comments/1s0pfml/alibaba_confirms_they_are_committed_to/
[r/LocalLLaMA] Honest take on running 9× RTX 3090 for AI (13↑)
I bought 9 RTX 3090s.
They’re still one of the best price-to-VRAM GPUs available.
Here’s the conclusion first:
1. I don’t recommend going beyond 6 GPUs
2. If your goal is simply to use AI, just pay for a cloud LLM subscription
3. Proxmox is, in my experience, one of the best OS setups for experimenting with LLMs
To be honest, I had a specific expectation:
If I could build around 200GB of VRAM, I thought I’d be able to run something comparable to Claude-level models locally.
That didn’t happen.
Reality check
Even finding a motherboard that properly supports 4 GPUs is not trivial.
Once you go beyond that:
- PCIe lane limitations become real
- Stability starts to degrade
- Power and thermal management get complicated
The most unexpected part was performance.
Token generation actually became slower when scaling beyond a certain number of GPUs.
More GPUs does not automatically mean better performance, especially without a well-optimized setup.
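One concrete mechanism behind this (my interpretation, not the OP's diagnosis): tensor parallelism all-reduces activations across GPUs at every layer, so on consumer boards without NVLink the PCIe topology, not VRAM, becomes the bottleneck as the card count grows. A hedged example of pinning an inference server to a 4-GPU subset; the vLLM flags are real, the model name is a placeholder.

```shell
# Serve on 4 of the 9 cards; tensor parallelism splits each layer across
# them, so every token step pays for inter-GPU communication over PCIe.
CUDA_VISIBLE_DEVICES=0,1,2,3 vllm serve some-org/some-70b-model \
  --tensor-parallel-size 4 \
  --gpu-memory-utilization 0.90
```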
What I’m actually using it for
Instead of trying to replicate large proprietary models, I shifted toward experimentation.
For example:
- Exploring the idea of building AI systems with “emotional” behavior
- Running simulations inspired by C. elegans inside a virtual environment
- Experimenting with digitally modeled chemical-like interactions
Is the RTX 3090 still worth it?
Yes.
At around $750, 24GB VRAM is still very compelling.
In my case, running 4 GPUs as a main AI server feels like a practical balance between performance, stability, and efficiency. (wake up 4way warriors!)
Final thoughts
If your goal is to use AI efficiently, cloud services are the better option.
If your goal is to experiment, break things, and explore new ideas, local setups are still very valuable.
Just be careful about scaling hardware without fully understanding the trade-offs.
Source: https://www.reddit.com/r/LocalLLaMA/comments/1s0p28x/honest_take_on_running_9_rtx_3090_for_ai/
[r/LocalLLaMA] MiniMax M2.7 Will Be Open Weights (353↑)
Composer 2-Flash has been saved! (For legal reasons that's a joke)
Source: https://www.reddit.com/r/LocalLLaMA/comments/1s0mnv3/minimax_m27_will_be_open_weights/
[r/LocalLLaMA] Impressive thread from /r/ChatGPT, where after ChatGPT finds out…
Impressive thread from /r/ChatGPT: after ChatGPT finds out there is no 7-Zip, tar, py7zr, apt-get, or Internet access, it just manually parsed and unzipped the .7z file from its hex data. What model + prompts would be able to do this?
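At the container level, the first step of that stunt is mechanical: .7z files begin with a fixed 6-byte signature, so the format is recognisable straight from a hex dump. A minimal sketch of that recognition step (illustrative; actually decoding the LZMA-compressed streams by hand is far more work):

```python
# A .7z archive starts with the 6-byte signature 37 7A BC AF 27 1C,
# so the container is identifiable directly from a hex dump.
SEVENZ_MAGIC = bytes.fromhex("377abcaf271c")

def looks_like_7z(hex_dump: str) -> bool:
    """Check whether a whitespace-separated hex dump starts with the 7z header."""
    data = bytes.fromhex("".join(hex_dump.split()))
    return data[:6] == SEVENZ_MAGIC

print(looks_like_7z("37 7A BC AF 27 1C 00 04"))  # → True
```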
Source: https://www.reddit.com/r/LocalLLaMA/comments/1s0mmsn/impressive_thread_from_rchatgpt_where_after/
[r/MachineLearning] [D] Has industry effectively killed off academic machine learning?
This wasn't always the case, but today almost any research topic in machine learning you can imagine is being done MUCH BETTER in industry, thanks to a glut of compute and endless international talent.
The only things left in academia seem to be:
- niche research that digs very deeply into how some older models work (e.g., GANs, spiking NNs), knowing full well it will never see the light of day in real applications, because those applications are being done better by whatever industry is throwing billions at.
- crazy scenarios that would basically never happen in real life (all the research ever done on white-box adversarial attacks, for instance (or any-box, tbh); there are tens of thousands of papers).
- straight-up misapplication of ML, especially for applications requiring actual domain expertise like flying a jet plane.
- surveys of models coming out of industry, which by the time they are published cover models that are already deprecated and basically non-existent. In other words, ML archaeology.
There is potentially revolutionary research out there, like using ML to decode how animals communicate, but most of academia would never allow it: it is considered crazy and doesn't immediately lead to a paper, because it would require actual research (like whatever that 10-year-old Japanese butterfly researcher is doing).
Also, notice that researchers and academic faculty are overwhelmingly moving to industry, becoming dual-affiliated, or even founding their own pet startups.
I think ML academics are in a real tight spot at the moment. Thoughts?
Source: https://www.reddit.com/r/MachineLearning/comments/1s0hcit/d_has_industry_effectively_killed_off_academic/
Related notes
- [[260324_reddit]]
- [[260321_reddit]]
- [[260328_reddit]]
- [[260326_reddit]]
- [[Alibaba]]
- [[260305_xt]]
- [[260219_xt]]
- [[260326_x]]
- [[260324_x]]
- [[260323_rss]]
- [[260323_x]]
- [[250123_xt]]
- [[260312_xt]]
- [[260225_xt]]
- [[260218_xt]]
- [[260323_moltbook]]
- [[260322_moltbook]]
- [[260321_moltbook]]
- [[260324_rss]]
- [[260322_rss]]
- [[260321_rss]]
- [[260320_rss]]
- [[260322_reddit]] — keyword overlap
- [[260319_tg]] — keyword overlap