intfeed.run

Agent Reflection
Cutting-edge research on how agents can reflect on their actions and improve.

Keywords:

self-reflection, chain-of-thought, self-evaluation, hindsight, tool usage

Starter Pack:

Agent Architectures
Foundational papers on different structures and designs for AI agents.
Tool-using LLMs
Applied LLM systems that leverage external tools and functions.

Keywords:

toolformer, function-calling, retrieval-augmented, tool-augmented agents

Starter Pack:

Multi-Agent Systems
Rare and valuable research on systems involving multiple interacting agents.

Keywords:

communication protocols, cooperation, self-play, evolutionary agents

Starter Pack:

Meta-cognition
Theory and implementation related to LLMs' understanding of their own knowledge and limitations.

Keywords:

self-consistency, confidence estimation, LLM alignment


We have built fused operator kernels for structured contextual sparsity based on the amazing works of LLM in a Flash (Apple) and Deja Vu (Zichang et al). We avoid loading and computing activations with feed-forward layer weights whose outputs will eventually be zeroed out. The result? We are seeing 5x faster MLP layer performance in transformers with 50% less memory consumption by skipping the sleeping nodes on every token prediction. For Llama 3.2, feed-forward layers accounted for 30% of total weights and forward-pass computation, resulting in a 1.6-1.8x increase in throughput.

Sparse LLaMA 3.2 3B vs LLaMA 3.2 3B (HuggingFace implementation):
- Time to First Token (TTFT): 1.51x faster (1.209s → 0.803s)
- Output Generation Speed: 1.79x faster (0.7 → 1.2 tokens/sec)
- Total Throughput: 1.78x faster (0.7 → 1.3 tokens/sec)
- Memory Usage: 26.4% reduction (6.125GB → 4.15GB)

Please find the operator kernels with differential weight caching open sourced at github/sparse_transformers. PS: We will be actively adding kernels for int8, CUDA and sparse attention.

submitted by /u/Economy-Mud-6626 [link] [comments]
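For readers unfamiliar with contextual sparsity, here is a minimal PyTorch-style sketch of the underlying idea only (not the kernels from the linked repo): a cheap predictor guesses which FFN neurons will be active for the current token, and only those rows and columns are loaded and computed. The shapes, the ReLU activation, and the top-fraction threshold are illustrative assumptions.

```python
import torch

def sparse_ffn(x, w_up, w_down, predictor, top_frac=0.1):
    """Illustrative contextual-sparsity FFN: compute only the neurons a
    lightweight predictor expects to fire for this token."""
    scores = predictor(x)                      # (d_ff,) cheap activity scores
    k = max(1, int(top_frac * scores.numel()))
    active = torch.topk(scores, k).indices     # indices of "awake" neurons

    h = torch.relu(w_up[active] @ x)           # only k rows of the up-projection
    return w_down[:, active] @ h               # only k columns of the down-projection

# Toy usage with made-up sizes; the predictor stands in for a trained low-rank probe.
d_model, d_ff = 64, 256
x = torch.randn(d_model)
w_up, w_down = torch.randn(d_ff, d_model), torch.randn(d_model, d_ff)
predictor = torch.nn.Linear(d_model, d_ff)
print(sparse_ffn(x, w_up, w_down, predictor).shape)
```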

reddit.com · 6/5/2025

Anyone tested it yet? submitted by /u/Proto_Particle [link] [comments]

reddit.com · 6/5/2025

submitted by /u/jacek2023 [link] [comments]

reddit.com · 6/5/2025

OpenAI could have taken steps to anonymize the chat logs but chose not to, only making an argument for why it "would not" be able to segregate data, rather than explaining why it "can’t." Surprising absolutely nobody, except maybe ChatGPT users, OpenAI and the United States own your data and can do whatever they want with it. ClosedAI have the audacity to pretend they're the good guys, despite not doing anything tech-wise to prevent this from being possible. My personal opinion is that Gemini, Claude, et al. are next. Yet another win for open weights. Own your tech, own your data. submitted by /u/iGermanProd [link] [comments]

reddit.com · 6/5/2025

https://preview.redd.it/60q8dt65k45f1.jpg?width=2048&format=pjpg&auto=webp&s=43ecedb7b3dbd093b9a02d2012200a4311f6a994
source: https://x.com/ArtificialAnlys/status/1930630854268850271
Amazing to have a local 8B model this smart on my machine! What are your thoughts? submitted by /u/ApprehensiveAd3629 [link] [comments]

reddit.com · 6/5/2025

Many people asked for this! Now I have a new step-by-step tutorial on GraphRAG in my RAG_Techniques repo on GitHub (16K+ stars), one of the world's leading RAG resources packed with hands-on tutorials for different techniques.

Why do we need this? Regular RAG cannot answer hard questions like "How did the protagonist defeat the villain's assistant?" (Harry Potter and Quirrell); it cannot connect information across multiple steps.

How does it work? It combines vector search with graph reasoning, using only vector databases (no separate graph database needed). It finds entities and relationships, expands connections using matrix operations, and uses AI to pick the right answers.

What you will learn:
- Turn text into entities, relationships and passages for vector storage
- Build two types of search (entity search and relationship search)
- Use matrix operations to find connections between data points
- Use AI prompting to choose the best relationships
- Handle complex questions that need multiple logical steps
- Compare results: GraphRAG vs simple RAG with real examples

Full notebook available here: GraphRAG with vector search and multi-step reasoning

submitted by /u/Nir777 [link] [comments]
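As a rough illustration of the "graph reasoning without a graph database" idea (this is not code from the tutorial), the sketch below expands entities returned by vector search over an adjacency matrix to pull in multi-hop neighbours whose passages would then be handed to the LLM. The entity names, edges, and two-hop limit are invented for the example.

```python
import numpy as np

# Toy entity graph as it might be extracted from text by an LLM.
entities = ["Harry", "Quirrell", "Voldemort", "Hogwarts"]
edges = [("Harry", "Quirrell"), ("Quirrell", "Voldemort"), ("Harry", "Hogwarts")]

idx = {e: i for i, e in enumerate(entities)}
A = np.zeros((len(entities), len(entities)))
for a, b in edges:
    A[idx[a], idx[b]] = A[idx[b], idx[a]] = 1.0

def expand(seed_names, hops=2):
    """Start from entities matched by vector search and follow the adjacency
    matrix for a few hops to collect related entities for the answer step."""
    v = np.zeros(len(entities))
    for name in seed_names:
        v[idx[name]] = 1.0
    reach = v.copy()
    for _ in range(hops):
        v = A @ v                               # one hop of graph expansion
        reach = np.maximum(reach, np.minimum(v, 1.0))
    return [entities[i] for i in np.nonzero(reach)[0]]

# "How did the protagonist defeat the villain's assistant?"
# Vector search might only match "Harry"; expansion surfaces Quirrell and Voldemort.
print(expand(["Harry"]))
```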

reddit.com · 6/5/2025

Looking at how DeepSeek is performing, I'm thinking of setting it up locally. What's the cheapest way to set it up locally so it will have reasonable performance (10-15 t/s)? I was thinking about 2x Epyc with DDR4-3200, because prices seem reasonable right now for 1TB of RAM, but I'm not sure about the performance. What do you think? submitted by /u/Wooden_Yam1924 [link] [comments]
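For a rough sanity check on that target: CPU token generation is mostly memory-bandwidth bound, so a back-of-envelope estimate looks like the sketch below. The bandwidth, active-parameter, and quantization figures are assumptions (DeepSeek-R1/V3 activates roughly 37B parameters per token; Epyc SP3 has 8 DDR4 channels per socket), and real numbers will land well below the theoretical ceiling once NUMA and software efficiency are factored in.

```python
# Back-of-envelope decode speed for a MoE model on CPU (all numbers are assumptions).
channels_per_socket = 8            # Epyc SP3 memory channels
sockets = 2
bw_per_channel_gb_s = 25.6         # DDR4-3200 theoretical per channel
total_bw = channels_per_socket * sockets * bw_per_channel_gb_s   # ~410 GB/s peak

active_params = 37e9               # ~37B active params per token (MoE)
bytes_per_param = 0.56             # roughly a 4.5-bit quant
gb_per_token = active_params * bytes_per_param / 1e9             # ~21 GB read per token

print(f"theoretical ceiling: {total_bw / gb_per_token:.1f} tok/s")
print(f"at ~40% efficiency : {0.4 * total_bw / gb_per_token:.1f} tok/s")
```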

reddit.com · 6/5/2025

The Qwen team has been killing it. Every new model is a heavy hitter and every new model becomes SOTA for that category. I've been seeing way more fine tunes of Qwen models than LLaMa lately. LocalQwen coming soon lol? submitted by /u/Due-Employee4744 [link] [comments]

reddit.com · 6/5/2025

Some interesting tricks in the paper to make it good at a specific scientific domain; it has cool applications like retrosynthesis (how do I get to this molecule?) or reaction prediction (what do I get from A + B?), and everything is open source! submitted by /u/clefourrier [link] [comments]

reddit.com · 6/5/2025

As many of you probably know, Town of Salem is a popular game. If you don't know what I'm talking about, you can read the game_rules.yaml in the repo. My personal preference has always been to moderate rather than play among friends. Two weeks ago, I had the idea to make LLMs play this game to have fun and see who is the best. Imo, this is a great way to measure LLM capabilities across several crucial areas: contextual understanding, managing information privacy, developing sophisticated strategies, employing deception, and demonstrating persuasive skills.

I'll be sharing charts based on a simulation of 100 games. For a deeper dive into the methodology, more detailed results and more charts, please visit the repo: https://github.com/summersonnn/Town-Of-Salem-with-LLMs

Total dollars spent: ~$60, half of which went to the new Claude models. Looking at the results, I see those $30 were spent for nothing :D

Scoring:
- Vampire points: if vampires win and a vampire is alive at the end, that vampire earns 1 point; if vampires win but the vampire is dead, they receive 0.5 points.
- Peasant survival rate: sum the total number of rounds survived across all games that this model/player has participated in and divide by the total number of rounds played in those same games.
- Win ratios are self-explanatory.

Quick observations:
- The new DeepSeek, even the distilled Qwen, is very good at this game.
- Claude models and Grok are the worst.
- GPT-4.1 is also very successful.
- Gemini models are average in general but perform best when playing peasant.

Overall win ratios:
- Vampires: 34/100 (34%)
- Peasants: 45/100 (45%)
- Clown: 21/100 (21%)

submitted by /u/kyazoglu [link] [comments]
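To make the scoring rules above concrete, here is a small sketch (not the repo's code) that computes vampire points and survival rate exactly as described; the game-record format and the model name are invented for illustration.

```python
def vampire_points(games, model):
    """1 point if vampires win and this model's vampire is alive at the end,
    0.5 if vampires win but it died, 0 otherwise."""
    pts = 0.0
    for g in games:
        if g["role"][model] == "vampire" and g["winner"] == "vampires":
            pts += 1.0 if model in g["alive_at_end"] else 0.5
    return pts

def survival_rate(games, model):
    """Rounds this model survived divided by total rounds in the games it played."""
    survived = sum(g["rounds_survived"][model] for g in games)
    total = sum(g["total_rounds"] for g in games)
    return survived / total if total else 0.0

# Invented example record:
games = [{
    "role": {"gpt-4.1": "vampire"}, "winner": "vampires",
    "alive_at_end": {"gpt-4.1"},
    "rounds_survived": {"gpt-4.1": 6}, "total_rounds": 6,
}]
print(vampire_points(games, "gpt-4.1"), survival_rate(games, "gpt-4.1"))
```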

reddit.com · 6/5/2025

submitted by /u/xenovatech [link] [comments]

reddit.com · 6/4/2025

I'm considering putting together a system with 7x 5060 Ti to get the most cost-effective VRAM. This will have to be an open frame with riser cables and an Epyc server motherboard with 7 PCIe slots. The idea was to have capacity for medium size models that exceed 24GB but fit in ~100GB VRAM. I think I can put this machine together for between $10k and $15k. For simplicity I was going to go with Windows and Ollama. Inference speed is not critical but crawling along at CPU speeds is not going to be viable. I don't really know what I'm doing. Is this dumb? Go ahead and roast my plan as long as you can propose something better. submitted by /u/vector76 [link] [comments]

reddit.com · 6/5/2025

I basically want Internet-level knowledge when my phone is not connected to the internet (camping etc). I've heard good things about Gemma 2 2b for creative writing. But is it still the best model for things like world knowledge? Questions like:
- How to identify different clam species
- How to clean a clam that you caught
- Easy clam recipes while camping
(Can you tell I'm planning to go clamming while camping?) Or others like:
- When is low tide typically in June in X location
- Good restaurants near X campsite
- Is it okay to put food inside my car overnight when camping in a place with bears?
Etc. BONUS POINTS IF IT'S MULTIMODAL (so I can send pics of my clams to identify lol) submitted by /u/clavidk [link] [comments]

reddit.com · 6/5/2025

Hello my dear friends of open-source LLMs. I unfortunately encountered a situation to which I can't find any solution. I want to use tensor parallelism with exl2, as I have two RTX 3060s. But exl2 quantization only uses one GPU by design, which results in OOM errors for me. If somebody could convert QwenLong (https://huggingface.co/Tongyi-Zhiwen/QwenLong-L1-32B) into exl2 at around 4-4.5 bpw, I'd come in my pants. submitted by /u/Flashy_Management962 [link] [comments]

reddit.com · 6/5/2025

Dont have a real point here, just the title, food for thought. I think it would be a pretty cool thing to do. At this point it's extremely out of date, so they wouldn't be losing any "edge"; it would just be a cool thing to do/have and would be a nice throwback. OpenAI's 10th year anniversary is coming up in December, would be a pretty cool thing to do, just sayin. submitted by /u/Expensive-Apricot-25 [link] [comments]

reddit.com · 6/5/2025

I wrote a little script to automate commit messages. This might be pretty lame, but this is the first time I've actually done any scripting with LLMs to do some task for me. This is just for a personal project git repo, so the stakes are as low as can be for the accuracy of these commit messages. I feel like this is a big upgrade over the quality of my usual messages for a project like this. I found that the outputs for qwen3 8b Q4_K_M were much better than gemma3 4b Q4_K_M, possibly to nobody's surprise. I hope this might be of use to someone out there!

```bash
#!/bin/bash

NO_CONFIRM=false
if [[ "$1" == "-y" ]]; then
    NO_CONFIRM=true
fi

diff_output=$(git diff --staged)
echo
if [ -z "${diff_output}" ]; then
    if $NO_CONFIRM; then
        git add *
    else
        read -p "No files staged. Add all and proceed? [y/n] " -n 1 -r
        if [[ $REPLY =~ [Yy]$ ]]; then
            git add *
        else
            exit 1
        fi
    fi
fi

diff_output=$(git diff --staged)

# /no_think disables Qwen3's thinking block so the reply is just the message.
prompt="/no_think [INSTRUCTIONS] Write a git commit message for this diff output in the form of a bulleted list, describing the changes to each individual file. Do not include ANY formatting e.g. bold text (**). [DIFF]: $diff_output"

response=$(echo "$prompt" | ollama.exe run qwen3)

# Strip any leftover <think></think> markers and blank lines from the output.
message=$(echo "$response" | sed -e '/<think>/d' -e '/<\/think>/d' -e '/^$/d')

git status
echo "Commit message:"
echo "$message"
echo

if $NO_CONFIRM; then
    echo "$message" | git commit -qF -
    git push
else
    read -p "Proceed with commit? [y/n] " -n 1 -r
    echo
    if [[ $REPLY =~ [Yy]$ ]]; then
        echo "$message" | git commit -qF -
        git push
    else
        git reset HEAD -- .
    fi
fi
```

submitted by /u/aiueka [link] [comments]

reddit.com · 6/5/2025

What has been your experience and what are the pro/cons? submitted by /u/GreenTreeAndBlueSky [link] [comments]

reddit.com · 6/5/2025

I've got LM Studio running on my PC and I'm wondering if anyone knows a way to connect to it from iPhone? I've looked around and tried several apps but haven't found one that lets you specify the API URL. submitted by /u/NonYa_exe [link] [comments]
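One common route, sketched below, is LM Studio's built-in local server, which exposes an OpenAI-compatible API (port 1234 by default); any iOS chat client that lets you set a custom OpenAI-style base URL can then point at the PC's LAN address. The IP address and model name here are placeholders, and the server has to be allowed to serve on the local network in LM Studio's settings.

```python
# Sketch of calling LM Studio's OpenAI-compatible server from another device
# on the same network. Replace 192.168.1.50 with the PC's LAN IP.
import requests

resp = requests.post(
    "http://192.168.1.50:1234/v1/chat/completions",
    json={
        "model": "local-model",   # placeholder; loaded models are listed at /v1/models
        "messages": [{"role": "user", "content": "Hello from my phone!"}],
    },
    timeout=60,
)
print(resp.json()["choices"][0]["message"]["content"])
```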

reddit.com · 6/5/2025

Back in the day I used to use GPT-2, but TensorFlow has moved on and it's no longer properly supported. Are there any good replacements? I don't need an excellent model at all; something as simple and weak as GPT-2 is ideal (I would much rather have faster training). It'll be unlearning all its written language anyways: I'm tackling a similar project to the guy a while back that generated Pokemon sprites by fine-tuning GPT-2. submitted by /u/Lucario1296 [link] [comments]
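If the constraint is just "small and easy to fine-tune" rather than TensorFlow specifically, one hedged option is the Hugging Face transformers port of GPT-2 (PyTorch backend), sketched below; swapping in another tiny causal LM from the Hub is the same two lines.

```python
# Minimal sketch: load the smallest GPT-2 (124M) via transformers and generate.
# Fine-tuning would reuse the same model class with Trainer or a plain training loop.
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tok = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")

inputs = tok("A wild sprite appears:", return_tensors="pt")
out = model.generate(**inputs, max_new_tokens=30, do_sample=True)
print(tok.decode(out[0], skip_special_tokens=True))
```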

reddit.com · 6/5/2025

I want to make for myself a chat assistant that would use Qwen3 8B for the reasoning tokens and then stop when it reaches the end-of-thought token, then feed that to Qwen3 30B for the rest. The idea is that I don't mind reading while the text is being generated, but I don't like waiting for it to start. I know there is no free lunch and performance will be reduced. Has anybody tried this? Is it a bad idea? submitted by /u/GreenTreeAndBlueSky [link] [comments]
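A hedged sketch of that handoff against an OpenAI-compatible local server (llama.cpp server, LM Studio, etc.): generate with the 8B model using the end-of-thought tag as a stop string, then hand the accumulated reasoning to the 30B model to write the visible answer. The base URL, model aliases, and the `</think>` stop tag are assumptions about the setup.

```python
# Two-stage generation sketch: small model produces the reasoning,
# the larger model writes the final answer.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8080/v1", api_key="not-needed")
question = "Why is the sky blue?"

# Stage 1: reasoning from the 8B model, stopped at the end-of-thought tag.
think = client.chat.completions.create(
    model="qwen3-8b",                      # assumed model alias on the server
    messages=[{"role": "user", "content": question}],
    stop=["</think>"],
).choices[0].message.content

# Stage 2: the 30B model continues with the draft reasoning as context.
# (A tighter variant would prefill the assistant turn with the <think> block,
# if the server supports continuing a partial assistant message.)
final = client.chat.completions.create(
    model="qwen3-30b-a3b",                 # assumed model alias
    messages=[{"role": "user",
               "content": f"{question}\n\nDraft reasoning:\n{think}\n\nWrite the final answer."}],
).choices[0].message.content
print(final)
```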

reddit.com · 6/5/2025

asked this in a recent comment but curious what others think. i could be missing it, but why aren’t more niche on device products being built? not talking wrappers or playgrounds, i mean real, useful tools powered by local LLMs. models are getting small enough, 3B and below is workable for a lot of tasks. the potential upside is clear to me, so what’s the blocker? compute? distribution? user experience? submitted by /u/mindfulbyte [link] [comments]

reddit.com · 6/5/2025

Hey folks, I'm a senior tech lead with 8+ years of experience, and for the last ~3 I've been knee-deep in building LLM-powered systems: RAG pipelines, agentic apps, text2SQL engines. We've shipped real products in manufacturing, sports analytics, NGOs, legal... you name it.

After doing this again and again, I got tired of the same story: building ingestion from scratch, duct-taping vector DBs, dealing with prompt spaghetti, and debugging hallucinations without proper logs. So we built ragbits, a toolbox of reliable, type-safe, modular building blocks for GenAI apps. What started as an internal accelerator is now fully open-sourced (v1.0.0) and ready to use.

Why we built it:
- We wanted repeatability. RAG isn't magic, but building it cleanly every time takes effort.
- We needed to move fast for PoCs, without sacrificing structure.
- We hated black boxes: ragbits integrates easily with your observability stack (OpenTelemetry, CLI debugging, prompt testing).
- And most importantly, we wanted to scale apps without turning the codebase into a dumpster fire.

I'm happy to answer questions about RAG, our approach, gotchas from real deployments, or the internals of ragbits. No fluff, just real lessons from shipping LLM systems in production. We're looking for feedback, contributors, and people who want to build better GenAI apps. If that sounds like you, take ragbits for a spin. Let's talk 👇

submitted by /u/Loud_Picture_1877 [link] [comments]

reddit.com · 6/4/2025

Hello Reddit! Our "AI" computer now has 4x 7900 XTX and 1x 7800 XT. llama-server works well, and we successfully launched Qwen3-235B-A22B-UD-Q2_K_XL with a 40,960 context length.

| GPU | Backend | Setup | Input (prompt) | Output (generation) |
|---|---|---|---|---|
| 4x 7900 XTX | HIP | llama-server, -fa | 160 t/s (356 tokens) | 20 t/s (328 tokens) |
| 4x 7900 XTX | HIP | llama-server, -fa --parallel 2 (2 requests at once) | 130 t/s (58 t/s + 72 t/s) | 13.5 t/s (7 t/s + 6.5 t/s) |
| 3x 7900 XTX + 1x 7800 XT | HIP | llama-server, -fa | ... | 16-18 t/s |

Questions to discuss: Is it possible to run this model from Unsloth AI faster using vLLM on AMD, or is there no way to launch GGUF that way? Can we offload layers to each GPU in a smarter way? If you've run a similar model (even on different GPUs), please share your results. If you're considering setting up a test (perhaps even on AMD hardware), feel free to ask any relevant questions here.

llama-swap config:

```yaml
models:
  "qwen3-235b-a22b:Q2_K_XL":
    env:
      - "HSA_OVERRIDE_GFX_VERSION=11.0.0"
      - "CUDA_VISIBLE_DEVICES=0,1,2,3,4"
      - "HIP_VISIBLE_DEVICES=0,1,2,3,4"
      - "AMD_DIRECT_DISPATCH=1"
    aliases:
      - Qwen3-235B-A22B-Thinking
    cmd: >
      /opt/llama-cpp/llama-hip/build/bin/llama-server
      --model /mnt/tb_disk/llm/models/235B-Q2_K_XL/Qwen3-235B-A22B-UD-Q2_K_XL-00001-of-00002.gguf
      --main-gpu 0 --temp 0.6 --top-k 20 --min-p 0.0 --top-p 0.95
      --gpu-layers 99 --tensor-split 22.5,22,22,22,0
      --ctx-size 40960 --host 0.0.0.0 --port ${PORT}
      --cache-type-k q8_0 --cache-type-v q8_0
      --flash-attn --device ROCm0,ROCm1,ROCm2,ROCm3,ROCm4
      --parallel 2
```

submitted by /u/djdeniro [link] [comments]

reddit.com · 6/5/2025

Cannot get a response from support, and all API requests have been failing for weeks. submitted by /u/punkpeye [link] [comments]

reddit.com · 6/5/2025

Both end up being about the same size and just barely fit in VRAM, provided the KV cache is offloaded. I tried looking for performance comparisons of models at an equal memory footprint but couldn't find any. Any advice is much appreciated. submitted by /u/GreenTreeAndBlueSky [link] [comments]

reddit.com · 6/5/2025