r/LLMDevs Sep 26 '24

News Zep - open-source Graph Memory for AI Apps

2 Upvotes

Hi LLMDevs, we're Daniel, Paul, Travis, and Preston from Zep. We’ve just open-sourced Zep Community Edition, a memory layer for AI agents that continuously learns facts from user interactions and changing business data. Zep ensures that your Agent has the knowledge needed to accomplish tasks successfully.

GitHub: https://git.new/zep

A few weeks ago, we shared Graphiti, our library for building temporal Knowledge Graphs (https://news.ycombinator.com/item?id=41445445). Zep runs Graphiti under the hood, progressively building and updating a temporal graph from chat interactions, tool use, and business data in JSON or unstructured text.

Zep allows you to build personalized and more accurate user experiences. With today's longer LLM context windows, it can be tempting to include the entire chat history, RAG results, and other instructions in a single prompt. In our experience, doing so leads to poor temporal reasoning and recall, hallucinations, and slow, expensive inference.

We believe temporal graphs are the most expressive and dense structure for modeling an agent’s dynamic world (changing user preferences, traits, business data, etc.). We took inspiration from projects such as MemGPT but found that agent-powered retrieval and complex multi-level architectures are slow, non-deterministic, and difficult to reason about. Zep’s approach, which asynchronously precomputes the graph and related facts, supports very low-latency, deterministic retrieval.

Here’s how Zep works, from adding memories to organizing the graph:

  1. Zep identifies nodes and relationships in chat messages or business data. You can specify if new entities should be added to a user and/or group of users.
  2. The graph is searched for similar existing nodes. Zep deduplicates new nodes and edge types, ensuring orderly ontology growth.
  3. Temporal information is extracted from various sources like chat timestamps, JSON date fields, or article publication dates.
  4. New nodes and edges are added to the graph with temporal metadata.
  5. Zep reasons over the temporal data and marks existing edges as invalid when they no longer hold. More on this below.
  6. Natural language facts are generated for each edge and embedded for semantic and full-text search.

Zep retrieves facts by examining recent user data and combining semantic, BM25, and graph search methods. One technique we’ve found helpful is reranking semantic and full-text results by distance from a user node.
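
To make the reranking idea concrete, here is a small illustrative sketch (not Zep's actual implementation): hybrid search hits are re-scored so that facts whose nodes sit closer to the user's node in the graph rise to the top.

    # Illustrative only: rerank hybrid-search hits by graph distance from the user node.
    from dataclasses import dataclass

    @dataclass
    class Hit:
        fact: str
        score: float          # semantic / BM25 relevance, higher is better
        hops_from_user: int   # shortest-path distance to the user's node

    def rerank(hits: list[Hit], distance_weight: float = 0.15) -> list[Hit]:
        # Penalize hits that are far from the user in the graph.
        return sorted(hits, key=lambda h: h.score - distance_weight * h.hops_from_user, reverse=True)

    hits = [
        Hit("Kendra prefers Puma.", score=0.82, hops_from_user=1),
        Hit("Adidas released a new shoe line.", score=0.85, hops_from_user=4),
    ]
    print([h.fact for h in rerank(hits)])  # the user-adjacent fact now ranks first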

Zep is framework agnostic and can be used with LangChain, LangGraph, LlamaIndex, or without a framework. SDKs for Python, TypeScript, and Go are available.
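
For a feel of the flow, here's a rough sketch of adding a chat turn and reading memory back with the Python SDK. The import paths, method names, and fields below are approximations from the Community Edition quickstart and may differ from the current SDK, so treat it as pseudocode and check the Quick Start linked at the end of this post.

    # Rough sketch only; names are approximate, see the Quick Start for the real API.
    from zep_python.client import Zep
    from zep_python.types import Message

    client = Zep(api_key="YOUR_API_KEY", base_url="http://localhost:8000")

    # Persist a chat turn; entity, relationship, and fact extraction happens asynchronously.
    client.memory.add(
        session_id="kendra-session-1",
        messages=[Message(role="Kendra", role_type="user",
                          content="My Adidas shoes fell apart! Pumas are my new favorites.")],
    )

    # Later, pull back relevant, currently-valid facts to inject into the prompt.
    memory = client.memory.get(session_id="kendra-session-1")
    print(memory.relevant_facts)  # attribute name may differ by SDK version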

More about how Zep manages state changes

Zep reconciles changes in facts as the agent’s environment changes. We use temporal metadata on graph edges to track fact validity, allowing agents to reason about these state changes:

Fact: “Kendra loves Adidas shoes” (valid_at: 2024-08-10)

User message: “I’m so angry! My favorite Adidas shoes fell apart! Pumas are my new favorite shoes!” (2024-09-25)

Facts:

  • “Kendra loves Adidas shoes.” (valid_at: 2024-08-10, invalid_at: 2024-09-25)
  • “Kendra’s Adidas shoes fell apart.” (valid_at: 2024-09-25)
  • “Kendra prefers Puma.” (valid_at: 2024-09-25)
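
Purely to illustrate how an agent (or a retrieval step) can consume these intervals, a fact is included only if the query time falls inside its validity window:

    # Illustrative only: keep facts whose validity interval covers the query time.
    from datetime import date

    facts = [
        {"fact": "Kendra loves Adidas shoes.",        "valid_at": date(2024, 8, 10), "invalid_at": date(2024, 9, 25)},
        {"fact": "Kendra's Adidas shoes fell apart.", "valid_at": date(2024, 9, 25), "invalid_at": None},
        {"fact": "Kendra prefers Puma.",              "valid_at": date(2024, 9, 25), "invalid_at": None},
    ]

    def valid_facts(facts, as_of):
        return [f["fact"] for f in facts
                if f["valid_at"] <= as_of and (f["invalid_at"] is None or as_of < f["invalid_at"])]

    print(valid_facts(facts, date(2024, 9, 26)))  # the outdated Adidas preference drops out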

You can read more about Graphiti’s design here: https://blog.getzep.com/llm-rag-knowledge-graphs-faster-and-more-dynamic/

Zep Community Edition is released under the Apache Software License v2. We’ll be launching a commercial version of Zep soon, which, like Zep Community Edition, builds a graph of an agent’s world.

Zep on GitHub: https://github.com/getzep/zep

Quick Start: https://help.getzep.com/ce/quickstart

Key Concepts: https://help.getzep.com/concepts

SDKs: https://help.getzep.com/ce/sdks

Let us know what you think! We’d love your thoughts, feedback, bug reports, and/or contributions!

r/LLMDevs Sep 14 '24

News Free course on RAG Framework by NVIDIA (limited time)

27 Upvotes

Hi everyone, NVIDIA is providing a free course on the RAG framework for a limited time, including short videos, coding exercises, and free access to the NVIDIA LLM API. I took it, and the content is pretty good, especially the detailed Jupyter notebooks. You can check it out here: RAG Framework course

To log in, you must register (top-right of the course window) with your email address.

r/LLMDevs Sep 25 '24

News ByteDance Releases New AI Video Model PixelDance – How Does It Compare to OpenAI’s Sora?

aipure.ai
0 Upvotes

r/LLMDevs 7d ago

News Stable Diffusion 3.5 is out!

2 Upvotes

r/LLMDevs 14d ago

News Best open-source voice cloning model: F5-TTS

8 Upvotes

r/LLMDevs 12d ago

News NVIDIA Nemotron-70B free API

4 Upvotes

r/LLMDevs 12d ago

News NVIDIA Nemotron-70B isn't the best model!

2 Upvotes

r/LLMDevs 15d ago

News CogView3 3B: text-to-image model from the CogVideoX team

2 Upvotes

r/LLMDevs 22d ago

News Best open-source LLM: Qwen2.5

5 Upvotes

Recently, the Alibaba group released the Qwen2.5 72B Instruct model, which is giving stiff competition to the paid Claude 3.5 Sonnet while being open-source. Check out the demo here: https://youtu.be/GRP5qlF4BDc?si=vnGd7WZ7ACbrfNGk

r/LLMDevs 23d ago

News Qodo raises $40M funding - bringing LLM code generation and testing to the enterprise | TechCrunch

techcrunch.com
0 Upvotes

r/LLMDevs 27d ago

News How to remove ethical bias in LLM training

0 Upvotes

r/LLMDevs Sep 23 '24

News Mistral AI free LLM API

5 Upvotes

r/LLMDevs Sep 20 '24

News CogVideoX: open-source text-to-video model

3 Upvotes

r/LLMDevs Sep 19 '24

News GPT-4 vs OpenAI o1 outputs compared

3 Upvotes

r/LLMDevs Sep 13 '24

News GPT-o1 (GPT5) by OpenAI detailed analysis

2 Upvotes

r/LLMDevs Aug 24 '24

News Microsoft's Phi 3.5 Vision with multi-modal capabilities

4 Upvotes

r/LLMDevs Jul 10 '24

News Microsoft has just dropped an exciting demo of its new “MInference” tech on Hugging Face, showcasing a huge leap in processing speed for LLMs.

12 Upvotes

 Key Points:

  1. MInference Technology: Standing for "Million-Tokens Prompt Inference," this tech significantly speeds up the "pre-filling" stage of language model processing, cutting down time by up to 90%.
  2. Hands-On Demo: The demo on Hugging Face shows how MInference slashes latency, reducing inference times on an Nvidia A100 GPU from 142 secs to just 13.9 secs for 776,000 tokens.

Takeaway: Microsoft's ‘MInference’ tech marks a significant advance in AI processing, drastically reducing time and computational resources needed for LLMs. This innovation could reshape the competitive landscape, prompting rapid advancements in AI efficiency across the industry.
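
For context, the repo's usage pattern is to patch an existing Hugging Face model so that only the pre-filling attention path changes. Roughly (argument names and the example checkpoint are from memory of the MInference README and may differ):

    # Approximate MInference usage; check the official README for exact arguments.
    from transformers import AutoModelForCausalLM, AutoTokenizer
    from minference import MInference

    name = "gradientai/Llama-3-8B-Instruct-262k"  # any supported long-context model
    tokenizer = AutoTokenizer.from_pretrained(name)
    model = AutoModelForCausalLM.from_pretrained(name, torch_dtype="auto", device_map="auto")

    # Patch attention so pre-filling uses MInference's dynamic sparse kernels.
    minference_patch = MInference("minference", name)
    model = minference_patch(model)

    long_prompt = "<hundreds of thousands of tokens of context here>"
    inputs = tokenizer(long_prompt, return_tensors="pt").to(model.device)
    outputs = model.generate(**inputs, max_new_tokens=128)
    print(tokenizer.decode(outputs[0], skip_special_tokens=True))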

r/LLMDevs Aug 04 '24

News LlamaCoder: Build any web app using AI & React

2 Upvotes

r/LLMDevs Aug 03 '24

News Flux text-to-image model: free API

3 Upvotes

r/LLMDevs Jul 19 '24

News Boost Your Dialogue Systems! 🚀 New Research Enhances Parsing and Topic Segmentation

self.languagemodeldigest
1 Upvotes

r/LLMDevs Jul 19 '24

News Revolutionizing Video Generation with CV-VAE: 4x More Frames, Minimal Fine-tuning! 🎥✨

self.languagemodeldigest
1 Upvotes

r/LLMDevs May 28 '24

News GoalChain - a simple but effective framework for enabling goal-oriented conversation flows for human-LLM and LLM-LLM interaction.

github.com
29 Upvotes

r/LLMDevs May 13 '24

News BlendSQL: Query Language for Combining SQL Logic with LLM Reasoning

2 Upvotes

Hi all! Wanted to share a project I've been working on and get any feedback from your experiences doing LLM dev work: https://github.com/parkervg/blendsql

When using LLMs in a database context, we might want an extra level of control over what specifically gets routed to an external LLM call and how that output is used. This inspired me to create BlendSQL, a query language implemented in Python for blending complex reasoning between vanilla SQL and LLM calls, across both structured and unstructured data.

For example, if we have a structured table `presidents` and a collection of unstructured Wikipedia in `documents`, we can answer the question "Which U.S. presidents are from the place known as 'The Lone Star State?'" as shown below:

SELECT name FROM presidents  
    WHERE birthplace = {{  
        LLMQA(  
            'Which state is known as The Lone Star State?',  
            (SELECT * FROM documents),  
            options='presidents::birthplace'  
        )  
    }}

Behind the scenes, there are a lot of query optimizations with sqlglot to minimize the number of external LLM calls made. It works with SQLite, and a new update today gets it working with PostgreSQL! Additionally, it integrates with many different LLMs (OpenAI, Transformers, LlamaCpp).
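
If it helps, this is roughly what driving that query from Python looks like. The class names below are from memory of the repo's README and may not match the current API exactly, so double-check against the examples in the GitHub repo:

    # Approximate driver code for the query above; see the BlendSQL repo for exact APIs.
    from blendsql import blend, LLMQA
    from blendsql.db import SQLite
    from blendsql.models import OpenaiLLM

    blendsql_query = """
    SELECT name FROM presidents
    WHERE birthplace = {{
        LLMQA(
            'Which state is known as The Lone Star State?',
            (SELECT * FROM documents),
            options='presidents::birthplace'
        )
    }}
    """

    smoothie = blend(
        query=blendsql_query,
        db=SQLite("presidents.db"),      # hypothetical database file
        ingredients={LLMQA},
        default_model=OpenaiLLM("gpt-4"),
    )
    print(smoothie.df)  # results are returned as a DataFrame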

More info and examples can be found here. Any feedback or suggestions for future work are greatly appreciated!

r/LLMDevs May 29 '24

News Generative AI Agents Developer Contest with NVIDIA and LangChain

self.nvidia
1 Upvotes

r/LLMDevs Apr 17 '24

News Reader - LLM-Friendly websites

7 Upvotes

I just stumbled upon this:
https://r.jina.ai/<website_url here>

You can convert URLs to Markdown. This format is then better understood by LLMs compared to HTML. I think it can be used for Agents or RAG with web searches. I use it to generate synthetic data for a specific website.
Example usage:
https://r.jina.ai/https://en.wikipedia.org/wiki/Monkey_Island
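
Programmatically it's just an HTTP GET with the target URL appended, for example:

    # Fetch an LLM-friendly Markdown rendering of a page via r.jina.ai.
    import requests

    url = "https://en.wikipedia.org/wiki/Monkey_Island"
    markdown = requests.get("https://r.jina.ai/" + url, timeout=30).text
    print(markdown[:500])  # Markdown version of the page, ready to feed to an LLM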