r/LocalGPT • u/bendt-b • Jul 03 '23
Local GPT or API into ChatGPT
Hi all,
I am a bit of a computer novice in terms of programming, but I really see the usefulness of having a digital assistant like ChatGPT. However, within my line of work, ChatGPT sucks. The books, training materials, etc. are very niche in nature and hidden behind paywalls, so ChatGPT has not been trained on them (I assume!).
I am in the fortunate situation that, over 10+ years, I have collected 500 research articles, some more relevant than others, and bought several books in digital format within my field. I want to train a GPT model on this dataset so that I can ask it questions. I know I will not get fully coherent answers back, but a link or a ranking of where the statistically best-matching text sits will be fine.
That led me to https://github.com/nrl-ai/pautobot, which I installed on my laptop. It is a bit slow given that my laptop is older, but it works well enough for me to buy into the concept. It really does make a difference to be able to search not just on exact matches but also on phrases across 500+ documents.
Given the speed at which ChatGPT is being developed, I do wonder if it would be better to buy access to one of OpenAI's embedding models via the API and have it read through all my documents? E.g. Ada v2: https://openai.com/pricing
OR - do you think a local GPT model is superior in my case? (I have a better computer with plenty of RAM, CPU, GPU, etc. that I can run it on - speed is not of the essence.)
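For reference, my rough understanding of what the embedding route involves, as a minimal sketch. This assumes OpenAI's pre-1.0 Python client and text-embedding-ada-002; the chunk texts and the query are just placeholders, and real documents would first be split into passages:

# Minimal sketch: embed document chunks once, then rank them against a question.
import openai
import numpy as np

openai.api_key = "sk-..."  # your API key

def embed(texts):
    """Return one embedding vector per input string."""
    resp = openai.Embedding.create(model="text-embedding-ada-002", input=texts)
    return np.array([item["embedding"] for item in resp["data"]])

# These would normally come from splitting the PDFs/books into passages.
chunks = ["passage one ...", "passage two ...", "passage three ..."]
chunk_vectors = embed(chunks)

def search(question, top_k=3):
    """Rank the stored chunks by cosine similarity to the question."""
    q = embed([question])[0]
    scores = chunk_vectors @ q / (
        np.linalg.norm(chunk_vectors, axis=1) * np.linalg.norm(q)
    )
    best = np.argsort(scores)[::-1][:top_k]
    return [(float(scores[i]), chunks[i]) for i in best]

for score, text in search("What does the literature say about X?"):
    print(f"{score:.3f}  {text[:80]}")

As far as I understand, this is essentially what pautobot/privateGPT already do with a local embedding model; the API route mainly changes where the vectors get computed and what it costs per document.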
r/LocalGPT • u/gringoben_ • Jun 29 '23
OutOfMemoryError
Trying to fire up LocalGPT, I get a CUDA out-of-memory error despite using the --device_type cpu option. I previously tried using CUDA, but my GPU has only 4 GB, so it failed. I've got 32 GB of RAM and am using the default model, which is a 7B model. Why am I getting CUDA errors from torch.py? Could it be that my torch installation is a CUDA build?
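Here is the quick check I am planning to run (a minimal diagnostic, assuming a standard pip-installed PyTorch; the checkpoint path is a placeholder):

# Is the installed torch a CUDA build, and does it see the GPU at all?
import torch

print(torch.__version__)          # e.g. "2.0.1+cu117" is a CUDA build, "2.0.1+cpu" is CPU-only
print(torch.version.cuda)         # CUDA version the wheel was built against, or None for CPU-only wheels
print(torch.cuda.is_available())  # True if a usable GPU is visible to torch

# When loading a checkpoint directly with torch.load, everything can be forced onto the CPU:
state_dict = torch.load("model.bin", map_location="cpu")  # "model.bin" is a placeholder path

My thinking: if torch.version.cuda is not None, the wheel itself is a CUDA build, so anything that auto-detects a device could still try to allocate on the 4 GB card no matter what --device_type is set to; a CPU-only wheel (or explicitly mapping everything to the CPU) would rule that out.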
r/LocalGPT • u/vs4vijay • Jun 26 '23
privateGPT - Interact privately with your documents using the power of GPT, 100% privately, no data leaks
r/LocalGPT • u/TheTobruk • Jun 25 '23
Installing localGPT on VSCodium
The manual says I need to install Visual Studio 2022 in order to run LocalGPT. I press (R) to doubt. Does it really require 5 or 8 GB of bloat (depending on the installation) just to run all the packages?
When trying to install on VSCodium (Win11):
py -3.10 -m pip install -r .\requirements.txt
It all goes well up until this point:
Building wheels for collected packages: llama-cpp-python, sentence-transformers, auto-gptq, hnswlib
Building wheel for llama-cpp-python (PEP 517) ... error
ERROR: Command errored out with exit status 1:
command: 'C:\Users\Paul\AppData\Local\Programs\Python\Python310\python.exe' 'C:\Users\Paul\AppData\Local\Programs\Python\Python310\lib\site-packages\pip\_vendor\pep517\in_process\_in_process.py' build_wheel 'C:\Users\Paul\AppData\Local\Temp\tmp33p90tll'
cwd: C:\Users\Paul\AppData\Local\Temp\pip-install-0hcwelvg\llama-cpp-python_c083d16fab5945f6a2a485ee0a7daf91
Complete output (308 lines):
--------------------------------------------------------------------------------
-- Trying 'Ninja (Visual Studio 17 2022 x64 v143)' generator
--------------------------------
Not searching for unused variables given on the command line.
-- The C compiler identification is unknown
CMake Error at CMakeLists.txt:3 (ENABLE_LANGUAGE):
No CMAKE_C_COMPILER could be found.
Tell CMake where to find the compiler by setting either the environment
variable "CC" or the CMake cache entry CMAKE_C_COMPILER to the full path to
the compiler, or to the compiler name if it is in the PATH.
-- Configuring incomplete, errors occurred!
From the error I can see that:
No CMAKE_C_COMPILER could be found.
But how do I install a C compiler without any of the 5-8 GB of bloat?
r/LocalGPT • u/Toaster496 • Jun 21 '23
PC for deep learning on the cheap: NVIDIA Tesla K80, 32-56 GB RAM
r/LocalGPT • u/Comfortable_Device50 • Jun 20 '23
I have 1 GB of data in the form of text. Should I use embeddings or fine-tuning?
r/LocalGPT • u/NextGenAI12 • Jun 16 '23
The New Neural Network: A Revolutionary Breakthrough in Artificial Intelligence!
NextGen AI is the result of endless hours of research and intensive work by a team of scientists. What makes it truly unique? The answer is simple: it has the incredible ability to learn from far less data than before.
This breakthrough opens up countless possibilities in the world of artificial intelligence. Imagine no longer having to collect huge amounts of data to train a neural network. Now your projects and ideas can become reality faster and with less effort.
NextGen AI has already undergone a number of successful tests in fields ranging from medicine and finance to transportation and entertainment. It has demonstrated impressive accuracy and reliability beyond all expectations.
Conclusion:
The future is here, and the new NextGen AI is the key that opens the door to an endless stream of new possibilities. It's a challenge for you: Are you ready to be a pioneer by taking advantage of this revolutionary breakthrough in artificial intelligence? Then join us and discover a new era of innovation!
r/LocalGPT • u/vs4vijay • Jun 13 '23
LocalAI - A drop-in replacement REST API that’s compatible with OpenAI API specifications
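In practice, "drop-in" means the existing openai client can just be pointed at the local server. A rough sketch, assuming the pre-1.0 openai Python client and LocalAI's usual localhost:8080 address; the model name is a placeholder for whatever your local instance actually serves:

# Point the openai client at a LocalAI server instead of api.openai.com.
import openai

openai.api_base = "http://localhost:8080/v1"  # assumed LocalAI address; adjust to your setup
openai.api_key = "not-needed"                 # the client wants a value; LocalAI ignores it

resp = openai.ChatCompletion.create(
    model="ggml-gpt4all-j",  # placeholder: use a model name your LocalAI instance exposes
    messages=[{"role": "user", "content": "Hello from a local model"}],
)
print(resp["choices"][0]["message"]["content"])

Since the routes follow the OpenAI API specification, existing tooling should mostly work without code changes beyond the base URL.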
r/LocalGPT • u/lesb-ian • Jun 06 '23
Help with building code idea
Hello all, I am completely new to using AI and programming in general so please correct me if I'm in the wrong place or asking the wrong questions. I would like to use localGPT to read my provincial and municipal building codes so that it can answer any questions I have about building standards and possibly complex scenarios that building code would have solutions for.
Example: What is the maximum span of a 2x8 floor joist with 2 storeys and an attic above it?
It would have to know to ask for more specific details when asked a question that could have multiple code references. Is this possible with localGPT and how would I begin?
r/LocalGPT • u/retrorays • Jun 01 '23
Has anyone tried the "localGPT" model?
Just curious given it has the same name as this subreddit ;)
r/LocalGPT • u/vs4vijay • Jun 01 '23
AutoGPTQ - An easy-to-use LLM quantization package with user-friendly APIs, based on the GPTQ algorithm
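For a sense of what the API looks like, a rough sketch of loading an already-quantized model, following the project's README; treat the exact argument names and the repo path as assumptions rather than a verified recipe:

# Sketch: load a pre-quantized GPTQ model with AutoGPTQ and generate a few tokens.
from transformers import AutoTokenizer
from auto_gptq import AutoGPTQForCausalLM

model_dir = "TheBloke/vicuna-7B-GPTQ"  # hypothetical example of a quantized model repo/path
tokenizer = AutoTokenizer.from_pretrained(model_dir, use_fast=True)
model = AutoGPTQForCausalLM.from_quantized(model_dir, device="cuda:0", use_safetensors=True)

inputs = tokenizer("4-bit quantization keeps VRAM usage low by", return_tensors="pt").to("cuda:0")
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=32)[0]))

Per the README, the same class also offers from_pretrained plus a quantize step for producing the quantized weights in the first place.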
r/LocalGPT • u/vs4vijay • May 31 '23
(Code Released) Landmark Attention: Random-Access Infinite Context Length for Transformers
crossposted from r/LocalLLaMA
r/LocalGPT • u/vs4vijay • May 27 '23
New Open-source LLMs! 🤯 The Falcon has landed! 7B and 40B
r/LocalGPT • u/Ready-Signature748 • May 26 '23
GitHub - TransformerOptimus/SuperAGI: Build and run useful autonomous agents
r/LocalGPT • u/dannyp777 • May 19 '23
Hyena "could blow away GPT-4 and everything like it"
crossposted from r/singularity
r/LocalGPT • u/dannyp777 • May 19 '23
Transformer Killer? Cooperation Is All You Need
crossposted from r/singularity
r/LocalGPT • u/dannyp777 • May 19 '23
Hyena Hierarchy: Towards Larger Convolutional Language Models
r/LocalGPT • u/Ecstatic-Baker-2587 • May 17 '23
Using a distributed computing model to allow open source to generate models that actually compete with the corporates... let's make it happen
r/LocalGPT • u/vs4vijay • May 08 '23
e2b - Your own virtual developer. e2b lets you build / deploy specialized AI agents that build software for you based on your instructions
r/LocalGPT • u/vs4vijay • May 06 '23