Question List
Is it possible to fine-tune or use RAG on the CoreML version of Llama2?
222 views
Asked by Mike Ike
Compare two strings by meaning using LLMs
1.7k views
Asked by root
Implementation (and behavioral) differences between AutoModelForCausalLMWithValueHead and AutoModelForCausalLM?
326 views
Asked by Deshwal
How do I determine the right data format for fine-tuning different LLMs?
121 views
Asked by John
CUDA OutOfMemoryError, but the free memory reported in the error message is always half of the required memory
294 views
Asked by olivarb
Querying my own data using LangChain and Pinecone
793 views
Asked by javascript-wtf
Could not find a version that satisfies the requirement python-magic-bin
384 views
Asked by Debrup Paul
Any way to improve the performance of querying a locally persisted chromadb?
564 views
Asked by mlee_jordan
Grid-based decision making with Llama 2
61 views
Asked by skvp