I do have a local setup. Not powerful enough to run Mixtral 8x22b, but can run 8x7b (albeit quite slowly). Use it a lot.
Not trying to get around anything. No funny instructions like my grandma singing a lullaby about illegal activities. Just using instructions to tell a story. Even something like having a superhero in a fight is enough to trigger this. It also doesn't explain why regenerating makes it continue.
A vector search converts your query into magic numbers (an embedding), then searches the database for other magic numbers that are "similar" (closest to it in the vector space, which is basically an N-dimensional graph of points and directions). These results are then returned as snippets to the LLM, which uses them as context from that point.
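If it helps, here's a minimal sketch of that similarity step in Python. The tiny hand-written vectors are stand-ins for whatever embedding model the system actually uses:

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    # "Similar" in vector space usually means a small angle between vectors.
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Hypothetical embeddings: each document snippet is one row of magic numbers.
db = {
    "snippet about cats": np.array([0.9, 0.1, 0.0]),
    "snippet about dogs": np.array([0.8, 0.2, 0.1]),
    "snippet about tax law": np.array([0.0, 0.1, 0.95]),
}

query = np.array([0.85, 0.15, 0.05])  # embedding of the user's question

# Rank snippets by similarity and hand the best ones to the LLM as context.
ranked = sorted(db.items(), key=lambda kv: cosine_similarity(query, kv[1]), reverse=True)
print([name for name, _ in ranked[:2]])
```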
The effectiveness of the vector search depends on how Open WebUI breaks up the documents into smaller sections, and how good the embeddings are.
I'm not exactly sure what you want to achieve, but you might have success using an LLM to summarize the documents beforehand, with a specific prompt to extract the info you want, then feeding that into the vector DB. This would require some scripting, of course, along these lines:
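A rough sketch of that pre-processing step, assuming a local ollama server on its default port; the model name, prompt, and file layout are all placeholders you'd swap for your own:

```python
import pathlib
import requests

OLLAMA_URL = "http://localhost:11434/api/generate"  # default ollama endpoint
EXTRACT_PROMPT = "Summarize this document, keeping only the key facts:\n\n"  # your extraction prompt

def summarize(text: str) -> str:
    # One non-streaming generation call per document.
    resp = requests.post(OLLAMA_URL, json={
        "model": "mistral",          # whatever model you run locally
        "prompt": EXTRACT_PROMPT + text,
        "stream": False,
    })
    resp.raise_for_status()
    return resp.json()["response"]

# Write summaries next to the originals; upload the summaries to Open WebUI
# so the vector DB indexes the distilled info instead of the raw documents.
for doc in pathlib.Path("docs").glob("*.txt"):
    summary = summarize(doc.read_text())
    doc.with_suffix(".summary.txt").write_text(summary)
```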
The easiest thing to do is try it. See if Open WebUI's vector search can handle it. Make sure to use a good embedding model like nomic-embed-text (can be found on ollama.com). You can change the vector search settings under the document settings in the Open WebUI workspace.
Open WebUI's document management loads everything into a vector database. When you use the hashtag, it will trigger a search against the vector database based on your prompt. These results are then fed into the LLM. Open WebUI should generate a hashtag that can reference all the documents. But the quality of the results will be influenced by the embeddings and the LLM that responds to you.
Install ollama. It has ROCm support (on Linux at least). Then hook it up to your favorite client. It has its own API and an OpenAI-compatible one.
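For example, any OpenAI-style client can point at it; the model name here is just whatever you've pulled locally:

```python
from openai import OpenAI

# ollama serves an OpenAI-compatible API on its default port;
# the api_key is required by the client library but ignored by ollama.
client = OpenAI(base_url="http://localhost:11434/v1", api_key="ollama")

reply = client.chat.completions.create(
    model="llama3",  # any model you've pulled with `ollama pull`
    messages=[{"role": "user", "content": "Hello from ROCm land"}],
)
print(reply.choices[0].message.content)
```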
KoboldCPP has ban tokens that prevent those tokens from being output. Otherwise just put it in the prompt and it should probably work.
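If you're driving it over the API, I believe newer KoboldCPP builds expose this as a `banned_tokens` field on the KoboldAI-compatible generate endpoint; that field name is an assumption, so check your version's docs:

```python
import requests

# Hedged sketch: assumes KoboldCPP's KoboldAI-compatible generate endpoint
# accepts a banned_tokens list (verify against your build's API docs).
resp = requests.post("http://localhost:5001/api/v1/generate", json={
    "prompt": "Write a short story about a dragon.",
    "max_length": 200,
    "banned_tokens": ["dragon"],  # strings whose tokens are blocked from output
})
print(resp.json()["results"][0]["text"])
```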
Even the smell of olives causes me to gag. I absolutely cannot eat them. Olive oil is fine. But actual olives, no. Doesn't matter if they're old, new, canned, fresh. They're absolutely disgusting. One of the few foods I outright cannot and will not eat.
Adam Sandler?
It’s the opening of the Canterbury Tales.
Makes sense when many of the spiders in Australia are dangerous, though.
I use a Misskey fork for micro blogging and I can’t even get Lemmy posts to load. The profiles of communities do, but that’s it.
Ah right. What I really meant to ask was if it can do protocols other than http.
Which I don’t think it can…
Are you able to tunnel ports other than 80 and 443 through Cloudflare?
How will you handle the planned rewrite of Iceshrimp?
Right. I agree.
You mean the part about people citing laws like GDPR is dead on?
Definitely a good way to do it. Photoprism supports uploading to WebDAV for sharing. Could front a CDN upload with a WebDAV server 🤔
Yeah, that sounds like a good idea. I'm using Photoprism for photo management. It doesn't really support S3 or any CDN. You could use a FUSE filesystem or something, but it's very slow.
Where are you uploading galleries? Just your own HDD connected to a static website?
The only problem I really have is context size. It's hard to go beyond an 8k context and maintain decent generation speed with 16 GB of VRAM and 16 GB of RAM. Gonna get more RAM at some point though, and hope ollama/llama.cpp gets better at memory management. Hopefully the distributed running from llama.cpp ends up in ollama.