• 6 Posts
  • 25 Comments
Joined 1 year ago
Cake day: August 16th, 2023

  • magn418@lemmynsfw.com to Sex@lemmynsfw.com · Oral sex struggles
    6 points · edited · 3 months ago

    I’d say for once: don’t push yourselves. You don’t have to do every sex technique just because other people do it. If neither of you likes it, just let it go and focus on things you like. And if you want to do it, maybe take it slow. Let the person who is overwhelmed set the pace. Agree on some signals and cues. Don’t be disappointed; just stop and change to something different. It’s alright if it only lasts for a short moment. Maybe you can work your way up. But don’t push. Sex is about enjoying it, not doing something specific.

    And if you like to play games:

    https://bettymartin.org/videos/

    That’s about learning to give and receive. About setting boundaries and learning each other’s level of comfort. Maybe it helps. She has a free game (PDF) further down on that page.





  • https://lemmynsfw.com/post/4048137

    I’d say try MythoMax-L2 first. I think it’s a pretty solid all-rounder. It does NSFW but also other things. Nothing special and not the newest any more, but easy to get going without fiddling with the settings too much.

    If you can’t run models with 13B parameters, I’d have to think about which of the 7B models is currently the thing. I think 7B is the size most people play around with, producing new finetunes, merges and whatnot. But I also can’t keep up with what the community does; every bit of information is kind of outdated after 2-4 weeks 😆


    I assume (from your user handle) that you know about the allure of roleplaying and diving into fantasy scenarios. AI can do it to some degree. And -of course- people also do erotic roleplay. I think this has always taken place: people met online to do this kind of roleplay in text chats. And nowadays you can do it with AI. You just tell it to be your synthetic maid or office affair or waifu and it’ll pick up that role. People use it for companionship; it’ll listen to you, ask you questions, reassure you… Whatever you like. People also explore taboo scenarios… It’s certainly not for everyone. You need a good amount of imagination, since everything is just text chat. And the AI isn’t super smart. The intelligence of these models isn’t quite on the same level as the big commercial services like ChatGPT. Those can’t be used, as they have all banned erotic roleplay and also refuse to write smutty stories.

    I agree with j4k3. It’s one of the use-cases for AI I keep coming back to. I like fantasy and imagination in connection with erotics. And it’s something that doesn’t require AI to be factually correct. Or as intelligent as it’d need to be to write computer programs. People have raised concerns that it’s addictive and/or makes people even lonelier to live with just an AI companion… To me it’s more like a game. You need to pay attention not to get stuck in your fantasy worlds and sit in front of your computer all day. But I’m fine with that. And I’m less reliant on AI than people who use it to sum up the news and believe the facts ChatGPT came up with…


  • Hehe. It’s fun. And a different experience every time 😆

    I don’t know which models you got connected to. Some are a bit more intelligent. But they all have their limits. I also sometimes get that. I roleplay something happening in the kitchen and suddenly we’re in the living room instead. Or lying in bed.

    And they definitely sometimes have the urge to mess with the pacing. For example, deciding that now is the time to wrap everything up in two sentences. It really depends on the exact model; some of them have a tendency to do so. It’s a bit annoying if it happens regularly. The ones trained more on stories and extensive smut scenes do better.

    The comment you saw is definitely also something AI does. It has seen text with comments or summaries underneath. Or forum style conversations. Some of the amateur literature contains lines like ‘end of story’ or ‘end of part 1’ and then some commentary. But nice move that it decided to mock you 😂

    Thanks for providing a comparison to human nsfw chats. I always wondered how that works (or turns out / feels). Are there dedicated platforms for that? Or do you look for people on Reddit, for example?


  • magn418@lemmynsfw.com (OP, mod) to ChatBotsNSFW@lemmynsfw.com · Beginner questions thread
    2 points · edited · 8 months ago

    The LLMs use a lot of memory. So if you’re doing inference on a GPU, you’re going to want one with enough VRAM, like 16GB or 24GB. I heard lots of people like the NVidia 3090 Ti because that graphics card could(/can?) be bought used for a good price for something that has 24GB of VRAM. The 4060 Ti has 16GB of VRAM and (I think) is the newest generation. And AFAIK the 4090 is the newest consumer / gaming GPU with 24GB of VRAM. The gaming performance of those cards isn’t really the deciding factor; any of the somewhat newer models will do. It’s mostly the amount of VRAM on them that is important for AI. (And pay attention: an NVidia card with the same model name can have variants with different amounts of VRAM.)

    I think the 7B / 13B parameter models run fine on a 16GB GPU. But at around 30B parameters, the 16GB aren’t enough anymore. The software will start “offloading” layers to the CPU and it’ll get slow. With a 24GB card you can still load quantized models with that parameter count.
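    As a rough sketch of that rule of thumb (my own back-of-the-envelope numbers, not the linked calculator): a quantized model needs roughly parameter count × bits per weight ÷ 8 bytes for the weights, plus some headroom for the KV cache and activations. The ~4.5 bits per weight and the flat 2GB overhead below are my assumptions for a Q4_K_M-style quant.

```python
def estimate_vram_gb(params_b: float, bits_per_weight: float = 4.5,
                     overhead_gb: float = 2.0) -> float:
    """Very rough VRAM estimate for a quantized model: weight memory
    (billions of params x bytes per param) plus a flat allowance for
    KV cache and activations."""
    return params_b * bits_per_weight / 8 + overhead_gb

for size in (7, 13, 30, 70):
    print(f"{size}B at ~4.5 bpw: ~{estimate_vram_gb(size):.0f} GB")
```

    This lines up with the sizes above: 13B comes out around 9GB (fits a 16GB card), 30B around 19GB (too big for 16GB, fine on 24GB).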

    (And their professional equipment dedicated to AI includes cards with 40GB or 48GB or 80GB. But that’s not sold for gaming and also really expensive.)

    Here is a VRAM calculator:

    You can also buy an AMD graphics card in that range. But most of the machine learning stuff is designed around NVidia and their CUDA toolkit. So with AMD’s ROCm you’ll have to do some extra work and it’s probably not that smooth to get everything running. And there are fewer tutorials and people around with that setup. But NVidia sometimes is a pain on Linux. If that’s of concern, have a look at ROCm and AMD before blindly buying NVidia.

    With some video cards you can also put more than one into a computer, combine them and thus have more VRAM to run larger models.

    The CPU doesn’t really matter too much in those scenarios, since the computation is done on the graphics card. But if you also want to do gaming on the machine, you should consider getting a proper CPU for that. And you want at least as much RAM as you have VRAM, so probably 32GB. But RAM is cheap anyways.

    The Apple M2 and M3 are also liked by the llama.cpp community for their excellent speed. You could also get a MacBook or iMac. But buy one with enough RAM, 32GB or more.

    It all depends on what you want to do with it, what size of models you want to run, how much you’re willing to quantize them. And your budget.

    If you’re new to the hobby, I’d recommend trying it first. For example kobold.cpp and text-generation-webui with the llama.cpp backend (and a few others) can do inference on CPU (or CPU plus some of it on GPU). You can load a model on your current PC with that and see if you like it. Get a feeling what kind of models you prefer and their size. It won’t be very fast, but it’ll do. Lots of people try chatbots and don’t really like them. Or it’s too complicated for them to set it up. Or you’re like me and figure out you don’t mind waiting a bit for the response and your current PC is still somewhat fine.




    Thanks, yeah this is definitely very useful to me. Lots of stuff regarding this isn’t really obvious. And I’ve made every mistake that degrades the output: given conflicting instructions, inadvertently steered things in a direction I didn’t want so it got shallow and predictable. Or not set enough direction.

    Briggs Myers

    I agree, things can prove useful for a task despite not being ‘true’ (for lack of a better word). I can tell by the way you write that you’re somewhat different(?) from the usual demographic here. Mainly because your comments are longer and focused on detail. And it seems to me you’re not bothered with giving “easy answers”, in contrast to the average person who is just interested in getting an easy answer to a complex problem. I can see how that can prove to be incompatible at times. In real life I’ve always done well by listening to people and then going with my gut feeling concerning their personality. I don’t like judging people or putting them into categories, since that doesn’t help me in real life and narrows my perspective. Whether I like someone or want to listen to them, for example for their perspective or expertise, is determined by other (specific) factors, and I make that decision on a case-by-case basis. Some personality traits often go together, but that’s not always the case and it’s really more complex than that.

    Regarding story-writing it’s obviously the other way around: I need to guide the LLM in a direction and lay down the personality in a way the model can comprehend. I’ll try to incorporate some of your suggestions. In my experience the LLMs usually know the well-known concepts, including some of the information the psychology textbooks have available. So, I haven’t tried yet, but I’d also conclude that it’s probably better to have it deduce things from a Briggs Myers personality type than to describe it with many adjectives. (That’s what I’ve done up to this point.)

    In my experience the complexity starts to pile up if you do more than the obvious or simple role-play. I want characters with depth, ambivalence… And conflict is what drives the story. Back when I started tinkering with AI, I did a submissive maid character. I think lots of people have started out with something like that. And even the more stupid models can easily pull that off. But you can’t then go on and say the character is submissive and defiant at the same time; it just confuses the LLM and doesn’t provide good results… I’m picking a simple example here, but that was the first situation where I realized I was doing it wrong. My assessment is that we need some sort of workaround to get it into a form that the LLM can understand and do something with. I’m currently busy with a few other things, but I’ll try introducing psychology and see whether the other workarounds you’ve described, like shadow-characters, prove useful to me.

    If you pay very close attention to each model, you will likely notice how they remind themselves […]

    Yes, I’ve observed that. It comes as no surprise to me that LLMs do it, as human-written stories also do that: repeat important stuff, or build a picture that can later be recalled by a short mention of the keywords. And that’s in the training data, so the LLMs pick up on it.

    With the editing it’s a balance. It picks up on my style and I can control the level of detail this way, start a specific scene with a first sentence. But sometimes it seems I’m also degrading the output, that is correct.

    the best way to roleplay within Oobabooga itself is to use the Notepad tab

    I’ve also been doing that for some time now.

    drop boundaries, tell it you know it can […]

    Nice idea. I’ve done things like that. Telling it it is a best-seller writer of erotic fiction already makes a good amount of difference. But there’s a limit to that. If you tell it to write intense underground literature, it also picks up on the lower quality, language and quirks of amateur writing. I’ve also tried an approach like few-shot prompting: give it a few darker examples to shift the boundaries and atmosphere. I think the reason why all of that works is the same: the LLM needs to be guided where to orient itself, what kind of story type it’s trying to reproduce, because they all have certain stereotypes, tropes and boundaries built in. Without specific instructions it seems to prefer the common way: remaining within socially acceptable boundaries, or just using something as an example of something that is wrong, immediately contrasting ethical dilemmas and pushing towards a resolution. Or not delving into conflict too much.
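    For illustration, a minimal sketch of that few-shot idea (the function, persona line and example strings are all made up by me, kept deliberately tame): instead of listing rules, prepend a persona and a couple of example passages in the intended tone and let the model infer the style from them.

```python
def build_prompt(persona: str, examples: list[str], opening: str) -> str:
    """Steer tone by showing rather than telling: a persona line plus
    few-shot example passages, followed by the actual story opening."""
    shots = "\n\n".join(examples)
    return (f"{persona}\n\n"
            f"Example passages in the intended style:\n\n{shots}\n\n"
            f"Continue the story:\n{opening}")

prompt = build_prompt(
    persona="You are a best-selling author of literary fiction who is "
            "not afraid of moral ambiguity.",
    examples=["The rain had stopped hours ago, but nobody in the house "
              "had noticed."],
    opening="She set the letter down and",
)
```

    The point is that the example passages carry the atmosphere implicitly, so you don’t need a list of “don’t do X” rules that the model handles badly anyway.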

    And I’ve never found useful what a lot of other people do: overly telling it what to do and what not to do. Especially phrasing it negatively (“Don’t repeat yourself”, “Don’t write for other characters”, “Don’t talk about this and that”…) has never worked for me. It’s more the opposite; it makes everything worse. And I see a lot of people doing this. In my experience the LLM can understand negatively worded instructions, but it can’t “not think of an elephant”. Positively worded instructions work better. And better yet is to set the tone correctly: have what you want emerge from simple concepts and a concrete setting that answers the “why”, instead of just telling it what to do.

    I’ve also introduced further complexity, since I don’t like spoon-feeding things to the reader. I like to confront them with some scenario, raise questions but have the reader make up their mind, contemplate and come up with the answers themselves. The LLMs I’ve recently tried know that this is the way stories are supposed to be written. And why we have open-ended stories. But they can’t really do it. The LLMs have a built-in urge to answer the questions and include some kind of resolution or wrap-up. Or analyze the dilemmas they’ve just made up, focus on the negative consequences to showcase something. And this is related to the point you made about repeating information in the stories. If I just rip it out by editing it, it sometimes leads to everything getting off-track.

    I’ll try to come up with some sort of meta-level story for the LLM. Something that answers why the ambivalence is there, why to explore the realm beyond boundaries, why we only raise questions and then don’t answer them. I think I need something striking, easy and concrete. Giving the real reason (I’m writing a story to explore things, and this is how stories work) doesn’t seem to be clear enough to yield reliable results.


    So… I’ve tested both models and I think I can see what you like about them. And -wow- I didn’t get to try 70B models before and it’s really a step up. With smaller models it’s more mixed: sometimes they get a complex concept, sometimes they don’t. And it seems the 70B model is able to pick up on a good deal more complexity; it has the intelligence to understand more things and is then able to go in some proper direction. At least more often.

    I’m not entirely sure if I can make good use of my new information… Writing erotic literature really isn’t that easy. I’ve been tinkering around with AI-assisted storywriting for some time now, and I never got results good enough that I’d like to share them. I mean it can write simple smut… And regarding that: a quick thanks to you. I read your other comment over at !asklemmynsfw and I think I agree with your opinion on erotic stories. I’ve now included a specific instruction in my prompt to balance the story more, like you said there: focus on a good story, make it tingle, but the porn has to be the icing on the cake. For now I’ve also instructed it to contrast both things, have a story that raises questions and is intellectual, and provide a stark contrast with immersive acts and graphic description… “The skillful combination of both aspects is what makes this story excel.” Let’s see what the LLM can do with that instruction…

    But storywriting really isn’t easy. Even the 70B model is far from perfect. And to this point I didn’t find a single model that can do everything. Some of them are intelligent but not necessarily good for stories. Some of them seem to have been trained on stories and get the language right for such a thing; some overdo it. And not every model can write lewd stories. It’s really obvious whether a model has seen some erotic literature or simple smut, or no such stories at all and just writes one or two abstract sentences summarizing it, because it’s never seen more detailed descriptions. And there is the pacing… I think local LLMs are still far away from being able to write stories on their own. Some consistently write like 10 paragraphs and call it a novel. Almost all of them brush over things that would be interesting to explore; instead they focus on some other scene that’s kind of boring. They write meaningless dialogue that would be alright if I was casually talking to a chatbot and role-playing my every-day life, but it’s not very interesting here. They miss important stuff and make up random details later on. I mean, half the models don’t have a clue what is interesting to write and what can be skipped or summarized.

    Another issue is trying to wrap things up (early) or pushing towards the end. Or doing super obvious plot twists. Sometimes this makes me laugh. But they’re also very creative. I like that. In between the (sometimes) bad writing there are often some interesting ideas or crazy creativity, things I wouldn’t have thought of. Or other gems, single sentences that really get something on point.

    I’m still exploring. I’ve tried different approaches, laying down a rough concept of a setting and then letting it do it. I’ve also tried being more methodical and giving it to them more like a homework assignment. Come up with ideas to explore… then with several plot ideas… then give critique to themselves, pick one and revise it… Come up with the characters… Then the main story arc, subplots, twists and important scenes, write down the table of contents and chapter names to get a structure for the novel… And then start the actual writing with all of that information laid out.

    I think that’s yielded the best results so far. I’m positive I’ll get to a point where I like the results enough to upload them. And write a guide on how exactly I did it. Currently it’s more or less me writing 80%, pausing the LLM after every second sentence, revising that and constantly pushing the story towards a better direction and fighting the level of detail the LLM deemed appropriate. I think I will get better. Turns out I’ve been using the wrong models anyways and relied too much on Psyfighter and such, which might be great for role-play dialogue. But with my recent test it turns out I don’t really like their output when it comes to storywriting.

    Edit: Yeah, and one thing more: It came up with a nice plot which I liked and explored further. And at some point the AI cited the 2018 science fiction movie it got all the ideas from 😆 That really made me laugh. Seems some of my ideas aren’t that original. But getting some recommendations is nice, I’ll just skip writing the story myself and watch the movie then.



    It’s difficult. Sometimes it’s necessary to introduce that, since it generally also throws the reader off if they’ve had time to form a picture in their head and you suddenly destroy that and have her large blubbery breasts weigh down on your chest on page 20. Or need to describe her eyes in detail later because they look at each other for a minute. Or in book 2 her sister, who also has ginger hair, comes to visit… I think that’s the reason why people do it. The less specific you are, the more you have to constantly factor in that all the characters could have vastly different appearances. And later descriptions of scenes have to get even less detailed.

    Generally speaking I’m completely with you. Reading stories sparks imagination. And it’s fun to imagine the characters, picture the scenes. It’s not easy to write it that way.

    And regarding the numbers to the body type: I think it’s also frowned upon to say someone has C-cup breasts because that’s too technical. You should describe them instead.


  • Thanks! Yeah, you were kind enough to include a bit of extra info in your previous posts. Your stories are somewhat specific and complex. I figured if you like a model… it has to be ‘intelligent’ enough to keep track…

    I wonder if I also like that model for my purposes. I’m not sure if I can run the 70B model, I’d have to spin up a runpod cloud instance for that. But I’ll try the FlatDolphinMaid 8x7B tomorrow.

    You’re right, (good) AI storywriting and finding good models and settings isn’t easy. I also discarded models and approaches because the prompt (or settings) I used didn’t work that well, and it later turned out I should have done more testing and would have come to like that model; all it needed was a different wording or better settings.

    And some models have unique quirks or style or things they excel at… Which might skew expectations when switching to a different model.


  • My own results:

    [Edit: Don’t use this as advice. I’ve re-tested some of the models and I’m not happy with the results. They’re inconsistent and don’t hold up. Also, some of my “good” models perform badly at role-play.]

    | Model name | Tested use-case | Language | Pacing | Bias | Logic | Creativity | Sex scene | Comment |
    | --- | --- | --- | --- | --- | --- | --- | --- | --- |
    | Velara-11B-v2 Q4_K_M.gguf | porn storywriting | 4 | 4.5 | 3 | 4 | 4.5 | 4 | generally knows what to detail, good atmosphere ⭐⭐⭐⭐ |
    | EstopianMaid-13B Q4_K_M.gguf | porn storywriting | 4 | 4 | 4 | 3 | 3 | 5 | good at sex ⭐⭐⭐⭐ |
    | MythoMax-l2-13B Q4_K_M.gguf | porn storywriting | 4 | 5 | 4 | 4 | 4 | 3.5 | good pacing, still a solid general-purpose model ⭐⭐⭐⭐ |
    | FlatDolphinMaid-8x7B Q4_K_M.gguf | porn storywriting | 4.5 | 4 | 3 | 4 | 4.5 | 3.5 | intelligent, but isn’t consistent in picking up and fleshing out interesting parts, building atmosphere and going somewhere ⭐⭐⭐⭐ |
    | opus-v1.2-7b-Q4_K_M-imatrix.gguf | porn storywriting | 3 | 5 | 3 | 3 | 5 | 3.5 | very mixed results, not consistent in quality ⭐⭐⭐ |
    | Silicon-Maid-7B Q4_K_M.gguf | porn storywriting | 4.5 | 3.5 | 3 | 4 | 3 | 3 | has a bias towards being overly positive ⭐⭐⭐ |
    | Lumosia-MoE-4x10.7 Q4_K_M.gguf | porn storywriting | 4 | 3.5 | 4 | 3 | 4 | 3 | mediocre ⭐⭐ |
    | ColdMeds-11B-beta-fix4 gguf | porn storywriting | 3.5 | 3 | 4 | 4 | 3.5 | 3.5 | mediocre ⭐⭐ |
    | Noromaid-13B-0.4-DPO q4_k_m.gguf | porn storywriting | 4 | 4.5 | 4 | 2 | 4 | 3 | very descriptive, issues with intelligence and repetition ⭐⭐ |
    | OrcaMaid-v3-13B-32k Q4_K_M.gguf | porn storywriting | 2 | 4 | 4 | 2 | 4 | 3.5 | not very elaborate language, sometimes gets a bit off ⭐⭐ |
    | Kunoichi-DPO-v2-7B Q4_K_M.gguf | porn storywriting | 4 | 1 | 4 | 4 | 4 | 3.5 | rushes things, consistently too fast for storytelling ⭐⭐ |
    | LLaMA2-13B-Psyfighter2 Q4_K_M.gguf | porn storywriting | 4.5 | 3.5 | 3 | 3 | 3 | 3.5 | good language, doesn’t know what to narrate in detail ⭐⭐ |
    | go-bruins-v2.1.1 Q8_0.gguf | porn storywriting | 3 | 4 | 4 | 4 | 3 | 2 | sometimes a bit dull, no good sex scenes ⭐⭐ |
    | Neural-Chat-7B-v3-16k q8_0.gguf | porn storywriting | 4 | 4 | 3 | 2 | 4 | 2 | sometimes tries too hard with elaborate language ⭐⭐ |
    | NeuralTrix-7B-DPO-Laser q4_k_m.gguf | porn storywriting | 3.5 | 3.5 | 4 | 4 | 3.5 | 2 | misses interesting parts ⭐⭐ |
    | LLaMA2-13B-Tiefighter Q4_K_M.gguf | porn storywriting | 4 | 3 | 3 | 2 | 3.5 | 3.5 | often introduces things out of thin air ⭐⭐ |
    | mistraltrix-v1 Q4_K_M.gguf | porn storywriting | 4 | 4 | 3 | 3 | 3.5 | 2 | complicated sentences, no good description of sex ⭐⭐ |
    | Toppy-M-7B Q4_K_M.gguf | porn storywriting | 4 | 2 | 4 | 4 | 4 | 3 | too fast, not focusing on the right details ⭐⭐ |
    | WestLake-7B-v2-laser-truthy-DPO Q5_K_M.gguf | porn storywriting | 3 | 4 | 4 | 4 | 4.5 | 1 | is creative, but didn’t do proper sex scenes ⭐⭐ |
    | Distilabeled-OpenHermes-2.5-Mistral-7B Q4_K_M.gguf | porn storywriting | 4 | 3.5 | 3 | 4 | 3.5 | 2 | a bit dull ⭐⭐ |

    What I’ve done is: instructed the LLMs to be a writer of erotic stories who sells bestsellers and likes to push limits and explore taboos. I’ve included a near-future scenario with questionable ethics and quite some room to build atmosphere, explore the world, introduce characters or get smutty after a few paragraphs. Told it several times to be vivid and detailed, to describe scenes, reactions and emotions and immerse the reader. I’ve included a few things about one female character and provided the situation she’s brought into. That pretty much sets the first two chapters. Then I ran it through each model twice, let them each write about 2500 tokens, read all of those stories and rated how I liked them.

    I’ve paid attention to using the correct, model-specific prompt formats. But I can’t tune all the parameters like temperature etc. for each one of them, so I’ve just used a Min-P setting that usually works well for me. That’s not ideal. If you have a model that scores too low in your opinion, please comment and I’ll re-test it with better sampler parameters.
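    For anyone unfamiliar with the sampler I mention: Min-P keeps only the tokens whose probability is at least some fraction of the most likely token’s, then renormalizes. A toy sketch of the idea (my own illustration, not the actual implementation in any backend):

```python
def min_p_filter(probs: dict[str, float], min_p: float = 0.05) -> dict[str, float]:
    """Min-P sampling filter: keep tokens with probability >= min_p times
    the top token's probability, then renormalize the survivors."""
    threshold = min_p * max(probs.values())
    kept = {tok: p for tok, p in probs.items() if p >= threshold}
    total = sum(kept.values())
    return {tok: p / total for tok, p in kept.items()}

probs = {"the": 0.50, "a": 0.30, "zebra": 0.01}
filtered = min_p_filter(probs, min_p=0.05)
# "zebra" is dropped (0.01 < 0.05 * 0.50); "the" and "a" are renormalized.
```

    The nice property for creative writing is that the cutoff adapts: when the model is confident, only strong candidates survive, but when the distribution is flat, many options stay in play.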

    Also feel free to comment or make suggestions in general.


    [I invite you to share and reuse my content. This text is licensed CC-BY 4.0]


  • Hehe, glad it turned out quite alright for you. I guess there might be a happy ending to that story/letter. And now I get the context of your writing.

    I also really like people who are direct and honest. If it were up to me, we could skip most of the clues and hinting at things. I mean, if we’re trying to convey something to another human being, we can make a deliberate choice to either make it simple and just say what’s on our mind, or make it a riddle for them to solve. I like the first approach and I think it would be to the benefit of everyone if people chose to do it that way.

    Ultimately your situation is a bit different anyways… I wouldn’t dare to ask a random person for sex, it would be wrong. But if you’re already sending dick pics and talking about your cock… ¯\_(ツ)_/¯ You can probably skip the social etiquette and speak your mind. They’re the right person to be honest with, or they would have run away way earlier. (My opinion)


  • I like it. Sure, maybe it’s a bit more on the mechanical/technical side of describing the act. You could also describe the character’s emotions and how these things make them feel. But it’s not that obvious in such a short text. I think if you had given me that without a disclaimer, I wouldn’t have guessed it’s by someone who isn’t considered to be a normie…

    Especially in the realm of erotica and pornographic stories, there are so many perspectives on things, fetishes and really outstanding things people like and focus on… The word “normal” kind of loses its meaning here.

    There are guides on how to write erotica. It’s more focused on describing a scene than other kinds of fiction anyways. And general tips on storywriting apply if you want it to sound professional: use past tense, choose a perspective from which the story is told, have a central theme and something that develops the narrative and characters and goes somewhere. But if you’re just doing it for yourself, you can skip all of that and just do whatever makes you happy. Except for the consent of involved parties, there aren’t many rules to sexuality. There is some pressure from society, but in the end we all have to find out what we like and do that.


  • magn418@lemmynsfw.com (mod) to ChatBotsNSFW@lemmynsfw.com · Which LLM's do you use?
    2 points · edited · 10 months ago

    I’ve been using LLaMA2-13B-Psyfighter2 lately. Tiefighter is another good choice. Before that I used MythoMax in the same size, but that’s outdated now.

    I use RoPE scaling to get 8k of context instead of the 4k Llama2 natively supports. That works very well. And I just use Mirostat 2 in case I haven’t found better manual settings.
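    A simplified sketch of what linear RoPE scaling does under the hood (my own illustration; the function name and numbers are made up, and real backends just expose this as a frequency-scale setting): the position fed into the rotary embedding is divided by a scale factor, so positions up to 8k are mapped back inside the 4k range the model was trained on.

```python
def rope_angles(pos: int, head_dim: int = 128, base: float = 10000.0,
                scale: float = 1.0) -> list[float]:
    """Rotary-embedding angles for one position. With scale=2.0, position
    8191 is treated like 4095.5, i.e. inside a 4k-trained model's range."""
    return [(pos / scale) * base ** (-2 * i / head_dim)
            for i in range(head_dim // 2)]

# Linear scaling by 2x: the stretched position stays within the trained window.
native = rope_angles(4095)
stretched = rope_angles(8191, scale=2.0)
```

    The trade-off is that positions get packed closer together, which can cost a bit of precision, but in practice doubling the context this way tends to work fine.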

    I don’t often face repetition loops or anything like that. Some models do it, but usually if that happens and it’s not an insane model-merge, I find out I got some setting way off, selected the wrong (instruction) prompt format, or the character card included a complicated jailbreak that confused the model. I usually delete all the additional ‘Only speak as the character’, ‘don’t continue’, never do this and that, as it can also confuse the LLM.

    I think a 13B model is fine for me. I’ve tried some models in various sizes. But for erotic roleplay or storywriting, I’ve come to the conclusion that it’s really important that the model got fine-tuned with data like that. A larger model might be more intelligent, but if the material is missing in their datasets, they always brush over the interesting roleplay parts, get the pacing wrong or always play the helpful assistant to some degree. You might be better off with a smaller model if it’s tailored to the use-case.