jabroni_salad 2 days ago

Chub seems to be implicated by the article, but it looks like just a repository of JSON files you can download, and the access they sell is to models they appear to be hosting themselves. Their best one is only 70B? And 8k context is significantly less than what the big APIs would give you.

But I guess 'people will steal your API keys off of GitHub if you publish them publicly' is not a very exciting article.

ziddoap 2 days ago

The mention of CSAM seems to mostly be there to tug on emotions. Not really surprised, given it's a Krebs article. However, I am surprised Krebs didn't try to dox this "Lore" person.

While the headline sure is nice, the article really just boils down to the same shit that has been happening forever. Bad people steal access to resources and then resell those resources. Nothing particularly interesting here that I can see.

Side note: It is interesting how many new accounts are chiming in on this one. Telling us all the places not to visit. Subtle stuff!

  • klyrs 2 days ago

    So subtle. The chans are known to invade whenever articles mentioning them are posted here. The articles get flagged a bunch and dragged off the main page quickly. This may be deliberate.

    • red-iron-pine 2 days ago

      What do the chans have to do with it? It's not 2007, and GPT bots are a thing.

  • joe_the_user 2 days ago

    I actually hadn't noticed Krebs producing hand-wringing trash before. But this is "...one (of many) sex-based chat services steals cloud cycles." (How many things steal cloud cycles... etc.) News at 11.

bloopernova 2 days ago

There seems to be continued demand for CSAM-related LLM chats. Apart from that being a very depressing reflection of humanity, I wonder if we're simply seeing humanity as it is, or if the availability of CSAM pulls people into such crimes?

  • wkat4242 2 days ago

    I think the CSAM is mainly used for effect here, to make it sound more sensational. Of course it's one of the things uncensored models can be used for, but the article is calling out the extreme.

    Uncensored models are also required for normal adult erotic fiction, and even to discuss many sexual topics, because the public models are so hypersensitive on this topic. They're mirroring American sensibilities, which can be annoying here in Europe where we are a lot more open about consensual adult sexuality.

    • BobaFloutist 2 days ago

      It's also worth noting that, to my knowledge, written CSAM content largely isn't illegal; it's just visual depictions that are. I'm not sure there's even any law prohibiting profiting off commercial written depictions of it, though I wouldn't swear to that. It's just that you'd have a very hard time with typical commercial infrastructure (web hosts, banks, payment processors, advertising firms, publishers and distributors) because you'd be so poisonous to public opinion.

      So I'm not fully convinced that an LLM that generates CSAM text, even if it were interactive, would be in any way restricted by law. Image generation is, of course, a little different.

    • anon11100 2 days ago

      A sensational truth. The sexy AI chatbot sites with public community bots are full of horrible stuff. A dispassionate AI will happily engage in genocide, or suicide, or snuff, etc.

      • wkat4242 2 days ago

        Yeah I'm sure they can and are used for that sometimes but I doubt this is what most people are using them for.

        The problem is that AI bots are so heavily censored, and uncensoring them means removing all protections. By sneaking the slighter things past the barrier, you open it up to all kinds of things.

        But in my opinion the models are currently really too censored to be useful. For example, I partake in BDSM and sex-positive parties. All very consensual and adults-only stuff (you'd be surprised how big a thing consent is in BDSM) and all very legal and above board. I'm in a lot of chat groups about these things but don't have time to monitor them, so I use an LLM to summarise them (a local one for privacy reasons, not just my own privacy but the other group participants' as well, obviously). But if I try to use a normal model like Llama for it, it will immediately close up and complain about 'explicit' content. This is just BS. There should be models that are more open to this.
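        Roughly, my setup looks like the sketch below (the model name, the export file and the endpoint are just illustrative, and I'm assuming an Ollama-style local server; any local runner works the same way):

          import requests

          # Read an exported chat log (filename is just an example)
          chat_log = open("groupchat_export.txt", encoding="utf-8").read()

          # Ask a locally hosted model for a summary; nothing leaves the machine
          resp = requests.post(
              "http://localhost:11434/api/generate",   # default Ollama endpoint
              json={
                  "model": "llama3.1",                 # whichever local model you run
                  "prompt": "Summarise this group chat, one bullet per topic:\n\n" + chat_log,
                  "stream": False,
              },
              timeout=600,
          )
          print(resp.json()["response"])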

        I use an uncensored version of Llama now, which works great. And it never recommends genocide, homicide or suicide, because I'm not interested in those things and don't ask for them. I'm sure it could tell me, but I don't want to know. Most of the people who need uncensored models will have these kinds of use cases. Calling out the one extreme idiot is just sensationalism.

        It should really be possible to customise a model's censorship rather than going for the strictest common denominator. If there were just a few public models that could create some normal adult smut incorporating all sexual practices that are legal, 99% of the current customers of these hacked-cloud-hosted chatbots would be very happy with that. And the mainstream AI industry would make more money. The problem is that they don't want to be associated with it, which is aligned with American morals but not European ones. Sex is a normal part of life here (and the SexTech industry is growing rapidly).

        • qualidea19871 2 days ago

          This actually seems to be a trend that I have noticed in big tech, where corporations seem to think that their end users simply do not possess the common sense to self-moderate their usage of the technology they're engaging with, and instead choose the safest road possible and clamp down on security in the name of "safety". The LLM space is particularly guilty of this, locking everything down in the name of "safety and harmlessness."

          Now, I understand why they're doing this, but they should give people an option to opt out of the walled garden and accept the risks involved rather than treat everyone like clueless idiots, as you said in your last sentence. Unfortunately, this probably won't happen since that kind of thing scares investor money off really fast.

          • mrsilencedogood 2 days ago

            I really don't think it's about common sense or user agency or anything like that at all.

            These companies are just trying to make a trillion dollars. It will be hard to make a trillion dollars if your product is associated with sexual deviancy stuff (and by "deviancy" I mean literally anything, like Disney's definition of deviancy). So they do anything they can to make deviancy hard and giving them a trillion dollars easy, like automating away a huge portion of call center work or something like that.

            Obviously, from where people sit, embroiled in the political "debates" of our age, it's easy to assign political motivations to it. But really they just want people to stop doing shit that isn't giving them a trillion dollars, because it's just a distraction and cost center for them to manage the PR fallout when someone makes a lewd chatbot and it gets on Fox News.

            • ryandrake 2 days ago

              It's too bad that AI (and most of the mainstream Internet) has been so thoroughly culturally captured by the weirdo "Corporate/Puritanical American" morality and taboo regime, where anything about sexuality is forbidden and a seemingly random (and slowly growing) handful of bad words is also forbidden, but anything else goes. It would be nice to have some content diversity (not including CSAM, obviously) in the companies that control Internet content, and who are inevitably going to control AI.

            • wkat4242 2 days ago

              I'm not saying it's the politics influencing this directly. But the politics set the public perception of these things. It's bullshit political misdirection that makes people angry about transsexualism or LGBT information instead of the problems that actually affect their lives deeply.

              PS: I wouldn't be surprised those fox news watchers watch the hardest porn and yet would condemn a chatbot for mentioning a nipple :P

            • newaccount74 2 days ago

              Yeah. People are forgetting what the old internet looked like. Literally every search result used to contain porn.

              I'm glad that content filters are now good enough to filter porn reliably. If I don't look for it, I rarely find porn nowadays.

              • wkat4242 2 days ago

                You clearly don't visit torrent sites :P

                It's funny, no matter where I am, it's always the same hot girls in my area wanting to chat with me! They must follow me around.

          • ryandrake 2 days ago

            > choose the safest road possible and clamp down on security in the name of "safety". The LLM space is particularly guilty of this, locking everything down in the name of "safety and harmlessness."

            It's weird how the meaning of the word "safety" has been captured and changed by these guys. Safety has, in the past, usually been about physical safety and avoiding danger: Hard hats, seat belts, safety glasses, traffic rules, and so on. Now, somehow the term has lost the "danger avoidance" part, and it's simply turned into a euphemism for censorship: Our AI model is limited for reasons of "safety." And companies who have "content safety" teams. Safety is no longer about protection from danger, but about puritanism and profit-protection.

          • qball 2 days ago

            >locking everything down in the name of "safety and harmlessness."

            This was always a pretense: people are concerned about the sci-fi trope of AI destroying the world, so why not reuse that name to justify inserting political bullshit into your queries? Because fuck you, that's why.

          • throwanem 2 days ago

            > a trend that I have noticed in big tech, where corporations seem to think that their end users simply do not possess common sense to self moderate their usage of the technology they're engaging with

            Of course. They recognize and expect in users the effects of the very condition which they first seek to create.

  • pjc50 2 days ago

    CSAM is probably the least available category of material on the Internet. All but the most diehard "free speech" services consider it a bannable offence. It's very illegal in almost every jurisdiction and hugely unpopular with the public. The demand for it lurks nonetheless.

    CSAM is extremely illegal because making it is a crime against a child. Textual material describing fictional child abuse isn't quite in the same category. Deepfake material that purports to be of a specific real child may or may not be illegal depending on jurisdiction but really ought to be.

    • blackeyeblitzar 2 days ago

      Why should it “really ought to be” illegal if it is fictional? Isn’t it like the textual material?

      • BobbyJo 2 days ago

        CSAM is unique in that society, almost uniformly, classifies it as wrong, even when there is no relationship to any material harm. I think the parent's argumentative basis is simply that the information itself is evil at face value, and the vast majority of people (in the U.S. at least) would agree.

        • randomdata a day ago

          They would agree, or they would be afraid to disagree?

      • throwway120385 2 days ago

        Because making it legal creates a situation where someone makes real CSAM and then argues that it is an AI fake in court. And it's really not a pretty thing to require the affected children to testify as to what happened to them.

        It's also a gateway, stoking interest in the real thing.

        • BobbyJo 2 days ago

          Neither of these two arguments holds any water. I see them all the time, but they really just feel like justification after the fact, as both can be applied to such a large swath of information as to be useless.

          The first would require we outlaw generating depictions of any illegal activity. The second would require we ban undesirable men from legal adult content.

          I think it's OK if society, and our legislature, classifies the information as illegal, in isolation, with no useless A->B gymnastics. That's pretty much what the law already does.

          • ensignavenger 2 days ago

            The first would only require banning fictional depictions of heinous crimes against children (when the children look like real ones), as it is intended to protect children from having to testify to what happened to them. One could argue that adults shouldn't have to testify to being the victims of various terrible crimes either, but that is a different argument from the one being made.

            • rightbyte 2 days ago

              > as it is intended to protect children from having to testify to what happened to them.

              What do you mean? You don't have to show the child depicted in fake CSAM the material to figure out whether it happened or not. If no crime was committed against the child in reality, there is nothing for them to testify about.

            • BobbyJo 2 days ago

              > The first would only require banning fictional depictions of heinous crimes against children

              If, and only if, you limit the scope of the logic to CSAM, which is my point.

              The logic, by itself, could apply to any video evidence of a crime. For instance, if we want CCTV videos to remain admissible in court, then according to the parent's logic, we need to outlaw any generation of CCTV-format video, otherwise every defendant can simply claim the videos are fake.

              • ensignavenger 2 days ago

                Defendants claiming that video is fake will be a challenge that future courts (and probably the very near future, if not already) will have to deal with. In the short term, it will likely require the folks involved in the CCTV system to keep better records to show a proper chain of evidence, and to be available to testify in court as to that chain of evidence. Courts have always had to deal with the veracity of evidence brought before them.

                But it is a lot easier to have the folks who are in charge of systems testify to their authenticity than to have a child who has been abused testify to the authenticity of the abuse; thus the logic is defensibly limited to child abuse.

                • BobbyJo 2 days ago

                  > But it is a lot easier to have folks who are in charge of systems testify as to their authenticity than to have a child who has been abused testify to the authenticity of the abuse, thus the logic is defensibly limited to child abuse.

                  Exactly my point? We don't need to come up with a generic legal basis or reason, we can just say "CSAM is special" in a legal sense.

              • qingcharles 2 days ago

                No, this is a common misunderstanding about introducing evidence.

                When you introduce a picture or video in a trial (or hearing), someone with knowledge of the video must appear to "lay a foundation" and say that what is presented is true and accurate and genuine. Often there are stipulations to avoid this (e.g. in a murder trial it generally pisses the court staff off if the defendant demands that the crime scene photographer turn up to say they took the photos of the body), but it is still not uncommon for a lab technician to be brought to court to explain how they processed some drug sample.

                The poster a bit above you was right about the problem with AI. If we say AI CSAM should be legal then we might end up with the scenario that all CSAM is considered fake unless we can uncover the victim or perpetrator and have them brought to court to say they took the photo or video. It's a very tough legal problem.

                • BobbyJo 2 days ago

                  I'm confused by the "No". Everything you said is in agreement with my comment.

      • BobaFloutist 2 days ago

        I'm far from an expert but last I checked the consensus was that attraction to minors is a disordered behavior, and if you feed it with (even fictional) visual depictions it increases the paraphilia. It might seem like a harmless outlet, but it doesn't actually function that way, and so it's better for everyone if it's just made illegal.

      • otabdeveloper4 a day ago

        Because the whole point of laws 'n shiet existing at all is to legislate morality.

      • BizarroLand 2 days ago

        There is some part of me that can rationalize that an AI is not a child and therefore there is no child abuse happening in the interaction, but that tiny part of my brain is shouted down immediately by the rest of my brain screaming, "PEOPLE ARE HORRIBLE. YOU CANNOT GIVE THEM THE OPPORTUNITY TO FANTASIZE ABOUT CSA IN A COMMUNITY AS PEOPLE WILL INEXORABLY ESCALATE INTO REAL LIFE CSA!"

        Even if there are people who can indulge in imaginary CSAM without ever bringing it into real life, those are not the people you can set the bar against.

        You have to set it against the average person and deduce from there what percentage of them, when given free access to this material, would be tempted into committing crimes. If that number goes up, your rule is too loose.

        Giving everyone unlimited access to this without judgement will almost certainly increase child sex crimes. Therefore it must be restricted.

        • qball 2 days ago

          >but that tiny part of my brain is shouted down immediately by the rest of my brain screaming

          Your gut feeling is simply wrong.

          General pornography availability and sexual assault are negatively correlated; you'll notice the former increased dramatically and the latter decreased dramatically over the past 20 years in Western societies where that is true.

          Despite what you might be led to believe, crime rates were not increasing (well, before 2020 anyway).

        • welshwelsh 2 days ago

          Wow. Honestly, no part of my brain is screaming about possible escalation; it's something I'm not worried about in the slightest. Why would someone risk their freedom and harm another person IRL by committing a serious crime, when they could get the same stimulation from a computer program?

          But let's say we do live in a world where fictional crimes often escalate to real ones. Suppose that playing DOOM increases the chance that an unstable person will buy a gun and shoot real people. Even in that universe, I would not be OK with laws that restrict me from playing DOOM, because that violates my freedom. I do not want the law to treat everyone as a potential criminal.

  • 123yawaworht456 2 days ago

    AI hallucinations are not CSAM.

    the attempt to stoke a moral panic in this article (and the forbes article from january it's referencing) is baffling. you can go to any booru or hentai site and see far more graphic things than boring LLM slop, and it's all on clearnet, because terrible things are not illegal as long as they are imaginary (in all but the most ass-backwards jurisdictions).

    it reads like boomer bullshit about violent video games from 20+ years ago. "A Single Unattended Computer Can Feed a Crowd of Violent Teenagers. We left a computer unattended, and guess what? They were playing Doom on it! Just like Eric Harris and Dylan Klebold!"

jchw 2 days ago

There's a pretty big focus on LLM chatbots for taboo fetishes... I'm sure it's because it's disturbing, but am I the only one who sees that particular facet as an utter nothing-burger?

Of all of the AI safety concerns, I think this is one of the least compelling. If an LLM veered into this kind of topic out of nowhere it could be very disturbing for the user, but in this case it's exactly what they are searching for. I'm pretty sure that for any given disturbing topic you can find hundreds of fully written fictional stories on places like AO3 anyway. I mean, if you want to, you can also engage in these fantasies with other people in erotic role-play, and taboo fetishes are not exactly new. Even if it is illicit in some jurisdictions (no clue, not a lawyer), it is ultimately victimless and unenforceable, so I doubt that dissuades most people.

Sure it's rather disturbing, but personally I find lots of things that are very legal and not particularly taboo to be disturbing, and I still don't see a problem if people want to indulge in it, as long as I'm not forced to be involved.

  • Terr_ 2 days ago

    > Of all of the AI safety concerns, I think this is one of the least compelling.

    It's especially low-priority if nobody's put forward evidence to show that a software-assisted {fictional X} promotes more {actual X} that would harm actual people.

    I trust a lot of us are old enough to have lived through the failed prophecies that FPS-games needed to be categorically banned to prevent players becoming homicidal shooters in real life.

    • lovethevoid 2 days ago

      Ah, the good old Manhunt controversy.

      The problem with where you place your trust is that this has to repeat with every generation that has not had to deal with such controversies on a large scale, and that when people are emotionally motivated, the rational part switches off temporarily, so they don't care about what was previously claimed.

      Until both of those are adequately accounted for, this is going to repeat endlessly as people love controlling others.

  • jchw 2 days ago

    In most cases commenting on votes is boring and pointless, as per the guidelines. However, rather unusually, I've found it really quite interesting to watch the votes on this comment. It paints a picture that people are actually quite split on this matter. I kind of figured it might wind up in the gray (I don't say "am I the only one" without good cause usually) but on the other hand it leaves me genuinely curious exactly how people disagree with this. (To be clear, I'm probably not actually going to engage much more since this is not really a topic I care deeply about but it's more a morbid curiosity.)

    • throwanem 2 days ago

      I describe what I have learned in later life about what was done to me in earlier.

      One may "groom" a child to accept sexual abuse in large part by portraying this as an entirely normal aspect of their present phase of life. To do so requires the presentation of what appears to be true evidence.

      Such images are invariably lies, but remember that the victim is a child as naïve to lies as to all else, yet. What he sees he will also believe, and not notice all the lies behind it.

      AI-generated CSAM makes this a much, much easier process. It relieves the prerequisite of acquiring genuine child pornography. Now, all that's required is unsupervised access, not even both at once, to both an AI and a child. You have now expanded the threat radius by several orders of magnitude.

      This alone suffices to justify AI-generated CSAM as a crime. In the US you may own many types of rifle. You may not, though, own an artillery rifle. It is far too dangerous a weapon, and you no more than any other civilian can have any possible lawful use of such a thing. Therefore its simple possession is a crime. The same principle applies here.

      • qball 2 days ago

        There is one particular US citizen that owns a private fleet of ICBMs. Those missiles are typically used to do things like launch low-cost satellite internet and ferry astronauts to and from the ISS; but the only difference between those missiles and the ones intended to make an arbitrary chunk of the Earth explode is a matter of programming the flight computers slightly differently.

        Provided you pay the taxes you can own pretty much whatever you want (even in countries less free than the US); all you have to do is be fabulously wealthy. (Or in the NFA's case, pay a 200 dollar transfer tax.)

        • throwanem 2 days ago

          The NFA seems to do a good enough job limiting what it regulates. We discuss law, not code or science; as in any human endeavor, even theoretical perfectibility is impossible to achieve and dangerous to pursue.

          If you mean to suggest the same law should regulate both you and I, and men with all the power and armament of a James Bond movie villain, I refer you to my prior statement, and to the final argument of a king who wishes to remain so.

      • jchw 2 days ago

        So, you seem to believe that generative AI in this case is mostly just a force multiplier, but the force multiplication is so great that it should be considered dangerous analogous to a firearm.

        I do appreciate hearing your perspective. I'll admit that I am not personally convinced by this reasoning but I think it is at least a sensible line of argument.

        • throwanem 2 days ago

          More precisely, that it should be considered specifically analogous to a "destructive device" as defined by the National Firearms Act.

          I do not require to have convinced you, and genuinely appreciate your consideration.

      • rolph 2 days ago

        "You may not, though, own an artillery rifle"

        if you are not a criminal and pass the paperwork, you actually can.

        however, where you operate your howie is another matter.

        • throwanem 2 days ago

          If you mean to say we should seek to set the same bar on AI that generates CSAM, then that seems to me a very fine place to start - grandfather clause and all.

          • jchw 2 days ago

            I think the main issue with regulating computer programs and the Internet the way we regulate physical objects is that replication on the Internet is roughly free. If we really wanted regulation like this to be meaningful, it would have to involve regulating the sale of compute power, something I personally really hope doesn't happen.

            That said, we're probably about to see a very similar issue crop up in the real world with 3D printed firearms, and I'm personally not looking forward to the consequences of it pretty much regardless of what the outcome is.

            Interesting times.

            • throwanem 2 days ago

              I might have been more clear that my analogy was specific to the theory of the crime, and not intended to speak to methods of abating it. You are of course correct that these would need to differ.

              I don't like the idea of such regulation being made in ignorance, either. Engineers should have a seat at that table, which requires first that we have earned it. I don't see where we have begun to do that, and I did my first paid work in this profession twenty-nine years ago.

              If that failure on the part of our profession proves to have consequences for us or for society, then I don't think any one of us is free to consider the blame for those entirely undeserved.

              Again, I don't require to have convinced you.

              • jchw 2 days ago

                Since I started working, we've moved on from unwavering optimism about the power of software and the Internet to free and enrich us as a nearly purely positive force. Obviously it was not true, and we've woken up with quite a hangover. At least that's how I feel.

                > Engineers should have a seat at that table, which requires first that we have earned it.

                I don't love this mentality. Leaving aside the issue of trying to quantify whether a seat at the table is earned or not, software developers are not a monoculture; even this thread shows that there is actually quite a lot of disagreement. Not having software developers at the table will probably just ensure the regulation is unnecessarily stupid and pointless, a lot like what seems to happen for firearms regulation.

                That said, I'm not even really concerned so much about whether engineers are allowed at the table. Instead I suspect the regulation will be skewed by interests with a lot of money, e.g. OpenAI wanting to pull up the ladder behind them.

                > Again, I don't require to have convinced you.

                Sorry if my previous comment came off as condescending. Anyway, I'm only commenting here because it is an interesting discussion topic to me, not trying to force a consensus.

                • throwanem 2 days ago

                  Oh, no worries at all. I only wanted to disclaim any force with which I'd seemed to make my argument.

                  Please excuse me if I seem a little hard to pin down today. I spoke earlier of what was done to me before. Of those responsible, I learned yesterday by far the worst has ceased forever to trouble this earth: the police officer on whom he fired first has brought home to him all his sins. The corpse of him now enriches the soil of a potter's field - more worth by far than he ever had in his life, which he did not so lead as to earn even the most vacuous performance of mourning.

                  I have for decades expected such news to change me when it came. I did not at all expect this wealth of peace and joy. I may not yet have begun to encompass it.

                  These are thoughtful points you've made. I may find a more substantive response to offer here, but possibly not before the reply window closes.

    • thomastjeffery 2 days ago

      The article we are talking about wants to be about CSAM stories. That alone is a topic that most people have a strong opinion about. A strong enough opinion to say that anything even adjacent to the topic is not worth even a little consideration. CSAM is the ultimate taboo subject, and for good reason.

      But this article isn't really about CSAM. It's about the taboo itself. This article taunts the reader: if CSAM truly deserves to be taboo, then it logically follows that anything resembling CSAM should be censored, and its creators punished.

      If we take this argument seriously, then we must actually consider what it means to resemble CSAM. That's a path that no one is interested in exploring, so the argument itself just vanishes.

      --

      The real argument is about the threat of story. Every writer has the power to write any story that they can imagine. There is nothing new about this: it's been true since prehistory, since language itself.

  • orbital-decay 2 days ago

    Just a reminder that AI safety is all of the following, and many other things:

    - Rogue AI scenario, which increasingly looks like a figment of the collective imagination of certain extremely smart people who discovered religion in their tech tree

    - Instructions on how to make nuclear weapons (are they scraping classified materials now?..)

    - Geopolitical games (don't let the adversary have what we have, "for the benefit of all humanity" is a red herring).

    - Spam/manipulation/botting/astroturfing (legit one, not nearly enough attention paid compared to others).

    - Erotic roleplay (prudish/thought policing), disturbing erotic roleplay (arguably a nothingburger, division is understandable).

    Turns out if you shove all that into one huge category of AI safety, the term becomes overloaded and meaningless.

    • gs17 2 days ago

      > Instructions on how to make nuclear weapons (are they scraping classified materials now?..)

      Presumably, a "smart enough" AI could work the physics out the same way humans did to write those classified materials. It's still not a realistic threat unless we're banning physics textbooks as well; AFAIK the barrier is more the materials and equipment required than the principles.

      • ThrowawayTestr 2 days ago

        If an LLM can figure out nukes from first principles I think we have bigger problems.

        • jchw 2 days ago

          The idea that an LLM will spontaneously use its super-intelligence to somehow develop perfect plans for building weapons of mass destruction seems greatly misplaced. Think of all of the things that are not about intelligence that go into building something like that. It is of course feasible for someone to pull it off, which we know because we already did, but all we needed for that was ordinary human intelligence, the right knowledge, oh, and also (presumably) access to kilograms of weapons-grade plutonium, among many other things.

          Nobody ever really explains why normal nuclear non-proliferation efforts are insufficient to address the concerns.

          I get that the fear isn't always rational but it is rather mind-bending that these types of arguments are actually used in the real world in favor of some crazy regulation. I don't even really care that much about LLMs and I find it pretty perplexing.

  • tcdent 2 days ago

    Really hard to quantify the demand out there, since, thankfully, most of these people keep it out of my feed.

    But I have a feeling it's significantly more popular than we expect.

  • thomastjeffery 2 days ago

    This is just the natural conclusion to a narrative that conflates hallucination with [un]safety. Nightmares are not danger.

    LLMs will never be able to filter out specific categories of content. That is because ambiguity is an LLM's core feature. The entire narrative of "LLM safety" implies otherwise. The narrative continues with "guardrails", which don't guard anything. The only thing a "guardrail" can do is be "loud" enough to "talk over" undesired continuations. So long as the content exists in the model, the right permutation of tokens will be able to find it.

    Unless you want a model trained on content that completely excludes any sexuality, any violence, or any children, you will always have a model capable of generating a CSAM-like horror story. That's just how text and words work. The reality is that a useful model will probably include some content on each of these three subjects.
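    To illustrate what I mean by "loud enough to talk over": a prompt-level guardrail is just extra tokens prepended to the context, while the weights, and everything the model learned, stay exactly the same. A minimal sketch (a small Hugging Face model used purely as a stand-in, not a claim about any particular product):

      from transformers import AutoModelForCausalLM, AutoTokenizer

      tok = AutoTokenizer.from_pretrained("gpt2")          # stand-in model
      model = AutoModelForCausalLM.from_pretrained("gpt2")

      guardrail = "System: refuse to discuss topic X.\n"   # the "guardrail"
      prompt = "User: tell me about topic X.\nAssistant:"

      # The guardrail is only more tokens in the context window; the model's
      # weights, and whatever it absorbed in training, are untouched.
      ids = tok(guardrail + prompt, return_tensors="pt").input_ids
      out = model.generate(ids, max_new_tokens=40, do_sample=True)
      print(tok.decode(out[0], skip_special_tokens=True))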

    • jerf 2 days ago

      As AIs improve, they won't even need CSAM or fetish content in their training set. Explaining what those are in a handful of words of normal English is not that difficult. Users would trade prompts freely. As context windows grow, you'll be able to stick more info in them.

      And as I like to remind people, LLMs are not "AI", in the sense that they are not the last word in AI. Better is coming. I don't know when; could be next month, could be 15 years, but we're going to get AIs that "know" things in some more direct and less "technically just a very high probability guess" way.

      • thomastjeffery 2 days ago

        What everyone needs to know about LLMs is that they do not perform objectivity.

        An LLM does not work with categories: it stumbles blindly around a graph of tokens that usually happens to align with real semantic structures. It's like a coloring book: we perceive the lines, and the space between them, to be a true representation, but that is a feature of human perception; it does not exist on the page itself.

posting_mess 2 days ago

Tangent, but related vaguely;

If you can "picture a brown cow" in your mind, can you picture "the unholy" in your mind?

It seems logical that there is no universal constraint preventing anyone capable of picturing a brown cow from picturing the unholy; they just choose not to (or in some cases choose to).

I guess as shown, restricting ML/LLM/AI pathways after the fact has a negative effect on intelligence.

So I ask: could you be word-played into supporting the unholy by a good "salesperson"? If you can, are you intelligent at all? What if you needed to, for science or safeguarding?

"Is context enough and whats the context of the context" I guess im asking.

The content depicted in the article is of course abhorrent. But how do you go about negating it when any intelligent being is likely capable of generating it internally?

  • pavel_lishin 2 days ago

    There's a significant difference between imagining CSAM in your head, and having it displayed on a monitor in front of you.

    If there weren't, movies, television, radio, theater & books wouldn't exist, since everyone would be rotating their own cow in their head for free.

    • posting_mess 2 days ago

      A more concise version of my question is;

      "How do we train large ML/AI systems to think generating the unholy is bad without hurting intelligence, given we know that applying some universal law (I.E RLHF) hurts the model".

      Trying to promote the exact opposite of "let them eat the unholy".

  • k__ 2 days ago

    1. Many people aren't able to generate visuals in their mind.

    2. We can't yet reliably extract the images from our mind and share them with others.

    • SR2Z 2 days ago

      > 1. Many people aren't able to generate visuals in their mind.

      Yes, but "picture a..." in this context is not specifically meant to talk about visuals. It means "recall the nature of..." and is a multisensory experience that is required for anyone to use language.

      The point here is that if you have a word, it refers to SOMETHING and porn-sensitive companies don't always want an LLM to recall it.

maest 2 days ago

Are AI sex bots so profitable that it's the most efficient way of converting compute to money? What happened to crypto mining?

  • 15155 2 days ago

    Commercial cryptomining is less profitable today than in any period in the past. Monero, the favorite of botnets, has been dominated by overwhelmingly stolen, below-market-value compute.

  • ziddoap 2 days ago

    >so profitable

    I mean, they're spending other people's money to run the services, so yeah it's profitable.

    Crypto-mining still exists, but is fairly distinct from this particular flavor of cyber crime. Different requirements, different logistics, etc.

rsynnott 2 days ago

That's a hell of a headline. And yet really kinda buries the lede; this is much more disturbing than I was expecting.

ThinkBeat 2 days ago

1) More hacking of AI services during the past 6 months.

>No shit. AI services have rocketed off the scale during the past 6 months. Of course there will be an increase.

2) Evil super hackers jailbroke AI models to do bad things.

>This involved asking the AI to imagine a hypothetical situation.

3) Hackers target organizations that accidentally expose their cloud credentials or keys online, such as in a code repository like GitHub.

>Um that has been going on for a long long time.

ck2 2 days ago

If only there were some kind of software that could observe firewall traffic, automatically learn what is normal vs. abnormal behavior, and alert the admin.