Star Wars Roleplay: Chaos


Question: We're working on a button you can click to have AI summarize long posts. Do we want that?

Status: Not open for further replies.
It's funny to me that y'all hate AI so much, but you've got a lot of experience with it messing up your writing. Let's not even start with all the AI art up in here.

Oh, but it's different! Is it? Really?

I see a lot of "screw the little guy" and "I don't trust people to do their jobs if they have a summary."

Smh.

I mean, respectfully, a quick ctrl+F found no mention of art in this discussion so far, so I'm not sure where that fits here, outside of maybe projecting some feelings picked up elsewhere into this thread.

But also, to expand on what I was saying: it's not about not trusting someone to do their job with a summary. It's that no matter how well-intentioned they are from the jump, there is a non-zero chance the summary itself will not accurately capture the nuance of a human-written post.

Whether the person who clicked the button is acting in good faith or not isn't even the question at that point.
 
L E F T _ H A N D _ B A N E
It's funny to me that y'all hate AI so much, but you've got a lot of experience with it messing up your writing. Let's not even start with all the AI art up in here.

Oh, but it's different! Is it? Really?

I see a lot of "screw the little guy" and "I don't trust people to do their jobs if they have a summary."

Smh.
fun fact
i also dislike AI art
 
Roger, yeah, that checks out... So, a lot of words later: we don't trust someone to do their job if they have access to summaries. Roger, Roger.

We're discussing AI, and pretending that AI art isn't just as controversial is some funky blindness. It's not like I suddenly started talking about Scooby Doo.

This must be what it feels like to be dunked on and dismissed. Woe is me.
 
L E F T _ H A N D _ B A N E
you're the one that brought up AI art?????
lol
you might as well have been talking about Scooby Doo, because no one was talking about AI art.

most, if not all, folks here who are anti-AI writing are also anti-AI art.

and yeah, by and large, I don't wanna write with someone who is only engaging with summaries of my work.
I write for those I'm writing with.
It's insulting to have someone put in minimal effort when replying to me or anyone else.
 
Tefka Hey! You said the other day that we don't have the mental capacity to oppose AI on moral grounds. I'm here to disagree with you, partly (I do think you're right that AI is here to stay, and I also think there are cool uses for it). I see that a lot of the appeals to art and to the hobby of writing itself haven't quite landed, so I'd like to take a more material approach in voicing my opposition.

Also, I apologize! My response exceeds 400 words. I hope you'll grant me the honor of being read by human eyes.

OpenAI has a partnership with Anduril, an AI weapons company. They're currently mostly producing AI "defense" systems for targeting unmanned drones and helping with surveillance and drone management for the U.S. military. Defense contracts have become a hot commodity for AI companies in the past year, to say nothing of how AI has been and is being used against Palestinians in Gaza and the West Bank.

I am not a U.S. citizen, and even if I were, I would not want my work going, in any capacity, to support weapons companies. AI is already being deployed as a tool of domination in so many contexts, and I do not support it in any of them.

I suspect the response will be, "your RP will not train AI to kill people", and you're probably right. This is small beans for an AI company. But it doesn't matter -- any amount of support for AI weapons development is bad. The material support may not come from the words I've written, but it might come from the money you'll need to spend to have any kind of solid integration of OpenAI or another AI tool.

The next step is, "it's already happening -- AI is scraping the internet, scraper bots, etc.," and right again! But once again: it's happening, and that's bad. I do not think we should support AI companies in any capacity. That my and others' work may already be in use training AI models certainly doesn't make me happy, but that doesn't mean I'd like to make it easier for them. And again, any financial support to these companies that can be avoided should be avoided.

The final step is a mirror of the prior one: "it's already happening, your labor/work/products already support these things," and for the third time, you are correct, and again I say: that sucks. I do not like that I am in a position where no matter what I do, I feed the machine. I don't like that my taxes fund wars in other countries. I don't like that the land I live on is stolen. And I don't like that my work might be used to support a company that hates me, that does not value my work, and that will use the clout and power it gains from taking from me, even if it is minuscule, to hurt other people.

AI does not exist in a vacuum. The tools and accessibility features you've talked about could be genuinely cool, but I am not keen on supporting technologies and companies that are invested in building things that kill people. Most of these companies are the backing track -- they aren't firing the guns yet, but they're feeding the shooters ammo, and I don't doubt they're next in line for the guns.

When I read your message in that thread last night, I remember being very sad -- surprisingly so. It wasn't anything you said there, necessarily, but rather the message in your signature, strangely enough. I guess I found it kind of tragic that someone who understands the impact even a single person can make on a community would be willing to act in support of the newest frontier of domination in the cultural space. I know this is small beans, I know an RP forum won't change the world or shift the tides on how AI is moving -- but what I read from "be the change you want to see" is that everything matters, that every action pushes one way or another. I hope you know that too.

I'm going to practice what I preach, I hope, because it seems like you might be serious about this and about your conviction that AI is inevitable in this space and others. So I'm going to lay out some principles I'd like to see adhered to if any AI tools are implemented on the site.

  1. Do not use AI tools from any company willing to use its technology for the production of weapons. If we use a service from a company that had a policy against its technology being used for weapons, and they change that policy, we should discontinue use of that service immediately.
  2. The above can also apply to any other particular concern -- for example, Yieldstar is an AI tool used by landlords that works off otherwise confidential information to help them increase rents. There's currently a complaint against them by the DOJ. It's wrecking people in my city right now, and that sucks. Obviously this is very specific and not super applicable to an RP forum, but I don't doubt there are companies producing products like Yieldstar that might also produce LLMs. I would hope we avoid them.
  3. Any feature that will be reading or using work from the site should only be able to access content produced after the implementation. I think that makes sense for an accessibility tool, and it also respects the work of writers who are no longer active and might not want their work fed into an LLM, rather than assuming their consent.
  4. You should be able to opt out. If you are one of the people who absolutely wants nothing to do with AI, who is 100% against any LLM being able to take and evaluate what you've written, we should be able to respect that. That being said, I think this would be pretty rare, frankly -- the fact that Kyric, who's taken a pretty strong stance against AI a couple of times now, would be down to have it as an accessibility tool is, I think, a pretty clear indicator that the line is not as hard as it seems.
I am sure there are technical problems with this that I am not aware of. This is also not an exhaustive list, of course, and I'm sure there are things that you could add -- I trust you to figure them out, and I hope there are robust systems in place to respect people's privacy and work if this is the route we go down.

Thanks for reading all that. Take care.
 
I’m not a fan of this idea.

I’m not a complete AI hater. It has its place as a tool. I use it for art generation, for prompts, for grammar. I understand my info is being fed to the robots whether I want it to be or not. I can empathize with the accessibility argument. The tool is there, off site, if you need it.

Guess my concern is that if the button is right there for 400+ words (which isn't anywhere near a bible-length post, in my opinion), it becomes the default. It feels a little disrespectful of the time and effort spent by your partners at that point. I'd be pretty bummed to know the art I spent my free time creating was constantly reduced to a two-sentence summary, and I wouldn't use the button myself, not wanting to disrespect anyone else's craft.
 
I'm going to point this out because there seems to be a belief that anything and everything we as individuals 'feed' to ChatGPT or an OpenAI product will be used as training.

OpenAI wants to train its programs on curated datasets its engineers oversee, not on whatever gets thrown into daily conversations by the open internet. Otherwise it's massive quantities of junk data that can lead to unhinged response patterns, as infamously happened when ChatGPT was trained on internet-wide information.

And at the end of all of that, if you're still not convinced that OpenAI would almost certainly not use Chaos RP posts in the training of its LLMs, you can opt out of that possibility entirely: https://openai.com/policies/terms-of-use/

I understand reservations about, or dislike of, LLMs, but I don't think this particular concern should weigh against our potential use.
 
I can see the CNN headline now, "SEVERAL SWRP CHAOS BOMBS HAVE LEVELED HOSPITALS IN THE MIDDLE EAST, TEFKA UNAVAILABLE FOR COMMENT"

Edit: IM A ROLEPLAY FORUM OWNER IM A ROLEPLAY FORUM OWNER IM A ROLEPLAY FORUM OWNER IM A ROLEPLAY FORUM OWNER IM A ROLEPLAY FORUM OWNER IM A ROLEPLAY FORUM OWNER IM A ROLEPLAY FORUM OWNER IM A ROLEPLAY FORUM OWNER IM A ROLEPLAY FORUM OWNER IM A ROLEPLAY FORUM OWNER IM A ROLEPLAY FORUM OWNER IM A ROLEPLAY FORUM OWNER
 
I mean, I personally don't like the idea of it being there, but I'm not going to, like, be upset if someone runs my stuff through a bot to condense my text into a small summary. I will be upset if people start using that kind of tool and respond to a summary that may or may not leave out pertinent information (edit for clarity: and then miss that information), information the tool thought wasn't important but actually was. I've never used one to condense text into a summary, so I have no first-hand experience with AI being used that way, but having seen Facebook, Google Search AI summaries, Fandom wiki AI summaries, and YouTube summaries make pretty egregious mistakes, I don't have much confidence in such a tool being accurate enough to substitute for reading the whole post.

I get that isn't necessarily the intended use, but I also know people have asked me for a SparkNotes version of my own (not very long, <1000-word) posts during big threads like invasions, where we were directly interacting with each other, and I understand that this is how people will behave once it becomes convenient for them, because that's human nature.

I also realize that, technically speaking, tools like this already existing makes this a moot point to complain about, but I think it'll be less likely to happen if there isn't literally a button on every post over whatever arbitrary word count (or character count). I'll also be less likely to assume that someone missing something in a post has to do with a summarizing tool, and more likely to chalk it up to a genuine mistake while reading, because tools like those are out of sight and out of mind.

tl;dr I don't necessarily want it, no, but I won't be upset if it exists
 
Tefka Hey!

I have no dog in this fight, just wanted to shitpost:

[attached image: PonsdAS.png]
 
The Illuminated, Chosen Of The Maker
I say this as a person who has trouble reading the larger posts. This idea seems like it's going to cause a lot of miscommunication. Compared to the other AI suggestions, I think the cons of this outweigh the pros.

So I will be voting no.
 
Step 1: Add AI summaries for posts longer than 400 words.

Step 2: AI is really bad at this and the summaries really suck more often than not.

Step 3: More than enough people start inserting weird things into their posts that a casual reader doesn't pick up, but AI does. Like me: I will definitely be doing that for shits and giggles.

Step 4: AI summaries become pointless.

Step 5: People still have to read on an RP forum.

As long as it's not used to replace actual reading when it comes to judging threads, do it or don't do it; it shouldn't matter, as long as it's not mandatory or something that pops up on your screen no matter what. People can already link threads in ChatGPT or paste posts there to ask for summaries, so it won't add or take anything away other than maybe a few seconds of switching between tabs.
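For anyone curious what the proposed button would actually amount to under the hood, here is a minimal sketch, assuming it simply gates an OpenAI-style chat completions call behind the 400-word threshold mentioned in this thread. The model name, prompt wording, and function name are illustrative placeholders, not anything the staff have described or committed to.

```python
# A minimal sketch of a threshold-gated summary button, assuming the openai
# Python SDK (v1.x). Everything here is hypothetical: the site's real
# integration, model, and prompt are unknown.
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

WORD_THRESHOLD = 400  # the cutoff discussed above; purely illustrative


def summarize_post(post_text: str) -> str | None:
    """Return a 2-3 sentence summary for long posts, or None below the cutoff."""
    if len(post_text.split()) < WORD_THRESHOLD:
        return None  # short posts would not show the button at all

    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model choice
        messages=[
            {
                "role": "system",
                "content": (
                    "Summarize this roleplay post in 2-3 sentences. Keep named "
                    "characters, locations, and any actions another writer "
                    "would need to respond to."
                ),
            },
            {"role": "user", "content": post_text},
        ],
    )
    return response.choices[0].message.content
```

Nothing in a sketch like that forces anyone to click the button, which is the point being made above: the same result is already one copy-paste away in a separate ChatGPT tab.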
 
Putting aside for a moment whether it is good or bad to enhance your writing with AI, services like ChatGPT are already super accessible. You can just open a tab and paste the post you want to analyse into it.

My opinion is that since the summaries are often unreliable, it wouldn't be a good idea to incorporate summarization directly as a site feature. That creates implicit trust from the user that what the summary spits out is accurate. That said, a person can still summarise a post in ChatGPT themselves, and that way the onus remains solely on them to interpret and respond to the post correctly.

TL;DR: Just continue on our current track; no new feature is required.
 
