

That’s basically my question. If the standards of code are different, AI slop may be acceptable in one scenario but unacceptable in another.
So I was trying to make the point that developers using AI for coding may not have the high bar for quality and optimization that closed source developers would have, and was then told that the major market was internal business code.
So I asked: do companies need code that runs quickly on the systems it is installed on in order to perform its function? For instance, can an unqualified programmer use AI-generated code to build an internal corporate system rather than pay for a more qualified programmer’s time, whether as an internal hire or an outside contractor?
I didn’t say AI, I said LLM.
Does internal business software need to be optimized?
As a dumb question from someone who doesn’t code, what if closed source organizations have different needs than open source projects?
Open source projects seem to hinge a lot more on incremental improvements and on changes made only for the benefit of users. In contrast, closed source organizations seem to use code more to quickly develop a new product or change that brings in money. Maybe closed source organizations are more willing to accept slop code that is bad but barely works, whereas open source projects won’t?
The value isn’t as important and is more misleading.
You can create your own instance and give yourself as much karma as you want.
There are no internal Lemmy structures to prevent mass upvoting or downvoting.
Culture and voting norms differ across Lemmy instances, and there is no centralized authority.
Since the number is useless, it is made unavailable.
It depends on what you can do with the fill coming off the mountain.
A combination of personal vetting via analyzing output and the vetting of others. For instance, the Pentium FDIV calculation error made the news. Otherwise, calculation by a computer processor is well understood, and the technology is accepted for use in cases involving human lives.
In contrast, there are several documented cases in the news where LLMs have been incorrect, to the point where I don’t need to do personal vetting. No one is anywhere close to stating that LLMs can be used in cases involving human lives.
By the mid-1800s, chattel slavery had existed in the USA for over 200 years. Even after slavery ended, an enforced caste system was put in place.
You would need the robots a lot earlier to prevent the slave trade.
“Honey, I’m flattered, but I don’t have the same feelings for you. Even if I did, it would be inappropriate for us to have a relationship. I hope you find someone else closer to your age.”
On the use side, a lot of users want an answer to their question, not a list of pages that may have the answer, with the answer made more obscure by SEO. AI is just a continuation of this.
Yeah, I figure this isn’t going to be an American-only problem.
How are other countries different?
Trump did tweet that he was Sith.
No. In this case, Shatner knew the kiss was going to be historic and therefore wanted to be the one to go down in history for the kiss.
That said, Shatner did what he could to keep the scene from getting cut.
I think that, starting with the Maquis, Trek was trying to address issues that might arise even with luxury space communism.
How are other countries handling it? I can’t imagine AI being an American-only education issue.
If you want to compare a calculator to an LLM, you could at least reasonably expect the calculator result to be accurate.
Because, as noted by another replier, open source wants working code and closed source just wants code that runs.