Underrated comment
Everyone’s conspiring, folks. What’s hard to measure is who’s conspiring
Laplacian edge detection? Beautiful meme
I wish more guys just said they didn’t know something instead of clearly not knowing what they’re talking about and running their mouth based on vibes
I sort of agree, but I think it depends on effort.
Type one word in and try to sell the lowest-effort generated image? Low value.
But typing the right combo to create assets to create something larger than the model is capable of? That’s more valuable.
Criticizing AI or artists that leverage AI is like criticizing an artist for using a printer instead of drawing by hand
Or saying someone’s digital work is inferior because they used a tool to help make their image…
On that note, when working on a large project, is an AI artist as pretentious as the artist in the comic because they got help generating the project from an AI instead of another human? Or is someone’s work ethic less credible for Google searching instead of asking a person? Are works of art valuable because they’re entirely original and uninfluenced by anything but the artist themselves? Because by that metric no artists are valuable, since nothing is entirely original anyway
25% of Reddit comments are ChatGPT trash, if not worse. It used to be an excellent Open Source Intelligence tool, but now it’s just a bunch of fake-supportive and/or politically biased bots
I will miss Reddit’s extremely niche communities, but I believe Lemmy has hit the inflection point to eventually support the same level of niche communities
Don’t tell him; if too many people get ad blockers, they’re just going to keep evolving
You’re right, we need water fountains with milk instead
Meanwhile: NixOS
538’s model was a good estimator that year too; it leaned towards Hillary (and to be fair, she did win the popular vote) but certainly kept a Trump win in the swing states within the margin of error.
270toWin is another good site
Fake. My parents didn’t have a stable marriage
I’ll look into LN more. I’m familiar with the centralization concerns (but still think they can be mitigated until more upgrades land); I’m not familiar with the costs you’re bringing up, though. Fee estimators notoriously round up; I’ve never spent more than a dollar, but that’s anecdotal
BCH is still an attempt at centralization from Bitmain, a company which literally installed kill switches in their miners without telling anyone, and ran botting attacks in /r/Bitcoin and /r/BTC during that fiasco - the hard fork they created is absolutely more centralized than Bitcoin
There will be a time to do something as risky as a hard fork for a block size upgrade, but doing something that serious for the sake of just one upgrade doesn’t make sense to me. If a hard fork must happen, it might as well include other BIPs that necessitate one, like drivechain.
Soft fork upgrades that enable more efficient algorithms, like Schnorr/SegWit, have in the meantime scaled TPS without wasting block space. BCH is cheap because there’s no demand or usage.
Fiat makes itself obsolete
Bitcoin Cash was an attempt at centralized control by Jihan Wu. Just because the block size is bigger doesn’t mean it’s better for decentralization. In fact, the increased cost of maintaining a node just makes it harder for people in (typically poorer) oppressive countries to self-verify
They are still increasing the TPS; the Lightning Network isn’t perfect, but it can scale beyond Visa until more upgrades are implemented
Ollama (+ web-ui, but "ollama serve &" followed by "ollama run" is all you need), then compare and contrast the various models
I’ve had luck with Mistral for example
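If you want to compare models side by side without the web UI, something like this works against Ollama's local REST API (a minimal sketch; the model names and prompt are just examples, and it assumes you've already pulled the models):

```python
# Minimal sketch against Ollama's local REST API (default port 11434).
# Assumes the models below have already been pulled (e.g. `ollama pull mistral`).
import json
import urllib.request

MODELS = ["mistral", "llama2"]   # example model names; swap in whatever you pulled
PROMPT = "Summarize the tradeoffs of the Lightning Network in two sentences."

def generate(model: str, prompt: str) -> str:
    """Send a non-streaming generate request to the local Ollama server."""
    body = json.dumps({"model": model, "prompt": prompt, "stream": False}).encode()
    req = urllib.request.Request(
        "http://localhost:11434/api/generate",
        data=body,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["response"]

for model in MODELS:
    print(f"=== {model} ===")
    print(generate(model, PROMPT))
```

Running the same prompt through each model makes the differences in tone and accuracy pretty obvious.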
Russia (allegedly) has elections too however
We might as well change the baseline for ADHD since technology has hammered everyone’s dopamine receptors
Again, those are my ideals. Realistically, not everything can be decentralized in a trustless way.
That said, much of our current system of signing documents to verify they came from a certain identity can be automated. Enforcement and neorealism are separate issues to mitigate, but the delegation of authority to humans can be automated without human involvement
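For a concrete sense of what "automated" means here, a minimal sketch using Ed25519 signatures from Python's cryptography package; the document text and key handling are purely illustrative:

```python
# Minimal sketch of automated document signing/verification using
# Ed25519 from the `cryptography` package (pip install cryptography).
# The document and key management here are purely illustrative.
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

document = b"I, identity X, delegate authority Y to party Z."

# Each identity holds a private key; others verify against the public key.
private_key = Ed25519PrivateKey.generate()
public_key = private_key.public_key()

signature = private_key.sign(document)

# Anyone with the public key can verify with no human in the loop.
try:
    public_key.verify(signature, document)
    print("Signature valid: the keyholder signed this document.")
except InvalidSignature:
    print("Signature invalid: reject.")
```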
After 6 months of trying to get this to work on NixOS, I finally cracked and posted on discourse.nixos.org, right before I figured out how to export the appropriate library in the shellHook function. Go figure
Yes, (most) everything is feasible in smaller populations (not nuclear maintenance, for example). But without technology, they’ve been isolated, uncoordinated, and easily bullied by larger, organized authoritarian bodies. There are billions of people, and narcissists make up about 1 in 5 of them. A smaller subset lack basic empathy, and an even smaller subset are intellectually competent. Multiply whatever that probability is by billions of people and you have a guaranteed concern for every single government on the planet.
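To make the multiplication concrete, a quick back-of-envelope; every prevalence figure below except the 1-in-5 one is an assumption made up for illustration:

```python
# All figures below except the "1 in 5" are assumptions for illustration.
population = 8_000_000_000   # roughly the world population
p_narcissist = 0.20          # the "about 1 in 5" figure above
p_no_empathy = 0.05          # assumed: subset also lacking basic empathy
p_competent = 0.01           # assumed: subset also intellectually competent

expected = population * p_narcissist * p_no_empathy * p_competent
print(f"{expected:,.0f} such people worldwide")   # 800,000 under these guesses
```

Even with deliberately small guesses for the later fractions, the absolute count stays enormous, which is the point.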
I agree with wanting smaller businesses as well. Capitalism isn’t bad (communism is state capitalism, after all), but corporatism is the emerging problem from right-libertarianism that most people conflate with problems of capitalism
My point isn’t that I don’t like leftism; those are my ideals. I just don’t believe we live in an ideal world, so practically I follow a different set of beliefs. That said, I do think leftism is compatible with libertarianism in a way that can compete in the global arena. And that starts with solving how a decentralized governmental body “identifies” one and only one person to their “identity” (otherwise you get Sybil attacks)
Thanks for the feedback! I also asked a similar question on the AI Stack Exchange thread and got some helpful answers there
It was a great project for brushing up on seq2seq modeling, but I decided to shelve it since someone released a polished website doing the same thing.
The idea was that chords are the vocabulary of music composition, measures are the sentences (sequences of chords), and progressions are the paragraphs (sequences of measures)
I think it’s a great project because the limited vocab size and max sequence length are much smaller than what’s typical for transformers applied to LLM tasks, like digesting novels for example. So on consumer-grade hardware (12GB VRAM) it’s feasible to train a couple of different model architectures in tandem
Additionally, nothing sounds inherently bad in music composition; it’s up to the musician to find a creative way to make it sound good. So even if the model is poorly trained, as long as it doesn’t output EOS immediately after BOS and the sequences are unique enough, it’s hard to get an output that can’t be made to work.
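To give a sense of scale, here's a minimal sketch of that small-vocabulary setup in PyTorch; the chord set, special tokens, and model dimensions are illustrative assumptions, not the repo's actual code:

```python
# Minimal sketch of the small-vocabulary setup described above, in PyTorch.
# Chord set, special tokens, and dimensions are illustrative assumptions.
import torch
import torch.nn as nn

# Toy chord vocabulary plus the BOS/EOS (and PAD) tokens mentioned above.
chords = ["Cmaj7", "Dm7", "G7", "Am7", "Fmaj7", "E7", "Bm7b5"]
vocab = {tok: i for i, tok in enumerate(["<pad>", "<bos>", "<eos>"] + chords)}
vocab_size = len(vocab)      # 10 here; tiny next to an LLM's ~50k tokens
max_seq_len = 64             # a whole tune, vs. thousands of tokens for prose

d_model = 128
tok_embed = nn.Embedding(vocab_size, d_model)
pos_embed = nn.Embedding(max_seq_len, d_model)
encoder_layer = nn.TransformerEncoderLayer(
    d_model=d_model, nhead=4, dim_feedforward=256, batch_first=True
)
encoder = nn.TransformerEncoder(encoder_layer, num_layers=4)
head = nn.Linear(d_model, vocab_size)

# Encode one measure: "<bos> Dm7 G7 Cmaj7 <eos>"
ids = torch.tensor([[vocab["<bos>"], vocab["Dm7"], vocab["G7"],
                     vocab["Cmaj7"], vocab["<eos>"]]])
positions = torch.arange(ids.size(1)).unsqueeze(0)
hidden = encoder(tok_embed(ids) + pos_embed(positions))
logits = head(hidden)        # (1, 5, vocab_size): next-chord scores per position

n_params = sum(p.numel() for m in (tok_embed, pos_embed, encoder, head)
               for p in m.parameters())
print(f"{n_params:,} parameters")   # fits in 12GB VRAM many times over
```

With a vocabulary and context this small, you can afford to train several such models side by side and compare architectures.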
It’s also fairly easy to gather data from a site like iRealPro
The repo is still disorganized, but if you’re curious the main script is scrape.py
https://github.com/Yanall-Boutros/pyRealFakeProducer
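For flavor, a generic sketch of the kind of scraping involved; this is not the actual scrape.py, and the saved page path is a placeholder. iReal Pro charts are shared as irealb:// (or irealbook://) links embedded in web pages:

```python
# Generic sketch only; NOT the repo's actual scrape.py.
import re
from pathlib import Path

# Placeholder: a saved forum/songbook page containing shared charts.
html = Path("forum_page.html").read_text()

# iReal Pro charts are embedded as irealb:// (or irealbook://) links;
# the payload is URL-encoded chord-chart data.
links = re.findall(r'href="(irealb(?:ook)?://[^"]+)"', html)

for link in links:
    print(link[:80])
```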