☆ Yσɠƚԋσʂ ☆ · 16.7K Posts · 12.2K Comments
☆ Yσɠƚԋσʂ ☆ @lemmygrad.ml (OP) to news@hexbear.net • Looks like the rescue mission in Iran was cover for an operation to seize uranium that ultimately failed (English)
26 points · 6 hours ago
Exactly; if they had the pilot, Trump would be parading them around like it was the second coming.
☆ Yσɠƚԋσʂ ☆ @lemmygrad.ml (OP) to technology@hexbear.net • Logic-driven AI could slash energy use by 100x while outperforming today’s most powerful systems (English)
4 points · 6 hours ago
haha if I come up with anything nifty, I’ll be sure to share here :)
updated to add it
That’s precisely what makes it so valuable for sharing with libs who screech about China’s work culture though.
☆ Yσɠƚԋσʂ ☆ @lemmygrad.ml (OP) to technology@hexbear.net • Logic-driven AI could slash energy use by 100x while outperforming today’s most powerful systems (English)
4 points · 8 hours ago
yeah exactly
☆ Yσɠƚԋσʂ ☆ @lemmygrad.ml (OP) to technology@hexbear.net • Logic-driven AI could slash energy use by 100x while outperforming today’s most powerful systems (English)
4 points · 8 hours ago
What’s happening currently is a bubble that’s not in any way sustainable, and energy prices going through the roof thanks to the war could even be the catalyst that pops it. But as I noted earlier, we’ve gone through this cycle many times in the tech world. New tech often requires a ton of resources to run, which creates a mainframe era; then it gets optimized over time and moves to edge devices. I don’t see anything special about LLMs here. We’re just in the very early stages of a new technology.
☆ Yσɠƚԋσʂ ☆ @lemmygrad.ml (OP) to technology@hexbear.net • Logic-driven AI could slash energy use by 100x while outperforming today’s most powerful systems (English)
6 points · 9 hours ago
That’s my thinking as well. The LLM is basically an interface to the world that can handle ambiguity and novel contexts, while symbolic AI provides a really solid foundation for actual thinking. And LLMs solve the core problem of building ontologies on the fly, which has been the main roadblock for symbolic engines.

The really exciting part about using symbolic logic is that you can actually ask the model how it arrived at a solution, tell it that a specific step is wrong and change it, and have it learn things in a reliable way.

It would be really neat if the LLM could spin up little VMs for a particular context, train the logic engine to solve that problem, and then save them in a library of skills for later use. Then when it encounters a similar problem, it could dust off an existing skill and apply it. The LLM side of the engine could also handle things like transfer learning, normalizing inputs from different contexts into a common format for the symbolic engine. There are just so many possibilities here.
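The skill-library loop described above could be sketched roughly like this. Everything here is hypothetical: `Skill`, `SkillLibrary`, and the `normalize` stub stand in for the LLM's transfer-learning step, not any real API.

```python
# Hypothetical sketch of the "library of skills" idea: solve a problem once,
# cache the resulting rule set keyed by a normalized problem signature, and
# reuse it when a similar problem appears later.
from dataclasses import dataclass, field

@dataclass
class Skill:
    signature: str    # normalized description of the problem class
    rules: list[str]  # rules the symbolic engine learned/used for it

@dataclass
class SkillLibrary:
    skills: dict[str, Skill] = field(default_factory=dict)

    def normalize(self, problem: str) -> str:
        """Stand-in for the LLM's normalization step: map different
        surface forms of a problem onto one common signature."""
        return " ".join(sorted(problem.lower().split()))

    def solve(self, problem: str) -> Skill:
        sig = self.normalize(problem)
        if sig in self.skills:              # dust off an existing skill
            return self.skills[sig]
        # Placeholder for "training the logic engine" on a new problem.
        skill = Skill(sig, rules=[f"rule-for:{sig}"])
        self.skills[sig] = skill            # save it for later use
        return skill

lib = SkillLibrary()
a = lib.solve("sort list of numbers")
b = lib.solve("numbers list of sort")  # different phrasing, same skill
print(a is b)  # prints True: the cached skill is reused
```

The interesting design question is the `normalize` step: in the real architecture that's where the LLM earns its keep, collapsing noisy, differently-phrased inputs into a canonical form the symbolic engine can index on.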
☆ Yσɠƚԋσʂ ☆ @lemmygrad.ml (OP) to technology@hexbear.net • Logic-driven AI could slash energy use by 100x while outperforming today’s most powerful systems (English)
7 points · 9 hours ago
That’s just the Silicon Valley model though. Look at China for contrast: companies there treat models as foundational infrastructure, and they’re not trying to monetize them directly. Hence all the open source work coming out of there right now. It’s a similar situation to what we see with Linux, incidentally: a lot of companies contribute to its development, but they monetize things like AWS that are built on top of it. However, even without company engagement, people will continue to work on open source as they always have. It doesn’t really matter if it goes mainstream or not.
☆ Yσɠƚԋσʂ ☆ @lemmygrad.ml (OP) to technology@hexbear.net • Logic-driven AI could slash energy use by 100x while outperforming today’s most powerful systems (English)
5 points · 11 hours ago
That’s the neat part: once you drop the cost of running these things sufficiently, you don’t need datacenters, because people can just run the models locally. We’re currently in the mainframe era of AI, but the pendulum is already swinging towards personal computing as local models continue to improve. These kinds of breakthroughs are going to accelerate the process.
☆ Yσɠƚԋσʂ ☆ @lemmygrad.ml (OP) to technology@hexbear.net • Logic-driven AI could slash energy use by 100x while outperforming today’s most powerful systems (English)
10 points · 11 hours ago
Exactly; it’s like people got a hammer and now everything looks like a nail. Yes, LLMs can be contorted to do a lot of different tasks, but that doesn’t mean they’re the optimal tool for accomplishing them. It seems that as we start to hit the limits of what you can squeeze out of LLMs, the hype is dying down and people are rediscovering other well-known techniques that can be combined with them.
☆ Yσɠƚԋσʂ ☆ @lemmygrad.ml (OP) to technology@hexbear.net • Logic-driven AI could slash energy use by 100x while outperforming today’s most powerful systems (English)
5 points · 11 hours ago
And that’s why it’s important for this tech to be developed in an open source, community-driven fashion.
☆ Yσɠƚԋσʂ ☆ @lemmygrad.ml (OP) to technology@hexbear.net • Logic-driven AI could slash energy use by 100x while outperforming today’s most powerful systems (English)
16 points · 13 hours ago
I don’t get the impression that the goal is to apply the model to a hyperspecific domain; rather, the idea seems to be to use a symbolic logic engine within a dynamic context created by the LLM. Traditionally, the problem with symbolic AI has been creating the ontologies. You obviously can’t have a comprehensive ontology of the world, because ontologies are inherently context dependent, and there’s an infinite number of ways you can contextualize things. What neurosymbolics does is use LLMs for what they’re good at, which is classifying noisy data from the outside world and building a dynamic context. Once that’s done, it’s perfectly possible to use a logic engine to solve problems within that context. The goal here is to optimize a particular set of tasks that can be expressed as a set of logical steps.
☆ Yσɠƚԋσʂ ☆ @lemmygrad.ml (OP) to technology@hexbear.net • Logic-driven AI could slash energy use by 100x while outperforming today’s most powerful systems (English)
11 points · 13 hours ago
I really think that an LLM coupled with a logic engine running in a REPL environment could be an amazing thing.
☆ Yσɠƚԋσʂ ☆ @lemmygrad.ml (OP) to Technology@lemmygrad.ml • China Built a $20 Billion “Ghost Port” — No Humans, No Noise, No Sleep
7 points · 18 hours ago
Really highlights the difference between being a military empire and a productive superpower.
Yeah, it’s going to be interesting to watch. Given that the Philippines is now all of a sudden recognizing the one-China policy, I’m guessing they’re going to pivot to China sooner rather than later. There’s really nothing the US can offer.
☆ Yσɠƚԋσʂ ☆ @lemmygrad.ml (OP) to Technology@lemmygrad.ml • China Built a $20 Billion “Ghost Port” — No Humans, No Noise, No Sleep
24 points · 1 day ago
The level of automation in China is absolutely incredible. You now have ports, factories, mines, farming, road building, etc., all becoming largely automated and integrated. Nothing even remotely similar is happening anywhere else in the world.
That said, I imagine they will make deals with Russia and Iran for oil. The problem the vassals have is that the US won’t allow them to do that.
Europe, occupied Korea, and Japan seem to be the most vulnerable regions. And I’m guessing things are going to get a lot worse by the end of the month, since that’s when new tankers would have arrived had Hormuz not been closed.
updated to add the archive