I’ve been wondering about this for a while and haven’t really found a great answer for it. From what I understand, WASM:

  • Is faster than JavaScript

  • Has a smaller file size

  • Can be compiled to from pretty much any programming language

  • Can be used outside of the browser more easily thanks to WASI

So why aren’t most websites starting to try replacing (most) JS with WASM now that it’s supported by every major browser? The most compelling argument I’ve heard is that WASM can’t manipulate the DOM directly, and a lot of people don’t want to deal with gluing JS code to it, but aside from that, is there something I’m missing?
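
For concreteness, here is a minimal sketch of what that JS glue looks like from Rust, assuming the wasm-bindgen and web-sys crates (with the relevant web-sys features such as "Window", "Document", "Element", "HtmlElement", and "Node" enabled in Cargo.toml); the function name and text are made up for illustration:

```rust
use wasm_bindgen::prelude::*;

// Runs automatically when the WASM module is loaded in the browser.
#[wasm_bindgen(start)]
pub fn run() -> Result<(), JsValue> {
    // WASM has no direct handle on the page; every DOM call below goes
    // through JS bindings generated by wasm-bindgen/web-sys.
    let window = web_sys::window().ok_or_else(|| JsValue::from_str("no window"))?;
    let document = window.document().ok_or_else(|| JsValue::from_str("no document"))?;
    let body = document.body().ok_or_else(|| JsValue::from_str("no body"))?;

    // Create a paragraph and attach it to the page.
    let p = document.create_element("p")?;
    p.set_text_content(Some("Hello from WASM"));
    body.append_child(&p)?;
    Ok(())
}
```

Every one of those DOM calls crosses the WASM/JS boundary, which is the "glue" (and the overhead) people complain about.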

  • fiat_lux@kbin.social

    How will we make WASM-based UIs accessible to people using screen readers, screen zoom applications, text-to-speech, voice input, and so on?

    The Web is hostile enough to people with disabilities, despite its intent, and developers are already unfamiliar with how to make properly semantic and accessible websites that use JS. Throwing the baby out with the bathwater by replacing everything with WASM in its current form seems about as good an idea as Google’s Web Environment Integrity proposal.

    • AnarchoYeasty@beehaw.org

      JS has little to do with accessibility. Most web accessibility comes from the DOM, ARIA attributes, and semantic tags. You can do all of that with WASM too.
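
      For example, setting that same ARIA metadata from WASM is just another DOM call through the web-sys bindings (a minimal sketch; the element id and labels are made up):

      ```rust
      use wasm_bindgen::prelude::*;

      #[wasm_bindgen]
      pub fn mark_up_save_button() -> Result<(), JsValue> {
          let document = web_sys::window()
              .and_then(|w| w.document())
              .ok_or_else(|| JsValue::from_str("no document"))?;

          if let Some(el) = document.get_element_by_id("save") {
              // The same role/ARIA attributes you would otherwise set from JS.
              el.set_attribute("role", "button")?;
              el.set_attribute("aria-label", "Save the current draft")?;
          }
          Ok(())
      }
      ```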

      Are you asking about how it will work with wgpu-based applications? That works the same way it does for desktop applications: the program calls out to libraries that talk to screen readers. In Rust, the language with the best support for and ecosystem around WASM, libraries like this already exist, and UI frameworks like egui already have some support built in.
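
      As a rough illustration of the egui side (the function name and form fields are made up; whatever eframe/wgpu integration drives the frame loop would call this each frame), the widgets below carry semantic roles that egui's optional AccessKit-based accessibility backend can report to screen readers without extra code here:

      ```rust
      fn draw_ui(ctx: &egui::Context, name: &mut String) {
          egui::CentralPanel::default().show(ctx, |ui| {
              // Heading, label, text edit, and button each have a semantic
              // role the accessibility backend can expose.
              ui.heading("Profile");
              ui.label("Display name:");
              ui.text_edit_singleline(name);
              if ui.button("Save").clicked() {
                  // React to the click here.
              }
          });
      }
      ```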