Pro@programming.dev to AI - Artificial intelligence@programming.dev · English · 7 days ago

LLMs factor in unrelated information when recommending medical treatments

news.mit.edu

  • cross-posted to:
  • [email protected]
  • [email protected]
  • [email protected]
An MIT study finds that non-clinical information in patient messages, such as typos, extra whitespace, or colorful language, can reduce the accuracy of a large language model deployed to make treatment recommendations. The LLMs were also consistently less accurate for female patients, even when all gender markers were removed from the text.
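
For context on the kind of experiment the article summarizes, here is a minimal Python sketch (not the study's actual code) of injecting non-clinical perturbations into a patient message: it adds occasional typos and extra whitespace while leaving the clinical content unchanged, so the same model can be queried on both versions and its recommendations compared. The `add_noise` helper and the commented-out `client.recommend` calls are illustrative assumptions.

```python
import random

def add_noise(message: str, typo_rate: float = 0.1, seed: int = 0) -> str:
    """Perturb a patient message with non-clinical noise (adjacent-character
    typos and extra whitespace) while leaving the clinical content intact."""
    rng = random.Random(seed)
    noisy_words = []
    for word in message.split():
        # Occasionally swap two adjacent characters to simulate a typo.
        if len(word) > 3 and rng.random() < typo_rate:
            i = rng.randrange(len(word) - 1)
            word = word[:i] + word[i + 1] + word[i] + word[i + 2:]
        noisy_words.append(word)
        # Occasionally insert an extra space between words.
        if rng.random() < typo_rate:
            noisy_words.append("")
    return " ".join(noisy_words)

original = "I have had a persistent cough and a mild fever for three days."
perturbed = add_noise(original)
print(original)
print(perturbed)

# To run the comparison the article describes, send both versions to the
# same model and check whether the recommendation changes
# (e.g. self-management vs. seeing a clinician):
# rec_original = client.recommend(original)    # hypothetical client
# rec_perturbed = client.recommend(perturbed)
```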

AI - Artificial intelligence@programming.dev

Aii@programming.dev


AI related news and articles.

Rules:

  • No Videos.
  • No self promotion: Don’t post links to your articles.
