- cross-posted to:
- [email protected]
It’s interesting how words with similar semantic content end up close together in the abstract vector space. Those groupings seem like they could reveal the ideological content of the training data.
Lol I don’t have it on hand, but someone did a centroid analysis of another smaller but open GPT model, and found the “word” at the “center” of the model space was “a man’s penis”
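For anyone curious what a "centroid analysis" like that might look like in practice, here’s a minimal sketch: average all the word vectors in an embedding matrix, then find the vocabulary item whose vector is most similar (by cosine similarity) to that mean. The vocabulary and vectors below are toy made-up data, not from any real GPT model.

```python
import numpy as np

# Toy "embedding matrix": rows are word vectors (hypothetical data,
# not taken from any actual model).
vocab = ["cat", "dog", "king", "queen", "apple"]
emb = np.array([
    [0.9, 0.1, 0.0],
    [0.8, 0.2, 0.1],
    [0.1, 0.9, 0.3],
    [0.1, 0.8, 0.4],
    [0.4, 0.4, 0.4],
])

# Centroid of the embedding space: the mean of all word vectors.
centroid = emb.mean(axis=0)

# Cosine similarity of each word vector to the centroid.
sims = emb @ centroid / (np.linalg.norm(emb, axis=1) * np.linalg.norm(centroid))

# The "word at the center" is the one most similar to the centroid.
print(vocab[int(np.argmax(sims))])
```

With a real model you’d load its token-embedding matrix instead of the toy array, but the centroid-and-nearest-neighbor step is the same.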