
Ok but: doesn't bias, in the negative sense lay people typically attach to that word in relation to algorithms (racial bias, capitalist mercenary bias), happen on top of / underneath / alongside (not sure which is most accurate) the "pruning by popularity" mode of bias that you're talking about in this post? For example, when fresh new baby algorithms get "trained" at the start of their lives, before interacting with any mass data themselves, they are fed training sets selected just for them by their makers, who have visions of what those algorithms can and should do in the world. While those training sets may have emerged out of "pruning by popularity," they are chosen carefully, no?

And perhaps they are also fed models of likely use (or desired use) that are more intentionally crafted ...? Doesn't the negative form of bias slip in there?
