Rage Against the Algorithms
An especially interesting thing about machine learning algorithms is that even understanding the algorithm itself will often tell you nothing about its biases: the biases often come from the training data. The article talks about this a little bit, but I think it's a very important point for programmers in particular. For one, this means that even an open source algorithm, with the best intentions, might have significantly detrimental biases.
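To make that concrete, here is a minimal, entirely hypothetical sketch: a word-counting classifier whose code is fully transparent, yet which picks up a bias purely from skewed training labels (here, invented hiring data where every example containing "women's" happened to be rejected). The data, names, and scenario are all made up for illustration.

```python
from collections import Counter

# Hypothetical training data: past decisions reused as labels.
# By historical accident, every example containing "women's"
# was labeled "reject".
history = [
    ("captain chess club", "hire"),
    ("captain women's chess club", "reject"),
    ("member soccer team", "hire"),
    ("member women's soccer team", "reject"),
]

def train(examples):
    """Per-label word counts -- this is the entire 'model'."""
    model = {"hire": Counter(), "reject": Counter()}
    for text, label in examples:
        model[label].update(text.split())
    return model

def classify(model, text):
    """Pick the label whose training vocabulary overlaps the text most."""
    scores = {label: sum(counts[w] for w in text.split())
              for label, counts in model.items()}
    return max(scores, key=scores.get)

model = train(history)
# The code above contains no bias you could point to by reading it,
# but the learned counts do:
print(classify(model, "captain women's debate team"))  # -> reject
```

Reading `train` and `classify` tells you nothing about this outcome; the skew lives entirely in `history`, which is exactly why an open-source algorithm can still carry closed-source biases.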
This also explains why I personally do not find machine learning satisfying. It's obviously very useful, and I'm not making a judgement about the field, but solving a problem with machine learning often feels empty to me. You get a solution, sure, but no additional insight into the problem itself. And I'm often far more interested in that insight than in the solution to any given problem.
I certainly find the underlying principles fascinating, but that fascination usually does not translate to whatever field machine learning is used for. Writing a system for identifying cat pictures will teach you quite a bit about "identifying", but not very much about cats.
Ultimately, all this just means that you have to pay attention to what your machine learning algorithm is doing right now, even if you understand the algorithm itself really well.
The thing about not auto-correcting "abortion" is, I speculate, that it would be really unpopular if other misspellings were mis-corrected to read "abortion". I suspect there's a short blacklist of words that are in the dictionary but are never offered as auto-corrections.
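My guess above could be implemented as a simple filter. This is pure speculation sketched in code, not how any real autocorrect works: the word stays in the dictionary (typing it exactly is never flagged), but it's excluded from the suggestion pool. The tiny dictionary and `NO_SUGGEST` set are invented for illustration.

```python
# Hypothetical sketch of the guessed behavior.
DICTIONARY = {"abortion", "adoption", "portion", "absorption"}
NO_SUGGEST = {"abortion"}  # speculated blacklist: valid, but never suggested

def edits1(word):
    """All strings one deletion, replacement, or insertion away."""
    letters = "abcdefghijklmnopqrstuvwxyz"
    splits = [(word[:i], word[i:]) for i in range(len(word) + 1)]
    deletes = [a + b[1:] for a, b in splits if b]
    replaces = [a + c + b[1:] for a, b in splits if b for c in letters]
    inserts = [a + c + b for a, b in splits for c in letters]
    return set(deletes + replaces + inserts)

def autocorrect(word):
    if word in DICTIONARY:
        return word  # spelled correctly, leave it alone
    # Candidates are near-misses in the dictionary, minus the blacklist.
    candidates = edits1(word) & (DICTIONARY - NO_SUGGEST)
    return min(candidates) if candidates else word

print(autocorrect("portin"))   # -> "portion" (normal correction)
print(autocorrect("abortio"))  # -> "abortio" (left uncorrected)
```

Under this scheme, a typo one letter away from a blacklisted word simply goes uncorrected rather than being rewritten into it, which matches the observed behavior.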
> A recent survey found that 76 percent of consumers check online reviews before buying
Bullshit.
More transparency into credit scoring algorithms would be nice.