They don’t have decent filters on what they fed the first generation of AI, and they haven’t really improved the filtering much since then, because: on the Internet nobody knows you’re a dog.
Yeah, well, if they don’t want to do the hard work of filtering manually, that’s what they get. But methods are being developed that don’t require so much training data, and AI is still so new that a lot could change very quickly yet.
It is a hard problem. Any “human” based filtering will inevitably introduce bias, and some bias (fact vs fiction masquerading as fact) is desirable. The problem is: human determination of what is fact vs what is opinion is… flawed.
When the internet is flooded with content you don’t want but can’t detect, that filtering becomes quite difficult.