The development of AI has advanced to the point where certain pieces of news can be written entirely by a computer, without any input from humans. This can be dangerous: since computers don't need sleep, such systems could be weaponized to churn out fake news around the clock.
In a case of fighting fire with fire, researchers at Harvard University and the MIT-IBM Watson AI Lab have developed a new AI tool that can spot text that might have been generated by another AI. Dubbed the Giant Language Model Test Room, the tool relies on the fact that AI typically generates text by following statistical patterns.
Basically, the idea is that if a piece of text seems too predictable to have been written by a human, there is a chance it was written by a computer. The tool can be used to help detect fake text and misinformation, which could aid the fight against fake news and bots.
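To make the predictability idea concrete, here is a minimal toy sketch of the same principle. Note this is not the actual Giant Language Model Test Room (which scores tokens using a real language model such as GPT-2); instead, a simple unigram frequency table stands in for the language model, and the corpus, threshold, and function names below are all invented for illustration.

```python
from collections import Counter

# Toy stand-in for a language model: rank each token by how common it is
# in a small reference corpus (a real detector would use model probabilities).
REFERENCE = (
    "the cat sat on the mat . the dog sat on the rug . "
    "the cat and the dog ran to the house ."
).split()

RANK = {tok: r for r, (tok, _) in enumerate(Counter(REFERENCE).most_common())}

def mean_rank(text: str) -> float:
    """Average predictability rank of the tokens; unseen tokens get the worst rank."""
    worst = len(RANK)
    toks = text.lower().split()
    return sum(RANK.get(t, worst) for t in toks) / len(toks)

def looks_generated(text: str, threshold: float = 3.0) -> bool:
    """Flag text whose tokens are suspiciously predictable (low average rank)."""
    return mean_rank(text) < threshold
```

Text built from the corpus's most common words (e.g. "the cat sat on the mat") scores a low average rank and gets flagged as machine-like, while unusual wording scores high and passes; the real tool applies the same intuition with per-token probabilities from a large language model.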
There are already websites and organizations dedicated to fact-checking fake news and images, but hopefully the use of AI will make the process a lot more efficient.
Filed in AI (Artificial Intelligence). Source: technologyreview.