How "fake news" could get even worse
By Thom Holwerda on 2017-07-16 22:52:20
No. Mr Astley did not rework his song. An artist called Mario Klingemann did, using clever software. The video is a particularly obvious example of generated media, made with quick and basic techniques. More sophisticated technology is on the verge of being able to generate credible video and audio of anyone saying anything, thanks to progress in an artificial intelligence (AI) technique called machine learning, which allows for the generation of imagery and audio.

One particular set-up, known as a generative adversarial network (GAN), works by setting a piece of software (the generative network) to make repeated attempts at creating images that look real, while a separate piece of software (the adversarial network) is set up in opposition. The adversary looks at the generated images and judges whether they could pass for "real" - that is, whether they resemble the images in the generative software's training database. In trying to fool the adversary, the generative software learns from its errors. Generating images this way currently requires vast computing power, and only works at low resolution. For now.
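The adversarial training loop described above can be sketched in miniature. The toy below is an illustrative assumption, not how image GANs are actually built: instead of images, the "real" data is just numbers drawn from a normal distribution around 4, the generator is a linear map of random noise, and the adversary is a logistic-regression classifier. Both sides are updated with plain gradient descent, the generator being rewarded for fooling the adversary:

```python
import numpy as np

# Toy 1-D GAN sketch (hypothetical illustration of the GAN idea).
# "Real" data ~ N(4, 1); generator: x = w*z + b on noise z ~ N(0, 1);
# adversary (discriminator): D(x) = sigmoid(a*x + c).

rng = np.random.default_rng(0)

def sigmoid(s):
    return 1.0 / (1.0 + np.exp(-s))

w, b = 1.0, 0.0   # generator parameters
a, c = 0.1, 0.0   # adversary parameters
lr, batch = 0.05, 64

for step in range(3000):
    # --- adversary update: push D(real) toward 1, D(fake) toward 0 ---
    real = rng.normal(4.0, 1.0, batch)
    z = rng.normal(0.0, 1.0, batch)
    fake = w * z + b
    # gradients of -log D(real) - log(1 - D(fake)) w.r.t. the logits
    g_real = sigmoid(a * real + c) - 1.0
    g_fake = sigmoid(a * fake + c)
    a -= lr * np.mean(g_real * real + g_fake * fake)
    c -= lr * np.mean(g_real + g_fake)

    # --- generator update: learn from errors by trying to fool D ---
    # (non-saturating loss -log D(fake))
    z = rng.normal(0.0, 1.0, batch)
    fake = w * z + b
    g = (sigmoid(a * fake + c) - 1.0) * a  # d loss / d fake
    w -= lr * np.mean(g * z)
    b -= lr * np.mean(g)

# After training, generated samples should cluster near the real mean
samples = w * rng.normal(0.0, 1.0, 1000) + b
print(round(float(samples.mean()), 1))
```

The same tug-of-war, scaled up to deep convolutional networks and image data, is what makes generated faces and voices possible - and what makes it so computationally expensive.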
People aren't even able to spot obviously fake, poorly written stories, and those were enough to have an impact on the US elections. The current US president managed to "win" the elections by spouting an endless barrage of blatant lies, and the entire Brexit campaign was built on a web of deceit and dishonesty.
Now imagine adding fake video into the mix where anyone can be made to say anything.