Friday Faves is our weekly blog series highlighting a few select pieces from the REG team’s reading lists. You can catch up on past Friday Faves in the archive.
What is ‘Deepfakes’ and why is everyone freaking out about it?
Nat says: The age of open technology and information transparency is ironically bringing forth the age of distrust. Forget about “fake news” – we are now in the age of “deepfakes”, an online phenomenon in which open source software allows you to replace a face in a video, manipulating how a user perceives that video. The most convincing example in recent times has been Jordan Peele’s take on Obama.
If you watch this video on your mobile phone, where issues of quality can be disguised as issues of streaming, you could easily be led to believe that Obama said that “President Trump is a total and complete dipshit”. Of course, the video is a complete fabrication, but such a video used in a different context could convince us of its lie.
Our technology has always shaped our reality, but we are now living in the age of technological distortion – of going beyond mere sophistry and deception, to actively warping our senses. The deepfake movement, oddly enough, has often been discussed in reference to pornography – of people using the software to insert celebrity faces in lieu of the pornstars themselves. You can even put your own face in such videos, with some saying the fantasy of seeing oneself in that context can help those with physical disabilities who otherwise would not be able to experience that type of simulation (or stimulation!).
Joking aside, the deepfake movement brings forth many questions regarding what impact this deception is going to have in society. Technology is only going to improve its ability to deceive us. What if someone created a video using your face which had you saying something you never said? What if you were watching a video, supposedly from your boss, and they asked you to do something illegal?
The possibilities of this technology are vast. What it calls for is a greater need for information literacy — an education of sorts that equips people to question the source of the information they receive or are exposed to; to question what others have said, and the context in which they are saying it. Our technology enables us to do new and better things all the time, but with each iteration, technological advances also debilitate us in some form. The question of deepfakes is not just one concerning the outcomes of their usage, but the question of technology overall as continually redefining our reality and our sense of self as human beings.
Google just gave a stunning demo of Assistant making an actual phone call
Jakkii says: So many reactions to this: amazement, wonder, and worry.
Although a different case, there’s a clear connection to some of the concerns called out in Nat’s shared piece (above) about “deepfakes.” This one is more general than specific (i.e. not “what if the video was of me?”), in that this is an AI Assistant purporting to be human. In the demonstration it even uses “umms” and “ahhs” to sound more human. This kind of deception is deeply troubling. As techno-sociologist Zeynep Tufekci put it on Twitter:
As digital technologies become better at doing human things, the focus has to be on how to protect humans, how to delineate humans and machines, and how to create reliable signals of each—see 2016. This is straight up, deliberate deception. Not okay. https://t.co/XTUn4tvDik
— zeynep tufekci (@zeynep) May 9, 2018
Zeynep also made the point in the Twitter thread that negative experiences with, and reactions to, such deceptive AI may actually hurt those amongst us who could most benefit from the accessibility such an assistant provides – an important point worth our consideration. The Verge posted a follow-up article addressing some of the broader outcry on Twitter over the deceptive nature of the Assistant. It’s worth a read as well to round out the picture.
What do you think? Should we be concerned about making AI tools look, sound or seem too much like humans? Are we having the right discussions about ethics in technology and the intersection between technology and humanity?
For the love of Facebook
Helen says: Facebook seems to have hardly skipped a beat. Despite being plagued by significant data privacy scandals, after a brief dip in share value it has rebounded and continues to rise. Enter the next frontier for Facebook – the dating game. Unlike other dating sites that develop user profiles from surveys completed by members, Facebook will match people according to their Facebook digital footprint. This point of difference may prove decisive for compatibility predictions.
Evita March, Senior Lecturer in Psychology, Federation University Australia, suggests there is no scientific evidence that matching services actually work. True or not, this hasn’t prevented Tinder from achieving a $3 billion valuation, or Facebook’s announcement causing a 20% drop in Tinder’s stock price. Clearly the market thinks Facebook has something to offer.
I find it interesting that despite the very recent concerns expressed, and evidence tabled, about misplaced trust and the misuse of personal information by Facebook, something as intimate as dating will likely become its very next commercial success story.
Google Lens is turning into what Google Glass never was
Joel says: We’ve all heard of Google Glass. We even went hands-on and got to use the now mythical device back in 2014. Now it seems its numerous trips back to the drawing board have resulted in Google creating a new product, Google Lens, which seems to be trying to achieve all of the things people dreamed of when they first heard about Google Glass.
In a detailed commentary piece over on CNET you can learn about their experience with the device at last week’s Google I/O developer conference. They describe the new device as being “closer than ever to what we thought magical real-time augmented reality would be”:
Recognizing text and translating or copying it. Recognizing objects and searching for related matches. Seeing posters and popping out videos and related news. Getting directions and information that floats in front of my vision.
At the Google I/O developer conference, where bigger-scale VR news has thus far been minimal, Google’s continued evolution of Google Lens stole the show. Google Lens is Google’s camera-based AR technology that expands the powers of an Android phone’s existing camera app. It’s not a separate app; it’s baked into Android.
If you’re interested in AR technology, this is just a taste of what the folks over at CNET had to say about Google Lens. Check out the full article for an interesting look into what we’ll be able to achieve in the near future with AR.
Sydney Business Insights – The Future This Week Podcast
This week: I’m um, a bot; sexist spaces; and Japan making stuff in other news. Sandra Peter (Sydney Business Insights) and Kai Riemer (Digital Disruption Research Group) meet once a week to put their own spin on news that is impacting the future of business in The Future, This Week.
The stories this week:
Other stories we bring up: