Content warning: this week, some articles may contain distressing content or images, particularly in relation to the events in Christchurch.
Friday Faves is our weekly blog series highlighting a few select pieces from the REG team’s reading lists. You can catch up on past Friday Faves on the archive.
The role of algorithms and why we’re asking the wrong questions about social networks after New Zealand attacks
Anne says: Sometimes it takes tragic events to amplify issues that we’ve been vaguely aware of. This week was one of those times.
The horrific tragedy in Christchurch, NZ, streamed live on Facebook, has amplified discussions about sharing via social media and user behaviour (why would we want to watch innocent people being executed? And why would we repost and share it? That’s another blog post, for another time). But underlying this behaviour is something I find more ominous – how algorithms, and AI, are being used to amplify our behaviours.
The first article, from Scientific American, covered themes I was already aware of – other members of the team and I have written about them previously, particularly the work of Tristan Harris. It highlights the wonderful side of YouTube – people creating videos and sharing their experiences, particularly popular with the DIY, fix-it-yourself set – but there’s a dark side too. Have you noticed how, immediately after you finish watching the video you searched for, another is about to start? That’s a recommender system, powered by artificial intelligence (AI). But unlike other recommender systems (“if you liked this content, you’re likely to find this interesting/valuable”), the YouTube algorithm pushes content it deems “engaging” – and here comes the dark side: engaging content can be very dark…
According to the article, Google (they own YouTube, in case you didn’t know) has a model that incentivises content that is watched the most. And now, think about what happened in New Zealand… The second article, an opinion piece from the New York Times, highlights this structural problem.
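To make the structural problem concrete, here is a toy sketch (entirely illustrative – not YouTube’s actual system, and the titles and numbers are invented) of what happens when a recommender ranks candidates purely by predicted watch time: whatever keeps people watching longest wins, regardless of what that content actually is.

```python
# Toy illustration of a watch-time-optimised recommender.
# The candidate videos and their predicted watch times are invented.
videos = [
    {"title": "How to fix a leaky tap", "expected_watch_minutes": 4.2},
    {"title": "Gentle cooking tutorial", "expected_watch_minutes": 3.1},
    {"title": "Outrage-bait conspiracy video", "expected_watch_minutes": 11.7},
]

def recommend_next(candidates):
    """Pick the video the model predicts will be watched the longest."""
    return max(candidates, key=lambda v: v["expected_watch_minutes"])

# The most "engaging" video is surfaced, with no notion of whether it
# is helpful, harmful or extreme.
print(recommend_next(videos)["title"])
```

The point of the sketch is that nothing in the objective knows or cares about harm; it only knows what holds attention.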
“…the graphic, high-definition video of the attack was uploaded by users 1.5 million times in the first 24 hours. Of those 1.5 million copies of the video, Facebook’s automatic detection systems automatically blocked 1.2 million. That left roughly 300,000 copies ricocheting around the platform to be viewed, liked, shared and commented on by Facebook’s more than two billion users.”
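The scale described in that quote works out to roughly an 80% automated block rate – impressive in percentage terms, yet still leaving hundreds of thousands of copies in circulation:

```python
# Figures from the New York Times quote above.
uploads = 1_500_000       # copies uploaded in the first 24 hours
auto_blocked = 1_200_000  # copies blocked automatically

remaining = uploads - auto_blocked   # copies that got through
block_rate = auto_blocked / uploads  # share caught automatically

print(remaining)              # 300000 copies left on the platform
print(f"{block_rate:.0%}")    # 80% blocked automatically
```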
Facebook has been quite open about their attempts to stick a finger in the dyke but with figures like this, it’s almost impossible to avoid leakage. Meanwhile over at YouTube:
“YouTube took “unprecedented steps” to stanch the flow of copies of the video that were mirrored, re-uploaded and, in some cases, repackaged and edited to elude moderation filters.”
So we’re not just dealing with the raw materials, we’re also now dealing with re-packaged, re-purposed materials. And the more hits, the more the algorithm pushes through the “engaging” content. However, the key point highlighted in this article is where we need to focus our attention:
“The horror of the New Zealand massacre should be a wake-up call for Big Tech and an occasion to interrogate the architecture of social networks that incentivize and reward the creation of extremist communities and content.”
Exactly! This is the issue – the type of content, and the users who are gaming the algorithm to distribute their extremist views. The author talks about the need for the big social media platforms to take responsibility. But we – the people who are being pushed content, watching it and sharing it – need to play a role in holding these platforms to account and demanding changes to their incentive-driven algorithms.
And take a lesson from Jacinda Ardern, New Zealand’s Prime Minister: don’t watch it, don’t encourage them, and please don’t share it.
Why companies want to mine the secrets in your voice
Helen says: We have witnessed rapid development in voice technology, and its applications are now prevalent in many homes thanks to Alexa, Google Home and smartphones. We can switch on lights, change a tune or phone a friend, all with a voice command. Now, with the aid of voice mining technology, some companies are looking beyond simple voice capture into more detailed voice analysis, and their research demonstrates that the voice can reveal a surprising amount about us – and not through our words: our voice is itself a rich data set. By using algorithms and AI to analyse complex voice features and speech patterns, and continually learning from them, it is conceivable that during a real-time conversation our voice could be monitored and we could be evaluated for anything from having a particular disease, to suffering from stress or depression, to how likely we are to default on a loan or leave our job.
The potential for some really good outcomes in mental health management is noted, and I am sure many other positive applications are possible. But what measures will be taken to ensure that, simply by using my voice, I am not giving away personal information without my knowledge? How will I know what information has been collected about me, whether it is true, and what it will be used for? When it comes to other AI technologies and the use of algorithms these questions are not new, but I would like to learn more about what is being done to address them.
How Will Google Overcome Stadia’s Biggest Obstacle?
Joel says: In case you hadn’t heard, earlier this week at the Game Developers Conference in San Francisco Google revealed its new game streaming platform, Stadia. Think of it as a Netflix for games. Stadia will allow anyone with a device that can run Google Chrome to stream and play games at 60 frames per second and 4K resolution – if you have an internet connection that can handle the required bandwidth.
Gone may be the days of physically buying a game on disc and installing it on your home console. Users of Stadia will be able to play the latest games, with no install times, streamed straight from Google’s servers to their browser or Chromecast Ultra. Well, that’s how it would work in an ideal world. This Gizmodo article tackles the biggest issue people will have with Stadia: the internet requirements needed to get the best experience out of the service.
The problem with a gaming platform that relies entirely on the internet is that it relies on the internet.
The article mentions that even the journalists at the Stadia demo event had trouble getting the most out of the system, because the convention centre couldn’t meet the internet requirements in a stable capacity. And although the piece is written from a US perspective, the same issues will be very much present for us here in Australia – perhaps even more so. Google’s published requirements state that Stadia will need a connection of at least 25 Mbps to stream in 4K at 60fps, meaning that unless you have an NBN or decent cable connection here, you’re going into the experience with a handicap.
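To put that 25 Mbps figure in perspective, a quick back-of-the-envelope calculation (assuming a constant stream at exactly that bitrate – an illustration, not an official Google figure) shows how much data an hour of 4K play would consume:

```python
# Rough data usage for Stadia's stated 4K/60fps requirement (25 Mbps),
# assuming the stream runs at a constant bitrate.
bitrate_mbps = 25
bytes_per_second = bitrate_mbps * 1_000_000 / 8   # 25 megabits -> bytes
gb_per_hour = bytes_per_second * 3600 / 1_000_000_000

print(round(gb_per_hour, 2))  # about 11.25 GB per hour of play
```

On a capped home plan, that adds up very quickly over a few evenings of gaming.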
Google has since confirmed that Stadia will begin its roll-out later this year in the “US, Canada, UK and most of Europe”, and that Australia will not be getting the service initially.
While I love the idea of the service, the reality is that much of the world may not be ready for Stadia quite yet. US and Australian internet infrastructure is not yet up to scratch to ensure that everyone who buys in gets a quality experience, with reasonable resolutions and minimal latency for controller input. I do believe games are headed for an ‘all-digital’ future, but I think that future is still a long way off. Unfortunately for Google, we may be playing catch-up when it comes to Stadia here in Australia.
If you have a spare hour and want to check out the full Google Stadia presentation you can do so by watching the video, above. The tech is certainly impressive.
5 steps to conquer impostor syndrome
Jakkii says: Ah, imposter syndrome, you sneaky little devil. I know, practically speaking, that I am not alone in experiencing bouts of any one of these five types of imposter syndrome, but there are times when it definitely feels like you’re the only one who has ever felt that way because you are, after all, the only imposter. What we let our minds tell us is, objectively, really quite ridiculous – and destructive – at times, and yet, here we mostly all are, navigating a variety of negative thoughts and behaviours we’ve somehow developed, learned and trained ourselves into – whether via nature, ‘nurture’, or both.
If you, like me, are inclined to the occasional bout of imposter syndrome – and statistically that’s quite likely, given one study found two-thirds of women in the UK have suffered from it at work (and that women are 18% more likely than men to experience it), while another suggests that men are in fact more likely to experience imposter syndrome than women – then have I got an article for you! Whoever you are and however you suffer from thinking you’re not good enough, this piece in Quartz looks at five steps to conquer imposter syndrome. Woohoo!
- Get your validation from within
- Give yourself permission to say no
- Remember, others are not thinking about you as much as you think they are
- Have empathy toward others who also may not feel good enough
- Let yourself be vulnerable
The whole article is worth a read as it expands on each of the steps. What do you think, my fellow imposters – do these sound like they could be effective for you?
This week in social media
Politics, democracy and regulation
- Facebook is not a monopoly, but it should be broken up
- Facebook’s tremendous size was its greatest asset. Now it may be its biggest problem
- Rep. Devin Nunes’s bizarre $250 million lawsuit against Twitter, explained
- When Trump blocks you on Twitter, he’s violating the First Amendment
- Facebook, Twitter, TikTok asked to draft a Code of Ethics for the 2019 [Indian] election
- Facebook does have to respect civil-rights legislation, after all
- What it’s like to be thrown in jail for posting on Facebook
- Facebook’s plan to protect the European elections comes up short
Privacy and data
- How Cambridge Analytica sparked the great privacy awakening
- Facebook’s crisis management algorithm runs on outrage
- Facebook stored hundreds of millions of passwords in plain text
Society and culture
- The Christchurch Shooter and the Distorting Power of the Internet
- Social media are a mass shooter’s best friend
- Social media is ruining our memories
- Praising for pay: WeChat adoration groups for hire on Taobao
- Inside the super positive community of competitive YouTube water drinkers
- Being an Instagram influencer is hard work, so this guy made a bot to do it for him
- How influencers on WeChat are driving NYC’s restaurant scene
- You can now get hired by McDonald’s on Snapchat
- Inside the secret Facebook war for Mormon hearts and minds
- How a viral comet crash into Jupiter helped popularize the internet
Cybersecurity and safety
- The social apps teenagers are using — and keeping them safe online
- Snapchat admits its age verification safeguards are effectively useless
- Kidfluencers’ rampant YouTube marketing creates minefield for Google
- Facebook is relying on AI to tackle revenge porn
Christchurch and hate speech
- Questions about policing online hate are much bigger than Facebook and YouTube
- 4chan, 8chan blocked by Australian and NZ ISPs for hosting shooting video
- Why can’t YouTube automatically catch re-uploads of disturbing footage?
- Rise in UK use of far-right online forums as anti-Muslim hate increases
- Hard-right activists move to ‘free-speech’ platform after Twitter, Facebook boot them off
- What it was like to be a NZ moderator on Reddit during the Christchurch shootings
- Anti-Muslim hate has been rampant on Reddit since the New Zealand shooting
Moderation and misinformation
- The punishing ecstasy of being a Reddit moderator
- Twitter as an Information Battlefield – Venezuela; A Case Study
- Reddit operates exactly as it was designed — and that’s a problem
- ‘Flat Earthers’ are embarking on a bizarre journey to Antarctica to prove that YouTube makes you stupid
- Instagram is the latest hotbed for conspiracy theories
Marketing and advertising
- Instagram will now let you buy products directly inside the app
- ‘I looked like a clown’: the truth about shopping on Instagram
- LinkedIn explores the ‘enlightened buyer’ and how marketers can influence the purchase process
- Snap Inc.’s new chief business officer plans to overhaul the company’s ads business
- Government proposes junk food ad ban on YouTube and Facebook
- Facebook is replacing relevance score with three new metrics
- Google introduces shoppable ads on Google Images
- Facebook agrees to overhaul of paid advertising in United States
- LinkedIn adds lookalike targeting, Bing tie-in and B2B templates to ad platform
- Remember MySpace? Yeah, well it says it lost up to 12 years of users’ uploaded files
- Telegram gets 3M new signups during Facebook apps’ outage
- Twitter will let you subscribe to conversations you’re interested in
- Facebook Messenger gets threaded replies
- Snapchat may launch its own gaming service next month
- TikTok parent Bytedance is getting serious about games
- Facebook is giving its gaming efforts more prominent placement on mobile devices
- Pinterest is experimenting with a video tab
- A week with Twitter’s attempt at a more civil internet
Sydney Business Insights – The Future This Week Podcast
This week: big tech breakup, Amazon power and a Facebook rethink. Sandra Peter (Sydney Business Insights) and Kai Riemer (Digital Disruption Research Group) meet once a week to put their own spin on news that is impacting the future of business in The Future, This Week.
00:45 – Breaking up big tech
16:41 – Zuckerberg rethinks Facebook’s privacy
23:08 – Amazon’s platform power
29:11 – Robot of the week: The Guardian