Friday Faves is our weekly blog series highlighting a few select pieces from the REG team’s reading lists. You can catch up on past Friday Faves on the archive.
Why the future of work is human
Anne says: The robots aren’t going to take our jobs!
That’s according to Deloitte Australia’s recently published report: Why the future of work is human. But how can that be? Just about every other media outlet is telling us they are! And there’s a whole industry building up around reskilling workers impacted by robots, learning to work alongside robots, robot operators, etc etc etc. What are Deloitte thinking? How did they arrive at this contrarian position? Here’s their response:
Where new technologies do take effect, they generally create as many jobs as they kill. It’s just that the ones that they kill are easily spotted, while the ones they create are hiding in plain sight.
This is a kind of feel-good report, incorporating a number of warnings – in particular, for Australia (it’s part of Deloitte’s Building the Lucky Country series) – while identifying that the job market is shifting away from our hands (manual labour) towards our heads and hearts: jobs where human skills like creativity, innovation, caring for others, and collaboration are in demand. There’s an interactive graph of current skills shortages across industries, extrapolated out to future shortages over the next decade or so. The full report goes into more detail about the skills shortages and how the speed of change is amplifying the skills gaps. Most organisations appear to be lagging behind these changes and are preparing people for now – not the future. And most workers (according to the report) lack at least two of the 18 skills essential in the current market.
The report looks at some of the wider trends, including flexible working, self-employment, and job stability. Another interesting aspect introduced in the report highlights roles that are routine and non-routine, predicting that non-routine roles are the ones least likely to be impacted by robots, but also greatly enhanced by AI and data analytics.
There’s a lot more in the report – it also reviews the use of office space and the role of experience in jobs – but the key emphasis is the skills shortages and the need for Australia to address these gaps urgently.
It’s fascinating, and somewhat alarming, when you consider their vision of the future of work and the skills gaps identified. What will you be doing to ensure your future of work is assured?
Can you spot the photoshopped face?
Helen says: Last week, Joel wrote about deepfakes and some of their possible negative consequences, and he touched on a tool being developed to help identify these fakes. UC Berkeley was one of the publishers of the research paper mentioned in that article, which was funded by Google, Microsoft and the US Government agency DARPA. Its title, “Protecting World Leaders Against Deep Fakes,” highlights a huge vulnerability. If seeing is believing, the importance of ongoing R&D work in this field cannot be overstated.
UC Berkeley also teamed up with Adobe to research how to detect facial manipulations made in Adobe Photoshop. Together they developed a tool that detects altered images 99% of the time, compared to a human detection rate of 53%. The Adobe blog post I’m sharing with you this week details their research framework and findings. Adobe’s stated reasoning was the importance of being able to trust what we see and to ‘identify and discourage misuse’, and whilst this initial work is limited to images made using Photoshop, Adobe has similar projects underway for other digital media created with its products.
It is more likely that this effort by Adobe is about risk mitigation for the company, but in any case, it is good to see a business giving consideration to the ethical implications of the technology it creates. I think that kind of consideration should become commonplace not only after development but also during the development phase.
Younger generations are growing horns in the back of their head
Up to half of all young people could be developing horn-like growths in the backs of their heads, startling Australian research suggests.
Joel says: When I first saw the headline and opening statement of this piece, I thought I was going to be reading another interesting article about evolution, this time centring on humans progressively growing horns. What do we need horns for? Who knows! Perhaps the hat industry needs a bit of a shakeup. But after diving deeper, it became a far more serious piece about the impact of modern technology habits on our health.
Two professors from the University of the Sunshine Coast found that 41% of people aged between 18 and 30 had developed a horn-like lump on the back of their skull, some up to 30 millimetres in size. The discovery was made after analysing 218 X-ray images; further MRI scans ruled out that the growths were caused by genetics or injury.
The researchers hypothesise that these new growths are likely caused by poor posture due to extensive mobile and gadget use.
“Shifting the head forwards results in the transfer of the head’s weight from the bones of the spine to the muscles at the back of the neck and head.”
What started out as an article of intrigue quickly became quite serious when the researchers noted that these types of growths are normally found in the elderly after long-term poor posture. The growths typically start out painless, but we could all be setting ourselves up for a future of chronic pain.
With many of us spending our free time browsing or reading on mobile devices – especially those travelling to and from work on public transport – this has definitely highlighted for me the importance of keeping your head upright where possible and making a conscious effort to maintain good posture.
VR is training cops to empathize with the people they might kill
Jakkii says: One of the interesting things about technologies like virtual reality (VR) is that we don’t always have a good feel for “real-world” applications beyond gaming, and often the possibilities seem more hype than practical. That’s one of the reasons I really like coming across articles like this, that give you some insight into how VR is being used.
You’re most likely familiar with a number of high-profile police shootings in the US – if no others, I’m sure you’ve at least heard about the case of Australian-American Justine Damond, who was killed in a police shooting; the officer who fatally shot her was recently found guilty in his trial on charges of third-degree murder and second-degree manslaughter (he was acquitted on a further charge of second-degree intentional murder). More often, police shootings involve a person with a disability or a person of colour, and all situations where a police officer fatally shoots someone – especially an unarmed person – should cause us all grave concern. The police do a difficult job, but for good reason, they are not judge, jury and executioner, and fatal shootings should be rare, not common.
So, what can we do about it? Training of officers in the US varies widely and is likely to be particularly reliant on size and available budget of the relevant jurisdiction (certainly this article suggests that may be the case). Questions of consistency and ensuring all officers are trained – and continually so – are bigger issues than the subject of my piece this week. Instead, the focus here is more about how technologies might be used to enhance training. In this case, the technology in question is VR, and the use case is its immersive nature, enabling police officers to see and, to some degree, feel the world around them through the eyes of someone with a disability (autism) or suffering a profound mental health issue (schizophrenia). The intent is to help police to develop empathy with people in those situations whom the police may confront in their policing and have need to engage with in an attempt to de-escalate a situation, provide assistance and, if necessary, take them into custody – without needing to employ force.
What’s quite interesting to me is that the training in question was developed by Axon, maker of tasers. There’s probably a cynical view to be taken about that, but even if they’ve gone down this path solely to prevent potential PR disasters when tasers are used and cause harm, it still ends up in a positive place: if training such as this can help reduce escalations and, in the end, harmful or fatal outcomes – because police officers develop a better idea of how the world might look to someone else – I’m all for it. That said, as the article points out, one potential flaw is that there’s minimal evidence that using VR in this fashion to instil empathy is actually effective. In fact, in addition to potential issues of intergroup empathy bias, the article notes that some prior research has indicated people can be less empathetic towards their competitors, while researchers caution that empathy exercises may backfire if the person concludes things ‘aren’t that bad’ for the subject of the exercise.
Overall, though, my sense is there’s some interesting potential there. It certainly then leads me to think about other ways in which similar, but tailored, programs could be used in order to help workers in other scenarios – or maybe even for executives to empathise a bit better with their employees’ actual experience of the workplace. 😉 What do you think – how might VR be useful in your workplace for learning and development (or other applications!)?
This Week in Social Media
This is how the most popular social media networks have changed over time (pic.twitter.com/bywkpk0qk4) – How Things Work (@ThingsWork), June 17, 2019
Politics, democracy and regulation
- New bill would make Facebook, Twitter liable for political bias
- Big Tech needs regulation, but DC must go to school before it goes to work
- How could deepfakes impact the 2020 U.S. elections?
- Russia sought to use social media to influence E.U. Vote, report finds
- Sudan’s social media presence can’t be suppressed
- India is still hounding WhatsApp to make its messages traceable
Privacy and data
- When social media privacy has its limits
- Facebook can be used to learn a lot about you—including hints about your medical conditions
- Facebook usage falling after privacy scandals, data suggests
- Twitter is removing precise location data on tweets — a small win for privacy but a small loss for journalists and researchers
- Facebook under oath: you have no expectation of privacy
- GDPR’s impact on privacy rights – the good, the bad and the downright complacent
- Creepiness–Convenience tradeoff
- Why Gary Vaynerchuk thinks the death of privacy is a good thing
- Popular soccer app spied on fans through phone microphone to catch bars pirating game streams
Cybersecurity and safety
- Claims YouTube illegally tracked kids reportedly spark US federal investigation
- YouTube executives reportedly mulling over removing all children’s content from main site
- YouTube faces FTC investigation into its kids practices, report says
Society and culture
- Sudan and the Instagram tragedy hustle
- Inside the highly organized lives of ‘planner addicts,’ a massive Instagram community of women who make beautiful to-do lists
- Social media can threaten medical experiments
- How Pinterest’s Head of Inclusion And Diversity is sparking change
- People with disabilities are finding empowerment from Instagram communities
- Hyped-up science is a problem. One clever Twitter account is pushing back.
- (2014) How the internet uses nostalgia
Extremism and hate speech
- Facebook, Twitter and Google will testify to Congress on terrorist content
- Google CEO: YouTube is too big to fix completely
- ‘YouTube recommendations are toxic,’ says dev who worked on the algorithm
- Hundreds of active and former police officers are part of extremist Facebook groups
- Could digital assistants be our personalized toxicity filters for social media?
- Would replacing anonymity with a single universal social media ID fix the web’s toxicity?
- A visual web: would an Instagram without text solve the web’s toxicity problem?
Moderation and misinformation
- Bodies in seats
- Kim Kardashian can get a deepfake taken off YouTube. It’s much harder for you
- Scientists track cryptocurrency discussions on Reddit to learn how disinformation spreads
- Jordan Peterson announces new social media platform amid Pinterest controversy
- Tech Tent: Facebook’s deepfake dilemma
- YouTube CEO Susan Wojcicki says vetting videos before they go up isn’t the right answer
Marketing, advertising and PR
- The hired guns of Instagram
- How image recognition is going to improve your social media ads
- Pinterest’s visual search capabilities took the next step forward with Complete The Look
- YouTube will soon let you try out makeup with AR
- Is TikTok the next big thing in Influencer marketing?
- Reddit partners with Oracle for a much-needed brand safety boost, but is it enough?
- TikTok is taking a less flashy approach to its first Cannes
- Content and social are key to both marketing and sales
- Don’t know which toaster to buy? There’s a website for that.
- TikTok hit $9M in in-app purchases last month, up 500% over last year
- How to leverage immersive reality to support your social media strategy
- The company that owns TikTok now has one billion users and many are outside China
- Instagram tests ‘Suggestions For You’ in DMs
- YouTube is testing hiding comments in its Android app
- Facebook employees are not as happy with Mark Zuckerberg as they used to be
- The TikTok strategy: using AI platforms to take over the world
- Would Facebook be better if we paid for it?
- Meet TikTok: How The Washington Post, NBC News, and The Dallas Morning News are using the of-the-moment platform
- Interview nerves getting the best of you? LinkedIn’s latest tools may help
- Facebook is creating photorealistic homes for AI to work and learn in
- Facebook will rank comments to make conversations more meaningful
- LinkedIn officially launches photo tagging, adds video within messaging
- Telegram faces DDoS attack in China… again
Libra – Facebook’s cryptocurrency
We’ve shared links in past Friday Faves about Facebook’s move into cryptocurrency starting with the first hints back in December 2018. This week the news really hit the mainstream when Facebook released its white paper on Libra, planned to launch in 2020.
- Libra white paper
- What is Libra? All you need to know about Facebook’s new cryptocurrency
- Facebook announces Libra, digital currency on the blockchain that doesn’t actually need to be on the blockchain
- The ambitious plan behind Facebook’s cryptocurrency, Libra
- Facebook’s crypto plan borrows from China
- Facebook’s Libra: Is it the Western WeChat or the New Dollar?
- Facebook’s Libra ‘cryptocurrency’ is missing one thing: monetary policy
- Sorry, Facebook: Cryptomoney won’t catch on in Australia, RBA says
- Facebook called before Senate panel over digital currency project
- European regulators are already pressing Facebook about its cryptocurrency
- Facebook’s new cryptocurrency raises privacy concerns
- Libra currently looks more like a fiat currency than a cryptocurrency
- Facebook looks set to make bitcoin more valuable
- Facebook’s daring endgame is to build a whole new financial system
- Facebook currency ‘clearly a threat’ to big banks, finance sector
- Facebook may have too many users for its cryptocurrency to fail — even if you don’t trust it
- Facebook’s Libra blockchain could become the cryptocurrency for self-driving cars
- Facebook’s blockchain lead David Marcus: “I want Libra coin to last hundreds of years”
- Facebook’s Sandberg says cryptocurrency is a ‘long way from launch’
Sydney Business Insights – The Future This Week Podcast
This week: flying cars, China pushes electric vehicles, and the future of the automobile. Sandra Peter (Sydney Business Insights) and Kai Riemer (Digital Disruption Research Group) meet once a week to put their own spin on news that is impacting the future of business in The Future, This Week.
The stories this week:
Other stories we bring up: