Friday Faves is our weekly blog series highlighting a few select pieces from the REG team’s reading lists. You can catch up on past Friday Faves on the archive.
Women in Business #IWD19
Anne says: Today is International Women’s Day and there are so many ways to celebrate, recognise and take action on issues facing women in the workplace, women in tech and beyond. I found this article a refreshing reminder of how openly women will share their experiences with others. There’s a response from Jennifer Moss to the question: “What stereotypes/assumptions of women in business would you like to see broken?” –
That women stop needing to hedge their language so people won’t assume they are “bossy”.
This is a telling statement that reveals an unhealthy dose of unconscious bias. The fact that women are editing their behaviour to fit their workplace’s accepted norms reeks of a control system that needs a reboot.
I know Nat, our sadly missed colleague, would have written a post about women in academia, or women in technology so I have intentionally not selected those topics today. But I know she would have appreciated the comments from these women.
I dedicate this post to Nat and all the women who have inspired others, and who are no longer here but continue to inspire.
Women in STEM
Jakkii says: Every year on International Women’s Day (IWD) we ask the question: why aren’t more women in STEM? 2019, sadly, is no different – google ‘women in STEM’ and browse the news section and you’ll quickly see an abundance of articles both lamenting the lack of women in STEM and, logically, asking the same question we’ve been asking ourselves for some time: ‘how do we get (and keep) more women in STEM?’
While these are important discussions, and we shouldn’t lose sight of the protests and fights that led to IWD and continue today, for my IWD contribution I’d like to take a more celebratory approach, and share a few pieces that promote and celebrate women in STEM.
- 5 Women in STEM who inspire us
An impressive collection of amazing women, including women of colour, in various fields of STEM. Read it and be inspired – and you’ll probably learn about a few women you hadn’t previously heard of.
- 10 of Australia’s most influential women in engineering
In fairness, you might not be able to name 10 influential men in engineering, either, but I’m willing to bet you can’t name 10 women in engineering – influential or otherwise. This article’s here to help you correct that.
- Celebrating 7 Australian female tech founders
As a bonus from reading about these fabulous tech founders, you might just learn about some fantastic tech companies you weren’t aware of.
- Meet the women leading Australia’s charge in science and space
Did you know the incoming Chief Scientist for Australia is a woman? This article will introduce you to her, along with some other extraordinary women in science and space.
Finally, because many people don’t seem to be aware, it’s not just IWD – the whole of March is women’s history month. Do yourself a favour and read some articles about some of the amazing women of history – some of whom we’re only just learning about properly now, as their achievements were often attributed to men. To get you started, I’ve included an article with 15 of those women for your reading pleasure, below.
The Hipster effect – why anti-conformists always end up looking the same
Anne says: This article is fascinating – and it’s not really all about hipsters! It’s about sub-cultures and anti-conformists, and how these patterns of behaviour eventually become synchronised. But these patterns aren’t restricted to anti-conformists – they show up across all sectors of society. (It reminds me of how fireflies synchronise to light up entire areas of forests.)
Jonathan Touboul, a mathematician at Brandeis University in Massachusetts, studies how information is transmitted through society and how it influences people’s behaviour. He focuses on anti-conformists, or hipsters, who intentionally do the opposite of mainstream society in order to be, well, not mainstream. But at some point the anti-conformists synchronise and become conformists. His findings identified a critical factor: time delay. Touboul has created some wonderful-looking mathematical models to identify how random acts become synchronised.
The outcomes from his models provide insight into timing and predict when synchronisation will occur, but I’m not sure they explain why the non-conformists react, or how they select the particular action – like growing beards – to synchronise on.
Touboul suggests that understanding these patterns and being able to identify the point when actions synchronise could have far-reaching implications from financial systems through to health and social sciences.
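Touboul’s actual work is a system of delayed differential equations, so as a toy illustration only, here’s a minimal simulation (my own sketch, not his model or code) of the core intuition: every anti-conformist reacts to a delayed snapshot of the crowd, and because they all react to the same stale information, they end up flipping in perfect unison.

```python
import random

def simulate_hipsters(n_agents=100, steps=60, delay=5, seed=1):
    """Toy model: each anti-conformist adopts the OPPOSITE of the
    majority state it observed `delay` steps ago. Returns the fraction
    of agents in state 1 at each step."""
    rng = random.Random(seed)
    states = [rng.choice([0, 1]) for _ in range(n_agents)]  # random start
    history = [states[:]]                  # history[t] = states at step t
    fractions = [sum(states) / n_agents]
    for t in range(1, steps):
        observed = history[max(0, t - delay)]   # delayed snapshot of the crowd
        majority = 1 if sum(observed) * 2 >= len(observed) else 0
        # every agent reacts to the same stale information, so they all agree
        states = [1 - majority] * n_agents
        history.append(states[:])
        fractions.append(sum(states) / n_agents)
    return fractions

fractions = simulate_hipsters()
# After a short transient, the whole population oscillates between
# all-0 and all-1 with a period set by the delay: perfectly synchronised.
print(fractions[-10:])
```

With no delay the agents could settle into a stable split, but with delayed information the population never stops flipping – which is the “critical point” role time delay plays in Touboul’s findings.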
Now – you could stop reading here – but don’t! There’s a postscript to this article – thanks to Jakkii for sharing this in our internal chat (she wasn’t aware I was reading about hipsters and synchronisation). The article quotes the same research report from Touboul, but shows how it can play out in practice. It demonstrates just how far the synchronisation can go – in fact, you can’t tell one hipster from another! And apologies to any hipster who takes offence at this claim – read the second article!
Australian Defence Force invests $5 million in ‘killer robots’ research
Joel says: I came across this piece on the ABC this week that I found quite interesting mainly because it combines my personal interests of AI, robotics and drones with some very real ethical issues.
The article states that the Australian Defence Force will be making its largest-ever investment in AI ethics in an effort to create ‘ethical killing machines’.
Now, I’m all for getting soldiers off the front line and investigating alternative ways wars could be fought that could keep them and our country safe – I’m not so foolish as to believe wars will ever just stop. But after reading the article I was left with so many questions, and couldn’t help but think about the potential issues that could arise from AI-driven combat missions in future wars.
The article mentions
The accountability is shifting in a pretty significant way.
And I couldn’t agree more. The decision-making process and years of military training will be taken away from soldiers and programmed into, or taught to, a machine. If it works without a hitch then, yes, it sounds like a perfect solution. But would the designers and engineers of the AI software be able to deal with the impact of their creation? What if a bug causes the death of someone who didn’t meet the algorithm’s definition of a ‘bad person’? We all know that even the largest software systems in the world come with their share of bugs.
And although it’s often thrown around in a jokey way whenever a new Boston Dynamics video comes out, or any time someone remembers the plot of the Terminator movies, what happens if this AI becomes somewhat sentient? I wrote recently about CIMON, the International Space Station’s AI, turning belligerent. Surely fail-safes that completely rule out the possibility of this occurring would need to be in place before anyone would approve using these drones?
Our laws, especially here in Australia, have trouble keeping up with the fast-paced evolution of the internet. How are they ever going to produce comprehensive legislation on what is and isn’t ethical when it comes to an automated killing machine?
In a perfect world these drones would go a long way towards forever changing how wars are fought, and while I love the idea of AI-powered drones as a concept, all I can see are the potential risks if they were affected by the same issues that affect AI-driven systems today. I’m not alone in seeing risks with AI developed for the military, either – Microsoft employees have protested Microsoft’s contract to develop “battle-ready” HoloLens headsets for US soldiers (while Microsoft has defended the work). Similarly, Google employees have protested facial recognition AI programs for detecting people in drone and other footage (a project from which Google later withdrew – sort of).
Unsurprisingly then, with the Australian Defence Force putting big $$ into the research and development (and the US military working on developing ‘autonomous tanks’ at the same time), I’m sure this won’t be the last we’ll hear about ‘ethical killing machines’ during the 6-year project.
A bold idea to replace politicians
Helen says: In the months ahead the media will be full of endless policy announcements, political speeches and heartfelt promises in the lead-up to the Federal election – not exactly inspirational stuff – but I am nonetheless sharing something of a political nature with you this week. It’s a thought-provoking TED Talk by César Hidalgo, a physicist, writer, entrepreneur and director of the Collective Learning group at the MIT Media Lab.
Hidalgo explains how people are tired of politicians and tired of having their personal information used to target political propaganda at them. He questions the current model of democracy and suggests that by combining direct democracy with software agents, AI could be used to effectively automate politicians, enabling people to be directly involved in political decision making. This would be achieved by creating a system designed to make political decisions on your behalf. You would create your own avatar, choose a training algorithm and train it to predict how you would vote, and you could either automate the voting process entirely or set whatever controls you want. Hidalgo suggests that if such a system were in place, algorithms could eventually be used to write laws that would gain a certain percentage of approval.
It would be an understatement to say that using AI to run a government is scary – Hidalgo himself acknowledges it is a crazy idea, but he explains how we could start testing this vision and possibly turn it into something viable that we can trust – maybe not in our lifetime, but not all that far into the future either. I hope you find this talk as interesting as I did.
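To make the avatar idea concrete, here’s a deliberately tiny sketch – my own illustration, not anything from Hidalgo’s talk – of what training such a ‘voting avatar’ might look like: it learns from your past votes by topic, and defers back to you on topics it hasn’t seen, which is one way the ‘controls’ Hidalgo mentions could work.

```python
class VotingAvatar:
    """Toy digital-twin voter (a hypothetical illustration, not
    Hidalgo's system). It learns from your past votes per topic and
    only votes automatically when it has seen the topic before;
    otherwise it defers back to the human."""

    def __init__(self):
        self.history = {}  # topic -> list of past votes (True/False)

    def train(self, topic, vote):
        """Record one of your past votes on a topic."""
        self.history.setdefault(topic, []).append(vote)

    def predict(self, topic):
        """Vote the way you usually did on this topic, or defer (None)."""
        votes = self.history.get(topic)
        if not votes:
            return None  # unknown topic: ask the human instead
        return sum(votes) > len(votes) / 2  # majority of your own past votes

avatar = VotingAvatar()
avatar.train("public transport funding", True)
avatar.train("public transport funding", True)
avatar.train("fuel subsidies", False)

print(avatar.predict("public transport funding"))  # True
print(avatar.predict("fuel subsidies"))            # False
print(avatar.predict("space program"))             # None - defers to you
```

The topic names and the majority rule here are placeholders; Hidalgo’s point is that you would pick the training algorithm yourself, swapping this naive rule for something as sophisticated as you trust.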
Meet the ‘Delete Nevers’
When it comes to their stuff, people often have a hard time letting go. When the object of their obsession is rooms full of old clothes or newspapers, it can be unhealthy – even dangerous. But what about a stash that fits on ten 13cm hard drives?
Jakkii says: I’ve often jokingly referred to myself as a ‘digital hoarder’, though in truth what I primarily am is someone averse to deleting emails “in case I need them” (though I’m getting better at that). Other than storing photos, everything else I have either on my hard drive or in the cloud that I don’t need is really there due to laziness. I have no doubt I’m not even close to alone on that!
And after reading this piece – and learning there’s a whole subreddit dedicated to digital hoarding, r/datahoarder – I’m convinced: I’m definitely not a digital hoarder. But I am intrigued by people who legitimately are digital hoarders, and I found this piece a fascinating read. It seems many of these data hoarders don’t see themselves as hoarders at all; rather, they see themselves as performing a public good by virtue of collecting and curating digital data. Yet a source the piece talked to suggests what is probably an important distinction:
“I would imagine the uber-acquiring of digital media is not impairing the quality of your life, unless that is what you’re spending your life on, is acquiring.”
The source goes on, describing thoughts and self-regulated attempts to control digital hoarding behaviour that I would imagine more closely mirror those related to physical hoarding. It seems, then, that where physical hoarding is almost exclusively considered a disorder and a mental health issue, digital hoarding may be born of the same condition – but not necessarily!
What do you think? Having read the piece, do you consider yourself a digital hoarder, even if on a smaller scale? Or are you more like me – for the most part actually just a bit lazy about deleting things?
This week in social media
Politics and regulation
- TikTok’s rise is a test for social media regulation
- Google employees uncover ongoing work on censored China search
- Facebook is banning foreign-funded political ads in Indonesia as elections approach
- Social media firms agree to quickly take down prejudicial posts
- Russians are shunning state-controlled TV for YouTube
- Facebook concerned by Australia’s ‘news regulator’
- Digital, political nomads: meet the Chinese-Americans for Trump
Privacy and data
- Facebook knows Facebook isn’t the future
- Facebook’s US user base declined by 15 million since 2017, according to survey
- WeChat and QQ’s 364 million Chinese users data exposed online
- Facebook won’t let you opt out of its phone number ‘look up’ setting
- Why Facebook still seems to spy on you
- Revealed: Facebook’s global lobbying against data privacy laws
- Twitter Reminds Us How Much Of The World Is Absent From Geotagged Social Media
- Facebook, Apple, Twitter and LinkedIn face investigations for violating European privacy laws
Society and culture
- How to predict Facebook’s future
- LinkedIn reveals recruiters are less likely to click on a woman’s profile when headhunting
- Pinterest feminism and the quietly revolutionary history of bar carts
- Chronic pain patients using Pinterest to cope with symptoms
- Experts give thumbs up to Royal family’s social media policy
- Queen Elizabeth shares very first Instagram post
Cybersecurity and safety
- Momo is as real as we’ve made her
- Instagram biggest for child grooming online – NSPCC finds
- Phishing scam targets Instagram users by offering verified badges
Moderation, misinformation and hate speech
- Facebook outlines plans to curb anti-vax conspiracy theories
- YouTube Is Rolling Out A Feature That Shows Fact-Checks When People Search For Sensitive Topics
- “Men are scum”: inside Facebook’s war on hate speech
- The Comment Moderator Is The Most Important Job In The World Right Now
- The Life of a Comment Moderator for a Right-Wing Website
- Facebook Moderation and the Unforeseen Consequences of Scale
- Should Twitter, Facebook, and Others Delete Provably Wrong Material?
- While Two Nuclear Powers Were On The Brink Of War, A Full-Blown Online Misinformation Battle Was Underway
- Facebook, Twitter, and Google still aren’t doing enough about disinformation, EU says
- Facebook, Twitter: We spot trolls based on how they act, not their posts
- Mark Zuckerberg believes Facebook’s future is private messaging
- Pinterest expands personalized shopping recommendations
- Meet the (AU) team: Snapchat talks ads, automation and AR
- Dark mode added to Facebook Messenger, here’s how to enable it
- Reddit Is Testing A New Tipping Feature
- What TikTok’s Chinese predecessor can reveal about its future
- Instagram prototypes video co-watching
- Twitter launches new ‘Timing is Everything’ insights tool
- A beginner’s guide to TikTok
- Facebook and Telegram are hoping to succeed where bitcoin failed
- YouTube’s cable TV alternative now has more than 1 million paying subscribers
Sydney Business Insights – The Future This Week Podcast
This week: why data is not like oil, dangerous AI, and a robot that gives sermons. Sandra Peter (Sydney Business Insights) and Kai Riemer (Digital Disruption Research Group) meet once a week to put their own spin on news that is impacting the future of business in The Future, This Week.
The stories this week:
New AI fake text generator may be too dangerous to release
Other stories we bring up:
Kai-Fu Lee, Chairman and CEO of Sinovation Ventures and former president of Google China on data as the new oil
The Economist argues the world’s most valuable resource is no longer oil, but data
Facebook obtains deeply personal data from Apps
The New York Times discusses data in the context of the Cambridge Analytica’s improper use of Facebook data
Our previous discussion on TFTW of DNA data sharing
Author and historian Yuval Noah Harari on why fascism is so tempting and how your data could power it at TED2018
Language models are unsupervised multitask learners research paper
How OpenAI’s text generator works
The Wired story covering the AI fake text generator too dangerous to make public
Our previous discussions of fake stuff on TFTW, including fake reviews and deepfakes
Microsoft’s politically correct chatbot