Friday Faves is our weekly blog series highlighting a few select pieces from the REG team’s reading lists. You can catch up on past Friday Faves in the archive.

Women in Business #IWD19

Anne says: Today is International Women’s Day and there are so many ways to celebrate, recognise and take action for women in the workplace, women in tech and beyond. I found this article a reminder of how openly women will share their experiences with others. There’s a response from Jennifer Moss to the question: “What stereotypes/assumptions of women in business would you like to see broken?” –

That women stop needing to hedge their language so people won’t assume they are “bossy”. 

This is a telling statement that reveals an unhealthy dose of unconscious bias. The fact that women are editing their behaviour to fit into workplaces’ accepted norms reeks of a control system that needs a reboot.

I know Nat, our sadly missed colleague, would have written a post about women in academia, or women in technology so I have intentionally not selected those topics today. But I know she would have appreciated the comments from these women.

I dedicate this post to Nat and all the women who have inspired others, and who are no longer here but continue to inspire.


Women in STEM

Jakkii says: Every year on International Women’s Day (IWD) we ask the question: why aren’t more women in STEM? 2019, sadly, is no different – google ‘women in STEM’ and browse the news section and you’ll quickly see an abundance of articles both lamenting the lack of women in STEM and, logically, asking the same question we’ve been asking ourselves for some time: ‘how do we get (and keep) more women in STEM?’

While these are important discussions, and we shouldn’t lose sight of the protests and fights that led to IWD and continue today, for my IWD contribution I’d like to take a more celebratory approach, and share a few pieces that promote and celebrate women in STEM.

Finally, because many people don’t seem to be aware, it’s not just IWD – the whole of March is women’s history month. Do yourself a favour and read some articles about some of the amazing women of history – some of whom we’re only just learning about properly now, as their achievements were often attributed to men. To get you started, I’ve included an article with 15 of those women for your reading pleasure, below.


The Hipster effect – why anti-conformists always end up looking the same

Anne says: This article is fascinating – and it’s not really all about hipsters! It’s about sub-cultures and anti-conformists and how these patterns of behaviour eventually become synchronised. But it’s not restricted to non-conformists – these patterns or effects show up across all sectors of society. (It reminds me of how fireflies synchronise to light up entire areas of forests.)

Jonathan Touboul, a mathematician at Brandeis University in Massachusetts, studies how information is transmitted through society and how it influences the behaviour of people. He focuses on anti-conformists, or hipsters, who intentionally do the opposite of mainstream society to be, well, not mainstream. But at some point, the anti-conformists synchronise to become conformists. His findings identified a critical factor: time delay. Touboul has created some wonderful-looking mathematical models to identify how random acts become synchronised.

The outcomes from his models provide insight into the timing and predict when synchronisation will occur, but I’m not sure they explain why the non-conformists react, or how they select the action – like growing beards – as the behaviour to synchronise on.

Touboul suggests that understanding these patterns and being able to identify the point when actions synchronise could have far-reaching implications from financial systems through to health and social sciences.
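Without claiming any fidelity to Touboul’s actual equations, the core mechanism – everyone rebelling against the same delayed signal – can be sketched in a few lines of Python. All the parameters and probabilities here are invented for illustration:

```python
import random

def simulate(n_agents=200, steps=300, delay=10, seed=42):
    """Toy delayed anti-conformist ("hipster") model.

    Each agent holds a binary choice (0 or 1). At every step an agent
    looks at the majority choice as it stood `delay` steps ago and flips
    to the *opposite* of that remembered majority. Because everyone is
    reacting to the same stale signal, their individual rebellions end
    up synchronised.
    """
    rng = random.Random(seed)
    states = [rng.randint(0, 1) for _ in range(n_agents)]
    history = [sum(states) / n_agents]  # fraction of agents choosing 1
    for _ in range(steps):
        # the majority as it looked `delay` steps ago (clamped at the start)
        past = history[max(0, len(history) - 1 - delay)]
        majority = 1 if past >= 0.5 else 0
        # each agent rebels against the stale majority, with a little
        # noise so the dynamics aren't fully deterministic
        states = [1 - majority if rng.random() < 0.9 else rng.randint(0, 1)
                  for _ in range(n_agents)]
        history.append(sum(states) / n_agents)
    return history

frac = simulate()
late = frac[-50:]
print(min(late), max(late))
```

Late in the run, the population swings together between almost nobody and almost everybody making the same choice – the “random” rebels have synchronised, and the delay sets the rhythm.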

Now – you could stop reading here – but don’t! There’s a postscript to this article – thanks to Jakkii for sharing this in our internal chat (she wasn’t aware I was reading about hipsters and synchronisation). It quotes the same research report from Touboul, but shows how the effect can play out in practice. It demonstrates just how far the synchronisation can go – in fact, you can’t tell one hipster from another! And apologies to any hipster who takes offence at this claim – read the second article!


Australian Defence Force invests $5 million in ‘killer robots’ research

Joel says: I came across this piece on the ABC this week that I found quite interesting mainly because it combines my personal interests of AI, robotics and drones with some very real ethical issues.

The article states that the Australian Defence Force will be making its largest-ever investment in AI ethics in an effort to create ‘ethical killing machines’.

Now I’m all for getting soldiers off the front line and investigating alternative ways that wars could be fought to keep them and our country safe, because I’m not so foolish as to believe wars will ever just stop. But after reading the article I was left with so many questions, and couldn’t help but think about the potential issues that could arise from AI-driven combat missions in future wars.

The article notes:

The accountability is shifting in a pretty significant way.

And I couldn’t agree more. The decision-making process and years of military training will be taken away from soldiers and programmed or taught to a machine. If it works without a hitch then yes, it sounds like a perfect solution. But would the designers and engineers of the AI software be able to deal with the impact of their creation? What if a bug causes the death of someone who didn’t meet the algorithm’s definition of a ‘bad person’? We all know that even the largest software systems in the world come with their share of bugs.

And although it’s often thrown around in a jokey way whenever a new Boston Dynamics video comes out, or any time someone remembers the plot of the Terminator movies, what happens if this AI becomes somewhat sentient? I wrote recently about CIMON, the International Space Station’s AI, turning belligerent. Surely fail-safes that completely rule out the possibility of this happening with these drones would need to be in place before anyone approved using them?

Our laws, especially here in Australia, have trouble keeping up with the fast-paced evolution of the internet. How are they ever going to produce whole legislation around what is and isn’t ethical when it comes to an automated killing machine?

In a perfect world, these drones would go a long way to forever changing how wars are fought, and while I love the idea of these AI-powered drones as a concept, all I can see is the potential risks involved if they were to be affected by the same issues that affect AI-driven systems today. I’m not alone in seeing risks with AI developed for the military, either – Microsoft employees have protested Microsoft’s contract to develop “battle-ready” HoloLens headsets for US soldiers (while Microsoft has defended the work). Similarly, Google employees have protested facial recognition AI programs for detecting people in drone and other footage (a project from which Google later withdrew – sort of).

Unsurprisingly then, with the Australian Defence Force putting big $$ into the research and development (and the US military working on developing ‘autonomous tanks’ at the same time), I’m sure this won’t be the last we’ll hear about ‘ethical killing machines’ during the 6-year project.


A bold idea to replace politicians

Helen says: For the months ahead, the media will be full of endless policy announcements, political speeches and heartfelt promises leading up to the Federal election – not exactly inspirational stuff – but I am sharing something of a political nature with you this week. It’s a thought-provoking TED Talk by César Hidalgo, a physicist, writer, entrepreneur and director of the Collective Learning group at The MIT Media Lab.

Hidalgo explains how people are tired of politicians and tired of having their personal information used to target political propaganda at them. He questions the current model of democracy and suggests that by combining direct democracy with software agents, AI could be used to effectively automate politicians, enabling people to be directly involved in political decision making. This would be achieved by creating a system designed to make political decisions on your behalf. You would create your own avatar, choose a training algorithm and train it to predict how you would vote, and you could either automate the voting process entirely or set whatever controls you want. Hidalgo suggests that if such a system were in place, algorithms could eventually be used to write laws designed to gain a certain percentage of approval.

It would be an understatement to say that using AI to run a government is scary – Hidalgo himself acknowledges it is a crazy idea, but he explains how we could start testing this vision and possibly turn it into something viable that we can trust – maybe not in our lifetime, but not all that far into the future either. I hope you find this talk as interesting as I did.


Meet the ‘Delete Nevers’

When it comes to their stuff, people often have a hard time letting go. When the object of their obsession is rooms full of old clothes or newspapers, it can be unhealthy—even dangerous. But what about a stash that fits on ten 13cm hard drives?

Jakkii says: I’ve often jokingly referred to myself as a ‘digital hoarder’, though in truth what I primarily am is someone averse to deleting emails “in case I need them” (though I’m getting better at that). Other than storing photos, everything else I have either on my hard drive or in the cloud that I don’t need is really there due to laziness. I have no doubt I’m not even close to alone on that!

And after reading this piece – and learning there’s a whole subreddit dedicated to digital hoarding, r/datahoarder – I’m convinced: I’m definitely not a digital hoarder. But I am intrigued by people who legitimately are, and I found this piece a fascinating read. It seems many of these data hoarders don’t see themselves as hoarders at all; rather, they see themselves as performing a public good by virtue of collecting and curating digital data. Yet a source the piece talked to suggests what is probably an important distinction:

“I would imagine the uber-acquiring of digital media is not impairing the quality of your life, unless that is what you’re spending your life on, is acquiring.”

The source goes on, describing thoughts and self-regulated attempts to control digital hoarding behaviour that I would imagine more closely mirror those related to physical hoarding. It seems then that where physical hoarding is almost exclusively considered a disorder and a mental health issue, digital hoarding may be born of the same condition – but not necessarily!

What do you think? Having read the piece, do you consider yourself a digital hoarder, even if on a smaller scale? Or are you more like me – for the most part actually just a bit lazy about deleting things?


This week in social media

Politics and regulation

Privacy and data

Society and culture

Cybersecurity and safety

Moderation, misinformation and hate speech


Sydney Business Insights – The Future This Week Podcast

This week: why data is not like oil, dangerous AI, and a robot that gives sermons. Sandra Peter (Sydney Business Insights) and Kai Riemer (Digital Disruption Research Group) meet once a week to put their own spin on news that is impacting the future of business in The Future, This Week.

The stories this week:

Data is NOT the new oil

New AI fake text generator may be too dangerous to release

Other stories we bring up:

Kai-Fu Lee, Chairman and CEO of Sinovation Ventures and former president of Google China on data as the new oil

The Economist argues the world’s most valuable resource is no longer oil, but data

Facebook obtains deeply personal data from Apps

The New York Times discusses data in the context of the Cambridge Analytica’s improper use of Facebook data

Our previous discussion on TFTW of DNA data sharing

Author and historian Yuval Noah Harari on why fascism is so tempting and how your data could power it at TED2018

Language models are unsupervised multitask learners research paper

How OpenAI’s text generator works

The Wired story covering the AI fake text generator too dangerous to make public

Our previous discussions of fake stuff on TFTW, including fake reviews and deepfakes

This Person Does Not Exist

Microsoft’s racist chatbot

Microsoft’s politically correct chatbot


