Friday Faves is our weekly blog series highlighting a few select pieces from the REG team’s reading lists. You can catch up on past Friday Faves on the archive.
Why Robots Should Learn to Build Crappy Ikea Furniture
Anne says: Disclaimer: I don’t think IKEA furniture is crappy – not at all! In fact, I love it; it’s transformed how we purchase affordable furniture. However, when it comes to putting together furniture – hmmm, it’s not an amazing user experience for me. I always have a couple of screws or something left over – I had believed these were spares – apparently not!
When I saw the headline for this article (aside from the “crappy” reference, which may reveal more about the author’s ability to assemble furniture than he realises) I was instantly intrigued. Apart from the obvious response – so they can build furniture for me – why can’t robots build IKEA furniture? Here’s where we humans can quietly gloat about our cerebral capabilities: to manage novel concepts, to interpret and experiment with abstract ideas, to identify how these random pieces will become a piece of furniture. At present, robots are commonly used in manufacturing, but not to complete complex tasks. They work alongside people, who do the complex actions.
Now, bring on the IKEA assembly instructions – those diagrams. Tune the Allen keys. Sit back and watch the robots trying to assemble 80 different chairs and bookshelves. Actions that seem trivial to us – for example, where to pick up an object – require the robots to string together a number of steps, each with multifaceted issues. And of course, IKEA recommends an order or sequence for each unit – we can (mostly) visualise why. The robots have no ability to do this (yet). So enabling them to follow and interpret instructions is a much bigger challenge than you or I assembling IKEA chairs!
Interestingly, they don’t mention how much success they had – but they do conclude with a telling statement:
“…given a few years of training and more than a few broken chairs, our particle-board misery will be the machines’ gain.”
So – remember the claim that robots are taking our jobs? Not if you become an IKEA assembly specialist – for now. In the meantime, we’re still on our own with our Allen keys – IKEA won’t be sending robots with your furniture delivery!
Read: https://www.wired.com/story/why-robots-should-learn-to-build-crappy-ikea-furniture/
Building Ethical AI for Talent Management
Christoph says: Artificial intelligence is invading pretty much every aspect of our lives – both at home and at work. I admit it upfront: I am ambivalent about this development. On the one hand, I am very excited about the potential and promise of machines making our lives easier. On the other hand, we humans are good at thinking linearly but really bad at thinking exponentially. And I believe this is what we are dealing with here. About 15 years ago I gave a presentation about bots in a university seminar on machine learning. In my story, I sent my personal bot to a travel agency bot to find the perfect holiday destination and, more importantly, to also negotiate the best price with the other bot! Back then that seemed so incredibly absurd, but now I can imagine this scenario taking place in 5-10 years.
But back to the actual story. Humans are biased. This is a fact. Yes, there are ways to surface biased behaviour and to address it, but that is incredibly hard. Removing bias from AI is hard, too, but probably still more feasible than removing it from humans themselves. Modern HR systems are still fairly dumb. They search incoming applications for keywords, certain academic institutions, previous employers, or similar. However, this will not allow the system to predict the probability that a candidate will be the best fit, i.e. most successful in their role. That requires modelling data sets, applying the insights, and learning from recommendations and, ultimately, the decisions being taken. But here is the crux: how do we make sure that the AI is not biased? The authors of the article suggest the following four steps:
- Educate candidates and obtain their consent
Ask candidates to provide their data. Based on that, AI systems should be able to predict and explain why one candidate should be preferred over another.
- Invest in systems that optimise for fairness and accuracy
Diversity is good. But what if the AI only points to candidates with a similar background?
- Develop open-source systems and third-party audits
This would allow others to scrutinise the algorithms behind certain decisions and would protect the company from accusations of unfairness.
- Follow the same laws — as well as data collection and usage practices — used in traditional hiring
Any information that is not considered in traditional hiring should not be used by AI, e.g. race, or physical or emotional conditions.
I think it is a start, but not the end of the story. And you know what, maybe the future will look completely different because, for my taste, the authors are thinking along a linear progression. Instead, try to picture the following: as part of your upbringing, you have a coach at your side (based on AI, of course). This coach will help you to truly identify your passion, build on your strengths and address your weaknesses. It will know more about you than you do. This coach will help you identify an occupation (maybe we do not work for money any more in the future) which you will really enjoy and excel at. It will find organisations, negotiate your exact role and responsibilities with their hiring bots, and present you with the results. You choose the most desirable organisation and role and start…maybe next week. Welcome to the brave new world!
Read: https://hbr.org/2019/11/building-ethical-ai-for-talent-management
Most people are bad at arguing. These 2 techniques will make you better.
Anyone who has argued with an opinionated relative or friend about immigration or gun control knows it is often impossible to sway someone with strong views.
Jakkii says: Ah, arguing. Who doesn’t love it? Well, a lot of people actually, and part of that for many of us is that we don’t always feel equipped to argue our point effectively – particularly up against someone with strong opinions (and a strong personality!). If you’re one of those people, well, this article is here to present you with a couple of strategies for putting forward more effective arguments. Perhaps handy the next time you want to ask for a pay rise, or sell your business case to your exec sponsor.
Strategy one: If the argument you find convincing doesn’t resonate with someone else, find out what does
Essentially, this is about reframing your argument into terms that make sense to the other person, rather than relying upon the argument you personally find most compelling or convincing. The article goes further, citing the work of Robb Willer, a professor of sociology and psychology at Stanford University, and Matthew Feinberg, Assistant Professor at the Rotman School of Management at the University of Toronto, and suggests you use your ‘opponent’s’ morals against them. Admittedly, in a business context, this might be less about morals and more about positioning your argument in a way that aligns with what the other person most needs in order to achieve their (or the business’) goals, but I think the point makes sense.
Strategy two: Listen. Your ideological opponents want to feel like they’ve been heard.
I mean, don’t we all just want to be heard?
Willer and Feinberg’s work suggests there’s a way to change minds on policy. But what about on prejudice? How can you effectively argue a person out of a prejudicial opinion? Because as Vox’s German Lopez explains in great detail, simply calling people racist is a strategy sure to backfire.
This section discusses ‘deep canvassing’ and why it’s an effective strategy in political persuasion. This one is a little more lateral when it comes to the workplace, but I think the core message is the important part: listen. Not only is it hard to bring people around to your ‘side’ if you don’t listen to their objections in the first place, but listening might also just help you adjust your own position to be more rounded, more inclusive, and more effective.
Read: https://www.vox.com/2016/11/23/13708996/argue-better-science
This Week in Social Media
Politics, democracy and regulation
- Why did Iran shut off the internet for the entire country?
- WhatsApp banned over 400,000 accounts during Brazil’s election
- A push to make social media companies liable in defamation is great for newspapers and lawyers, but not you
- YouTube is still struggling to label videos from state-controlled outlets
- US teenager’s TikTok video on Uyghur ‘concentration camps’ in China’s Xinjiang goes viral (Update: TikTok apologises, says it was a mistake)
Privacy and data
- Facebook must face data breach class action on security, but not damages: US judge
- Facebook and Twitter data was exposed to developers through app store bug
- Facebook agrees to provide additional documents in California AG data privacy probe
- Facebook built a facial recognition app for employees
Cybersecurity and safety
- #WakeUpInstagram
- Twitter finally ditches SMS for two-factor authentication
- New WhatsApp security concern: India cyber cell advises update
Society and culture
- Selfies, influencers and a Twitter president: the decade of the social media celebrity
- When Instagram killed the tabloid star
- Young, Amish, and TikTok famous
- On Instagram, experimenting beyond the human condition
- How queer and trans people are turning the internet into a safe holiday space
Extremism, trolling and hate speech
- Read Sacha Baron Cohen’s scathing attack on Facebook in full: ‘greatest propaganda machine in history’
- Social media gives young climate activists a platform – it also brings out the trolls
Moderation and misinformation
- That uplifting Tweet you just shared? A Russian troll sent it
- A leaked excerpt of TikTok moderation rules shows how political content gets buried
- Kate Klonick on Facebook’s Oversight Board
Marketing, advertising and PR
- Facebook provides effective video content strategy tips
- “It’s about educating and engaging customers”: Inside Bunnings’ YouTube series
- Twitter adds new ‘Conversation Insights’ to Media Studio
- The 50 best social media podcasts
Platforms
- Facebook will pay you to answer market research surveys, and insists it won’t sell the data
- Inside the Instagram AI that fills Explore with fresh, juicy content
- Facebook is building an Instagram-style Close Friends feature
- Why Spotify may soon have a big TikTok problem
- You can now hide replies to your tweets — here’s how
- Former Facebook employees are creating Cocoon, a social media network for your family
- Twitter will remove inactive accounts and free up usernames in December
Sydney Business Insights – The Future This Week Podcast
This week: working less for more, and the future of work not as usual. Sandra Peter (Sydney Business Insights) and Kai Riemer (Digital Disruption Research Group) meet once a week to put their own spin on news that is impacting the future of business in The Future, This Week.
The stories this week
04:50 Experimenting with the 4-day work week
12:18 What’s missing from the future of work debate
Other stories we bring up
Nike’s decision to stop selling merchandise to Amazon
Condé Nast’s CEO still isn’t sure about the impact of Apple News Plus
Why the world is running out of sand
We’d better be ready to rethink the meaning of work, because things are changing fast
What if you had a four-day week?
Microsoft tried a 4-day workweek in Japan
When your boss is an algorithm (1)
When your boss is an algorithm (2)
The Uber study by researchers Mareike Möhlmann and Ola Henfridsson, from Warwick Business School, and Lior Zalmanson
Hospital software tells doctors of their ‘deficiencies’
Listen: http://sbi.sydney.edu.au/the-future-of-work-not-as-usual-on-the-future-this-week/