Friday Faves is our weekly blog series highlighting a few choice pieces from the REG team’s reading lists. You can catch up on past Friday Faves on the archive.
Will AI Create as Many Jobs as It Eliminates?
Anne says: This week we co-organised two events (in Sydney and Melbourne) on Leadership in the Digital Workplace. Not surprisingly, the topics of robots, chatbots and AI were raised and vigorously debated. For the most part, the conversations centred on concerns: the impact on the existing workforce, ethics, bias in AI algorithms and, if the robots are going to take our jobs, what will we do all day?!
This article from MIT Sloan Management Review provides a counter-perspective – one that I find refreshing and more optimistic than many others being published.
The findings from a global Accenture study of 1,000 large organisations already using AI and machine-learning systems identified the emergence of entirely new roles.
These new roles fall into three categories: trainers, explainers, and sustainers (refer to the image above). They will be filled by humans – NOT robots – and focus on keeping the work of machines effective and responsible. Even if AI becomes self-governing in the future, the article suggests that human ethics compliance managers, for example, will still be critical to oversee the operations of advanced systems.
The concluding comments raise the question of preparing people for these new roles – what type of education will be required? That, in turn, raises the issue of whether HR in organisations is prepared to attract, recruit and train for these new types of roles.
For me, the article only scratches the surface of what will be changing, but it is unquestionably the start of conversations we need to have.
(Note: the article is behind a registration wall – you can read the entire article if you register).
Read: http://sloanreview.mit.edu/article/will-ai-create-as-many-jobs-as-it-eliminates
For more on our Digital Leadership panels, the Twitter conversations have been captured in a Storify for Sydney and for Melbourne.
This is What Happens When We Debate Ethics in Front of Superintelligent AI
Jakkii says: The short film by The Guardian and the accompanying article explore the notion of ethics in AI – how do we teach something as complex and interpretive as ethics and a moral code to AI? Not only is this not reducible to a set of unambiguous rules, but humans don’t necessarily agree on what is or isn’t ethical – let alone follow it. As the film puts it: “We can’t rely on humanity to provide a model for humanity, that goes without saying.” It’s an important debate not only for engineers, designers and programmers, but for all of us – what are the ethical parameters appropriate for a superintelligent AI to ensure it helps, and does not harm? In other words, for the Terminator fans amongst us, how do we avoid creating Skynet (although, according to this article, Skynet is already real, but it won’t destroy us… hopefully)?
Intrigued by this question of ethics and AI, I came across a paper exploring value alignment, using stories to teach human values to AI (see further reading, below). The paper defines value alignment as “a property of an intelligent agent indicating it can only pursue goals that are beneficial to humans.” In their preliminary research, the authors rely on the implicit and explicit values held within human stories to ‘enculture’ the AI. In their view, ‘enculturing’ AI is a means of socialising it – teaching it to adhere to the social values of a particular culture and avoid “psychotic-looking” behaviours. As we consider the future of work as a coordinated effort between humans and AI, an ethical, moral, “non-psychotic” AI might seem to some a far preferable colleague to, perhaps, some of the human ones we’ve come across in our workplaces to date.
Further reading: Using Stories to Teach Human Values to Artificial Agents
Facebook is Training Its AI to Try and Spot Suicides
Nat says: I completed my undergraduate degree in psychology, and ever since then I have been deeply moved and equally concerned about the statistics, and public stigma, relating to suicide. Facebook has been working on suicide prevention methods since 2015, and they now want to use AI to help prevent people from streaming their suicides via Facebook Live. In Australia, there are broadcasting regulations which limit how much the media can report on suicide, yet with the prevalence of social media, these restrictions are not something that can be easily controlled or monitored by law. What I like about Facebook’s quest is that they are putting suicide on the agenda as something that needs to be talked about more openly, as the discussion of suicide has been stigmatised for centuries.
For example, in Ancient Rome, a person who took their life would be denied the rights to a normal burial and would instead be buried outside of the city without a headstone. In 18th-century France, taking one’s life was seen as an abuse of liberty and an act of revolt, and French law even had a penalty for the dead in which their body would be dragged face down through the streets for public shaming. The taboo of suicide is still prevalent in today’s society, even when it is reported that suicide is the leading cause of death for men in Australia under the age of 35, and that every 40 seconds someone in the world takes their life. It has even been said that more US and British soldiers die from suicide than they do in combat. As we have become a largely online society, perhaps trying to prevent suicide, or offer help to those suffering, via something like a Facebook algorithm is the way forward. What use is any progress in the world if people are unhappy and suffering as an outcome? Perhaps our focus should be on the design and use of technologies that better our lives and the world we live in.
People in need of help can call Lifeline on 13 11 14.
Physical Rehabilitation Goes High Tech with the Help of Virtual Reality
Joel says: As someone who is really into gaming and new technologies, VR is right up my alley. I have my own VR setup at home, and there are times when the immersion is so realistic I forget I’m just sitting in my lounge room. As more people adopt VR, it’s great to see what some developers are using the tech for beyond gaming. This article tells the story of Australian movement therapist Rohan O’Reilly, who is using VR to rehabilitate patients, and shows us that we are only just scratching the surface of what can be achieved with VR technology. After the gaming community, I truly believe the next industry to embrace VR technology will be the medical field, as the practical uses there could revolutionise the way doctors learn.
“If your rehabilitation just tended to be based around the fact that you had to pick up an inanimate object, which you had no real emotional connection to, repetitively … for most people, they would think ‘OK, I can do this for a little while’, but they’re quickly going to run out of steam.
“If you put someone in virtual reality with everything that reminds them of the things that they love to do, they’re essentially just going to give themselves therapy.”
Redefining Private Space
“This is where you find me, but I don’t share it on that platform.”
Emilio says: This piece speaks to me as a (social media) marketer on so many levels. And whilst it touches on the many ways digital technology, both current and emerging, is changing consumer behaviour and the resulting ‘new rules of play’ in marketing, my focus is on the area that fascinates me the most – ‘dark social’.
On Instagram, where sharing the best version of ourselves is the norm, we get a kick out of public adulation. On Facebook Messenger and other instant messaging apps, by contrast, our sharing tends to be more real and raw. It seems we have indeed entered an age of social media where we draw a line between our public and private personas, and choose accordingly what we share and which platforms we share it on. It goes without saying that marketers now need to look beyond the surface; we need to dig deeper.
If we really want ‘authentic’ insights and true social listening to uncover ‘genuine social sentiment’ and find out what people are really saying about our brands, dark social can be the key to unlocking them. Here are some interesting statistics on dark social, lifted from the piece, along with some take-aways from a presentation I heard at AdTech Sydney 2017 two weeks ago:
- Nearly 60% of social sharing in the US now occurs via dark social, and brands are following suit with entire campaigns having been run on WhatsApp and other niche apps like Grindr
- Dark social channels make up over 75% of converted sharers and clickers
- That over-70% share of conversions from dark social compares with public social channel conversion rates of approximately 12%-15% on Facebook and 4% on Twitter
- Sharers on dark social channels are 9x more likely to convert
Social media continues to evolve, giving people more flexibility and options to share and tell their stories with varying degrees of intimacy, depending on what they share and who they share it with. Dark social platforms are growing rapidly and will continue to grow. The onus is now on marketers to figure out how to navigate this shift in digital behaviour and use it to their advantage. Share your thoughts – whatever you’d be comfortable sharing – on Twitter @esimbie… I would be keen to hear them!
Read: http://www.campaignlive.co.uk/article/redefining-private-space/1428087
A Game About ‘Being’
Nat says: Like many people, I play the odd computer game every now and then. My all-time favourite is The Sims, but this new game called ‘Everything’ takes the somewhat God-like nature of game playing to a whole new level. In the game, you can literally be anything from a strand of DNA, to a lion, to a galaxy. I love the concept of the player being in control, with no specific goals attached to the game itself. The game is also narrated by one of my all-time favourite philosophers, Alan Watts, whose entire body of work centred on existence and the connected world we all live in. The fact the game can be ‘everything’ you want it to be makes me think of malleable software used in organisations, such as enterprise social software, where end-users have to ‘play’ with the technology to figure out its usefulness for themselves. Such usefulness cannot be predicted or assigned before people use and engage with the technology. Perhaps the evolving world of software is giving rise to an undercurrent of existentialism and putting power back in the hands of the individual?
Read: http://www.polygon.com/2017/3/14/14926684/everything-ps4-trailer-alan-watts
The Legal Imagination: Why Judges and Lawyers Need Imagination as Much as Rationality
Jakkii says: Following on from my reading into ethics and AI, I came across this interesting piece on the use of imagination in the legal realm. The author explores four ‘imaginative abilities’:
- Supposing
- Relating
- Image-making
- Empathy (or, as it is described: “the ability to take on the perspectives of other people”)
Littered with case examples illustrating each of these imaginative abilities in action, the article led me to consider not only how imagination might help us with teaching ethics to AI, but also how imagination can help each of us in our own work. In design, imagination is broadly equated with creativity; however, I think the more rigorous idea of imaginative abilities put forth in this piece is more reflective and more useful. Without empathy, without supposing, without relating, without image-making – how would we understand a problem, generate ideas, prototype solutions, test them, understand the outcomes, iterate, prototype and test once more? Further, without the ability to imagine, how do we put forth our best argument for our case, our solution?
How do you apply imagination in your work?
Read: https://aeon.co/essays/why-judges-and-lawyers-need-imagination-as-much-as-rationality